Azure SQL Managed Instance setup and performance thoughts

I am not going to describe here what SQL Managed Instance (SQLMI) is or how it compares to the other SQL offerings on Azure, but if you need to know more about MI this is a great starting place: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance. Suffice to say, it is a very viable and cool option to host databases in Azure.

Also, if you are looking to test this out and don’t want to integrate the MI into an existing VNet, you can look at this quick-start template: https://github.com/Azure/azure-quickstart-templates/tree/master/101-sql-managed-instance-azure-environment

 

Getting a SQL MI ready

That said, getting an MI deployed is not like any other resource or PaaS offering; you will need to:

  1. Configure the Virtual Network where the Managed Instance will be placed.
  2. Create a route table that will enable the Managed Instance to communicate with the Azure Management Service.
  3. Optionally create a dedicated subnet for the Managed Instance (or use the default one that is created with the Virtual Network).
  4. Assign the route table to the subnet.
  5. Double-check that you have not added something that might cause a problem.

 

The VNet is the network container for all your stuff; that said, the MI must not sit on a subnet that hosts anything else, so creating a dedicated subnet for all of your MIs makes sense. There cannot be any Service Endpoints attached to the subnet either. If you want to have only one subnet in your Virtual Network (the Virtual Network blade will let you define a first subnet called default), you need to know that a Managed Instance subnet can have between 16 and 256 addresses. Therefore, use subnet masks from /28 to /24 when defining the IP range for that default subnet. If you know how many instances you will have, make sure the subnet has at least 2 addresses per instance plus 5 system addresses.

 

The route table will allow the MI to talk to the Azure Management Service. This is required because the Managed Instance is placed in your private Virtual Network, and if it cannot communicate with the Azure service that manages it, it becomes inaccessible. Add a new “Route table” resource, and once it is created go to the Routes blade and add a route “0.0.0.0/0, next hop Internet”. This route enables the Managed Instances placed in your Virtual Network to communicate with the Azure Management Service that manages the instance. Without it, the Managed Instance cannot be deployed.
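If you prefer to script steps 1 to 4, here is a minimal sketch using the Az PowerShell module (resource group, names, region and address ranges are placeholders to adapt; the same can be done with the older AzureRM cmdlets):

# Virtual Network with a dedicated /27 subnet (32 addresses) for the Managed Instances
$subnet = New-AzVirtualNetworkSubnetConfig -Name "mi-subnet" -AddressPrefix "10.0.1.0/27"
$vnet = New-AzVirtualNetwork -Name "mi-vnet" -ResourceGroupName "rg-sqlmi" -Location "canadacentral" -AddressPrefix "10.0.0.0/16" -Subnet $subnet

# Route table with the mandatory 0.0.0.0/0 -> Internet route
$rt = New-AzRouteTable -Name "mi-routetable" -ResourceGroupName "rg-sqlmi" -Location "canadacentral"
$rt | Add-AzRouteConfig -Name "ToAzureManagement" -AddressPrefix "0.0.0.0/0" -NextHopType "Internet" | Set-AzRouteTable

# Assign the route table to the Managed Instance subnet
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mi-subnet" -AddressPrefix "10.0.1.0/27" -RouteTable $rt | Set-AzVirtualNetwork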

 

Rules for the subnet

  • There should be a Managed Instance route table assigned to the subnet.
  • There should be no Network Security Groups in your subnet.
  • There should be no service endpoints in your subnet.
  • There should be no other resources in the subnet.

 

Altogether:

  • Virtual Network should have Service Endpoints disabled
  • Subnet must have between 16 and 256 IP addresses (masks from /28 to /24)
  • There should be no other resources in your Managed Instance subnet.
  • Subnet must have a route with 0.0.0.0/0 Next hop internet
  • Subnet must not have any Network Security Group
  • Subnet must not have any service endpoint

 

More on configuring your SQLMI is on MSDN (or whatever the new name is these days): https://blogs.msdn.microsoft.com/sqlserverstorageengine/2018/03/14/how-to-configure-network-for-azure-sql-managed-instance/

 

Access and restores

And then what, you ask? You need to connect and start playing with the database side of things.

SQLMI is private by default, and the easy way in is to connect from a VM within the same VNet using SSMS, or from your app running next to the SQLMI; the usual architecture stuff, right?

But wait, there are more scenarios covered here: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-connect-app
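Once you can reach the instance (from a VM in the VNet, for example), a quick smoke test from PowerShell looks like the sketch below; the host name follows the <mi-name>.<dns-zone>.database.windows.net pattern and is, like the credentials, a placeholder:

# Requires the SqlServer module (Install-Module SqlServer)
Invoke-Sqlcmd -ServerInstance "mymanagedinstance.abc123xyz.database.windows.net" -Database "master" -Username "sqladmin" -Password "xxxxxxxx" -Query "SELECT @@VERSION;"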

 

[Screenshot: SSMS showing the version number reported by the Managed Instance]

 

Don’t be fooled by the version number here, this is not your regular v12 MSSQL (aka 2014). The Azure flavours of MSSQL follow a different numbering scheme and are actually more like a mixture of 2017 and 2019 (as of today!).

And if you thought you could restore a DB from a backup (.bak) file, you can, but it will have to come from an Azure Blob Storage container, as SQLMI only understands that backup device type (a URL).
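A minimal sketch of that restore path, assuming the .bak has already been uploaded to a blob container and you have generated a SAS token for it (storage account, container, database name, credentials and the SAS secret are all placeholders):

$restoreTsql = @"
-- The credential name must match the container URL; the secret is the SAS token without the leading '?'
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2018-03-28&sig=...';

-- SQLMI only accepts URL as a backup device
RESTORE DATABASE [MyDb]
FROM URL = 'https://mystorageacct.blob.core.windows.net/backups/MyDb.bak';
"@

Invoke-Sqlcmd -ServerInstance "mymanagedinstance.abc123xyz.database.windows.net" -Database "master" -Username "sqladmin" -Password "xxxxxxxx" -Query $restoreTsql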

Or use DMS https://docs.microsoft.com/en-us/azure/dms/tutorial-sql-server-to-managed-instance

 

Performance

Because SQLMI is managed and under an SLA, all databases are fully logged and log writes are throttled (https://blogs.msdn.microsoft.com/sqlcat/2018/07/20/storage-performance-best-practices-and-considerations-for-azure-sql-db-managed-instance-general-purpose/) to ensure the cluster can ship and replay the logs in a reasonable amount of time. For more intensive operations, such as inserting millions of records, this throttling can noticeably slow things down.

 

The answer to that is: over-allocate storage! Behind the DB files are blobs with various IOPS capabilities, and just like managed disks, the IOPS you get for your DB files come with the amount of storage allocated. The size tiers break at 512 GB, 1,024 GB, 2,048 GB and so on, so even if you have a small DB you might want to provision more space on the instance and grow your data (and log) files to their maximum right away; you pay for it after all! More on premium disks here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage#scalability-and-performance-targets
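Here is a hedged sketch of that pre-growing idea: check the logical file names first, then grow the data and log files to the top of their size tier (database name, file names and target sizes are placeholders):

$growTsql = @"
-- Check the logical file names and current sizes first
SELECT name, type_desc, size * 8 / 1024 AS size_mb FROM sys.database_files;

-- Grow data and log files up front so they land in a higher-IOPS blob tier
ALTER DATABASE [MyDb] MODIFY FILE (NAME = 'MyDb', SIZE = 512GB);
ALTER DATABASE [MyDb] MODIFY FILE (NAME = 'MyDb_log', SIZE = 512GB);
"@

Invoke-Sqlcmd -ServerInstance "mymanagedinstance.abc123xyz.database.windows.net" -Database "MyDb" -Username "sqladmin" -Password "xxxxxxxx" -Query $growTsql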

 

Tip! Remember that tempDB is not fully logged and lives on local SSD for the Business Critical version of SQLMI. Use that tier if you want untethered speed.

Also of interest, a script to measure IOPS: https://github.com/dimitri-furman/managed-instance/blob/master/MI-GP-storage-perf/MI-GP-storage-perf.sql

 


Azure Cloud, IOPS, DTU and VM equivalence, PaaS vs IaaS MSSQL

I am building the next generation of Data Warehousing for a client, and while the sky is the limit in the cloud, it doesn’t mean they can afford to spend without limits. Millions of records (25 million) need to be analyzed to produce meaningful reports and dashboards.

Cranking the power all the way up gives us the information we need in seconds. The highest database tier offered is something called P11. But what is behind P11? And did I mention this PaaS option was expensive?

First, it is not readily translatable into regular machines, as the various tiers use a mix of compute, memory and disk IOPS; as per Microsoft, it is “a blended measure of CPU, memory, and data I/O and transaction log I/O…”. This blended unit is the Database Transaction Unit, aka DTU.

Based on the performance trending it comes to something like this (courtesy of What the heck is a DTU?):

Number of cores | IOPS | Memory | DTUs | Service Tier | Comparable Azure VM size
1 core, 5% utilization | 10 | ??? | 5 | Basic | Standard_A0, barely used
<1 core | 150 | ??? | 100 | Standard S0-S3 | Standard_A0, not fully utilized
1 core | up to 4000 | ??? | 500 | Premium – P4 | Standard_DS1_v2
2-3 cores | up to 12000 | ??? | 1000 | Premium – P6 | Standard_DS3_v2
4-5 cores | up to 20000 | ??? | 1750 | Premium – P11 | Standard_DS4_v2

Notice that this is only a comparable size… and that a P11 is about CAD$15,000/month!!! And while PaaS is great, the price tag is a little too much for the common mortals getting into DWH.

Anyhow, trying to recreate a cheaper P11, I went and picked the E8s_v3 VM size, because it is described as “Great for relational database servers, medium to large caches, and in-memory analytics”. More sizes here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes

Size | vCPU | Memory (GiB) | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max NICs / Expected network bandwidth (Mbps)
Standard_E8s_v3 | 8 | 64 | 128 | 16 | 16,000 / 128 (200) | 12,800 / 192 | 4 / 4,000

And so, to host my data, I am adding the best disk I can find on Azure:

This one, because it is closest to the size I need:
A P20 disk is 512 GB; it achieves 2,300 IOPS and up to 150 MB/s.

When I could have gone for the highest:
A P30 disk is 1,024 GB; it achieves 5,000 IOPS and up to 200 MB/s, and it only comes in that 1 TB size.

The key thing* to understand is that the VM tier limits the IOPS and throughput of whatever you attach to it.

And so here, while my disk is rated at 2,300 IOPS and 150 MB/s, the machine specification is going to cap me at 16,000 IOPS (no problem here) but only 128 MB/s, which is OK because the P20 only delivers 150 MB/s anyway.

Once the VM is brought online and the disk attached, a quick benchmark matches the given specifications:

[Screenshot: disk benchmark results on the Azure VM]

 

 

135 MB/s for 150 MB/s advertised…

But wait, is this fast? My laptop gives me way more

[Screenshot: disk benchmark results on my laptop]
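For reference, this kind of number is easy to reproduce with Microsoft’s DiskSpd tool; a quick sketch, assuming the data disk is mounted as F: and diskspd.exe has been downloaded to the VM:

# 60-second, 64 KB random read test against a 10 GB test file, with caching disabled
.\diskspd.exe -c10G -d60 -r -w0 -t4 -o32 -b64K -Sh -L F:\disktest.dat

# Sequential 512 KB reads approximate the MB/s throughput ceiling
.\diskspd.exe -c10G -d60 -w0 -t2 -o8 -b512K -Sh -L F:\disktest.dat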

How to attach faster disks to my VMs then?  What if I want to create a VHD with 600 GB and 400 MB/s of throughput?

You will not obtain such throughput if you just create a 600 GB VHD, because Azure will create a 600 GB VHD on a P30 Disk, and then you will have only 200 MB/s.

To achieve that, you should use striping, and you can go about it in different ways:

  • Create 2 x 600 GB VHDs. Azure will create them using P30 disks. Then use your striping tool (Storage Spaces) to create a 1,200 GB volume. This volume will permit 400 MB/s and 10,000 IOPS, but in this case you will have 600 unneeded GB.
  • Create 3 VHDs of 200 GB each. Azure will create them using P20 disks. Then use your striping tool (Storage Spaces) to create a 600 GB volume. This volume will permit 450 MB/s (150 MB/s x 3) and 6,900 IOPS (2,300 IOPS x 3). A Storage Spaces sketch follows below.
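A minimal Storage Spaces sketch for the second option, run inside the VM after attaching the three data disks (pool, disk and volume names are placeholders):

# Gather the attached data disks that are not yet in a pool
$disks = Get-PhysicalDisk -CanPool $true

# Create a simple (striped) space across the disks, one column per disk
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataStripe" -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize

# Initialize, partition and format the new volume
$disk = Get-VirtualDisk -FriendlyName "DataStripe" | Get-Disk
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data" -AllocationUnitSize 65536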

Wait, I need more! What if I want to create a VHD with 600 GB and 600 MB/s of throughput? Unfortunately, we can’t just dream and ask Azure to do it, at least not today. In fact, the maximum throughput possible here is 512 MB/s; we can’t do better.

OK. You have created the striped volumes and you are still not getting what you want? Remember the key thing I mentioned above*? The total data storage, the IOPS and the throughput are limited by the VM series and size. Each Azure Virtual Machine type is limited to a number of disks (total storage size), a maximum IOPS and a maximum throughput. For example, you may achieve 128 MB/s only on a Standard_E8s_v3; all other VM types will throttle your IOPS or throughput when you reach their threshold.

While I was looking at the memory optimized server sizes, note that there are also storage optimized VM sizes:

Size | vCPU | Memory (GiB) | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max NICs / Expected network bandwidth (Mbps)
Standard_E8s_v3 | 8 | 64 | 128 | 16 | 16,000 / 128 (200) | 12,800 / 192 | 4 / 4,000
Standard_L8s | 8 | 64 | 32 | 32 | 40,000 / 400 | 10,000 / 250 | 4 / 8,000

And so, beyond the disk creation steps above, you will also want to pick a VM size that can deliver the throughput you need.

#azure, #database, #dtu, #iaas, #mssql, #paas, #sql

After a migration to Exchange Online, the Exchange Autodiscover SCP got in the way

There was once a customer who was on Exchange 2013 on-prem for their email needs; at some point they decided (thanks to Microsoft’s enticing pricing) to go with Office 365 and migrate all of their mailboxes and email workloads to the cloud.

After the Exchange servers were taken out of the equation, users and new Outlook setups started seeing some unwelcome error/warning messages:

[Screenshot: certificate mismatch warning in Outlook]

And so, while moving to Office 365, all of the DNS entries (internal and external) for autodiscover had been changed to point to autodiscover.outlook.com, but the old name (autodiscover.contoso.com, in theory the on-prem Exchange CAS) would still be around and generate the message. So where would this on-prem reference come from?

When it was on-prem, Exchange had been configured to add entries to Active Directory, in this case a Service Connection Point (SCP) object, and as domain-joined machines try to auto-configure Outlook, they end up finding that older reference.

[Diagram: the Autodiscover functional process]

The solution is to remove this from AD! There are multiple ways to prevent Outlook from contacting the local Exchange server first…

Using Exchange Management Shell (EMS)
The preferred way is to use the Exchange Management Shell to clear the entry for the Client Access server from the SCP.

[PS] C:\Windows\system32>Get-ClientAccessServer | fl *uri*
AutoDiscoverServiceInternalUri : https://webmail.contoso.com/autodiscover/autodiscover.xml
AutoDiscoverServiceInternalUri : https://webmail.contoso.com/autodiscover/autodiscover.xml
[PS] C:\Windows\system32>Set-ClientAccessServer -Identity cas-ex1 -AutoDiscoverServiceInternalUri $null
[PS] C:\Windows\system32>Set-ClientAccessServer -Identity cas-ex2 -AutoDiscoverServiceInternalUri $null

ADSIEdit
If the above method can no longer be used, a low-level AD editor such as ADSI Edit can be used to remove the SCP manually. The full path of the SCP is:

CN=ServerName,CN=Autodiscover,CN=Protocols,CN=ServerName,CN=Servers,CN=Exchange Administrative Group (FGH124FG788IYF),CN=Administrative Groups,CN=OrganizationName,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=DomainName,DC=Suffix

[Screenshot: the SCP object in ADSI Edit]

The object to remove has the class type serviceConnectionPoint.
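If you prefer scripting over ADSI Edit, here is a hedged sketch using the ActiveDirectory module to locate the Autodiscover SCP objects; review what comes back before removing anything:

# Find the Autodiscover SCP objects published by Exchange in the configuration partition
Import-Module ActiveDirectory
$configNC = (Get-ADRootDSE).configurationNamingContext
$scps = Get-ADObject -SearchBase $configNC -LDAPFilter "(objectClass=serviceConnectionPoint)" | Where-Object { $_.DistinguishedName -like "*CN=Autodiscover,CN=Protocols*" }

# Review them first, then uncomment the removal
$scps | Format-List Name, DistinguishedName
# $scps | Remove-ADObject -Confirm:$false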

Some useful tools to help find autodiscover issues:
SaRA (Microsoft Support and Recovery Assistant): https://diagnostics.office.com
Some reference from TechNet (yes, I still call it TechNet): https://technet.microsoft.com/en-us/library/bb124251.aspx
More about SCPs: https://msdn.microsoft.com/en-us/library/office/dn467397(v=exchg.150).aspx

#exchange, #office-365, #outlook

Copy Windows features from one server to another

Sometimes you want to create an (almost) identical server in a place where you do not yet have Chef or CF or some sort of DSC. The best resort is to use what you have: Get-WindowsFeature.

Imagine you want to configure Server B from Server A, and obviously these are Microsoft Windows servers…

#On server A
#export features
> Get-WindowsFeature | ? { $_.Installed -AND $_.SubFeatures.Count -eq 0 } | Export-Clixml .\serverA.xml
#copy the feature file over
> cp .\serverA.xml '\\serverB\c$\Files'

#On server B
PS C:\Files> ls
Directory: C:\Files
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a—- 6/8/2017 4:15 PM 510824 ServerA.xml
PS C:\Files> Import-Module Servermanager
PS C:\Files> Import-Clixml .\ServerA.xml | Add-WindowsFeature
Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True Yes SuccessRest… {Application Server, .NET Framework 4.5, W…
WARNING: You must restart this server to finish the installation process.

Voila, another posh timesaver.

#clone, #get-windowsfeature, #powershell, #server, #windows-2

SCCM2012 (R2) new application creation fails

I had recently migrated my DBs from one volume to another due to space concerns; all was successful and life was good 🙂

However, the other day I wanted to create a new application and got the following “unknown” error:

The SMS Provider reported an error connecting to the ConfigMgr site database server. Verify that the SQL Server is online and that ConfigMgr site server computer account is an administrator on the ConfigMgr site database server.

SmsAdminUI.log would show something equally unknown:

Description = "CSspConfigurationItem: SQL_ERROR";
File = "e:\\qfe\\nts\\sms\\siteserver\\sdk_provider\\smsprov\\sspconfigurationitem.cpp";
SQLMessage = "*** Unknown SQL Error!";

I scratched my head a few times and started DDGing, as the error was pretty self-explanatory, and found the following KB/blog entry.

Basically, after such a move of the DB file locations, the SQL TRUSTWORTHY setting gets reset and the db owner may change.

I hopped on the MSSQL instance and executed the following queries to save the day, well, at least to fix the creation of new packages:

ALTER DATABASE CM_CIE SET TRUSTWORTHY ON;
EXEC sp_changedbowner 'sa';

Then tried to create a new application and voilà…

#2012, #2012r2, #application, #creation, #error, #mssql, #package, #sccm, #system-center

Uninstall GP2010 and installation of GP2015

This document describes how to uninstall GP2010 and install GP2015.

Prerequisite: local admin rights to uninstall and install software on the machine

  1. Uninstall the following GP2010 components:
    1. GP2010, Mekorma MICR 2010, Integration Manager for Microsoft Dynamics GP 2010, Dexterity Shared Components 11.0 (64-bit)
    2. Remove the following folders:
      1. C:\Program Files (x86)\Microsoft Dynamics\GP2010
      2. C:\Program Files (x86)\Common Files\microsoft shared\Dexterity
  2. Restart the computer.
  3. Install GP2015 (includes Dexterity 14) as usual.

Uninstall using WMIC

Note that Mekorma does not play nice with wmic or msiexec; it must be uninstalled manually.

wmic call | msiexec GUID
product where name="Microsoft Dynamics GP 2010" call uninstall /nointeractive | {DC90A0A6-2D90-493E-8D13-D54AD123B9FD}
product where name="Integration Manager for Microsoft Dynamics GP 2010" call uninstall /nointeractive | {FAFD8B80-E75F-4557-85F3-67B8D7A14E8F}
product where name="Dexterity Shared Components 11.0 (64-bit)" call uninstall /nointeractive | {F5459EB2-A662-4EB3-AD94-E771DC2F542A}
product where name="Mekorma MICR 2010" call uninstall /nointeractive | {A45282DB-59DC-4A5D-9E1F-08A225D81A44}

To run on several nodes at the same time:
wmic:root\cli>/failfast:on /node:@"c:\temp\trainingwks.txt" product where name="Microsoft Dynamics GP 2010" call uninstall /nointeractive
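If you would rather stay in PowerShell than wmic, a hedged equivalent using the Win32_Product WMI class (same product names as above; note that querying Win32_Product is slow and triggers an MSI consistency check, so use it sparingly):

# Uninstall one product by display name; repeat for the other GP 2010 components
Get-WmiObject -Class Win32_Product -Filter 'Name="Microsoft Dynamics GP 2010"' | ForEach-Object { $_.Uninstall() }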

#dynamics, #gp, #gp2010, #gp2015

Managing Certificates using Powershell

Because of my recent work with ADFS, I was looking for a way to automate most of the certificate configuration through scripts. The run-books I write would usually include the use of the MMC and a bunch of screenshots to accompany them.

The answer is that PowerShell management for certificates is there, and here are some examples:

 

#Powershell exposes certs under cert:\
PS C:\> Get-PSDrive
Name Used (GB) Free (GB) Provider Root CurrentLocation
—- ——— ——— ——– —- —————
A FileSystem A:\
Alias Alias
C 14.37 45.29 FileSystem C:\
Cert Certificate \
D FileSystem D:\
Env Environment
Function Function
HKCU Registry HKEY_CURRENT_USER
HKLM Registry HKEY_LOCAL_MACHINE
Variable Variable
WSMan WSMan
PS C:\> cd cert:
PS Cert:\> dir localmachine
Name : TrustedPublisher
Name : ClientAuthIssuer
Name : Remote Desktop
Name : Root
Name : TrustedDevices
Name : CA
Name : REQUEST
Name : AuthRoot
Name : TrustedPeople
Name : My
Name : SmartCardRoot
Name : Trust
Name : Disallowed
Name : AdfsTrustedDevices

#Browsing through the stores is pretty intuitive
PS Cert:\> dir Cert:\LocalMachine\My
Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint Subject
———- ——-
E31234DEF282437D167A64FD812342B650C20B42 CN=XXXXa
8912343319B07131C8FD1234E250DC67CBE08D7A CN=XXXX
69AD2C21912340919D186503631234A6F0BE9F7F CN=*.xxx.ca,XXX..

#Exporting a cert is something a little less intuitive
PS Cert:\> $ExportCert = dir Cert:\LocalMachine\Root | where {$_.Thumbprint -eq "892F212349B07131C12347F8E250DC67CBE08D7A"}
PS Cert:\> $ExportCryp = [System.Security.Cryptography.X509Certificates.X509ContentType]::pfx
PS Cert:\> $ExportKey = 'pww$@'
PS Cert:\> $ExportPFX = $ExportCert.Export($ExportCryp, $ExportKey)
PS Cert:\> [system.IO.file]::WriteAllBytes("D:\Temp\CertToExportPFXFile.PFX", $ExportPFX)

#same mess for importing

#1. Define the cert file to import
$CertFileToImport = "D:\Temp\CertToImportPFXFile.PFX"

#2. Define the password that protects the private key
$PrivateKeyPassword = 'Pa$$w0rd'

#3. Target the cert that needs to be imported
$CertToImport = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $CertFileToImport,$PrivateKeyPassword

#4. Define the scope and certificate store within that scope to import the certificate into
#   (available cert store scopes are "LocalMachine" or "CurrentUser")
$CertStoreScope = "LocalMachine"

#5. For available cert store names, see the LocalMachine store listing above (it depends on the cert store scope)
$CertStoreName = "My"
$CertStore = New-Object System.Security.Cryptography.X509Certificates.X509Store $CertStoreName, $CertStoreScope

#6. Import the targeted certificate into the specified cert store name of the specified cert store scope
$CertStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$CertStore.Add($CertToImport)
$CertStore.Close()

For import/export, I’d recommend using code from here: http://poshcode.org/?lang=&q=import%2Bcertificate
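On more recent systems (Windows 8 / Server 2012 and later) you can also lean on the built-in PKI module instead of the raw .NET calls; a quick sketch, where the thumbprint, paths and password are placeholders:

# Export a machine certificate to PFX
$pfxPassword = ConvertTo-SecureString -String 'Pa$$w0rd' -AsPlainText -Force
Get-ChildItem Cert:\LocalMachine\My\E31234DEF282437D167A64FD812342B650C20B42 | Export-PfxCertificate -FilePath "D:\Temp\CertToExportPFXFile.PFX" -Password $pfxPassword

# Import it back into a store
Import-PfxCertificate -FilePath "D:\Temp\CertToExportPFXFile.PFX" -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword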

 

#certificate, #certs, #manage, #pfx, #stores