Azure SQL Managed Instance setup and performance thoughts

I am not going to describe what SQL Managed Instances (SQLMI) are and how they compare to the other SQL offerings on Azure here; if you need to know more about MI, this is a great starting place: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance. Suffice it to say, it is a very viable and cool option to host databases in Azure.

Also, if you are looking to test this out and don’t want to integrate the MI into an existing VNet, you can look at this quick-start template: https://github.com/Azure/azure-quickstart-templates/tree/master/101-sql-managed-instance-azure-environment

 

Getting a SQL MI ready

That said, getting a MI deployed is not like any other PaaS resource, and you will need to:

  1. Configure the Virtual Network where the Managed Instance will be placed.
  2. Create a Route table that will enable the Managed Instance to communicate with the Azure Management Service.
  3. Optionally create a dedicated subnet for the Managed Instance (or use the default one that is created with the Virtual Network).
  4. Assign the Route table to the subnet.
  5. Double-check that you have not added anything that might cause a problem.

 

The VNet is the network container for all your stuff; that said, the MI must not sit on a subnet that has anything else on it, so creating a dedicated subnet for all of your MIs makes sense. There cannot be any Service Endpoints attached to the subnet either. If you want to have only one subnet in your Virtual Network (the Virtual Network blade lets you define a first subnet called default), you need to know that the Managed Instance subnet can have between 16 and 256 addresses. Therefore, use subnet masks /28 to /24 when defining the IP range for the default subnet. If you know how many instances you will have, make sure you have at least 2 addresses per instance plus 5 system addresses in the default subnet.

 

The route table will allow the MI to talk to the Azure Management Service. This is required because the Managed Instance is placed in your private Virtual Network, and if it cannot communicate with the Azure service that manages it, it becomes inaccessible. Add a new “Route table” resource, and once it is created, go to the Routes blade and add a route “0.0.0.0/0, Next hop: Internet”. This route will enable Managed Instances placed in your Virtual Network to communicate with the Azure Management Service that manages the instance. Without this, the Managed Instance cannot be deployed.
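
To make this repeatable, here is a minimal sketch of those steps with the Az PowerShell module; the resource group, names, region and address ranges are placeholders, and a newer deployment may have extra requirements (such as subnet delegation):

#A sketch, assuming the Az.Network module and an existing resource group
Import-Module Az.Network
$rg  = "rg-sqlmi"
$loc = "canadacentral"
#Route table with the mandatory 0.0.0.0/0 -> Internet route
$route = New-AzRouteConfig -Name "ToAzureManagement" -AddressPrefix "0.0.0.0/0" -NextHopType Internet
$rt    = New-AzRouteTable -Name "rt-sqlmi" -ResourceGroupName $rg -Location $loc -Route $route
#Dedicated /24 subnet (256 addresses) for the Managed Instances, with the route table assigned
$subnet = New-AzVirtualNetworkSubnetConfig -Name "sqlmi-subnet" -AddressPrefix "10.0.1.0/24" -RouteTable $rt
New-AzVirtualNetwork -Name "vnet-sqlmi" -ResourceGroupName $rg -Location $loc -AddressPrefix "10.0.0.0/16" -Subnet $subnet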

 

Rules for the subnet

  • You have the Managed Instance Route table assigned to the subnet.
  • There should be no Network Security Groups in your subnet.
  • There should be no service endpoints in your subnet.
  • There are no other resources in the subnet.

 

Altogether:

  • Virtual Network should have Service Endpoints disabled
  • Subnet must have between 16 and 256 IP addresses (masks from /28 to /24)
  • There should be no other resources in your Managed Instance subnet.
  • Subnet must have a route with 0.0.0.0/0 Next hop internet
  • Subnet must not have any Network Security Group
  • Subnet must not have any service endpoint

 

More on configuring your SQLMI on MSDN (or whatever the new name is:) ) https://blogs.msdn.microsoft.com/sqlserverstorageengine/2018/03/14/how-to-configure-network-for-azure-sql-managed-instance/

 

Access and restores

And then what, you ask? You need to connect and play with the database stuff?

SQLMI is private by default, and the easy way in is to connect from a VM within the same VNet using SSMS, or from your app running next to the SQLMI – the usual architecture stuff, right?

But wait, there are more scenarios here! https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-connect-app
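
For reference, a minimal connectivity test from a jumpbox VM inside the same VNet, assuming the SqlServer PowerShell module is installed; the instance FQDN and credentials are placeholders:

#A sketch: quick connectivity test from a VM in the same VNet
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "mymanagedinstance.abc123.database.windows.net" -Database "master" -Username "sqladmin" -Password "P@ssw0rd!" -Query "SELECT @@VERSION AS Version;"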

 

[sqlmi1: screenshot showing the version number reported by the instance]

 

Don’t be fooled by the version number here; this is not your regular v12 MSSQL (aka 2014). These Azure SQL flavours follow a different numbering and are actually more like a mixture of 2017 and 2019 (as of today!).

And if you thought you could restore a DB from a backup (.bak) file, you can, but it will have to be from a blob storage container, as SQLMI only understands that device type.
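
Here is a sketch of a native restore from a blob container; the storage account, SAS token, database and file names are placeholders, and the SAS secret must be pasted without the leading '?':

#A sketch: native restore from a blob container using a SAS credential (all names are placeholders)
$restoreTsql = @"
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = 'sv=2019-02-02&ss=b&srt=co&sp=rl&sig=...';
RESTORE DATABASE [MyDb]
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDb.bak';
"@
Invoke-Sqlcmd -ServerInstance "mymanagedinstance.abc123.database.windows.net" -Username "sqladmin" -Password "P@ssw0rd!" -Query $restoreTsql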

Or use DMS https://docs.microsoft.com/en-us/azure/dms/tutorial-sql-server-to-managed-instance

 

Performance

Because SQLMI is managed and under an SLA, all databases are fully logged, with throttling (https://blogs.msdn.microsoft.com/sqlcat/2018/07/20/storage-performance-best-practices-and-considerations-for-azure-sql-db-managed-instance-general-purpose/) in place so the cluster can ship and replay the logs in a reasonable amount of time. When doing more intensive operations, such as inserting millions of records, this can slow things down.

 

The answer to that is: over-allocated storage! Indeed, behind the DB files are blobs with various IOPS capabilities, and just like managed disks, the disk IOPS for your DBs come with the amount of storage. The size tiers break at 512, 1,024, 2,048 GB and so on, so even if you have a small DB, you might want to go with more space on the instance and grow your DB (and log) files to the tier maximum right away – you pay for it after all! More on premium disks here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage#scalability-and-performance-targets
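
A quick sketch of pre-growing the files to a tier boundary; the logical file names below are placeholders that you would read from sys.database_files:

#A sketch: pre-grow the data and log files to the next tier boundary (logical file names are placeholders)
$growTsql = @"
ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'MyDb_data', SIZE = 512GB);
ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'MyDb_log',  SIZE = 512GB);
"@
Invoke-Sqlcmd -ServerInstance "mymanagedinstance.abc123.database.windows.net" -Username "sqladmin" -Password "P@ssw0rd!" -Query $growTsql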

 

Tip! Remember that tempdb is not fully logged and lives on local SSD on the Business Critical tier of SQLMI. Use that one if you want untethered speed.

Also of interest, a script to measure IOPS: https://github.com/dimitri-furman/managed-instance/blob/master/MI-GP-storage-perf/MI-GP-storage-perf.sql

 


Netflix and OpenDNS are not friends – Netflix support sucks.

For some reason I had not let the Netflix app on my Android TV update, and one day, with an Android update, it decided it was going to use 4.0.4 build 1716 instead.

Then Netflix could not load any video, getting stuck at 25% with the lovely error tvq-pm-100 (3.1-52).

Life after work ended. The evening entertainment was ruined forever and I was not the same again. The downward spiral was inevitable.

I chatted with Netflix and spent hours on the phone going through the meaningless scripted troubleshooting: had I restarted my TV box? Logged off and back on? Cleared the cache? Reset the appliance? Nothing. I was on the verge of video deprivation.

The most intriguing aspect was that the competent Netflix staff would say: well, since it is not us, it must be your network provider. Yet they could not say what getting stuck at 25% might mean. Where are the good old logs telling you what is going on when you need them?

I then read on a forum that the Netflix Android TV app would rely on Google DNS to geo-triangulate you and spy on you.

To protect my household, I had long ago opted for OpenDNS to block DoubleClick and the other webspam of the universe, without any issues on previous versions of Netflix.

In the end, changing the DNS settings on that Android TV to use Google’s infamous DNS servers 8.8.8.8 and 8.8.4.4 got Netflix videos loading at lightning speed, and that very same Android TV box could again spy on me at will.

Thanks to Google’s sneakiness the end of the world was avoided.

#google, #netflix, #netflix-sucks, #opendns, #privacy, #support-sucks

Uninstall GP2010 and installation of GP2015

This document describes how to uninstall GP2010 and install GP2015.

Prerequisite: local admin rights to uninstall and install software on the machine

  1. Uninstall the following GP2010 components
    1. GP2010, Mekorma MICR 2010, Integration Manager for Microsoft Dynamics GP 2010, Dexterity Shared Components 11.0 (64-bit)
    2. Remove the following folders
      1. C:\Program Files (x86)\Microsoft Dynamics\GP2010
      2. C:\Program Files (x86)\Common Files\microsoft shared\Dexterity
  2. Restart the computer
  3. Install GP2015 (which includes Dexterity 14) as usual.

Uninstall using WMIC

Note that Mekorma does not play nice with wmic or msiexec and must be uninstalled manually.

wmic call (the trailing GUID is the product’s msiexec GUID):
product where name="Microsoft Dynamics GP 2010" call uninstall /nointeractive    {DC90A0A6-2D90-493E-8D13-D54AD123B9FD}
product where name="Integration Manager for Microsoft Dynamics GP 2010" call uninstall /nointeractive    {FAFD8B80-E75F-4557-85F3-67B8D7A14E8F}
product where name="Dexterity Shared Components 11.0 (64-bit)" call uninstall /nointeractive    {F5459EB2-A662-4EB3-AD94-E771DC2F542A}
product where name="Mekorma MICR 2010" call uninstall /nointeractive    {A45282DB-59DC-4A5D-9E1F-08A225D81A44}
To run on several nodes at the same time:
wmic:root\cli>/failfast:on /node:@"c:\temp\trainingwks.txt" product where name="Microsoft Dynamics GP 2010" call uninstall /nointeractive
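
If you prefer PowerShell to the wmic shell, a rough equivalent is sketched below; note that querying Win32_Product is slow and triggers an MSI consistency check, and the computer list and product names are simply taken from the commands above:

#A sketch: the same uninstalls via CIM from PowerShell
$computers = Get-Content "c:\temp\trainingwks.txt"
$products = "Microsoft Dynamics GP 2010",
            "Integration Manager for Microsoft Dynamics GP 2010",
            "Dexterity Shared Components 11.0 (64-bit)"
foreach ($computer in $computers) {
    foreach ($name in $products) {
        #Find the installed product on the remote node and call its Uninstall method
        Get-CimInstance -ClassName Win32_Product -ComputerName $computer -Filter "Name='$name'" |
            Invoke-CimMethod -MethodName Uninstall
    }
}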

#dynamics, #gp, #gp2010, #gp2015

Managing Certificates using Powershell

Because of my recent work with ADFS I was looking for a way to automate most of the certificate configuration with scripts. The run-books I write would usually include the use of the MMC and a bunch of screenshots to accompany them.

The answer is that PowerShell management for certificates is there, and here are some examples:

 

#Powershell exposes certs under cert:\
PS C:\> Get-PSDrive
Name           Used (GB)     Free (GB) Provider      Root                  CurrentLocation
----           ---------     --------- --------      ----                  ---------------
A                                      FileSystem    A:\
Alias                                  Alias
C                  14.37         45.29 FileSystem    C:\
Cert                                   Certificate   \
D                                      FileSystem    D:\
Env                                    Environment
Function                               Function
HKCU                                   Registry      HKEY_CURRENT_USER
HKLM                                   Registry      HKEY_LOCAL_MACHINE
Variable                               Variable
WSMan                                  WSMan
PS C:\> cd cert:
PS Cert:\> dir localmachine
Name : TrustedPublisher
Name : ClientAuthIssuer
Name : Remote Desktop
Name : Root
Name : TrustedDevices
Name : CA
Name : REQUEST
Name : AuthRoot
Name : TrustedPeople
Name : My
Name : SmartCardRoot
Name : Trust
Name : Disallowed
Name : AdfsTrustedDevices

#Browsing through the stores is pretty intuitive
PS Cert:\> dir Cert:\LocalMachine\My
Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint                                Subject
----------                                -------
E31234DEF282437D167A64FD812342B650C20B42  CN=XXXXa
8912343319B07131C8FD1234E250DC67CBE08D7A  CN=XXXX
69AD2C21912340919D186503631234A6F0BE9F7F  CN=*.xxx.ca,XXX..

#Exporting a cert is something a little less intuitive
PS Cert:\> $ExportCert = dir Cert:\LocalMachine\Root | where {$_.Thumbprint -eq "892F212349B07131C12347F8E250DC67CBE08D7A"}
PS Cert:\> $ExportCryp = [System.Security.Cryptography.X509Certificates.X509ContentType]::pfx
PS Cert:\> $ExportKey = 'pww$@'
PS Cert:\> $ExportPFX = $ExportCert.Export($ExportCryp, $ExportKey)
PS Cert:\> [system.IO.file]::WriteAllBytes("D:\Temp\CertToExportPFXFile.PFX", $ExportPFX)

#same mess for importing

# Define the cert file to import
$CertFileToImport = "D:\Temp\CertToImportPFXFile.PFX"

# Define the password that protects the private key
$PrivateKeyPassword = 'Pa$$w0rd'

# Target the cert that needs to be imported
$CertToImport = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $CertFileToImport,$PrivateKeyPassword

# Define the scope and certificate store within that scope to import the certificate into
# Available cert store scopes are "LocalMachine" or "CurrentUser"
$CertStoreScope = "LocalMachine"

# For available cert store names see the store listing above (it depends on the cert store scope)
$CertStoreName = "My"
$CertStore = New-Object System.Security.Cryptography.X509Certificates.X509Store $CertStoreName, $CertStoreScope

# Import the targeted certificate into the specified cert store name of the specified cert store scope
$CertStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$CertStore.Add($CertToImport)
$CertStore.Close()

For import/export, I’d recommend using code from here: http://poshcode.org/?lang=&q=import%2Bcertificate
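
For what it is worth, newer Windows versions also ship Import-PfxCertificate / Export-PfxCertificate in the PKI module, which wrap the same operations; a short sketch reusing the paths, thumbprint and password from the examples above:

#A sketch: the PKI module cmdlets on newer Windows versions
$pfxPassword = ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force
Export-PfxCertificate -Cert Cert:\LocalMachine\My\E31234DEF282437D167A64FD812342B650C20B42 -FilePath "D:\Temp\CertToExportPFXFile.PFX" -Password $pfxPassword
Import-PfxCertificate -FilePath "D:\Temp\CertToImportPFXFile.PFX" -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword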

 

#certificate, #certs, #manage, #pfx, #stores

ADFS Proxy Trust certificate on WAP doesn’t auto renew

Once upon a time, the Web Application Proxy (WAP) for ADFS started throwing errors.

The Remote Access Management console could not do much, complaining with the error “the operation stopped due to an unknown general error” – as always, a really helpful message.

Looking at the logs, the WAP was also complaining about establishing its trust with the ADFS server.

Fair enough, the ADFS server was also complaining about the trust, saying that the proxy trust certificate had expired.

Back to the WAP, and sure enough it had. However, from the GUI I could not find any way to recreate the trust and had to use my DuckDuckGo powers.

So I found that the wizard had to be tricked into reinitializing before doing anything, as in http://channel9.msdn.com/Events/MEC/2014/USX305

HKLM\Software\Microsoft\ADFS\ProxyConfigurationStatus

We need to set the ProxyConfigurationStatus REG_DWORD to a value of 1 (meaning “not configured”) instead of 2 (“configured”). Once that change is made, re-open the GUI. No reboot is required.
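
A one-liner sketch of that registry change in PowerShell, using the path and value from above:

#A sketch: flip the WAP configuration status back to "not configured" (1) before re-opening the console
Set-ItemProperty -Path "HKLM:\Software\Microsoft\ADFS" -Name "ProxyConfigurationStatus" -Value 1 -Type DWord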

The Remote Access Manager should now allow you to re-run the configuration wizard.

I still don’t know why it would not renew, but given that the trust certificate rolls over every 2 weeks, I will see pretty soon.

#adfs, #certificate, #proxy, #wap

Viewing queues in Exchange 2013 with powershell

Now that Microsoft has changed all the GUI management, I struggled to locate the queue viewer. As it turns out, it is NOT part of the Exchange admin center (https://localhost/ecp). The tool is part of the Exchange Toolbox, which you will find with your management package for Exchange, and the queue viewer works like before.

But obviously one would prefer PowerShell to do so, right?

Get-Queue and Get-QueueDigest will be your friends. You will need to know your DAG prior to that…

>Get-DatabaseAvailabilityGroup

Name             Member Servers                                      Operational Servers
----             --------------                                      -------------------
MY-DAG1         {MY-TOR-EX2, MY-TOR-EX1}

>Get-QueueDigest -Dag MY-dag1

GroupByValue                      MessageCount DeferredMessageCount LockedMessageCount StaleMessageCount Details
------------                      ------------ -------------------- ------------------ ----------------- -------
[10.77.77.12]                     227          0                    0                  0                 {MY-TOR-EX2\66427, MY-TOR-EX...
Submission                        1            1                    0                  0                 {MY-TOR-EX2\Submission}
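
And for the per-server detail, a small sketch reusing the server name from the DAG output above:

#A sketch: per-server view of the non-empty queues
Get-Queue -Server MY-TOR-EX2 | Where-Object { $_.MessageCount -gt 0 } | Format-Table Identity, Status, MessageCount, NextHopDomain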

#dag, #exchange, #queue

graylog2 server not listening on ports 514 and 12201

I have managed to get the Graylog2 server v1.2.2 running with their virtual appliance.

Everything seemed to work just fine, except that the Graylog server instance was not listening on the ports defined in graylog2.conf.

In netstat I could see the Graylog Java process associated with ports 12201 and 514, yet they were not in the LISTEN state, and any log messages I sent to my machine on 12201 as GELF via UDP were not picked up.

I read the getting-started documentation for the setup from the bottom up again but could not find anything.

Message inputs are the Graylog parts responsible for accepting log messages. They are launched from the web interface (or the REST API) in the System -> Inputs section and are launched and configured without the need to restart any part of the system.

I added those from the System > Inputs screen – boom – it started listening.

Oh well.

Looking for a good tutorial to set up Graylog? Have a look there.

#514, #graylog, #log, #syslog, #udp