SaaS application integration: what do you need to ask?

Don’t let your application govern you. When selecting a SaaS (software-as-a-service) application, ask the prospective vendor the right questions.

What is your disaster recovery plan?
Most SaaS vendors have a disaster recovery plan, but not all plans are created equal. Some mistakenly believe taking regular backups constitutes disaster recovery.
Make sure your SaaS vendor has a solid plan that covers a recovery timeline, routine testing, and geographic isolation. In other words, if there is an earthquake, is that going to wipe out all of your data centres?
If the SaaS vendor doesn’t provide recovery services, do they allow your IT to access the data and back it up?

What if you go out of business? What is your data-out policy?
Often we think of catastrophic events in the form of natural disasters, but a vendor going out of business can do just as much damage. Having access to that data is non-negotiable no matter what happens outside your control.
There must be a data-out clause in your contract that allows you access to your data at any time, especially if their business is not going well.

Do you take my security seriously?
In the digital world, security is paramount, and it doesn’t take much for your company to end up on the front page of the news.
If you find it difficult to know which security features are most important, bring in your IT department for guidance.

You will want to find an answer to the following:
Residency of the data – under which country’s or province’s legislation will your data be governed?
Encryption of data – data stored with the vendor must be encrypted with a key you control, so that only you can view it. Ask about their zero-trust policy.
Secure data transmission and storage – data needs to be secured not only at rest but also in transit between users and systems. Ask about the protocols in place.
Access restrictions – who has access to what? Whether it is access to the data itself or to the data centres hosting it, make sure you are comfortable with who is around your data.
Staff training and sensitive-information handling – if given access, will support or maintenance staff know what to do should they stumble upon sensitive information? Most vendors train their staff and bind them contractually not to mishandle or leak your data.
Secure practices and certifications – the industry has various certifications that vendors can achieve to show their business is secure. Demand those!
Regular monitoring and scanning – similarly, you might want to know the result of their last security penetration test. How did they score? Are they transparent? Will they allow your IT team to perform their own tests?

How scalable is your product?
It is one thing to watch a flawless demo or run through a proof of concept without a glitch. But can the application withstand what the real world throws at it? Unfortunately, it is tough to know the answer to this until the real world happens.
For example, if one of the service provider’s other clients executes a huge project, is that going to negatively impact your performance? It is smart, and absolutely appropriate, to inquire how well the vendor can scale their product to meet demand, and how quickly.

There must be a strong commitment to clearly stated SLAs and SLOs. The vendor should clarify the following:
Response time in case of emergency (service interruption) – when things go down, how fast can they respond or provide you with a status? What guarantees do they offer that the system will be available to you and your team?
Response time for non-urgent questions or problems (utilization or configuration) – even when it is not critical, you want to know that your business is a priority to them.
Is there a guaranteed resolution time? If so, what is it?
What is the support escalation ladder (what are the various levels and their roles)? – when the SLAs and SLOs above are defined, the escalation path should be well known. A lack of transparency is a bad sign.
Are SLA breaches backed by penalties? You are paying money for this; how about getting some back should the promised service not be available?

Do You Offer Robust Integration?
It is easy to forget that, to ensure productivity, new systems need to be well integrated: integration enables automation and ease of use, which drive adoption. In general, think about all of the interfaces this application will have, so as to require as little human intervention as possible.

Single Sign-On and authentication – the vendor must support logging in with your existing credentials through platforms like Azure Active Directory (SAML 2.0).
Automated user provisioning – as with all systems, the user base needs to be created and easily maintained, ideally with seamless provisioning from platforms like Azure Active Directory (SCIM).
Document storage within Office 365 (Teams/SharePoint Online) – ease of access and security are best when documents don’t need to leave your corporate boundaries.
If the app is part of any financial systems, can it speak that language and present the information so that the recipient system can process it?
Also, most organizations have some Business Intelligence data lake where the data is processed to create funky reports and dashboards. It is very important for your executive team to be able to view key scorecards in that single system. Can the vendor allow pull access from something like Cognos Analytics? Can it export the relevant data to another location?

Azure SQL Managed Instance setup and performance thoughts

I am not going to describe here what SQL Managed Instances (SQLMI) are or how they compare to the other SQL offerings on Azure, but if you need to know more about MI, this is a great starting place. Either way, it is a very viable and cool option for hosting databases in Azure.

Also, if you are looking to test this out and don’t want to integrate the MI into an existing vnet, you can look at this quick-start template.


Getting a SQL MI ready

This said, getting an MI installed is not like any other resource or PaaS offering; you will need to:

  1. Configure the Virtual Network where the Managed Instance will be placed.
  2. Create a route table that enables the Managed Instance to communicate with the Azure Management Service.
  3. Optionally, create a dedicated subnet for the Managed Instance (or use the default one created with the Virtual Network).
  4. Assign the route table to the subnet.
  5. Double-check that you have not added anything that might cause problems.


The vnet is the network container for all your stuff. This said, the MI must not be on a subnet that contains anything else, so creating a subnet dedicated to all of your MIs makes sense. There cannot be any Service Endpoints attached to the subnet either. If you want only one subnet in your Virtual Network (the Virtual Network blade will let you define a first subnet called default), know that a Managed Instance subnet must have between 16 and 256 addresses; therefore, use subnet masks /28 to /24 when defining the IP range for that default subnet. If you know how many instances you will have, make sure the subnet has at least 2 addresses per instance plus 5 system addresses.
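The sizing rule above is easy to sanity-check in a few lines of shell (the instance count here is a made-up example):

```shell
# Rule of thumb from above: 2 addresses per instance + 5 system addresses,
# and the MI subnet must hold 16 (/28) to 256 (/24) addresses.
instances=4                       # hypothetical number of instances
needed=$((2 * instances + 5))     # 13 addresses needed
for mask in 28 27 26 25 24; do
  size=$((2 ** (32 - mask)))
  if [ "$size" -ge "$needed" ]; then
    echo "/$mask gives $size addresses"   # smallest mask that fits
    break
  fi
done
```

For four instances, a /28 (16 addresses) is already enough; grow the mask only as your instance count does.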


The route table allows the MI to talk to the Azure Management Service. This is required because the Managed Instance is placed in your private Virtual Network; if it cannot communicate with the Azure service that manages it, it becomes inaccessible. Add a new “Route table” resource and, once it is created, go to the Routes blade and add a route with next hop “Internet”. This route enables the Managed Instances placed in your Virtual Network to communicate with the Azure Management Service that manages the instance. Without it, the Managed Instance cannot be deployed.
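The same portal steps can be scripted. A minimal sketch with the Az PowerShell module; the resource group, location, vnet, subnet names, and address prefix below are all placeholders:

```powershell
# Create the route table and the 0.0.0.0/0 -> Internet route the MI needs
$rg  = "my-rg"              # placeholder resource group
$loc = "canadacentral"      # placeholder location
$rt = New-AzRouteTable -Name "mi-route-table" -ResourceGroupName $rg -Location $loc
Add-AzRouteConfig -RouteTable $rt -Name "ToAzureManagement" `
    -AddressPrefix 0.0.0.0/0 -NextHopType Internet | Set-AzRouteTable

# Assign the route table to the MI subnet
$vnet = Get-AzVirtualNetwork -Name "mi-vnet" -ResourceGroupName $rg
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mi-subnet" `
    -AddressPrefix "10.0.0.0/24" -RouteTable $rt | Set-AzVirtualNetwork
```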


Rules for the subnet

  • A Managed Instance route table (with a next hop Internet route) is assigned to the subnet.
  • The subnet has between 16 and 256 IP addresses (masks /28 to /24).
  • There are no Network Security Groups on the subnet.
  • There are no service endpoints on the subnet, and the Virtual Network has Service Endpoints disabled.
  • There are no other resources in the subnet.


More on configuring your SQLMI on MSDN (or whatever the new name is:) )


Access and restores

And then what, you ask? You need to connect and play with the database stuff.

SQLMIs are private by default, and the easy way in is to connect from a VM within the same vnet using SSMS, or from your app running next to the SQLMI; the usual architecture stuff, right?

But wait, there are more scenarios here!




Don’t be fooled by the version number here: this is not your regular v12 MSSQL (aka 2014). The Azure MSSQL flavours follow a different numbering and are actually more like a mixture of 2017 and 2019 (as of today!).

And if you thought you could restore a DB from a backup (.bak) file: you can, but it will have to be from a blob storage container, as SQLMI only understands that device type.
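For the record, such a restore looks roughly like this. The storage account, container, SAS token, and database name are all placeholders; the credential name must match the container URL:

```sql
-- Credential holding a SAS token for the backup container
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token-without-leading-question-mark>';

-- Restore straight from the blob container (the only device type SQLMI accepts)
RESTORE DATABASE [MyDb]
FROM URL = 'https://mystorageacct.blob.core.windows.net/backups/MyDb.bak';
```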

Or use DMS (the Azure Database Migration Service).



Because SQLMI is managed and under SLA, all databases are fully logged, with throttling in place to ensure the cluster can ship and replay the logs in a reasonable amount of time. When doing more intensive operations, such as inserting millions of records, this can slow things down.


The answer to that is: over-allocate storage! Indeed, behind the DB files are blobs with various IOPS capabilities, and just like managed disks, the IOPS for your DB files come with the amount of storage. The tiers are up to 512 GB; 513–1024; 1025–2048… so even if you have a small DB, you might want to provision more space on the instance and grow your DB (and log) files to the maximum right away; you pay for it after all! More on premium disks here:
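Pre-growing the files is plain T-SQL. A sketch with hypothetical file and database names; check your actual logical file names with sys.database_files first:

```sql
-- List the logical file names and current sizes
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Pre-grow data and log files to the top of the storage tier you are paying for
ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'data_0', SIZE = 512GB);
ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'log',    SIZE = 128GB);
```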


Tip! Remember that tempdb is not fully logged and lives on local SSD in the Business Critical tier of SQLMI. Use that tier if you want untethered speed.

Also of interest: a script to measure IOPS.


Netflix and OpenDNS are not friends – Netflix support sucks

For some reason I had opted not to let the Netflix app on my Android TV update itself, and one day, along with an Android update, it decided it was going to use 4.0.4 build 1716 instead.

Then Netflix would not load any video, getting stuck at around 25% with the lovely error tvq-pm-100 3.1-52.

Life after work ended. The evening entertainment was ruined forever and I was not the same again. The downward spiral was inevitable.

I chatted with Netflix and spent hours on the phone going through the meaningless scripted troubleshooting: had I restarted my TV box? Logged off and back on? Cleared the cache? Reset the appliance? Nothing. I was on the verge of video deprivation.

The most intriguing aspect was that the competent Netflix staff would say: well, since it is not us, it must be your network provider. Yet they were not able to say what getting stuck at 25% could mean. Where are the good old logs telling you what is going on when you need them?

I then read on a forum that the Netflix Android TV app relies on Google DNS to geo-triangulate you and spy on you.

In order to protect my household, I had long ago opted for OpenDNS to block DoubleClick and the other webspam of the universe, without any issue in previous versions of Netflix.

In the end, changing the DNS settings on that Android TV to use Google’s infamous DNS got Netflix videos loading at lightning speed, and that very same Android TV box could again spy on me at will.

Thanks to Google’s sneakiness the end of the world was avoided.

#google, #netflix, #netflix-sucks, #opendns, #privacy, #support-sucks

Uninstall GP2010 and installation of GP2015

This document describes how to uninstall GP2010 and install GP2015.

Prerequisite: local admin rights to uninstall and install software on the machine

  1. Uninstall the following GP2010 components:
    1. GP2010, Mekorma MICR 2010, Integration Manager for Microsoft Dynamics GP 2010, Dexterity Shared Components 11.0 (64-bit)
    2. Remove the following folders:
      1. C:\Program Files (x86)\Microsoft Dynamics\GP2010
      2. C:\Program Files (x86)\Common Files\microsoft shared\Dexterity
  2. Restart the computer.
  3. Install GP2015 (includes Dexterity 14) as usual.

Uninstall using WMIC

Note that Mekorma does not play nice with wmic or msiexec; it must be uninstalled manually.

wmic calls (the trailing GUID is each product code, usable with msiexec /x as an alternative):
product where name="Microsoft Dynamics GP 2010" call uninstall /nointeractive {DC90A0A6-2D90-493E-8D13-D54AD123B9FD}
product where name="Integration Manager for Microsoft Dynamics GP 2010" call uninstall /nointeractive {FAFD8B80-E75F-4557-85F3-67B8D7A14E8F}
product where name="Dexterity Shared Components 11.0 (64-bit)" call uninstall /nointeractive {F5459EB2-A662-4EB3-AD94-E771DC2F542A}
product where name="Mekorma MICR 2010" call uninstall /nointeractive {A45282DB-59DC-4A5D-9E1F-08A225D81A44}
To run on several nodes at the same time:
wmic:root\cli>/failfast:on /node:@"c:\temp\trainingwks.txt" product where name="Microsoft Dynamics GP 2010" call uninstall /nointeractive

#dynamics, #gp, #gp2010, #gp2015

Managing Certificates using Powershell

Because of my recent work with ADFS, I was looking for a way to automate most of the certificate configuration with scripts. The usual run-books I write would include the use of the mmc and a bunch of screenshots to accompany them.

The answer is that PowerShell management for certificates is there, and here are some examples:


#Powershell exposes certs under cert:\
PS C:\> Get-PSDrive
Name Used (GB) Free (GB) Provider Root CurrentLocation
—- ——— ——— ——– —- —————
A FileSystem A:\
Alias Alias
C 14.37 45.29 FileSystem C:\
Cert Certificate \
D FileSystem D:\
Env Environment
Function Function
Variable Variable
PS C:\> cd cert:
PS Cert:\> dir localmachine
Name : TrustedPublisher
Name : ClientAuthIssuer
Name : Remote Desktop
Name : Root
Name : TrustedDevices
Name : CA
Name : AuthRoot
Name : TrustedPeople
Name : My
Name : SmartCardRoot
Name : Trust
Name : Disallowed
Name : AdfsTrustedDevices

#Browsing through the stores is pretty intuitive
PS Cert:\> dir Cert:\LocalMachine\My
Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint Subject
———- ——-
E31234DEF282437D167A64FD812342B650C20B42 CN=XXXXa
8912343319B07131C8FD1234E250DC67CBE08D7A CN=XXXX
69AD2C21912340919D186503631234A6F0BE9F7F CN=*,XXX..

#Exporting a cert is something a little less intuitive
PS Cert:\> $ExportCert = dir Cert:\LocalMachine\Root | where {$_.Thumbprint -eq "892F212349B07131C12347F8E250DC67CBE08D7"}
PS Cert:\> $ExportCryp = [System.Security.Cryptography.X509Certificates.X509ContentType]::pfx
PS Cert:\> $ExportKey = 'pww$@'
PS Cert:\> $ExportPFX = $ExportCert.Export($ExportCryp, $ExportKey)
PS Cert:\> [system.IO.file]::WriteAllBytes("D:\Temp\CertToExportPFXFile.PFX", $ExportPFX)

#same mess for importing

#Define the cert file to import
$CertFileToImport = "D:\Temp\CertToImportPFXFile.PFX"

#Define the password that protects the private key
$PrivateKeyPassword = 'Pa$$w0rd'

#Target the cert that needs to be imported
$CertToImport = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $CertFileToImport,$PrivateKeyPassword

#Define the scope and certificate store within that scope to import into
#Available cert store scopes are "LocalMachine" or "CurrentUser"
$CertStoreScope = "LocalMachine"

#For available cert store names, see the stores listed above (they depend on the scope)
$CertStoreName = "My"
$CertStore = New-Object System.Security.Cryptography.X509Certificates.X509Store $CertStoreName, $CertStoreScope

#Import the targeted certificate into the specified cert store
$CertStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$CertStore.Add($CertToImport)
$CertStore.Close()

For import/export, I’d recommend using code from here:
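On recent Windows versions (Server 2012 / Windows 8 and up) there are also dedicated cmdlets in the PKI module that make this much less messy; a sketch, where the thumbprint, paths, and password are placeholders:

```powershell
# Export a cert (with private key) to PFX using the built-in cmdlets
$pwd  = ConvertTo-SecureString 'Pa$$w0rd' -AsPlainText -Force
$cert = Get-ChildItem Cert:\LocalMachine\My\E31234DEF282437D167A64FD812342B650C20B42
Export-PfxCertificate -Cert $cert -FilePath D:\Temp\Exported.pfx -Password $pwd

# Import it back into a store
Import-PfxCertificate -FilePath D:\Temp\Exported.pfx `
    -CertStoreLocation Cert:\LocalMachine\My -Password $pwd
```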


#certificate, #certs, #manage, #pfx, #stores

ADFS Proxy Trust certificate on WAP doesn’t auto renew

Once upon a time, the Web Application Proxy (WAP) for ADFS started throwing errors.

The Remote Access Management console could not do much, complaining with the error “the operation stopped due to an unknown general error”, a really helpful message as always.

Looking at the logs, the WAP was also complaining about establishing its trust with the ADFS server.

Fair enough, the ADFS side was also complaining about the trust, saying that the proxy trust certificate had expired.

Back to the WAP, and sure enough, it was. However, from the GUI I could not find any way to recreate the trust and had to use my DuckDuckGo powers.

So I found that the wizard had to be tricked into reinitializing prior to doing anything:


We need to set the ProxyConfigurationStatus REG_DWORD to a value of 1 (meaning “not configured”) instead of 2 (“configured”). Once that change is made, re-open the GUI. No reboot is required.
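That registry flip can be done from PowerShell too; a sketch, assuming the commonly cited key location on the WAP server:

```powershell
# Flip WAP back to "not configured" (1) so the wizard can be re-run; 2 = configured
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\ADFS' `
    -Name ProxyConfigurationStatus -Value 1 -Type DWord

# Verify the change
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\ADFS' -Name ProxyConfigurationStatus
```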

The Remote Access Manager should now allow you to re-run the configuration wizard.

I still don’t know why it would not renew, but given that the trust certificate rolls over every 2 weeks, I will see soon enough.

#adfs, #certificate, #proxy, #wap

Viewing queues in Exchange 2013 with powershell

Now that Microsoft has changed all the GUI management, I struggled to locate the queue viewer. As it turns out, it is NOT part of the Exchange admin center (https://localhost/ecp). It is part of the Exchange Toolbox, which comes with your Exchange management package, and the queue viewer works like before.

But obviously one would prefer PowerShell to do so, right?

Get-Queue and Get-QueueDigest will be your friends. You will need to know your DAG first…


>Get-DatabaseAvailabilityGroup

Name             Member Servers                                      Operational Servers
----             --------------                                      -------------------
MY-DAG1          {MY-TOR-EX2, MY-TOR-EX1}

>Get-QueueDigest -Dag MY-dag1

GroupByValue   MessageCount   DeferredMessageCount   LockedMessageCount   StaleMessageCount   Details
------------   ------------   --------------------   ------------------   -----------------   -------
[]             227            0                      0                    0                   {MY-TOR-EX2\66427, MY-TOR-EX...
Submission     1              1                      0                    0                   {MY-TOR-EX2\Submission}

#dag, #exchange, #queue

graylog2 server not listening on ports 514 and 12201

I managed to get the graylog2 server v1.2.2 running with their virtual appliance.

Everything seemed to work just fine, except that the graylog server instance was not listening on the ports defined in graylog2.conf.

In netstat I could see the graylog java process associated with ports 12201 and 514, yet they were not in state LISTEN, and any log messages I sent to my machine on 12201 as GELF via UDP were not picked up.

I read the getting-started documentation for the setup from the bottom up again but could not find anything, until I hit this bit:

Message inputs are the Graylog parts responsible for accepting log messages. They are launched from the web interface (or the REST API) in the System -> Inputs section and are launched and configured without the need to restart any part of the system.

I added those from the System > Inputs screen, and boom, it started listening.

Oh well.

Looking for a good tutorial to set up graylog? Have a look there.

#514, #graylog, #log, #syslog, #udp

Lync Server 2013 Cumulative Update KB 2809243

Posting this because I am having a hard time finding it. It is KB 2809243.

And remember to do steps 1 and 2 as per the full instructions.

#cu, #cumulative-update, #lync, #skype-for-business

SQL execution took too long on vCenter – SQL execution took too long: INSERT INTO VPX_EVENT_ARG WITH (ROWLOCK)

One day I woke up to several warning alerts from Foglight saying it got disconnected from a vCenter and reconnected almost right away, and it then kept flipping on and off, making Foglight a bit spammy.

Looking at my vCenter logs, I found a repetitive, peculiar message about queries taking too long, even though the DB is undersized and the DB processes were responsive.

A few KB searches later, I read that the dbo.VPX_EVENT and dbo.VPX_EVENT_ARG tables were too large and had to be truncated (KB 2020507).

As per KB…

  1. Take a back-up of your current database. Do not skip this step.
  2. Use SQL Management Studio to connect to the vCenter Server database.
  3. Select New Query.
  4. Enter these queries in the new window and click Execute:

    Note: Execute each query separately and wait for the query to complete prior to running subsequent queries. Depending on the size of the tables, the process may take up to 15 minutes.

    • EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
    • USE name_of_the_vcdb
      TRUNCATE TABLE vpx_event_arg;
      DELETE FROM vpx_event;
    • EXEC sp_msforeachtable @command1="print '?'", @command2="ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"

Then everything went back to normal after a restart of all the services involved, even though the KB title did not seem very related to my case.

#execution, #sql, #vcenter, #vmware