Author: René Hézser

Renew AWS SessionToken and store values in Azure KeyVault

Why do you need this?

Using temporary session tokens sounds like a good way to, for example, import data from S3 with Azure Data Factory, as described in Copy data from Amazon Simple Storage Service (S3) – Azure Data Factory | Microsoft Docs. Azure Data Factory can use secrets stored in Azure KeyVault for authentication (see Store credentials in Azure Key Vault – Azure Data Factory | Microsoft Docs).

Anyway, whatever your use case is, you might want to use secrets stored in KeyVault to access AWS resources 🙂

Description of the solution

I’ve created a sample Azure Function that updates the session token every hour (or manually); it is available on GitHub.

Architecture overview (not pretty, but hopefully readable)

The sample code is available in this repository ReneHezser/RH-TokenRefresh-Function: This sample contains an Azure Function (actually two: one is called via Timer every hour, the other one is for manual trigger via HTTP) that uses an AWS user to create Session Tokens. (github.com).
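Conceptually, the function boils down to two calls: ask AWS STS for a session token, then write the three resulting values to KeyVault. Here is a minimal sketch (not the exact code from the repository; the vault URL and secret names are made up, and the AWSSDK.SecurityToken, Azure.Identity and Azure.Security.KeyVault.Secrets packages are assumed):

using System;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// exchange the permanent AWS user credentials for a temporary session token
var sts = new AmazonSecurityTokenServiceClient(
    Environment.GetEnvironmentVariable("AWS_ACCESS_KEY_ID"),
    Environment.GetEnvironmentVariable("AWS_SECRET_ACCESS_KEY"));
var response = await sts.GetSessionTokenAsync(new GetSessionTokenRequest { DurationSeconds = 3600 });

// store the temporary credentials in KeyVault, where e.g. Data Factory picks them up
var vault = new SecretClient(new Uri("https://my-vault.vault.azure.net"), new DefaultAzureCredential());
await vault.SetSecretAsync("aws-access-key-id", response.Credentials.AccessKeyId);
await vault.SetSecretAsync("aws-secret-access-key", response.Credentials.SecretAccessKey);
await vault.SetSecretAsync("aws-session-token", response.Credentials.SessionToken);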

Count provisioned devices by DPS

This post shows a way to find out how many IoT (Edge) devices have been provisioned by a specific enrolment group within the last x minutes.

The solution could be much simpler if I just wanted to know how many devices are registering themselves. In that case the built-in metrics are enough to get that information.

IoT Hub Metrics

The use case required a more sophisticated solution that is able to distinguish the tenants, which are identified by tags.

Solution Architecture

Device Provisioning Service

Different enrolment groups separate the devices in this scenario into tenants. To be able to identify the customers, an initial tag CustomerId is added to the enrolment group. It is then applied to the devices that are created by DPS in the IoT Hub.

{
  "tags": {
    "CustomerId": "AnotherCustomer"
  },
  "properties": {
    "desired": {}
  }
}

This tag can then be used e.g. for message enrichment. I’ve written previously about using it: https://www.hezser.de/blog/2020/05/13/properties-for-iot-messages-in-azure-stream-analytics/

The metrics from DPS did not allow me to distinguish the tags/customers. But IoT Hub will make them available and offers events for newly created devices.

IoT Hub

Within IoT Hub I created an event subscription that passes all necessary events on to an Event Hub.

Event Subscription in IoT Hub

The event will include the device twin, which has been prepopulated with the tags specified in the enrolment group.

Device Twin in IoT Hub

As seen in the architecture diagram, Event Grid has been connected to an Event Hub. #plugandplay 😉 It will fire an event with the documented schema: Azure IoT Hub and Event Grid | Microsoft Docs

Event Hub

Why the additional Event Hub? Event Grid cannot be used as input for an Azure Stream Analytics Job and Event Hub is the universal connector in this case.

You can use the smallest tier (Basic), as there are not a lot of events flowing through it. The default of two partitions is also fine.

Stream Analytics Job

I chose Stream Analytics for the further analysis of the events, because it offers an out-of-the box functionality for queries on time windows: Introduction to Azure Stream Analytics windowing functions | Microsoft Docs

As you can see, the query is pretty simple and can be adjusted easily.

Azure Stream Analytics Job Query
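A minimal sketch of such a query, assuming the Event Hub input is named [device-events], the output [provisioned-devices], and counting per customer over a 15-minute tumbling window:

SELECT
    data.twin.tags.CustomerId AS CustomerId,
    COUNT(*) AS ProvisionedDevices
INTO
    [provisioned-devices]
FROM
    [device-events]
WHERE
    eventType = 'Microsoft.Devices.DeviceCreated'
GROUP BY
    data.twin.tags.CustomerId,
    TumblingWindow(minute, 15)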

The example uses a blob storage as output, but you can just as well write to an Azure Function, or do whatever you want with the knowledge that one customer has onboarded lots of devices in a short period of time.

Azure IoT Edge on constrained devices

Introduction

In this post I would like to show some tweaks you can (and might need to) apply to influence the behavior of your IoT Edge device when it comes to message retention on devices with limited resources.

The setup of this scenario is not uncommon: one module retrieves telemetry from machines, another module parses it, and the messages are sent to an IoT Hub.

The problem

After a while the device stops sending data and is no longer accessible via SSH. The logs reveal lots of messages still in the queue.

Lots of messages in the queue in the edgeHub logs (lines like “Cleaned up messages from queue for endpoint iothub” and “messages from message store”)

But why? And how can I find out what causes the problem?
Spoiler: Disk full 🙁

Troubleshoot

Looking at logfiles helps a lot – if you have access to them. Fortunately, IoT Edge can expose data in the Prometheus exposition format for the edgeHub and edgeAgent. These endpoints are enabled by default in IoT Edge 1.0.10 (upgrade to this version if you haven’t) and can be enabled for 1.0.9.

The data can then be uploaded to Log Analytics for further analysis and to create alerts with a sample metrics-collector module.

For analysis and to display the metrics, you can use a Workbook in Azure Monitor.

Azure Monitor Workbook with edgeHub log extract

In this particular case I could see that the available disk space was going down, down, down until the whole device did not respond anymore (no SSH access possible, no data sent to Azure).

What to change?

Adding more space to the disk was not an option, so other solutions were needed. There are two settings I looked at and adjusted to better fit the usage scenario and resource limitation.

  1. The Time to live setting defines how long messages will be kept on the device: Operate devices offline – Azure IoT Edge | Microsoft Docs (the default is 2 h).
  2. The not so obvious RocksDB size configures the size of its log files: https://github.com/Azure/iotedge/issues/2431#issuecomment-582089419 (see the sketch after this list).
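To show where both knobs live, here is a sketch of a deployment manifest excerpt. RocksDB_MaxTotalWalSize is the environment variable discussed in the linked issue, and the values (128 MB log size, 1 h TTL) are examples only, not recommendations:

{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "systemModules": {
          "edgeHub": {
            "env": {
              "RocksDB_MaxTotalWalSize": { "value": "134217728" }
            }
          }
        }
      }
    },
    "$edgeHub": {
      "properties.desired": {
        "storeAndForwardConfiguration": {
          "timeToLiveSecs": 3600
        }
      }
    }
  }
}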

After tweaking the settings, the following graph shows that the device now cleans up data before the disk runs full.

I cannot give you values for your particular setup. You’ll need to figure them out depending on the amount of messages going through the Edge device and the hardware sizing. Here are some pointers to settings you might want to investigate if you hit a similar problem on your devices:

RocksDB sizes

The above image shows settings for RocksDB (orange: 512 MB, blue: 128 MB, green: 256 MB). With the default setting the device is running out of disk space.

What can I do to prevent the device crashing?

Well, it depends 🙂 You can find a setting from the above that will prevent a full disk for a known scenario. But what if you don’t know which modules with which settings are deployed?

In this case an alarm for low disk space is an option. It then needs to trigger a function that calls a method on the device to restart the edgeHub, which clears the cache.
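A sketch of what such a function could call, using the Microsoft.Azure.Devices service SDK and a made-up device id (RestartModule is a built-in direct method of the edgeAgent in recent IoT Edge versions):

using System;
using Microsoft.Azure.Devices;

// call the edgeAgent's built-in RestartModule direct method for the edgeHub module
var serviceClient = ServiceClient.CreateFromConnectionString(
    Environment.GetEnvironmentVariable("IOTHUB_CONNECTIONSTRING"));
var method = new CloudToDeviceMethod("RestartModule");
method.SetPayloadJson("{\"schemaVersion\": \"1.0\", \"id\": \"edgeHub\"}");
var result = await serviceClient.InvokeDeviceMethodAsync("my-edge-device", "$edgeAgent", method);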

Azure IoT Edge not starting

Sometimes a permission denied is a permission denied 🙁

[INFO] - Starting Azure IoT Edge Security Daemon
[INFO] - Version - 1.0.10~rc1
[INFO] - Using config file: /etc/iotedge/config.yaml
[INFO] - Configuring /var/lib/iotedge as the home directory.
[INFO] - Configuring certificates…
[INFO] - Transparent gateway certificates not found, operating in quick start mode…
[INFO] - Finished configuring provisioning environment variables and certificates.
[INFO] - Initializing hsm…
[INFO] - Finished initializing hsm.
[INFO] - Provisioning edge device…
[INFO] - Starting provisioning edge device via manual mode using a device connection string…
[INFO] - Manually provisioning device "iotedgedevice" in hub "iothub.azure-devices.net"
[INFO] - Finished provisioning edge device.
[INFO] - Initializing the module runtime…
[INFO] - Initializing module runtime…
[INFO] - Using runtime network id azure-iot-edge
[WARN] - Could not initialize module runtime
[WARN] - caused by: Container runtime error
[WARN] - caused by: error trying to connect: Permission denied (os error 13)
[ERR!] - The daemon could not start up successfully: Could not initialize module runtime
[ERR!] - caused by: Could not initialize module runtime
[ERR!] - caused by: Container runtime error
[ERR!] - caused by: error trying to connect: Permission denied (os error 13)

This is the output I got via journalctl -u iotedge -f on a test installation.

For troubleshooting purposes I went through the https://docs.microsoft.com/en-us/azure/iot-edge/troubleshoot guide, but nothing solved my problem. Then I disabled http and mqtt support as suggested by https://docs.microsoft.com/en-us/azure/iot-edge/production-checklist. Still not starting.

Finally I got it up and running by creating a docker group, adding the iotedge user to it, and changing the group ownership of the /var/run/docker.sock file:
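sudo groupadd docker
sudo usermod -aG docker iotedge
sudo chown root:docker /var/run/docker.sock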

This post is meant to be found via search engines if you (or I, again) run into the same startup problems.

My context: Ubuntu 20.04 with snap-installed Docker.

Properties for IoT Messages in Azure Stream Analytics

In this post I want to show how to use properties that are added to messages that IoT devices are sending to Azure IoT Hub in Stream Analytics. And while talking about properties, let’s even use message enrichment 🙂

Stream Analytics Architecture

Sample Message

The green properties will be added by the Message enrichment feature of IoT Hub, as the data is most likely not known on the IoT device or does not need to be transferred with each message.

{
  "body": {
    "messageId": 2300,
    "temperature": 28,
    "humidity": 66
  },
  "enqueuedTime": "2020-05-08T09:55:24.886Z",
  "properties": {
    "temperatureAlert": "false",
    "CustomerName": "Microsoft Deutschland GmbH",
    "CustomerId": "4711"
  }
}

Sample IoT Device

This message is sent by a sample C# client. I used this one: https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/iot-hub/Samples/device/MessageSample

The code that sends the message with the alert property has been adjusted to this:

string dataBuffer = $"{{\"messageId\":{count},\"temperature\":{_temperature},\"humidity\":{_humidity}}}";
using (var eventMessage = new Message(Encoding.UTF8.GetBytes(dataBuffer)))
{
    eventMessage.Properties.Add("temperatureAlert", (_temperature > TemperatureThreshold) ? "true" : "false");
    // send the message, including the custom property, to IoT Hub
    await deviceClient.SendEventAsync(eventMessage);
}

Configure IoT Hub

Device Twin

In most cases the IoT (Edge) device does not know which customer it is associated with, as it does not need to know. For further processing of the data – or for device management – this information is relevant. Therefore we add it to the device twin in Azure IoT Hub.

 "version": 3,
  "tags": {
    "customer": {
      "id": "4711",
      "name": "Microsoft Deutschland GmbH"
    }
  },
  "properties": {

The names in the twin do not need to match the properties that will be added via message enrichment. You can choose a structure that fits best.

Message Enrichment

We want to add the customer name and id from the device twin to the message before it is passed along to an endpoint.

Message Enrichment settings in IoT Hub

As you can see, the name of the property that is added does not need to match the name of the twin property. Make sure you add the message enrichment to the right endpoint(s). You can decide to add different properties to messages that are routed to different endpoints.
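For this example, the two enrichment values would reference the twin tags roughly like this (the $twin.… syntax is how message enrichments reference device twin values):

CustomerName → $twin.tags.customer.name
CustomerId → $twin.tags.customer.id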

Azure Stream Analytics

In the Stream Analytics job we use a SQL-like query to filter the incoming message stream and route the messages to endpoints. The query will work fine as long as you only use columns that are in the body of the messages (like “temperature” or “humidity” in this example).

To be able to use the values in the properties, we need the GetMetadataPropertyValue function. Please take note of this sentence on the docs page: “This function cannot be tested on the Azure portal using sample data.”

Query

SELECT
    GetMetadataPropertyValue([IoTHub-Messaging], '[User].[temperatureAlert]') AS temperaturealert,
    GetMetadataPropertyValue([IoTHub-Messaging], '[User].[CustomerName]') AS customername,
    GetMetadataPropertyValue([IoTHub-Messaging], '[User].[CustomerId]') AS customerid,
    *
FROM
    [IoTHub-Messaging]

The first three columns are our property and message enrichment columns, while the * adds all other columns as well.

Output

Let’s assume we want to write all messages to a storage account where the customer id is part of the path.

Stream Analytics Blob storage output

This works because we added the customerid column in the query, so it can be used in the path. Remember, this is a demo, and we only use the customerid as part of the path.
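For example, the path pattern of the blob output could look something like this, where {customerid} refers to the column from the query and the date tokens are optional:

{customerid}/{datetime:yyyy}/{datetime:MM}/{datetime:dd}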

In the architecture diagram at the beginning of the post an Alert route is drawn. You can achieve this by adding a second query to the job which routes certain messages to that output.

VisionAI DevKit won’t deploy a module

Today my VisionAI DevKit was not deploying a module. In the logs (sudo journalctl -u iotedge -f) I could see the deployment was received:

Successfully pulled image machinelearndfd8df7d.azurecr.io/mobilenetimagenet:3
Creating module VisionSampleImagenet…
Could not create module VisionSampleImagenet
caused by: No such image: machinelearndfd8df7d.azurecr.io/mobilenetimagenet:3

Strange. During troubleshooting I ran docker images and saw a lot of older images and versions. After deleting a lot of them with docker image rm xyz, the deployment succeeded and the module started 🙂

Learning: Clean up the mess…

Configure Azure IoT Edge for downstream devices

A lot of documentation and posts are available on setting up Azure IoT Edge to act as an IoT Hub for downstream devices. In order to get it up and running in a dev environment, I had to do some more research.

My setup is a Raspberry Pi 3 with Raspbian Stretch and an Azure IoT DevKit, which looks like this. Please remember the setup I used is for development only: I’ve used symmetric key authentication for the IoT device. In a production scenario you would probably use certificate-based authentication and no self-signed certificates for the TLS encryption.

Transparent Gateway
Source: https://docs.microsoft.com/en-us/azure/iot-edge/iot-edge-as-gateway

After some reading, here are my findings with the solutions that worked for my setup:

  1. The downstream IoT devices should be able to connect to port 443 on the Edge device, but that port was not open/listening.
  2. How to verify the gateway certificate after the connection has been established?

To connect to the gateway instead of directly to an IoT Hub, you can append ;GatewayHostName=hostname to the device connection string, and the device should then go to the gateway. Take a note of the hostname and make sure it matches the name you specified when you were creating the certificates.
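A device connection string would then look something like this (placeholder values):

HostName=myiothub.azure-devices.net;DeviceId=mydevice;SharedAccessKey=<key>;GatewayHostName=raspberrypi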

Looking at the serial output of the DevKit, I noticed it could not connect to the gateway. A quick analysis revealed that the gateway does not accept connections on port 443. Hmm. Maybe a firewall on the Pi? As it turned out, you have to tell the edgeHub container to listen on 443 if you want to use it as a gateway.

Port bindings for the Edge module
{
    "HostConfig": {
        "PortBindings": {
            "8883/tcp": [{ "HostPort": "8883" }],
            "443/tcp": [{ "HostPort": "443" }],
            "5671/tcp": [{ "HostPort": "5671" }]
        }
    }
}

This will allow incoming connections not only for HTTPS (443), but also for MQTT (8883) and AMQP (5671). After pushing this change (in the container create options of the edgeHub module) to the Edge device, I could connect to it on port 443. Hurray.

The next challenge was to get the downstream device to accept the certificate that the gateway offered. To be able to verify the certificate, the device has to trust the root certificate. This was, in my case, the file azure-iot-test-only.root.ca.cert.pem from the ~/certificates/certs directory. Open it with an editor, paste the content into the .ino file and use the certificate.

// declare a constant with the content of 
// azure-iot-test-only.root.ca.cert.pem from ~/certificates/certs
static const char edgeCert [] =
"-----BEGIN CERTIFICATE-----\r\n"
...
"-----END CERTIFICATE-----";

// set trusted certs for the client
DevKitMQTTClient_SetOption(OPTION_MINI_SOLUTION_NAME, "something");
DevKitMQTTClient_SetOption("TrustedCerts", edgeCert);

Now the IoT device should be able to connect to the gateway. Have fun with IoT 😉

Azure SQL with AAD authentication

I thought this had to be an easy task. Well, actually it is – if you find the right documentation and read it in the correct order 🙂

Basically, I wanted to be able to log in with my AAD (Azure Active Directory) user.

In the first step, the database needs to be configured for Azure Active Directory in order to add users in the second step.

Configure an Administrator

In the Azure portal, go to the SQL server and search for “active directory” to add an Active Directory admin.

After you’ve added an admin and saved the value, you will be able to use SSMS (SQL Server Management Studio) to log on to the server. SSMS will probably prompt you about a firewall exception.

Use SQL Management Studio to add users and grant permissions

For other users (not the administrator we configured above) to be able to log on, access has to be granted just like with an on-premises SQL Server.

Add a user to the master DB

Create a new query on the master database:

CREATE USER [rene.hezser@something.com] FROM EXTERNAL PROVIDER;

Next grant permissions to the user on the database itself.

Add user to database

Open another query on the database.

CREATE USER [rene.hezser@something.com] FROM EXTERNAL PROVIDER;
ALTER ROLE [db_owner] ADD MEMBER [rene.hezser@something.com];
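db_owner is of course a very powerful role. If the user should only read data, a more restrictive role is granted the same way, for example:

ALTER ROLE [db_datareader] ADD MEMBER [rene.hezser@something.com];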

That should be it.


Two Hackathons in a week

What a week. Two hackathons (‘hack’+marathon) in a row. That was exhausting.

  • A three-day hackathon with my colleagues from Arvato Systems and a customer. We used Cognitive Services with 8 different programming languages and created great PoCs.
  • The second hackathon was about Azure Stack with Microsoft.

Thanks to all participants and the organizers. It has been fun and a great experience. Now I am looking forward to seeing how the results will influence decisions for follow-up projects.

Besides the work, I enjoyed the opportunity to get to know you all better and had some interesting networking. Let’s see what events the future brings 😉


What is Cloud-native? – The native language of the cloud

In the age of digital transformation, countless new buzzwords swirl around us. “Cloud-native” is by no means yet another floor on the Tower of Babel, but rather an essential part of digitalization: the native language of the cloud.

I have written another post on the Arvato Cloud Blog (in German):

https://it.arvato.com/cloud-blog/de/2018/04/was-ist-cloud-native—die-muttersprache-der-cloud.html

Azure Table Storage REST API not returning data

Today I wanted to query entities of an Azure Table via the REST API and did not get any results.

Looking at the query over and over again did not solve the problem. Sometimes I did not get any items back.

The “sometimes” depended on the query. I checked each part: partition key, string and date columns. Everything looked all right. And then it hit me.

I did not get a result if the query had to scan too much data. Setting the $top option to 1000 would always return data.

Learning for today: if a filtered query has to scan too much data, it will not return anything (except continuation tokens) until you implement paging according to https://docs.microsoft.com/en-us/rest/api/storageservices/query-timeout-and-pagination
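On the wire, paging looks roughly like this (the token values are made up): the response carries continuation headers, which you pass back as query parameters on the next request.

x-ms-continuation-NextPartitionKey: 1!8!UGsxITI-
x-ms-continuation-NextRowKey: 1!8!Umsx

GET https://myaccount.table.core.windows.net/mytable()?$filter=...&NextPartitionKey=1!8!UGsxITI-&NextRowKey=1!8!Umsx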

Meetup #4 – Topics: Azure IoT Hub, MQTT

On January 24th it is that time again.

https://www.meetup.com/Azure-Meetup-OWL/events/244924594/

Topics:

René Hézser – Arvato Systems

Introduction to Azure IoT Hub, connecting an ESP8266 with an LED and a sensor to the IoT Hub

Dennis Hering – Microsoft Deutschland GmbH

MQTT basics and how it works,
“Last Will and Testament (LWT)” – best practices and code patterns

Please remember to register via the website so that we can announce you at the reception desk.

Resistor values for a blinking Christmas tree

I bought a DIY blinking Christmas tree. Unfortunately it did not contain any assembly instructions 🙁

So I looked for the part number CTR-30B, which is printed on both parts of the tree, and found a couple of instructions. After I had soldered the tree, I saw that the colors were not evenly bright. I adjusted the values of the resistors and want to share them.

R2: 330 Ω
R4: 560 Ω
R6: 2 kΩ

For R1, R3, R5 and R7 I used the provided 10 kΩ resistors.

Upload files to NodeMCU from Windows Bash

Uploading files to a NodeMCU ESP8266 can be done with the Java tool ESPlorer. If you want to automate this process, you’ll want to use something else.

A quick search brought up NodeMCU-Uploader, which is a Python script. On my Windows machine I’ve got Bash installed, so naturally I want to use it 🙂

Fortunately, Bash allows access to the COM ports. You have to modify the permissions for the device, though.

  • sudo chmod 666 /dev/ttyS3, where “3” is the COM port number you can see in the Windows Device Manager

After that, the COM port can be accessed. The uploader can be installed with pip.

  • sudo apt install python-pip
  • pip install nodemcu-uploader

After everything has been set up, files can be uploaded by specifying the port and file:

nodemcu-uploader --port /dev/ttyS3 upload application.lua

No Default Subscription?

Set-AzureWebsite : No default subscription has been designated. Use Select-AzureSubscription -Default <subscriptionName> to set the default subscription.

*doh* Again I’ve used PowerShell cmdlets for Azure classic instead of Resource Manager 🙁

Reminder: always check for the magic “Rm” characters in the command if a resource cannot be found.

Azure Meetup OWL

Don’t forget: the Azure Meetup on build, test and deployment with Azure takes place tomorrow in Bielefeld.

Meetup #2 – Build, Test and Deployment with Azure

Wednesday, Oct 11, 2017, 7:00 PM

Arvato Bielefeld / Sennestadt
Fuggerstraße 11 Bielefeld, DE


Dear Azure OWL community, on [masked] our second Azure OWL Meetup will take place. This time the focus is on build, test and deployment on the Azure platform. Tyler from Microsoft will give a talk on build, test and deployment, with a special focus on delivery pipelines with Docker, Kubernetes and Visual Studio Team…


HowTo use Azure cmdlets in Azure Scheduler

A Runbook schedule can be triggered at most every hour. If you need a smaller interval, like every minute, you can use the Azure Scheduler to do so.
So I went to the Azure Portal, created an Azure Scheduler instance (with a job collection tier of at least Basic, to be able to create schedules that are triggered every minute) and called the Runbook via its webhook.

The Runbook contains a cmdlet that results in an error 🙁
Get-AzureRmMetric : The term 'Get-AzureRmMetric' is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again.

Azure cmdlets can be made available through the Automation Account the Runbook is using. The “Browse Gallery” link will let you find and add the necessary modules.

The error message above appeared because a) the cmdlet was not installed and b) the referenced version of AzureRM.profile was too old. Fortunately the problem can be resolved easily by upgrading the Azure modules.

After all modules were up to date, I could add the desired module and my runbook stopped complaining 🙂

Azure SQL – Standard Tier IDs

In case you need the ServiceObjectiveId for SQL standard tiers, here is the list for you.

Tier name      ServiceObjectiveId
Standard (S0)  f1173c43-91bd-4aaa-973c-54e79e15235b
Standard (S1)  1b1ebd4d-d903-4baa-97f9-4ea675f5e928
Standard (S2)  455330e1-00cd-488b-b5fa-177c226f28b7
Standard (S3)  789681b8-ca10-4eb0-bdf2-e0b050601b40
Standard (S4)  3cf14e1a-0a5d-408c-bbc7-f63c5282f735
Standard (S6)  ab69b4e3-d7cc-4aa5-87a6-f8b50615a03c
Standard (S7)  b6ca0894-d2f0-4e40-99f5-0f8a93cc2437
Standard (S9)  0efa88e9-99ff-4e36-a148-8c4b20c0826c
Standard (S12) 98100e8b-2f8a-4a81-9eb5-4d1e675c5a29

Usually you could change the tier within the Azure Portal. To change them via PowerShell, you can use the above IDs.

Connection Problems to a Secure Service Fabric Cluster

To be able to connect to a secure Service Fabric Cluster via PowerShell, you need to import the certificate into your personal certificate store; otherwise an exception will be thrown. Unfortunately the exception does not point in the right direction 🙁

So in case you get an Exception like this

Connect-ServiceFabricCluster : An error occurred during this operation. Please check the trace logs for more details.
At line:1 char:1
+ Connect-ServiceFabricCluster -ConnectionEndpoint xyz-sf-de …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Connect-ServiceFabricCluster], FabricException
+ FullyQualifiedErrorId : CreateClusterConnectionErrorId,Microsoft.ServiceFabric.Powershell.ConnectCluster

you need to import the certificate with its private key (*.pfx) into the personal certificate store of the PC you are running PowerShell on.
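For example, with the PKI module that ships with Windows (path and password are placeholders):

# import the cluster client certificate into the CurrentUser\My store
$password = Read-Host -Prompt 'PFX password' -AsSecureString
Import-PfxCertificate -FilePath .\cluster-client.pfx -CertStoreLocation Cert:\CurrentUser\My -Password $password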

 

Specifying -Verbose for PowerShell will print additional information that does not help a lot.

PS C:\WINDOWS\system32> Connect-ServiceFabricCluster -ConnectionEndpoint xyz-sf-dev.northeurope.cloudapp.azure.com:19000 -X509Credential -FindType FindByThumbprint -FindValue xyz -StoreLocation CurrentUser -StoreName My -ServerCertThumbprint xyz -Verbose
VERBOSE: System.Fabric.FabricException: An error occurred during this operation. Please check the trace logs for more
details. —> System.Runtime.InteropServices.COMException: Exception from HRESULT: 0x80071C57
at System.Fabric.Interop.NativeClient.IFabricClientSettings2.SetSecurityCredentials(IntPtr credentials)
at System.Fabric.FabricClient.SetSecurityCredentialsInternal(SecurityCredentials credentials)
at System.Fabric.Interop.Utility.<>c__DisplayClass25_0.<WrapNativeSyncInvoke>b__0()
at System.Fabric.Interop.Utility.WrapNativeSyncInvoke[TResult](Func`1 func, String functionTag, String
functionArgs)
— End of inner exception stack trace —
at System.Fabric.Interop.Utility.RunInMTA(Action action)
at System.Fabric.FabricClient.InitializeFabricClient(SecurityCredentials credentialArg, FabricClientSettings
newSettings, String[] hostEndpointsArg)
at Microsoft.ServiceFabric.Powershell.ClusterConnection.FabricClientBuilder.Build()
at Microsoft.ServiceFabric.Powershell.ClusterConnection..ctor(FabricClientBuilder fabricClientBuilder, Boolean
getMetadata)
at Microsoft.ServiceFabric.Powershell.ConnectCluster.ProcessRecord()
Connect-ServiceFabricCluster : An error occurred during this operation. Please check the trace logs for more details.
At line:1 char:1
+ Connect-ServiceFabricCluster -ConnectionEndpoint xyz-sf-de …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Connect-ServiceFabricCluster], FabricException
+ FullyQualifiedErrorId : CreateClusterConnectionErrorId,Microsoft.ServiceFabric.Powershell.ConnectCluster