Thursday, 29 November 2018

Simplifying security for serverless and web apps with Azure Functions and App Service

Serverless and PaaS are all about unleashing developer productivity by reducing the management burden and allowing you to focus on what matters most, your application logic. That can’t come at the cost of security, though, and it needs to be easy to achieve best practices. Fortunately, we have a whole host of capabilities in the App Service and Azure Functions platform that dramatically reduce the burden of securing your apps.

Today, we’re announcing new security features which reduce the amount of code you need to write in order to work with identities and secrets. These include:

◈ Key Vault references for Application Settings (public preview)
◈ User-assigned managed identities (public preview)
◈ Managed identities for App Service on Linux/Web App for Containers (public preview)
◈ ClaimsPrincipal binding data for Azure Functions
◈ Support for Access-Control-Allow-Credentials in CORS config

We’re also continuing to invest in Azure Security Center as a primary hub for security across your Azure resources, as it offers a fantastic way to catch and resolve configuration vulnerabilities, limit your exposure to threats, and detect and respond to attacks. For example, you may think you've restricted all your apps to HTTPS-only, but Security Center will help you make absolutely sure. If you haven’t already, be sure to give it a try.

So, without any further ado, let’s talk about the details of these new features!

Key Vault references for Application Settings (public preview)


At Microsoft Ignite 2018, we gave a sneak peek of a new feature that would allow apps to source their application settings from Key Vault. I’m incredibly pleased to announce that as of today, this feature is available in public preview!

More and more organizations are moving to secure secrets management policies, which is fantastic to see. Azure Key Vault gives you one source of truth for your secrets, with full control over access policies and audit history. While the existing Application Settings feature of App Service and Azure Functions is considered secure, with secrets encrypted at rest, it doesn’t provide these management capabilities that you may need.

However, working with Key Vault traditionally requires you to write some new code. We’ve found that many teams can’t easily update every place their application works with secrets, particularly in legacy applications. Azure Functions triggers are also an issue, as they are managed by the platform. Both of these scenarios are addressed with this new feature.

Sourcing Application Settings from Key Vault


The Key Vault references feature makes it so that your app can keep working with application settings exactly as it does today, meaning no code changes are required. You can get all of the details from our Key Vault reference documentation, but I’ll outline the basics here.

This feature requires a system-assigned managed identity for your app. Later in this post I’ll be talking about user-assigned identities, but we’re keeping these previews separate for now.

You’ll then need to configure an access policy on your Key Vault which gives your application the GET permission for secrets.

Lastly, set the value of any application setting to a reference of the following format:

@Microsoft.KeyVault(SecretUri=secret_uri_with_version)

Where secret_uri_with_version is the full URI for a secret in Key Vault. For example, this would be something like: https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931
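If you prefer to script the setup, a minimal Azure CLI sketch of those three steps might look like the following (the app, resource group, vault, and secret URI below are placeholder values you would replace with your own):

# 1. Enable a system-assigned managed identity on the app
az webapp identity assign --name MyApp --resource-group MyResourceGroup

# 2. Grant that identity GET permission for secrets in the vault (use the principalId returned above)
az keyvault set-policy --name myvault --object-id <principalId> --secret-permissions get

# 3. Point an application setting at the secret, including the secret version
az webapp config appsettings set --name MyApp --resource-group MyResourceGroup --settings "MySecret=@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931)"

For a function app, the corresponding az functionapp commands work the same way.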


That’s it! No changes to your code required!

For this initial preview, you need to explicitly set a secret version, as we don’t yet have built-in rotation handling. This is something we look forward to making available as soon as we can.

User-assigned managed identities (public preview)


Our existing support for managed identities is called system-assigned. The idea is that the identity is created by the platform for a specific application and is tied to the lifecycle of the application. If you delete the application, the identity is removed from Azure Active Directory immediately.

Today we’re previewing user-assigned identities, which are created as their own Azure resource and then assigned to a given application. A user-assigned identity can also be assigned to multiple applications, and an application can have multiple user-assigned identities.
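As a rough sketch of how this looks when scripted (assuming a recent Azure CLI; during this preview the assignment step was typically done through an ARM template rather than the --identities flag shown here, which appeared in later CLI versions):

# Create a standalone, user-assigned identity as its own Azure resource
az identity create --name my-shared-identity --resource-group MyResourceGroup

# Assign the identity to an app (placeholder subscription and resource names)
az webapp identity assign --name MyApp --resource-group MyResourceGroup --identities /subscriptions/<sub-id>/resourcegroups/MyResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-shared-identity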


Quick tip: Although you can use an identity for multiple resources, be careful not to over-share identities and leak permissions to resources that don’t need them. Always keep the principle of least privilege in mind, and default to creating separate identities for each component of your application. Only share if truly necessary.

Managed identities for App Service on Linux/Web App for Containers (public preview)


We’re also pleased to expand our support for managed identities to App Service on Linux/Web App for Containers. Now, Linux apps can have the same great experience of turnkey service-to-service authentication without having to manage any credentials. This preview includes both system-assigned and user-assigned support. In addition to a token service that makes it easy to request access to resources like Key Vault and Azure Resource Manager, this new support also gives Linux apps access to the Key Vault references feature mentioned before.

To get started with your Linux applications, enable a managed identity for the app and then request tokens from the local token service that the platform exposes.
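As a minimal illustration, here is a hedged sketch of requesting a Key Vault token from inside a Linux app by calling the local token endpoint the platform injects (MSI_ENDPOINT and MSI_SECRET are environment variables provided by App Service; see the managed identities documentation for the full protocol):

# Ask the local token service for a token scoped to Key Vault
curl -s -H "Secret: $MSI_SECRET" "$MSI_ENDPOINT?resource=https://vault.azure.net&api-version=2017-09-01"

The JSON response contains an access_token that can then be presented to Key Vault.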


ClaimsPrincipal binding data for Azure Functions


Since the first preview of Azure Functions, you’ve been able to use App Service Authentication/Authorization to limit access to your function apps. Today we’re making it easier to leverage incoming identities from your function code. This capability is currently finishing deployment, and it will be available to all function apps in Azure by the end of the week.

For .NET, this is exposed as a ClaimsPrincipal object, similar to what you’d see in ASP.NET. The object will be automatically injected if you add a ClaimsPrincipal object to your function signature, similar to how ILogger is injected.

using System.Net;
using System.Security.Claims;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public static IActionResult Run(HttpRequest req, ClaimsPrincipal principal, ILogger log)
{
     // The ClaimsPrincipal is injected by the platform and describes the authenticated caller
     // ...
     return new OkResult();
}

Other languages will be able to access the same information through the context object in an upcoming update. Until then, this is a .NET-only preview.

I really love how this cleans up identity-dependent functions. This feature, in combination with the token binding, removes a nice bit of code that isn’t core to your business logic.

Support for Access-Control-Allow-Credentials in CORS config


Last, but not least, we have a quick update to our CORS feature which enables the Access-Control-Allow-Credentials header to be set. This is necessary whenever you need to send cookies or a token as part of calling your API. Without this response header being set, the browser will not pass data along.

You can find out more about the CORS feature and this new setting in the tutorial, “Host a RESTful API with CORS in Azure App Service.” To enable the header, you’ll just need to update your CORS config to set “supportCredentials” to true.

The Access-Control-Allow-Credentials header can also be enabled in the local Functions host for development purposes.
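If you’re scripting this, one way to flip the setting is a generic resource update through the Azure CLI, roughly like the following (the resource group and app name are placeholders):

# Set supportCredentials=true on the app's CORS configuration
az resource update --resource-group myResourceGroup --parent sites/<app_name> --namespace Microsoft.Web --resource-type config --name web --set properties.cors.supportCredentials=true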

Wednesday, 28 November 2018

Azure Cosmos DB and multi-tenant systems

In this post we will discuss how to build a multi-tenant system on Azure Cosmos DB. Azure Cosmos DB is itself a multi-tenant PaaS offering on Microsoft Azure. Building a multi-tenant system on top of another multi-tenant system can be challenging, but Azure provides all the tools to make the task easy. An example of a multi-tenant system would be a company providing background check services that any other company can use in its HR system. For the purposes of this blog post we are going to use this example and continue from the point of view of the company providing background checks as a service. We will refer to this company as the “publisher.”

Let’s begin by discussing how you can build a multi-tenant system that will store sensitive user data. Data isolation and security is the most important aspect of any such system. We must design the system so that each tenant’s data is isolated from the others. The data should be divided into compartments so that a breach of one tenant cannot flow into another tenant, similar to compartmentalizing the hull of a ship to reduce floodability.

To increase the isolation and protection of customer data in a multi-tenant system, we should build the system with only one approved service that can have just in time (JIT) access to tenant data. We need to set up a different system principal for each customer’s data partition so that the scope of access for any principal is segmented by customer. We don’t want a single service to have standing access to all tenant data; that is a big security risk. What we want is for the service to get the access permission for one tenant just in time. The benefit of this approach is that tenants can rotate their certificates and keys at any time.

Every tenant can manage their data using the publisher's front-end service (FES), but they cannot directly manipulate their own data in Azure Cosmos DB collections. This isolation removes the need for every tenant to have access to the master and read-only keys. All data access happens through the service, and no one accesses Azure Cosmos DB directly, whether through the portal or through code. The publisher application, which manages the customer data, is hosted in a different Azure Active Directory tenant and subscription, separate from the customer’s tenant and data.

However, the tenant will own all the collections and data without having direct access to the data. This simplifies billing, since the tenant is billed directly for all data storage and throughput, but it is a tricky requirement to implement. Let’s see how you can manage it.

The main actors in this solution are Azure Managed Applications, a daemon application, Azure Cosmos DB, Azure Key Vault, and Azure Active Directory (AAD). The following paragraphs explain each of these components.

An Azure Managed Application is like a service catalog in the marketplace, but with one key difference. In a managed application, the resources are provisioned in a resource group that is managed by the publisher of the app. The resource group is present in the consumer's subscription, but an identity in the publisher's tenant has access to the resource group in the customer subscription. As the publisher, you specify the cost of ongoing support for the solution.


Managed applications reduce barriers to consumers using your solutions. Consumers do not need expertise in cloud infrastructure or in the underlying Azure services to adopt your solution, and they have only limited access to the critical resources, so they don't need to worry about making a mistake when managing them.

Managed applications enable you to establish an ongoing relationship with your consumers. You define the terms for managing the application, and all charges are handled through Azure billing.

Although customers deploy these managed applications in their subscriptions, they do not have to maintain, update, or service them. You can ensure that all customers are using approved versions. Customers do not have to develop application-specific domain knowledge to manage these applications. They automatically acquire application updates without needing to worry about troubleshooting and diagnosing issues with the applications. The advantages of an Azure Managed Application are simplified billing, separation of data between different tenants, and easy maintenance, among other benefits.

After deploying a tenant managed application, create a daemon application. Follow the instructions on how to create an AAD application and service principal that can access resources. This daemon application has its own identity and access to the tenant subscription. It is the bridge between the customer tenant application and the service provider (publisher).

It is important to understand a few things. First, user interaction is not possible with a daemon application, which requires the application to have its own identity. An example of a daemon application is a batch job, or an operating system service running in the background. This type of application requests an access token by using its application identity and presenting its application ID, credentials (password or certificate), and application ID URI to AAD. After a successful authentication, the daemon receives an access token which represents the identity of the application from AAD and is then used to call the web API.

The magic of Azure Managed Applications is that the publisher can access the customer subscription resources it manages as if those resources were located within a subscription in the publisher’s AAD tenant. The customer tenant subscription resources are visible to the customer in their own Azure subscription, but are not accessible due to an Azure resource lock. Only the publisher has full access to the managed application resources in the customer’s subscription.

After creating the daemon application, you need to register it in the access control (IAM) of the Azure Cosmos DB instance, which is deployed as a managed resource component in the customer tenant subscription.
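If you script this step, it amounts to a role assignment scoped to the Azure Cosmos DB account; a hedged Azure CLI sketch is shown below (the built-in DocumentDB Account Contributor role is used purely for illustration, and a narrower custom role may be more appropriate for your scenario):

# Grant the daemon application's service principal access to the Cosmos DB account
az role assignment create --assignee <daemon-app-service-principal-id> --role "DocumentDB Account Contributor" --scope /subscriptions/<customer-subscription-id>/resourceGroups/<managed-resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>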

The last piece you will develop is the front-end service (FES). This is the service used to manage the components in the customer tenant. It cannot access Azure Cosmos DB directly; it must go through the orchestration of assuming the daemon application identity. The following is a step-by-step walkthrough of the FES interaction with the customer’s subscription resources.


The FES takes over the daemon application identity at run time. The FES also has its own managed identity (MSI), which is registered in Key Vault for access. At run time, the FES connects to Key Vault using that managed identity and obtains the daemon application’s credential, which it then uses to obtain a token from AAD representing the daemon application (Step 1).

Once the FES has the credential, it assumes the identity of the daemon service by using the client ID and the secret or certificate. It then calls AAD to get the access token for the managed application (Step 2). The FES uses the Azure Active Directory Authentication Library (ADAL) for this. See the FES code snippet below, which shows how the FES gets the token from AAD.

using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

string secretIdentifier = " key vault secretIdentifier for daemon app goes here ";
var tokenCache = TokenCache.DefaultShared;
string pubTenantId = " publisher's Azure AD directory id here ";

//// get the daemon app's secret from Key Vault (a certificate/private key could be read instead of a secret)
var azureServiceTokenProvider = new AzureServiceTokenProvider();

var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
var secret = keyVaultClient.GetSecretAsync(secretIdentifier)
.GetAwaiter()
.GetResult();

//// now get a token representing the daemon app, using the secret credential of the daemon Azure AD application.
//// The resource is the appropriate Azure resource URI.
string daemonAppId = "daemon Azure AD application Id";
string authString = $"https://login.microsoftonline.com/{pubTenantId}";
string daemonAppResourceUri = "https://management.core.windows.net/";
var clientCredential = new ClientCredential(daemonAppId, secret.Value);
var authenticationContext = new AuthenticationContext(authString, false, tokenCache);

var authnResult = authenticationContext.AcquireTokenAsync(daemonAppResourceUri, clientCredential)
.GetAwaiter()
.GetResult();

string daemonToken = authnResult.AccessToken;

//// alternatively, to use a certificate you would substitute the client credential:
////     var clientCredential = new ClientAssertionCertificate(daemonAppId, certificate);

Once the FES has obtained the access token, it calls into Azure Cosmos DB to get the master key (Steps 3 and 4). This is accomplished by using the access token of the daemon application; the FES passes the AAD token in the request header.

using Microsoft.Azure.Management.CosmosDB.Fluent;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

string subscriptionId = " subscribing customer’s subscription id ";
string resourceGroupName = " subscribing customer’s resource group name ";
string databaseAccountName = " subscribing customer’s Cosmos DB account name ";

var credential = new AzureCredentials(new TokenCredentials(daemonToken), pubTenantId, AzureEnvironment.AzureGlobalCloud);

var azure = Azure.Configure()
.WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
.Authenticate(credential)
.WithSubscription(subscriptionId);

var cosmosDbAccounts = azure.CosmosDBAccounts;
var readWritekeys = cosmosDbAccounts.ListKeysAsync(resourceGroupName, databaseAccountName)
.GetAwaiter()
.GetResult();

Once it has the master key, it can start accessing Azure Cosmos DB (Step 5).

using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

string cosmosDBendpointUri = $"https://{databaseAccountName}.documents.azure.com:443/";

string masterKey = readWritekeys.PrimaryMasterKey; //// pick the one you need

var connectionPolicy = new ConnectionPolicy { ConnectionMode = ConnectionMode.Direct, ConnectionProtocol = Protocol.Tcp };

var documentClient = new DocumentClient(new Uri(cosmosDBendpointUri), masterKey, connectionPolicy);

documentClient.OpenAsync()
.GetAwaiter()
.GetResult();

You may wonder why the daemon application identity, rather than an Azure MSI representing the FES, is used to retrieve the Azure Cosmos DB keys. The answer is security isolation: JIT access, retrieving the daemon application secret from Key Vault, and obtaining its token from AAD all help support that isolation. This orchestration makes sure that the FES does not have standing access to all the tenants' keys; it can get access to a tenant's keys just in time, only by using that tenant's daemon identity.

This system has the following advantages:

◈ No access key is kept with the publisher, in code, or in any configuration files. This method provides the security needed for every tenant.
◈ One publisher access token cannot access all the tenants.
◈ Each subscribing customer is provisioned with its own daemon application identity for access to that customer’s resources.
◈ Only at the run time can FES get the access token by using the daemon application secrets.

Azure Cosmos DB brings many of its advantages to this solution, such as:

◈ The publisher does not know how much throughput and space is required at the time of onboarding a new tenant.
◈ Azure Cosmos DB’s elastic nature for storage and throughput keeps this solution very flexible.
◈ The Azure Managed Applications template defined by the publisher comes with a minimum default Azure Cosmos DB whose request units can be expanded as needed.
◈ JIT access through the use of daemon applications and Key Vault.

Tuesday, 27 November 2018

Running Cognitive Service containers

We announced a preview of Docker support for Microsoft Azure Cognitive Services with an initial set of containers ranging from Computer Vision and Face, to Text Analytics. Here we will focus on trying things out, firing up a cognitive service container, and seeing what it can do.

Installing Docker


You can run docker in many contexts, and for production environments you will definitely want to look at Azure Kubernetes Service (AKS) or Azure Service Fabric. In subsequent blogs we will dive into doing this in detail, but for now all we want to do is fire up a container on a local dev-box which works great for dev/test scenarios.

You can run Docker Desktop on most dev boxes; just download it and follow the instructions. Once installed, make sure that Docker is configured with at least 4 GB of RAM (one CPU is sufficient), which you can adjust in the Docker settings.


Getting the images


The Text Analytics images are available directly from Docker Hub as follows:

◈ Key phrase extraction extracts key talking points and highlights in text either from English, German, Spanish, or Japanese.
◈ Language detection detects the natural language of text with a total of 120 languages supported.
◈ Sentiment analysis detects the level of positive or negative sentiment for input text using a confidence score across a variety of languages.

For the Face and Recognize Text images, you need to sign up for the preview to get access:

◈ Face detection and recognition detects human faces in images as well as identifying attributes including face landmarks (nose, eyes, and more), gender, age, and other machine-predicted facial features. In addition to detection, this feature can check to see if two people in an image or images are the same by using a confidence score. It can compare it against a database to see if a similar-looking or identical face already exists, and it can also organize similar faces into groups using shared visual traits.
◈ Recognize Text detects text in an image using optical character recognition (OCR) and extracts the recognized words into a machine-readable character stream.

Here we are using the language detection image, but the other images work the same way. To download the image, run docker pull:

docker pull mcr.microsoft.com/azure-cognitive-services/language

You can also run docker pull to check for updated images.

Provisioning a Cognitive Service


Now you have the image locally, but in order to run a container you need a valid API key and billing endpoint to pass as command line arguments. First, go to the Azure portal and open the Cognitive Services blade. If you don’t have a Cognitive Service that matches the container, in this case a Text Analytics service, select add and create one.


Once you have a Cognitive Service, grab the endpoint and API key; you’ll need both to fire up the container.


The endpoint is strictly used for billing only; no customer data ever flows that way.

Running a container


To fire up the container, you use the docker run command to pass the required docker options and image arguments:

docker run --rm -it -p 5000:5000 mcr.microsoft.com/azure-cognitive-services/language eula=accept apikey=<apikey> billing=<endpoint>

The values for the API key and billing arguments come directly from the Azure portal as seen above. There are lots of Docker options that you can use, so we encourage you to check out the documentation.

If you need to configure an HTTP proxy for making outbound requests, you can do that using these two arguments (see the sketch after this list):

◈ HTTP_PROXY – the proxy to use, e.g. http://proxy:8888
◈ HTTP_PROXY_CREDS – any credentials needed to authenticate against the proxy, e.g. username:password.
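For example, appending those arguments to the earlier docker run command would look roughly like this sketch (the proxy address and credentials are placeholders):

docker run --rm -it -p 5000:5000 mcr.microsoft.com/azure-cognitive-services/language eula=accept apikey=<apikey> billing=<endpoint> HTTP_PROXY=http://proxy:8888 HTTP_PROXY_CREDS=username:password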

When the container is running, the console output shows it starting up and beginning to listen for requests.


Trying it out


In the console window you can see that the container is listening on http://localhost:5000, so open your favorite browser and point it to that address.


Now, select Service API Description or jump directly to http://localhost:5000/swagger. This will give you a detailed description of the API.


Select Try it out and then Execute; you can change the input value as you like.


The result will show up further down on the page as the JSON response from the service.
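If you prefer the command line to the swagger UX, you can also post a request directly to the container. The exact route is listed on the swagger page; for the language detection container it should look roughly like this sketch:

curl -X POST "http://localhost:5000/text/analytics/v2.0/languages" -H "Content-Type: application/json" -d '{"documents":[{"id":"1","text":"Hello world"},{"id":"2","text":"Bonjour tout le monde"}]}'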


You are now up and running! You can play around with the swagger UX and try out various scenarios. In our blogs to follow, we will be looking at additional aspects of consuming the API from an application as well as configuring, deploying, and monitoring containers.

Monday, 26 November 2018

Best practices for alerting on metrics with Azure Database for MySQL monitoring

Whether you are a developer, database administrator, site reliability engineer, or a DevOps professional, monitoring databases is an important part of maintaining the reliability, availability, and performance of your MySQL server. There are various metrics available for you in Microsoft Azure Database for MySQL to get insights on the behavior of the server. You can also set alerts on these metrics using the Azure portal or Azure CLI.
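For example, a hedged Azure CLI sketch of a metric alert on CPU percentage could look like the following (the server, resource group, and action group are placeholders; cpu_percent is one of the metric names exposed by Azure Database for MySQL):

az monitor metrics alert create --name mysql-cpu-high --resource-group myResourceGroup --scopes /subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/servers/mydemoserver --condition "avg cpu_percent > 95" --window-size 15m --evaluation-frequency 5m --action /subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup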


With modern applications evolving from a traditional on-premises approach to becoming more hybrid or cloud-native, there is also a need to adopt some best practices for a successful monitoring strategy on a hybrid and public cloud. Here are some example best practices on how you can use monitoring data on your MySQL server, and areas you can consider improving based on these various metrics.

Active connections


Sample threshold (percentage or value): 80 percent of total connection limit for greater than or equal to 30 minutes, checked every five minutes.

Things to check:

◈ If you notice that active connections are at 80 percent of the total limit for the past half hour, verify if this is expected based on the workload.
◈ If you think the load is expected, active connections limit can be increased by upgrading the pricing tier or vCores. You can check active connection limits for each SKU.


Failed connections


Sample threshold (percentage or value): 10 failed connections in the last 30 minutes, checked every five minutes.

Things to check:

◈ If you see connection request failures over the last half hour, verify if this is expected by checking the logs for failure reasons.


◈ If this is a user error, take the appropriate action. For example, if there is an authentication failed error, check your username/password.

◈ If the error is SSL related, check that the SSL settings and input parameters are properly configured.
     ◈  Example: mysql -h mydemoserver.mysql.database.azure.com -u mylogin@mydemoserver -p --ssl-mode=VERIFY_CA --ssl-ca=BaltimoreCyberTrustRoot.crt.pem

CPU percent or memory percent


Sample threshold (percentage or value): 100 percent for five minutes or 95 percent for more than two hours.

Things to check:

◈ If you have hit 100 percent CPU or memory usage, check your application telemetry or logs to understand the impact of the errors.
◈ Review the number of active connections and check the connection limits. If your application has exceeded the maximum connections or is reaching the limit, consider scaling up compute.

IO percent


Sample threshold (percentage or value): 90 percent usage for greater than or equal to 60 minutes.

Things to check:

◈ If you see that IOPS is at 90 percent for one hour or more, verify if this is expected based on the application workload.
◈ If you expect a high load, then increase the IOPS limit by increasing storage. Storage to IOPS mapping is below for reference.

Storage


The storage you provision is the amount of storage capacity available to your Azure Database for MySQL server. The storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. The total amount of storage you provision also defines the I/O capacity available to your server.

Basic: Azure Standard Storage; storage size 5 GB to 1 TB; storage increment size 1 GB; IOPS variable.
General purpose: Azure Premium Storage; storage size 5 GB to 4 TB; storage increment size 1 GB; IOPS 3 IOPS/GB (min 100 IOPS, max 6,000 IOPS).
Memory optimized: Azure Premium Storage; storage size 5 GB to 4 TB; storage increment size 1 GB; IOPS 3 IOPS/GB (min 100 IOPS, max 6,000 IOPS).

You can add additional storage capacity during and after the creation of the server. The Basic tier does not provide an IOPS guarantee. In the General purpose and Memory optimized pricing tiers, the IOPS scale with the provisioned storage size in a three to one ratio.

Storage percent


Sample threshold (percentage or value): 80 percent

Things to check:

◈ If your server is reaching the provisioned storage limit, it will soon be out of space and will be set to read-only.
◈ Monitor your usage, and provision more storage as needed so you can continue using the server without having to delete files or logs (see the sketch below).
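If you manage the server from the command line, scaling up storage is a single update; a rough sketch (server and resource group names are placeholders, and the size is specified in megabytes) is:

# Increase provisioned storage; storage can be scaled up but not down
az mysql server update --resource-group myResourceGroup --name mydemoserver --storage-size 256000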

Saturday, 24 November 2018

Leverage Azure Security Center to detect when compromised Linux machines attack

When an attacker compromises a machine, they typically have a goal in mind. Some attackers are looking for information residing on the victim’s machine or for access to other machines on the victim’s network. Other times, attackers have plans to use the processing power of the machine itself, or even to use the machine as a launch point for other attacks. On Linux virtual machines (VMs) in Microsoft Azure, we most commonly see attackers installing and running cryptocurrency mining software, but this blog post will focus on the latter scenario: when an attacker wants to use the compromised machine as a launch point for other attacks.

Azure Security Center (ASC) utilizes an agent that runs on multiple distributions of Linux. When auditd is enabled, it collects logs including process creation events. These are run through the detection pipeline to look for malicious and suspicious activity. Alerts are surfaced through the ASC portal.

The Microsoft Threat Intelligence Center uses a range of methods to identify new emerging threats, including a sophisticated hybrid Linux honeypot service. A honeypot is a decoy system, set up to be attacked and lure cyber attackers to reveal themselves.

In this post, we discuss some recent instances where attacks against the honeypot originated from IPs within customer machines. In each case, malicious behavior on those compromised customer VMs had already resulted in alerts being raised through Azure Security Center. Analysis of these attacks yielded greater insight into the attacker’s behavior. This fed further detection development, allowing us to surface more attack behavior to customers earlier, and provide a more complete view of the attack end to end.

Initial intrusion


The attack setup was as follows. The analysis suggests that an Apache Cassandra account with a default password was used to initially compromise an Azure VM. Once access was gained, the attacker approached the honeypot (1) and other targets (2). We identified two IP addresses (3, 4) that the attacker used to log into this VM, one of which also attacked the honeypot (5). Another thing that stood out was that the two IPs the attacker was using shared the same first two octets and resolved to Romania. We will come back to this fact later.


Intrusion breakdown


One of the more common attacks that we see against customer virtual machines is a brute force or password spray attack, which quickly leads to the installation of crypto coin mining malware. In this case, the malicious user was doing something a bit different.

Host enumeration

After the initial compromise, the attacker pulled down a Perl-based host enumeration script from the domain nasapaul.com, which hosts a few enumeration and speed test scripts. Azure Security Center surfaces this behavior via a “detected file download from a known malicious source” alert.


That script looks for specific information in the /proc/cpuinfo file to give the attacker an idea of what kind of machine they are on. You can see some of the commands run in the text box below. That same script also runs a speed test which is a service that nasapaul.com offers.

CPU=$(grep -m 1 "model name" /proc/cpuinfo | cut -d: -f2 | sed -e 's/^ */ /' | sed -e 's/$/ /')
CPUS=$(grep -c ^processor /proc/cpuinfo)
STEP=$(grep -m 1 "stepping" /proc/cpuinfo | cut -d: -f2 | sed -e 's/^ */ /' | sed -e 's/$/ /')
BOGO=$(grep -m 1 "bogomips" /proc/cpuinfo | cut -d: -f2 | sed -e 's/^ */ /' | sed -e 's/$/ /')
OS=$(lsb_release -si)
ram=$(free -m | grep -oP '\d+' | head -n 1)
VER=$(uname -a)
uptime=$(</proc/uptime)
uptime=${uptime%%.*}
bold=$(tput bold)
zile=$((uptime/60/60/24))
secunde=$((uptime%60))
minute=$((uptime/60%60))
ore=$((uptime/60/60%24))
vid=$(lspci | grep VGA | cut -f5- -d' ')
DISK=$(df -h --total | grep total | awk '{printf "" $2 "B\n\n"}')

Initial exploitation


That session ended, but the attacker started a new session and created a connection to a secure FTP server and pulled some files down. Then they modified the files for execution:

chmod +x 1 cyberinfo cybernetikrandom go h4e petarda port pscan2 screen speedtestvps.py sshd

This set of files is a toolkit from a known hacking group. The attacker uses the “go” file to run “pscan2” and “sshd” against two different class B IP ranges. That means they ran the scan against just over 65,000 addresses in each range. They also used the tool “h4e”, which our investigation showed was a Perl script used in denial of service attacks. The text file “port” holds the results of the scans, typically which IPs were listening and which ports were open. It isn’t clear whether those commands completed successfully, but two hours later the attacker deleted them all and pulled down a different kit.

Password spray


This time the attacker used Wget to pull down their toolkit from a public website. As they did before, they pulled down the tools then modified them all for execution.

chmod +x a all classes co gasite.txt hu pass range scan.log ssh2 x

/bin/bash ./a ##.49
./ssh2 1500 -b ##.49 pass 22 "uname -a & lscpu"
/bin/bash ./a ###.66
./ssh2 1500 -b ###.66 pass 22 "uname -a & lscpu"
nano gasite.txt

After that, the same simple pattern is repeated against a number of class B ranges. The file “a” takes the first two octets of a class B range as input, then calls “ssh2”. “ssh2” takes input for a number of threads, the range, a password file (“pass” which in this case contains over 35,000 user/password combinations), a port number, and then the initial commands to run. The file “gasite.txt” collects output.

Later on, we see the files “co” and “range” used with the “classes” folder. The “classes” folder has details of 26 cloud and hosting companies with their IP ranges. Microsoft is there along with all the other major providers. The files “co” and “range” just expand the initial two octets into a full IP.

The attacker didn’t appear to ever execute the files “all”, “hu”, or “x” but they all have to do with configuring IP ranges, specifically filling out the full four octets of an IP. It is possible that the “ssh2” executable uses them.

Analysis of the toolkit took some effort. The output filename “gasite.txt” translates to “found.txt”, and the “ssh2” file is a custom Romanian version of an SSH scanner packed and/or obfuscated using UPX. Once unpacked, the Romanian strings came through. Some further research by the red team tracked down the original ssh2 source code and a forum where our attacker, or someone using the same executable, was getting some help with their code.


Result: Enhanced behavioral analytics in Azure Security Center


While investigating the intrusion, we were able to pull out a number of unique TTPs for inclusion in new analytics or for improving existing ones. These uncovered opportunities such as better password spray detection and improved coverage of attacker host enumeration. We were also able to validate that existing analytics fired as expected. The goal isn’t to show a customer multiple redundant alerts for the same intrusion, but to provide insight into the full scope of an attacker’s actions. We also acknowledge that the actor behind this attack could change some aspects of their technique. The greater the detection coverage across the attack lifecycle, the more resilient we are to changes in attacker methodology. Additionally, specific techniques used by this attacker could be used by other attackers, and we’d like to make sure we catch them too.

Recommended actions


Review your alerts regularly in Azure Security Center. The customer received multiple Azure Security Center alerts for this intrusion and the malicious activity stopped soon after and has not appeared again. Azure Security Center consolidates all of your alerts in one centralized location in security alerts. This makes it easy for you to see the severity of your alerts and help you prioritize your response to them. Each alert gives you a detailed description of the incident as well as steps on how to remediate the issue. For further investigation, you can review the alerts in the “Investigation Path”, an interactive and visual way to see every entity involved in the attack.

Change your passwords regularly. While Azure Security Center alerted on the activity, the intrusion could have been prevented through good password hygiene. Of the many username and password combinations in the attacker toolkit, a good chunk of them are defaults that are created when you first install a piece of software. By changing these default passwords or going password-less, you prevent your passwords from being used against you.

Final thoughts


Our team works both ends of the cybersecurity problem. We constantly improve and refine our detections through both public and internal security research. We also are proactive in monitoring the external threat as a key input to ensuring that our detection coverage is most relevant to the attacks facing both Microsoft and its customers. If you have Linux machines in Azure, consider using Azure Security Center to help monitor them and prevent them from targeting others.


In addition to the actions you can take, Microsoft has several physical infrastructure and operational controls in place to help protect the Azure platform. We have over 3,500 cybersecurity experts at Microsoft to help protect, detect, and respond to security threats against our infrastructure and services 24/7, 365 days a year. One of those teams is our team, the Microsoft Threat Intelligence Center. To learn more about our team and how we work to protect against malicious activity in Azure, watch our latest Microsoft Mechanics video.

Thursday, 22 November 2018

How to develop secure applications using Azure Cosmos DB

Before we begin to discuss how to develop secure applications using Azure Cosmos DB, we should highlight the different layers of security that Azure Cosmos DB offers.


Azure Cosmos DB is a ring-zero Azure service, which means it is available in every new Azure data center as soon as it comes online, and it must keep all of its compliance certifications current.

The first layer of Azure provides the physical safety of data centers and continuous protection from DDoS attacks. Azure has dedicated teams that continuously monitor for security issues. All Azure services run a common security agent to collect anomalous activity. Production resources are patched regularly, and all secrets, certificates, and passwords have a defined lifetime; these certificates and secrets are rotated when that lifetime ends. All production ports in Azure Cosmos DB are scanned and penetration tested regularly. The source code is scanned for security issues and requires two approvers before being integrated into the product.


Access to Azure production is restricted. Operations staff or developers working on Azure Cosmos DB cannot access production from their own machines. All production is accessed via dedicated secure access workstations (SAWs). These workstations are watertight; there is no outside access from these machines unless it is through Azure. Engineers get just in time (JIT) approval for all production access. All access is monitored, and every activity of an engineer is observed by an escort. All production deployments require multiple approvals, including test sign-off and approvers.

Azure has a very strict access policy by Microsoft personnel. It is almost impossible for any Microsoft employee to access the production system if he or she is not authorized. I am mentioning these details because customers often ask if their data is secure in Azure and whether Azure Cosmos DB engineering teams can access their data.

Security offered by IP firewall


Using an IP firewall is the first layer of protection to secure your database. Azure Cosmos DB supports policy driven IP-based access controls for inbound firewall support. This model is like the firewall rules of a traditional database system and provides an additional level of security to the Azure Cosmos DB account. With this model, you can now configure an Azure Cosmos DB account to be accessible only from an approved set of machines and/or cloud services. Access to Azure Cosmos DB resources from these approved sets of machines and services still require the caller to present a valid authorization token.

The IP access control policy can be set in the Azure portal, or programmatically through Azure CLI, Azure PowerShell, or the REST API by updating the “ipRangeFilter” property.
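For instance, a minimal Azure CLI sketch of setting the filter on an account (the account name, resource group, and address list are placeholders) is:

# Restrict the account to an approved set of client IPs and ranges
az cosmosdb update --name mycosmosaccount --resource-group myResourceGroup --ip-range-filter "13.91.6.132,40.76.54.0/24"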

Security offered by virtual network

Virtual network is the next layer that secures Azure Cosmos DB accounts. You can configure your Azure Cosmos DB account to allow access only from a specific subnet of Azure Virtual Networks. By enabling a service endpoint for Azure Cosmos DB from a virtual network and its subnet, traffic is ensured an optimal and secure route to Azure Cosmos DB.

Once an Azure Cosmos DB account is configured with a virtual network service endpoint, it can be accessed only from the specified subnet, and the public or Internet access is removed. To learn in detail about service endpoints, refer to the Azure “Virtual Network Service Endpoints” overview article. 

You can filter network traffic to and from Azure resources in an Azure Virtual Network with a network security group. A network security group contains security rules that allow or deny inbound network traffic and outbound network traffic from several types of Azure resources. 

Access control with keys

The security layers discussed so far come as part of Azure Cosmos DB and don’t require much beyond configuring the firewall and virtual network. Now, let’s discuss what you can do as an application developer. All access to Azure Cosmos DB is controlled by two keys, a master key and a read-only key. The master key, as its name implies, can perform all operations on Azure Cosmos DB. A read-only key enables you to read the data, but no other actions are possible with that key.

Developers may worry about keys, wondering if someone could steal them and access their data. The rest of this post will discuss the architecture you should adopt to build a secure application on Azure Cosmos DB.

Architecture for building a secure application


First, no application should have access keys in the config file or in code. Keys should always be kept in Key Vault. Your application or users can have managed identities registered with Key Vault and they can get the keys at the run time.

Second, applications should not try to access the database directly. They should go through a middle-tier web service to access data. This approach provides the following benefits:

◈ Frontend applications can call the REST service to get the data. The frontend does not need to understand the different APIs of Azure Cosmos DB. Azure Cosmos DB is a multi-model database and data can be kept in different formats, but the frontend applications can be abstracted away from all the details of data access.

◈ Generally you may have many applications of different kinds (PC, phone, and web) accessing the same data. Not every application has to repeat the data retrieval logic.

◈ You can throttle any rogue application at the web service layer, if needed, before it can harm the database.

◈ You can audit all access to the database at a finer granularity.

◈ And then, of course, there are the usual benefits of a three-tier architecture: abstraction from the data layer, opportunities for caching, different teams working on different layers in parallel, scaling layers independently, ease of maintenance, and so on.

Moreover, the biggest benefit is that you only have one service which needs to access the keys. As you rotate keys, only one service will need to refresh them, rather than hundreds or thousands of clients.
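For example, rotating a key is a single operation, and only the data service then needs to pick up the new value from Key Vault; a hedged Azure CLI sketch is:

# Regenerate the secondary master key for the account (placeholder names)
az cosmosdb regenerate-key --name mycosmosaccount --resource-group myResourceGroup --key-kind secondary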


In this architecture, an application is authenticated by AAD and calls the REST web service; that application could be tens of thousands of instances of a web site. The REST API receives the caller’s identity in the call, which gives the web service an opportunity to apply business rules to the request: it can consult its custom rules to see which user has permission for which Azure Cosmos DB collection. Once the business rule layer validates the call, the data service can get the keys from Key Vault and access the data in Azure Cosmos DB.

If you are following Azure Cosmos DB closely, you may wonder about resource tokens. You should prefer the above architecture over implementing a service that dispenses resource tokens which clients then use to access Azure Cosmos DB directly.

I will assert that having a data service is a better design than letting multiple client apps access the database directly. Resource tokens are the right solution for a small application, but they have some limitations of their own: resource tokens do not work for all the APIs, only for the SQL API. You still have to create a web service that dispenses the tokens according to your business rules, so why not have that service go one step further and dispense the data itself?

Yes, you are adding a hop for the application, and it may add a few milliseconds, but this is a better alternative to resource tokens.

Azure Cosmos DB keeps all your data encrypted at rest and on the wire. With virtual networks, IP filtering, and Key Vault, you can build a very secure application with Azure Cosmos DB.

Wednesday, 21 November 2018

Understanding health criteria in Azure Monitor for VMs

Azure Monitor for virtual machines (VMs) offers a myriad of monitoring features that help you view VMs from various perspectives. A few of those perspectives are:

1. The health of the VM and its components. This helps you to monitor, diagnose, and localize issues in operating systems, core components, and services such as Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), and more. Also, it identifies performance issues with CPU, memory, disk, and network adapter on your VMs.

2. Performance trends. These trends show you the performance with CPU, memory, disk, and network adapter performance metrics over time.

3. Maps - Connection topology and metrics. This gives you a visual map that shows the processes running on your VMs and the interaction between them, as well as important events and alerts.

Azure Monitor for VMs evaluates a canned set of conditions called health criteria on your VMs in near real-time and triggers an alert if any health criterion goes into a critical or warning state (alerts are turned on by default, but you can alter the behavior). For example, if the CPU utilization health criterion is in a critical state, an alert titled “CPU utilization too high” will fire. You can manage these health alerts just like any other alert in Azure Monitor, in the Alerts pivot in the portal.

You no longer have to worry about creating alerts for each health criterion.


To understand whether a health criterion fires an alert or not, simply click the health criterion link in the per-VM experience to view its configuration.

Leveraging the APIs


While you can use Azure portal for most of the tasks related to health monitoring for VMs, there are certain properties and configurations that are only accessible through the APIs. In addition, you may want to use APIs to automate some of the tasks. Let’s take a tour of the APIs and see how you can leverage them for specific scenarios.

1. To identify which health criteria are applicable or active on a particular VM, and to get their IDs (required for other API operations), use the following GET API, shown here leveraging ARMClient at the command prompt.


"subscriptions/Subscription_Name/resourceGroups/Resource_Group_Name/providers/Microsoft.Compute/virtualMachines/VM_Name/providers/Microsoft.WorkloadMonitor/monitors?api-version=2018-08-31-preview”

2. Updates to health states are governed by two additional parameters: frequency and lookback period. Frequency is the rate at which a health criterion is evaluated, and lookback is the duration over which the criterion is evaluated. For example, suppose a health criterion evaluates the condition “CPU utilization is greater than 95 percent” with a frequency of 5 minutes and a lookback duration of 15 minutes. This means the condition is checked every 5 minutes, but the state is only updated to critical if CPU utilization remains above 95 percent for 15 minutes (3 consecutive evaluation cycles), if it wasn’t critical already.

Currently, frequency and lookback period are not exposed in the UI. However, you can get these values using the API. Shown below is an example leveraging ARM client on command prompt.


"subscriptions/Subscription_Name/resourceGroups/Resource_Group_Name/providers/Microsoft.Compute/virtualMachines/VM_Name/providers/Microsoft.WorkloadMonitor/monitors/Monitor_ID?api-version=2018-08-31-preview”

Note: Modifying the frequency and lookback period isn't available today.

3. Enabling or disabling an alert on a health criterion. For example, if alerting is enabled on a health criterion and you would like to disable it, the following expression could be used.


armclient patch 

"subscriptions/Subscription_Name/resourceGroups/Resource_Group_Name/providers/Microsoft.Compute/virtualMachines/VM_Name/providers/Microsoft.WorkloadMonitor/monitors/Monitor_ID?api-version=2018-08-31-preview" "{'properties':{'alertGeneration':'Disabled'}}"

4. Updating the threshold of a Health Criteria.


armclient patch

"subscriptions/Subscription_Name/resourceGroups/Resource_Group_Name/providers/Microsoft.Compute/virtualMachines/VM_Name/providers/Microsoft.WorkloadMonitor/monitors/Monitor_ID?api-version=2018-08-31-preview" "{'properties':{'Threshold':'Desired_Value'}}"

After making the desired edits you can use the GET API highlighted in point No. 2 above to understand whether the configuration updated. You can even validate it from the portal.

5. To associate action group(s) with any of a VM’s health criteria.


Create a file titled “file_name” and update its extension to “.json”.

Its contents should look like this:

{
    "ActionGroupResourceIds": ["Action_Group_Resource_ID"]
}

Use the following PUT API to associate action group to a VM (for health alerts triggered by Azure Monitor for VMs).

armclient PUT

"subscriptions/Subscription_Name/resourceGroups/Resource_Group_Name/providers/Microsoft.Compute/virtualMachines/VM_Name/providers/Microsoft.WorkloadMonitor/notificationSettings?api-version=2018-08-31-preview"  @file_name.json (JSON Script)

To check whether the setting updated correctly, use the following GET API.

armclient GET

"subscriptions/Subscription_Name/resourceGroups/Resource_Group_Name/providers/Microsoft.Compute/virtualMachines/VM_Name/providers/Microsoft.WorkloadMonitor/notificationSettings?api-version=2018-08-31-preview"

To explore the API surface further, look at the swagger published on GitHub.