Sunday 31 March 2019

Announcing Azure Monitor AIOps Alerts with Dynamic Thresholds

We are happy to announce that Metric Alerts with Dynamic Thresholds is now available in public preview. Dynamic Thresholds are a significant enhancement to Azure Monitor Metric Alerts. With Dynamic Thresholds you no longer need to manually identify and set thresholds for alerts. The alert rule leverages advanced machine learning (ML) capabilities to learn metrics' historical behavior, while identifying patterns and anomalies that indicate possible service issues.

Metric Alerts with Dynamic Thresholds are supported through a simple Azure portal experience, and also support Azure workload operations at scale by allowing users to configure alert rules through the Azure Resource Manager (ARM) API in a fully automated manner.

Why and when should I apply Dynamic Thresholds to my metrics alerts?


Smart metric pattern recognition – A big pain point with setting static thresholds is that you need to identify patterns on your own and create an alert rule for each pattern. With Dynamic Thresholds, we use a unique ML technology to identify the patterns and come up with a single alert rule that has the right thresholds and accounts for seasonality patterns such as hourly, daily, or weekly. Take the example of HTTP request rate: there is definite seasonality in such a metric. Instead of setting two or more different alert rules for weekdays and weekends, you can now have Azure Monitor analyze your data and come up with a single alert rule with Dynamic Thresholds that adjusts between weekdays and weekends.
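
The idea behind a seasonality-aware threshold can be illustrated with a toy baseline calculation. This is not the actual Azure Monitor ML algorithm (which is far more sophisticated); it is a minimal sketch, assuming hour-of-week bucketing and mean/standard-deviation bounds, with illustrative function names:

```python
from collections import defaultdict
from statistics import mean, stdev

def seasonal_thresholds(samples, sensitivity=2.0):
    """Toy dynamic-threshold baseline (illustrative only).

    samples: (hour_of_week, value) pairs of historical metric data,
    where hour_of_week is 0..167. Returns per-bucket (lower, upper)
    bounds derived from the bucket's mean and standard deviation.
    """
    buckets = defaultdict(list)
    for hour_of_week, value in samples:
        buckets[hour_of_week].append(value)
    bounds = {}
    for hour, values in buckets.items():
        m = mean(values)
        s = stdev(values) if len(values) > 1 else 0.0
        bounds[hour] = (m - sensitivity * s, m + sensitivity * s)
    return bounds

# Weekday hours see ~100 requests/sec, weekend hours ~20: the learned
# bounds differ per bucket instead of one static threshold for both.
history = [(9, v) for v in (98, 101, 103, 99)] + [(129, v) for v in (19, 22, 20, 21)]
bounds = seasonal_thresholds(history)
```

A single rule built this way follows the weekday/weekend shape of the metric, which is exactly what a pair of static thresholds cannot do.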

Scalable alerting – Wouldn’t it be great if you could automatically apply an alert rule on CPU usage to any virtual machine (VM) or application that you create? With Dynamic Thresholds, you can create a single alert rule that is applied automatically to any resource you create. You don’t need to provide thresholds: the alert rule identifies the baseline for each resource and defines the thresholds automatically for you. This gives you a scalable approach that saves a significant amount of time on the management and creation of alert rules.

Domain knowledge – Setting a threshold often requires a lot of domain knowledge. Dynamic Thresholds eliminates that need with machine learning algorithms. Further, we have optimized the algorithms for common use cases such as CPU usage for a VM or request duration for an application, so you can have full confidence that the alert will capture anomalies while still reducing noise.

Intuitive configuration – Dynamic Thresholds allow setting up metric alerts rules using high-level concepts, alleviating the need to have extensive domain knowledge about the metric. This is expressed by only requiring users to select the sensitivity for deviations (low, medium, high) and boundaries (lower, higher, or both thresholds) based on the business impact of the alert in the UI or ARM API.

Dynamic Thresholds also allow you to configure the minimum number of deviations required within a certain time window for the system to raise an alert; the default is four deviations in a 20-minute window. You can configure the failing periods and time window to choose what you would like to be alerted on.
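
The failing-periods rule can be sketched as a simple sliding-window check. The defaults below mirror the four-deviations-in-20-minutes default mentioned above (assuming 5-minute evaluation periods); the function name is illustrative:

```python
def should_alert(deviations, window=4, min_failing=4):
    """deviations: chronological list of booleans, one per evaluation
    period (True = the metric fell outside the dynamic thresholds).
    Alert when at least `min_failing` of the last `window` periods
    deviated, e.g. 4 deviations within a 4-period (20-minute) window.
    """
    recent = deviations[-window:]
    return sum(recent) >= min_failing

# One transient spike does not alert; a sustained deviation does.
assert not should_alert([False, True, False, False])
assert should_alert([True, True, True, True])
```

Loosening the rule, e.g. 4 deviations anywhere in a 6-period window, trades faster detection for more noise; tightening it does the opposite.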

Metric Alerts with Dynamic Thresholds are currently available for free during the public preview.

Saturday 30 March 2019

Umanis lifts the hood on their AI implementation methodology

Given the ever-increasing speed of change in technology, along with the variety of sectors and industries Umanis works in, they focused on building a methodology that could be standardized across AI implementations from project to project. This methodology follows an iterative cycle: assimilate, learn, and act, with the goal of adding value with each iteration.

The Azure platform acts as an enabler of this methodology.

In most data and artificial intelligence (AI) projects implemented at Umanis, several trends are gaining momentum and are likely to amplify in 2019:

◈ More unstructured, big, and real-time data.
◈ An increased need for fast and reliable AI solutions to scale up.
◈ Increasing expectations from customers.

In this blog post, we will explain how you can address these kinds of projects, and how Umanis maps their approach to the Azure offering to deliver solutions that are easy to use, operationalize, and maintain.

The 3 phases of the AI implementation methodology


1. Assimilate


In this initial phase, you can be hit by anything. From the good to the big, bad, and ugly: databases, text, logs, telemetry, images, videos, social networks, and more are flowing in. The challenge is to make sense of everything, so you can serve the next phase (Learn) successfully. By assimilating, we mean:

◈ Ingest: The performance of an algorithm depends on the quality of the data. We consider “ingesting” to be checking the quality of the data, the quality of the transmission, and building the pipelines to feed the subsequent parts.

◈ Store: Since the data will be used by highly demanding algorithms (I/O, processing power) that will mix data from various sources, you need to store the data in the most efficient way for future access by algorithms or data visualizations.

◈ Structure: Finally, you’ll need to prepare the data for consumption by algorithms and execute as many transformation, preprocessing, and cleaning tasks as you can to speed up the data scientists’ activities and algorithms.

2. Learn


This is the heart of any AI project: Creating, deploying, and managing models.

◈ Create: Data scientists use available data to design algorithms, train their models, and compare the results. There are two key points to this:

1. Don’t make them wait for results! Data scientists are rare resources and their time is precious.
2. Allow any language or combination of languages. From that perspective, Azure Databricks is a great solution, as it addresses this natively by allowing different languages to be used within a single notebook.

◈ Use: Once algorithms are deployed as APIs and consumed, the need for parallelization goes up. Meeting SLAs and testing the performance of the sending, processing, and receiving pipeline are crucial.

◈ Refine: Refining the quality of algorithms ensures reliable results over time. The easy part of this activity is automatic re-training on a regular basis. The less obvious one is what we call the “human in the loop” activity. In short, a Power BI report shows the results of predictions, a human can quickly re-classify them as needed, and the machine uses this human expertise to get better at its task.
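
A minimal sketch of that feedback loop (the function and labels here are illustrative, not a real API): any prediction the reviewer corrected overrides the machine’s label before the next retraining run.

```python
def human_in_the_loop(predictions, corrections):
    """predictions: {item_id: predicted_label} from the current model.
    corrections: {item_id: label} re-classified by a human reviewer
    (e.g. via a Power BI report). Returns labeled examples to feed
    back into the next automatic retraining run.
    """
    training_examples = []
    for item_id, predicted in predictions.items():
        # A human correction wins over the machine's own prediction.
        label = corrections.get(item_id, predicted)
        training_examples.append((item_id, label))
    return training_examples
```

The key design point is that human effort is only spent on the items the reviewer chooses to fix; everything else is recycled as-is.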

3. Act


All of the above phases are useless unless you actually make good use of the algorithm’s added value.

◈ Inform: Any mistake in code, misunderstanding in requirements, or bug can be devastating as first user impressions are crucial. Therefore, instead of a “big bang” of visualizations, start very small, iterate very quickly, and bring a few key users on board to secure adoption before widening the audience.

◈ Connect: Systems that use the information from algorithms need to be plugged in. This is called RPA, IPA, or automation in general, and the architectures can vary greatly between projects. Don’t overlook the need for human monitoring of this activity: consider the impact of the worst possible answer from an algorithm, and you will get a good feel for the need for human supervision.

◈ Dialog: When dealing with human interaction, so much comes into play that to be successful, the scope of the interaction needs to be narrowed down to the actions that really add value and are not trivial. (This is not easily possible via classic interfaces.)

Thursday 28 March 2019

Azure Stack IaaS – part six

Pay for what you use


In the virtualization days I used to pad all my requests for virtual machines (VM) to get the largest size possible. Since decisions and requests took time, I would ask for more than I required just so I wouldn’t have delays if I needed more capacity. This resulted in a lot of waste and a term I heard often: VM sprawl.

The behavior is different with Infrastructure-as-a-Service (IaaS) VMs in the cloud. A fundamental quality of a cloud is that it provides an elastic pool of resources to use when needed. Since you only pay for what you use, you don’t need to over-provision. Instead, you can optimize capacity based on demand. Let me show you some of the ways you can do this for your IaaS VMs running in Azure and Azure Stack.

Resize


It’s hard to know exactly how big your VM should be. There are so many dimensions to consider, such as CPU, memory, disks, and network. Instead of trying to predict what your VM needs for the next year or even month, why not take a guess, let it run, and then adjust the size once you have some historical data?

Azure and Azure Stack make it easy for you to resize your VM from the portal. Pick the new size and you’re done. No need to call the infrastructure team and beg for more capacity. No need to overspend on a huge VM that isn’t even used.

Scale out


Another dimension of scale is to make multiple copies of identical VMs that work together as a unit. When you need more capacity, create additional VMs. When you need less, remove some of the VMs. Azure has a feature for this called Virtual Machine Scale Sets (VMSS), which is also available in Azure Stack. You can create a VMSS with a wizard. Fill out the details of how the VM should be configured, including which extensions to use and which software to load onto your VM. Azure takes care of wiring the network, placing the VMs behind a load balancer, creating the VMs, and running the in-guest configuration.

Once you have created the VMSS, you can scale it up or down. Azure automates everything for you. You control it like IaaS, but scale it like PaaS. It was never this easy in the virtualization days.
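
Conceptually, a scale set is just a desired count of identical instances that you grow or shrink as a unit. Here is a toy model of that idea (purely illustrative; this is not the VMSS API):

```python
class ScaleSet:
    """Toy model of a scale set: identical VM instances managed as one
    unit, scaled out and in by changing the instance count."""

    def __init__(self, initial=2):
        self.instances = [f"vm-{i}" for i in range(initial)]

    def scale_out(self, count=1):
        # Add identical instances when demand grows.
        start = len(self.instances)
        self.instances += [f"vm-{i}" for i in range(start, start + count)]

    def scale_in(self, count=1):
        # Remove instances when demand drops; never go below zero.
        del self.instances[max(0, len(self.instances) - count):]

vmss = ScaleSet(initial=2)
vmss.scale_out(3)   # demand spike: 2 -> 5 instances
vmss.scale_in(2)    # demand drops: 5 -> 3 instances
```

In the real service, the platform handles everything the count change implies, including load balancer membership and in-guest configuration of new instances.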

Add, remove, and resize disk


Just like virtual machines in the cloud, storage is pay per use. Both Azure and Azure Stack make it easy for you to manage the disks running on that storage so you only need to use what your application requires. Adding, removing, and resizing data disks is a self-service action so you can right-size your VM’s storage based on your current needs.

Usage based pricing


Just like Azure, Azure Stack prices are based on how much you use. Since you take on the hardware and operating costs, Azure Stack service fees are typically lower than Azure prices. Your Azure Stack usage will show up as line items in your Azure bill. If you run your Azure Stack in a network which is disconnected from the Internet, Azure Stack offers a yearly capacity model.

Pay-per-use really benefits Azure Stack customers. For example, one organization runs a machine learning model once a month. It takes about one week for the computation. During this time, they use all the capacity of their Azure Stack, but for the other three weeks of the month, they run light, temporary workloads on the system. A later blog will cover how automation and infrastructure-as-code allows you to quickly set this up and tear it down, allowing you to just use what the app needs in the time window it’s needed. Right-sizing and pay-per-use saves you a lot of money.

Wednesday 27 March 2019

Clean up files by built-in delete activity in Azure Data Factory

Azure Data Factory (ADF) is a fully-managed data integration service in Azure that allows you to iteratively build, orchestrate, and monitor your Extract Transform Load (ETL) workflows. During the data integration process, you will need to periodically clean up files from the on-premises or cloud storage server when the files become out of date. For example, you may have a staging area or landing zone, which is an intermediate storage area used for data processing during your ETL process. The data staging area sits between the data source stores and the data destination store. Given that the data in staging areas is transient by nature, you need to periodically clean up the data in the staging area after the ETL process has completed.

We are excited to share the ADF built-in delete activity, which can be part of your ETL workflow to delete undesired files without writing code. You can use ADF to delete folders or files from Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, File System, FTP Server, SFTP Server, and Amazon S3.

You can find ADF delete activity under the “Move & Transform” section from the ADF UI to get started.

1. You can either choose to delete files or delete the entire folder. The names of the deleted files and folders can be logged in a CSV file.

2. The file or folder name to be deleted can be parameterized, so that you have the flexibility to control the behavior of delete activity in your data integration flow.

3. You can delete expired files only rather than deleting all the files in one folder. For example, you may want to only delete the files which were last modified more than 30 days ago.

4. You can start from ADF template gallery to quickly deploy common use cases involving delete activity.
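Outside of ADF, the retention rule from step 3 boils down to a last-modified-time check. A stand-alone sketch of the same logic using only the Python standard library (the function name and logging shape are illustrative, not part of ADF):

```python
import os
import time

def delete_expired(folder, max_age_days=30, log=None):
    """Delete files in `folder` whose last-modified time is older than
    max_age_days, recording each deleted name (similar in spirit to
    the CSV log the delete activity can produce)."""
    cutoff = time.time() - max_age_days * 24 * 3600
    deleted = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            deleted.append(name)
            if log is not None:
                log.write(name + "\n")
    return deleted
```

The delete activity does the same style of filtering for you across the supported stores, without you having to host or schedule any code.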

Tuesday 26 March 2019

Building serverless microservices in Azure - sample architecture

Distributed applications take full advantage of living in the cloud to run globally, avoid bottlenecks, and always be available for users worldwide. This not only requires the right infrastructure to deploy into, but also support for the decoupled architecture that an application with these characteristics requires, versus the traditional monolithic approach. This is why most cloud native applications use a microservices architecture that helps achieve this at global scale.

The benefits of using a microservices architecture are maximized when those applications are built in the cloud, with a wide range of managed services that make it easier to materialize the microservices promise. With those services managing infrastructure and scaling for you, and improving critical processes like deploying and monitoring those solutions, you can maximize the amount of value delivered per cycle.

There are different patterns you might want to explore and each of them fits a specific scenario. Today we’re focusing on how building serverless microservices is a great fit for event-driven scenarios, and how you can use the Azure Serverless platform.

Building serverless, event-driven microservices


Taking an event-driven approach to building microservices-based applications, when it fits the scenario and the problem to solve, can help mitigate some problems of a more traditional approach:

◈ Scaling compute resources: With the automated and flexible scaling based on actual demand that’s provided by a serverless platform, you don’t need to worry about how the scaling happens or how to handle it on the code of your application.

◈ Operations dependency: When deploying a microservices-based solution, there is usually a strong dependency on the operations teams for allocating infrastructure resources for deployment and execution, both initially and with each incremental change. Taking a serverless approach by using fully managed services removes that necessity, since all the underlying infrastructure is managed for you by the platform.

◈ Costs for hosting: With a traditional deployment, the cost is determined by how much you have to pay for each hosting node, and usually implies an over-allocation of resources, resulting in increased hosting expenditure. With an event-driven approach, using services with consumption-based pricing models means the price is determined by the number of requests or operations, and the costs for hosting are better adjusted to the real usage of the solution (and are usually lower).

◈ Services discovery: Managing services integration, communication, and interactions are common problems on distributed applications. Since each service performs a very specific action according to the single responsibility principle, more often than not a service will need to communicate with others to achieve its goal. The real challenge is keeping these connections as simple as possible and the services totally decoupled. With an event-driven approach, you can take advantage of both of the following:

     ◈ A centralized, unified way of communicating services via events using a pub-sub model, fully managed with Azure Event Grid.
     ◈ An integrated programming model based on triggers to automatically respond to those events and bindings to connect and integrate different services seamlessly, such as the experience offered by Azure Functions and Logic Apps for event-driven compute.
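
The pub-sub decoupling described above can be reduced to a minimal in-process sketch (Event Grid itself is a managed, distributed service; the classes and topic names here are illustrative): publishers fire events by topic and never know who, if anyone, handles them.

```python
from collections import defaultdict

class EventBus:
    """Minimal pub-sub sketch: services publish events by topic and
    subscribed handlers react, keeping both sides fully decoupled."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        # Fire-and-forget: the publisher does not wait on results.
        for handler in self.handlers[topic]:
            handler(payload)

bus = EventBus()
processed = []
# Two independent services react to the same event without the
# publisher knowing either of them exists.
bus.subscribe("ride/requested", lambda e: processed.append(("dispatch", e)))
bus.subscribe("ride/requested", lambda e: processed.append(("billing", e)))
bus.publish("ride/requested", {"rider": "42"})
```

Swapping a handler out, or adding a third, requires no change to the publishing service, which is the property that keeps microservices independently deployable.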

Sample architecture for serverless microservices


In the sample architecture for a rideshare application for a fictitious company named Relecloud, you can learn more about the architectural design of a microservices-based application. The sample uses fully managed services from the Azure Serverless platform to build the main building blocks of microservices solutions such as:

1. API Gateway: Using API Management to expose the endpoints of the backend services, so the client application can consume them securely. This also helps with decoupling the client side from the backend, since you can easily manage changes on where the services are actually hosted from the gateway without affecting the client application.

2. Entry points: The public facing APIs that the client application will be using, powered by Azure Functions responding to HTTP requests.

3. Workflow orchestrator: Middle-tier service to interconnect the public facing APIs with the actual backend services that are tied to the data stores and other critical components, orchestrating the work of these services based on actions on the client side.

4. Async queue: Messaging service to handle service intercommunication and pass information and data between the different services, represented by Azure Event Grid. By using an event-driven approach, we’re also favoring service decoupling, since the information exchange takes a fire-and-forget approach, with services pushing events and handlers subscribing to those events for processing.

5. Backend services: The services that are directly operating with the data layer and other components of the solution, isolated from the rest and easily replaceable if needed (e.g. changing the type of database used to store data) without affecting the rest of the application and interactions.

Sunday 24 March 2019

Expanded Jobs functionality in Azure IoT Central

Since announcing the release of our Jobs feature during the Azure IoT Central general availability launch, we have been working to improve your device management workflow through additional Jobs functionality. Today, you can copy an existing job you’ve created, save a job to continue working on later, stop or resume a running job, and download a job details report once your job has completed running. These additions make managing your devices at scale much easier.

In order to copy a job you’ve created, simply select a job from your main jobs list and select “Copy”. This will open a copy of the job where you can optionally update any part of the job configuration. If any changes have been made to your device set since its creation, your copied job will reflect those changes for you to edit.

While you are editing your job, you now have the option to save the job to continue working on later by selecting “Save”. This saved job will appear on your main jobs list with a status of “Saved” and you can open it again at any time to continue editing.

Once you have chosen to run your job, you can select the “Stop” button to stop the job from executing any further. You can open a stopped job from your list and select “Run” again at any time you’d like.

Whether your job has been stopped or has completed, you can select “Download Device Report” near your device list to download a .csv file that lists the device ID, the time the job was completed or stopped, the status of the device, and the error message (if applicable). This can be used to troubleshoot devices or as a sorting tool.
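
Because the report is plain CSV, it is easy to post-process. A hedged sketch (the column names below are assumptions; check them against the actual header of your downloaded file) that pulls out the devices that did not complete:

```python
import csv
import io

def failed_devices(report_text):
    """Parse a job device report and return (device_id, error) for each
    device that did not complete. Column names are illustrative."""
    reader = csv.DictReader(io.StringIO(report_text))
    return [(row["deviceId"], row["error"])
            for row in reader if row["status"] != "Completed"]

# A made-up two-row report in the same spirit as the real download.
report = """deviceId,timestamp,status,error
thermo-01,2019-03-24T10:00:00Z,Completed,
thermo-02,2019-03-24T10:00:05Z,Failed,Twin update rejected
"""
failures = failed_devices(report)
```

Filtering like this turns a long device list into a short troubleshooting queue.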

We are continually working on improving your device management experience to make managing devices at scale easier than ever. Please let us know if you have any suggestions for device management or Jobs functionality you would find useful in your workflow.

Saturday 23 March 2019

Azure Stack IaaS – part five

Self-service is core to Infrastructure-as-a-Service (IaaS). Back in the virtualization days, you had to wait for someone to create a VLAN for you, carve out a LUN, and find space on a host. If Microsoft Azure ran that way, we would have needed to hire more and more admins as our cloud business grew.

Do it yourself


A different approach was required, which is why IaaS is important. Azure's IaaS gives the owner of the subscription everything they need to create virtual machines (VMs) and other resources on their own, without involving an administrator.

Let me give you a few examples that show Azure and Azure Stack self-service management of VMs.

Deployment


Creating a VM is as simple as going through a wizard. You can create the VM by specifying everything needed for it in the “Create virtual machine” blade. You can include the operating system image or marketplace template, the size (memory, CPUs, number of disks, and NICs), high availability, storage, networking, monitoring, and even in-guest configuration.

Daily operations


That’s great for deployment, but what about later down the road when you need to quickly change the VM? Azure and Azure Stack have you covered there too. The settings section of the VM allows you to make changes to networking, disks, size (CPUs and memory), in-guest configuration extensions, high availability, and more.

One thing that was always a pain in the virtualization days was getting the right firewall ports open. Now you can manage this on your own without waiting on the networking team. In Azure and Azure Stack firewall rules are called network security groups. This can all be configured in a self-service manner as shown below.

Disk and image self-service is important too. In the virtualization days this was also a big pain point: I had to hand these to my admin to get them into the system for usage. Fortunately, storage is self-service in Azure and Azure Stack. Your IaaS subscription includes access to both storage accounts and managed disks, from which you can upload and download your disks and images.

Managed disks also give you the option to create and export snapshots.

Other resources a VM owner can manage include load balancer configuration, DNS, VPN gateways, subnets, attach/detach disks, scale up/down, scale in/out, and so many other things it is astounding.

Support and troubleshooting


When there is a problem, no one wants to wait for someone else to help. The more tools you have to correct the situation the better. While operating one of the largest public clouds, the Azure IaaS team has learned what the top issues are facing customers and their support needs. To empower VM owners to solve these issues themselves, they have created a number of self-service support and troubleshooting features. Perhaps the most widely used is the Reset Password feature. Why wasn’t this feature around in the virtualization days?

I need to mention a setting that has prevented me from creating a support problem because of my absentmindedness. It is the Lock feature. A lock can prevent any change or deletion on a VM or any other resource.

Other useful troubleshooting and support features include: re-deploying your VM to another host if you suspect it is having problems on its current host; checking boot diagnostics to see the state of the VM before it fully boots and is ready for connections; and reviewing performance diagnostics. As we learn and build these features in Azure, they eventually find their way to Azure Stack so that your admins don’t have to work so hard to support you.

Happy infrastructure admins


When you can take care of yourself, your admins can manage the underlying infrastructure without being interrupted by you. This means they can work on the things important to them and you can focus on what is important to you.

Tuesday 19 March 2019

Azure Stack IaaS – part four

Protect your stuff


In this post, we’ll cover the concepts and best practices to protect your IaaS virtual machines (VMs) on Azure Stack.

Protecting your IaaS virtual machine based applications


Azure Stack is an extension of Azure that lets you deliver IaaS Azure services from your organization’s datacenter. Consuming IaaS services from Azure Stack requires a modern approach to business continuity and disaster recovery (BC/DR). If you’re just starting your journey with Azure and Azure Stack, make sure to work through a comprehensive BC/DR strategy so your organization understands the immediate and long-term impact of modernizing applications in the context of cloud. If you already have Azure Stack, keep in mind that each application must have a well-articulated BC/DR plan calling out the resiliency, reliability, and availability requirements that meet the business needs of your organization.

What Azure Stack is and what it isn’t


Since launching Azure Stack at Ignite 2017, we’ve received feedback from many customers on the challenges they face within their organization evangelizing Azure Stack to their end customers. The main concerns are the stark differences from traditional virtualization. In the context of modernizing BC/DR practices, three misconceptions stand out:

Azure Stack is just another virtualization platform

Azure Stack is delivered as an appliance on prescriptive hardware co-engineered with our integrated system partners. Your focus must be on the services delivered by Azure Stack and the applications your customers will deploy on the system. You are responsible for working with your applications teams to define how they will achieve high availability, backup recovery, disaster recovery, and monitoring in the context of modern IaaS, separate from infrastructure running the services.

I should be able to use the same virtualization protection schemes with Azure Stack

Azure Stack is delivered as a sealed system with multiple layers of security to protect the infrastructure. Constraints include:

◈ Azure Stack operators only have constrained administrative access to the system. Elevated access to the system is only possible through Microsoft support.
◈ Scale unit nodes and infrastructure services have code integrity enabled.
◈ At the networking layer, the traffic flow defined in the switches is locked down at deployment time using access control lists.

Given these constraints, there is no opportunity to install backup/replication agents on the scale-unit nodes, grant access to the nodes from an external device for replication and snapshotting, or physically attach external storage devices for storage level replication to another site.

Another ask from customers is the possibility of deploying one Azure Stack scale-unit across multiple datacenters or sites. Azure Stack doesn’t support a stretched or multi-site topology for scale-units. In a stretched deployment, the expectation is that nodes in one site can go offline with the remaining nodes in the secondary site available to continue running applications. From an availability perspective, Azure Stack only supports N-1 fault tolerance, so losing half of the node count will take the system offline. In addition, based on how scale-units are configured, Azure Stack only supports fault domains at a node level. There is no concept of a site within the scale-unit.

I am not deploying modern applications in Azure, none of this applies to me

Azure Stack is designed to offer cloud services in your datacenter. There is a clear separation between the operation of the infrastructure and how IaaS VM-based applications are delivered. Even if you’re not planning to deploy any applications to Azure, deploying to Azure Stack is not “business as usual” and will require thinking through the BC/DR implications throughout the entire lifecycle of your application.

Define your level of risk tolerance


With the understanding that Azure Stack requires a different approach to BC/DR for your IaaS VM-based applications, let’s look at the implications of having one or more Azure Stack systems, the physical and logical constructs in Azure Stack, and the recovery objectives you and your application owners need to focus on.

How far apart will you deploy Azure Stack systems

Let’s start by defining the impact radius you want to protect against in the event of a disaster. This can be as small as a rack in a co-location facility or as large as an entire region of a country or continent. Within the impact radius, you can choose to deploy one or more Azure Stack systems. If the region is large enough, you may even have multiple datacenters close together, each with Azure Stack systems. The key takeaway is that if the site goes offline due to a disaster or catastrophic event, there is no amount of redundancy that will keep the Azure Stack systems online. If your intent is to survive the loss of an entire site, then you must consider deploying Azure Stack systems into multiple geographic locations, separated by enough distance that a disaster in one location does not impact any other location.


Help your application owners understand the physical and logical layers of Azure Stack

Next it’s important to understand the physical and logical layers that come together in an Azure Stack environment. The Azure Stack system running all the foundational services and your applications physically resides within a rack in a datacenter. Each deployment of Azure Stack is a separate instance, or cloud, with its own portal. The diagram below shows the physical and logical layering that’s common for all Azure Stack systems deployed today and for the foreseeable future.


Define the recovery time objectives for each application with your application owners

Now that you have a clear understanding of your risk tolerance if a system goes offline, you need to decide the protection schemes for your applications. You need to make sure you can quickly recover applications and data on a healthy system. We’re talking about making sure your applications are designed to be highly available within a scale-unit using availability sets to protect against hardware failures. In addition, you should also consider the possibility of an application going offline due to corruption or accidental deletion. Recovery can be as simple as scaling-out an application or restoring from a backup.

To survive an outage of the entire system, you’ll need to identify the availability requirements of each application, where the application can run in the event of an outage, and what tools you need to introduce to enable recovery. If your application can run temporarily in Azure, you can use services like Azure Site Recovery and Azure Backup to protect your application. Another option is to have additional Azure Stack systems fully deployed, operational, and ready to run applications. The time required to get the application running on a secondary system is the recovery time objective (RTO). This objective is established between you and the application owners. Some application owners will only tolerate minimal downtime while others are ok with multiple days of downtime if the data is protected in a separate location. Achieving this RTO will differ from one application to another. The diagram below summarizes the common protection schemes used at the VM or application level.
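As a rough sketch of how an agreed RTO might translate into a protection scheme, consider the mapping below. The tier thresholds and scheme names are assumptions for illustration, not Azure Stack terminology:

```python
from datetime import timedelta

def protection_scheme(rto: timedelta) -> str:
    """Map an agreed recovery time objective to a candidate protection
    scheme (thresholds are illustrative, agreed per application)."""
    if rto <= timedelta(hours=1):
        return "warm standby on a second Azure Stack system"
    if rto <= timedelta(hours=24):
        return "replicate to Azure with Azure Site Recovery"
    return "backup/restore to a secondary system or Azure"

# An owner tolerating 30 minutes of downtime needs a very different
# scheme than one who accepts three days:
print(protection_scheme(timedelta(minutes=30)))
print(protection_scheme(timedelta(days=3)))
```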

In the event of a disaster, there will be no time to request an on-demand deployment of Azure Stack to a secondary location. If you don’t have a deployed system in a secondary location, you will need to order one from your hardware partner. The time required to deliver, install, and deploy the system is measured in weeks.


Establish the offerings for application and data protection


Now that you know what you need to protect on Azure Stack and your risk tolerance for each application, let’s review some specific patterns used with IaaS VMs.

Data protection

Applications deployed into IaaS VMs can be protected at the guest OS level using backup agents. Data can be restored to the same IaaS VM, to a new VM on the same system, or a different system in the event of a disaster. Backup agents support multiple data sources in an IaaS VM such as:

◈ Disk: This requires block-level backup of one, some, or all disks exposed to the guest OS. It protects the entire disk and captures any changes at the block level.
◈ File or folder: This requires file system-level backup of specific files and folders on one, some, or all volumes attached to the guest OS.
◈ OS state: This requires backup targeted at the OS state.
◈ Application: This requires a backup coordinated with the application installed in the guest OS. Application-aware backups typically include quiescing input and output in the guest for application consistency (for example, Volume Shadow Copy Service (VSS) in the Windows OS).

Application data replication

Another option is to use replication at the guest OS level or at the application level to make data available in a different system. The replication isn’t offloaded to the underlying infrastructure; it’s handled at the guest OS level or above. For example, applications like SQL Server support asynchronous replication through a distributed availability group.

High availability

For high availability, you need to start by understanding the data persistence model of your applications:

◈ Stateful workloads write data to one or more repositories. It’s necessary to understand which parts of the architecture need point-in-time data protection and high availability to recover from a catastrophic event.

◈ Stateless workloads on the other hand don’t contain data that needs to be protected. These workloads typically support on-demand scale-up and scale-down and can be deployed in multiple locations in a scale-out topology behind a load balancer.

To support application-level high availability within an Azure Stack system, multiple virtual machines are grouped into an availability set. Applications deployed in an availability set sit behind a load balancer that distributes incoming traffic among the virtual machines.
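The two ideas at work here, spreading VMs across fault domains and spreading traffic across those VMs, can be sketched as follows. The domain count and round-robin policies are illustrative simplifications, not Azure Stack’s actual placement or balancing algorithms:

```python
import itertools

FAULT_DOMAINS = 3  # assumed domain count for illustration

def place_in_availability_set(vm_names):
    """Assign each VM to a fault domain round-robin, so one hardware
    fault takes out only a fraction of the set."""
    return {vm: i % FAULT_DOMAINS for i, vm in enumerate(vm_names)}

def load_balance(vms):
    """Endless round-robin over the VMs, standing in for the load
    balancer in front of the availability set."""
    return itertools.cycle(vms)

placement = place_in_availability_set(["web-0", "web-1", "web-2", "web-3"])
print(placement)  # web-0 and web-3 share fault domain 0

lb = load_balance(list(placement))
print([next(lb) for _ in range(5)])
```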

Across Azure Stack systems, a similar approach is possible, with the following differences: the load balancer must be external to both systems or in Azure (for example, Azure Traffic Manager), and availability sets do not span independent Azure Stack systems.

Saturday 16 March 2019

Azure Stack IaaS – part three

Foundation of Azure Stack IaaS


Remember back in the virtualization days when you had to pick a host for your virtual machine? Some of my business units could tell the make and model of the hardware from the naming convention. Using this knowledge, they’d fill up the better gear first, leaving the teams that didn’t know better with the oldest hosts.

Clouds take a different approach. Instead of hosts, VMs are placed into a pool of capacity. The physical infrastructure is abstracted away. The compute, storage, and networking resources consumed by the VM are defined through software.

Azure Stack is an instance of the Azure cloud that you can run in your own datacenter. Microsoft has taken the experience and technology from running one of the largest clouds in the world to design a solution you can host in your facility. This forms the foundation of Azure Stack’s infrastructure-as-a-service (IaaS).

Let’s explore some of the characteristics of the Azure Stack infrastructure that allows you to run cloud-native VMs directly in your facility.

Cloud inspired hardware


Microsoft employees can’t just purchase their favorite server and rack it into an Azure datacenter. The only servers that enter an Azure datacenter have been specifically built for Azure. Not only are the servers built for Azure, so are the networking devices, the racks, and the cabling. This extreme standardization allows the Azure team to operate an Azure datacenter with just a handful of employees. Because all the servers are standardized and can be uniformly operated and automated, adding additional capacity to a datacenter doesn’t require hiring more employees to operate them.

Another advantage of standardizing hardware configurations is that standardization leads to expected, repeatable results – not only for Microsoft and Azure, but for its customers. The hardware integration has been validated and is a known recipe. Servers, storage, networking, cabling layout, and more are all well-known, and based on these recipes, the ordering, delivery, and integration of new hardware components, as well as servicing and eventual retirement, are repeatable and scalable. The full end-to-end validation of these configurations is done once, with quick checks in place when the capacity is delivered and installed.

These principles are applied to Azure Stack solutions as well. The configurations, their capabilities, and validation are all well-known, and the result is a repeatable and supportable product. Microsoft, its partners, and most importantly the end customer benefit. While an Azure Stack customer is limited to the defined partner solutions, these have been built with reasonable flexibility so the customer can choose the specific capabilities or capacities required. Please note, there is one exception – the Azure Stack Development Kit (ASDK) allows you to install Azure Stack on any hardware that meets the hardware requirements. The ASDK is for evaluation purposes and not supported as a production environment.

Azure Stack hardware partnerships


Microsoft has partnered and co-engineered solutions with a variety of hardware partners, or OEMs. The benefit is that Azure Stack can meet you where you are, building on your existing relationships. These relationships may be based on existing hardware purchasing agreements, geographic location, or support capabilities. Keeping in mind the principles of operating a solution in a well-defined manner, Microsoft has set minimum requirements for Azure Stack hardware solutions. Each of our partners can then choose from their portfolio the components, servers, and network switches that best meet your needs. This creates a well-defined variety that continues to be supportable and delivers the overall solution value.


Resiliency to failure


One of the principles we have taken from Microsoft’s experience in the enterprise and from Azure is overall solution resilience. The world of software and hardware is not perfect; things fail – cables go bad, software has bugs, power outages occur, and so on. While we work to build better software and with our solution partners to continually improve, we must expect that things fail. Azure Stack solutions are not perfect, but they have been constructed with the intent to overcome the common points of failure. For example, each copy of tenant/user data is stored on three separate storage devices in three separate servers. The physical network paths are redundant and provide better performance and resiliency to potential failure. The internal software of Azure Stack consists of services that coordinate across multiple instances. This type of end-to-end architectural design and implementation leads to a better end experience. Combining this approach to infrastructure resilience with the well-known and validated solutions approach described above provides a better experience for the customer.
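The three-copies principle can be illustrated with a toy placement function. The server names and rotation policy below are assumptions for illustration, not Azure Stack’s real placement algorithm:

```python
def place_replicas(slab_id: int, servers: list, copies: int = 3):
    """Pick `copies` distinct servers for a slab of data, rotating the
    starting point by slab id to spread load. Because the servers are
    distinct, any single server failure leaves two healthy copies."""
    assert len(servers) >= copies, "need at least as many servers as copies"
    start = slab_id % len(servers)
    return [servers[(start + i) % len(servers)] for i in range(copies)]

servers = ["node-1", "node-2", "node-3", "node-4"]
print(place_replicas(0, servers))  # ['node-1', 'node-2', 'node-3']
print(place_replicas(3, servers))  # ['node-4', 'node-1', 'node-2']

# No slab ever has two copies on the same server:
assert all(len(set(place_replicas(s, servers))) == 3 for s in range(100))
```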

Hardened by default


When you run your IaaS VMs in Azure Stack you should know they are running on a secure foundation. It turns out that one of the reasons people select Azure Stack is because they have data and/or processes that are either regulated or defined in a contractual agreement. Azure Stack not only gives its owners control of their data and processes, it comes with an infrastructure which is secured by default. In fact, the underlying infrastructure is locked down in a way that neither the owner nor Microsoft can access it. If it ever needs to be accessed because of a support issue, both the owner and Microsoft must combine their keys to obtain access to the system, and only for a limited time.
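The "combine their keys" idea resembles a classic secret split, which can be demonstrated with a simple XOR scheme. To be clear, this is only an analogy for the principle that neither party alone can unlock access; it is not how Azure Stack support access is actually implemented:

```python
import secrets

def split_secret(secret: bytes):
    """Split a secret into two shares; each share alone is random
    noise, and only XOR-combining both recovers the secret."""
    share_a = secrets.token_bytes(len(secret))               # owner's share
    share_b = bytes(a ^ s for a, s in zip(share_a, secret))  # Microsoft's share
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

unlock = b"support-session-key"
owner_share, microsoft_share = split_secret(unlock)
assert combine(owner_share, microsoft_share) == unlock  # both shares together work
```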

Azure leads the industry in security compliance, and security compliance is important for Azure Stack as well. In Azure, Microsoft fully manages the technology, people, and processes as well as its compliance responsibilities. Things are different with Azure Stack. While the technology is provided by Microsoft, the people and processes are managed by the operator. To help operators jump-start the certification process, Azure Stack has gone through a set of formal assessments by an independent third-party auditing firm to document how the Azure Stack infrastructure meets the applicable controls from several major compliance standards. Because these standards include several personnel-related and process-related controls, the documentation is an assessment of the technology, not a certification of Azure Stack, but it helps you get started. The technology assessments include the following standards:

◈ PCI-DSS – Addresses the payment card industry
◈ CSA Cloud Control Matrix –  A comprehensive mapping across multiple standards, including FedRAMP Moderate, ISO27001, HIPAA, HITRUST, ITAR, NIST SP800-53, and others
◈ FedRAMP High – For government customers

If you face compliance mandates or internal processes that require you to originate and manage your own cloud data encryption keys, even on Azure Stack, the CipherTrust Cloud Key Manager (CCKM) from Thales works with the Azure and Azure Stack “Bring Your Own Key” (BYOK) APIs to enable such key control. CipherTrust Cloud Key Manager creates Azure-compatible keys from a FIPS 140-2 source. You can then upload, manage, and, if needed, revoke keys to and from Azure Key Vaults running in Azure Stack or Azure, all from a single pane of glass.

For instance, you could create a salary app on Azure Stack, generate data encryption keys with CipherTrust Cloud Key Manager, and then set a policy to enable use of those keys in the Key Vault on Azure Stack only during the last week of the month when the app is computing the salaries. Among many other benefits, CCKM provides reduced time exposure for the keys, remote backup, a secure location for storing the keys, and the decoupling of key management from the app itself, not to mention automated key versioning. CCKM supports both Azure Active Directory (AAD) and Active Directory Federation Services (ADFS) deployments.

Get started by reviewing your options


As noted earlier, Azure Stack is sold as an integrated hardware system, with software pre-installed on the validated hardware. It typically comes in a standard server rack. You choose where your system will be located: you can host it in your own datacenter, or run it in a service provider’s facility.

With Azure Stack running in your location of choice, you also have a choice of who operates the Azure Stack infrastructure. An Azure Stack operator is responsible for granting users access to Azure Stack, keeping the software and firmware up-to-date, providing the content in the marketplace, monitoring the system health, and diagnosing issues. Azure Stack provides automation, documentation, and training for all of these processes so that someone from your organization can operate Azure Stack. We also provide trained partner experts who can operate your Azure Stack either in their facility or yours.

Here is an overview of your options when you acquire your Azure Stack:

A system you manage:

◈ Typically on-premises
◈ You control management and ops
◈ Buy Azure Stack from Microsoft
◈ Buy hardware from the vendor
◈ Call Microsoft for support

A managed service:

◈ Typically at service provider premises
◈ Service is managed for you
◈ Buy service from service provider
◈ Service includes hardware and software
◈ Call the service provider for support

Tuning your IaaS VMs for a cloud infrastructure


Once you have your Azure Stack up and running and you begin to plan your first IaaS VM deployments, you need to think about these VMs as cloud deployments, not virtualization deployments. IaaS VMs run best when they take advantage of the cloud infrastructure that they are running on. Many times, the way you tune a VM in your cloud infrastructure will be very different than the way you tuned VMs in your traditional virtualization environment. That said, you can always start with what you already have and improve those solutions through modern operations.

A great example of this is the use of multiple disks to get the IOPS and throughput required by the application. As is the case in Azure, virtual machines placed in Azure Stack have limits applied to their disk activity. This limits the impact of one VM’s activity on another VM, also known as the noisy neighbor problem. While these limits are great for IaaS environments, it may take extra work to deploy workloads so they get the resources they need, in this example IOPS.

For optimization of SQL Server deployments, our documentation provides guidance on how to configure storage to obtain the needed performance. In this case, the approach is to attach multiple disks and stripe them to obtain the capacity and performance. When you use managed disks for your VMs, the system can optimize where the physical data gets stored within your Azure Stack. Moving from virtualization environments to IaaS is reasonably straightforward and has its benefits, but requires a little bit of work on your first deployment. You can always use tools like Azure Monitor and the Virtual Machine solutions to better understand your workloads and gain insights on the performance of your VMs. When your VMs are not meeting the performance requirements, you can also use the Azure Performance Diagnostics VM Extension for Windows to troubleshoot and identify potential bottlenecks.
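The striping arithmetic is simple enough to sketch. The 500-IOPS-per-disk figure below is an assumed example limit for illustration, not a quoted Azure Stack number; check your system’s actual per-disk limits before sizing:

```python
import math

def disks_needed(required_iops: int, per_disk_iops: int = 500) -> int:
    """How many disks must be striped together to reach a target IOPS,
    given a per-disk throttle (example limit, not an official figure)."""
    return math.ceil(required_iops / per_disk_iops)

def striped_iops(disk_count: int, per_disk_iops: int = 500) -> int:
    """Aggregate IOPS available from a striped set of throttled disks."""
    return disk_count * per_disk_iops

print(disks_needed(4000))  # 8 disks to reach 4,000 IOPS at 500 IOPS each
print(striped_iops(8))     # 4000
```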

The great thing about IaaS, and specifically Azure Stack, is the ability to easily reuse the deployment templates or artifacts to reduce the work for migration of similar workloads.

Infrastructure purpose built for running cloud-native VMs


Few organizations can claim that they have experience building one of the largest cloud infrastructures in the world. When you buy an Azure Stack, you get the benefit of Microsoft’s Azure experience. Microsoft has partnered with the best OEMs to deliver a standardized configuration so that you don’t have to worry about these details. The infrastructure of Azure Stack is purpose-built to get the best from your IaaS VMs – keeping them safe, secure, and performant.

Thursday 14 March 2019

Azure Stack IaaS – part two

Start with what you already have


Every organization has a unique journey to the cloud. This journey is based on the organization’s history, business specifics, culture, and maybe most importantly, its starting point. While it can be hard for some to say goodbye to their current virtualization environment and way of doing things, the journey to the cloud provides many options, features, functionalities, and opportunities to improve existing governance and operations, and to implement new ones. The journey to the cloud can also provide the opportunity to redesign applications and take advantage of the cloud architecture. Additionally, Microsoft Azure gives you the option to host your virtual machines (VMs) in the public cloud or in your own facility with Azure Stack.

In most cases, this journey starts with a lift and shift of the existing servers, either virtual machines or physical servers. Because Azure Stack at its core is an infrastructure-as-a-service (IaaS) platform, the right way to think about this first phase of the journey is as a lift and optimize process. Moving the servers should be the first step towards enabling modern operations across your workloads. That could mean something as simple as selecting the right size for your VMs so that you “pay for what you use,” enabling self-service by doing it yourself, automating deployments, or even building on the success of others.

What to think about when migrating


The Azure migration center provides a good model to help start the assessment, make sure you have the right stakeholders involved, and create the proper framing for your migration.

As you start this assessment, there are several factors you can use to identify the best-suited platform for each workload, whether that is Azure or Azure Stack:

◈ Cost
◈ Connectivity requirements
◈ Potential regulations and data gravity requirements
◈ High availability and regional requirements


After you complete the assessment and planning, you will need to select the right tool for the migration.

Our partner ecosystem includes ISVs that have built solutions ranging from simple migrations to “as a service” solutions. There are also Microsoft migration options that require manual steps to implement but offer a potentially lower cost.


Partner options


Azure Stack has ISV solutions for every stage of application migration, from envisioning and discovery to modernization by leveraging PaaS capabilities. Each has its own capabilities and improves the process in its own way.

◉ Carbonite – Offers server migration, backup, high availability of Windows Servers, and enterprise protection for Microsoft Windows endpoints.

◉ Cloudbase – Offers a migration-as-a-service solution called Coriolis which integrates with Azure Migrate and uses it for the initial assessment, as well as the VM-size mapping.

     ◉ Coriolis will be available as a trial version in the Azure Stack Marketplace, offering free VM migrations to validate the process and make sure it is the right solution. 

◉ Commvault – Complements migration, management, protection, and activation of data on Microsoft Azure Stack and other hybrid cloud infrastructure solutions. Commvault helps enterprises increase agility, reduce costs, and discover valuable insights.

     ◉ Commvault is available in the Azure Stack Marketplace and it offers a 60-day free trial that can be upgraded in place to a full version.   

◉ Corent – Offers a migration-as-a-service solution. See below for published case studies:

     ◉ “Migrating a BFSI (Banking, Financial Services and Insurance) Application to Microsoft Azure Stack using SurPaaS MaaS”
     ◉ “Calligo delivers successful migration to Azure Stack using Corent Technology SurPaaS platform”

     ◉ Corent Technology is offering a free POC of scanning and migrating of up to five VMs to readers of this blog.  Email sales@corenttech.com with #AzureStackBlog to set up your free PoC. 

◉ ZeroDown – Provides business continuity and high availability across multiple stamps, even during a migration process.

     ◉ This isn’t strictly a migration tool; rather, it offers fault tolerance and high availability for your solution, including across multiple stamps.

     ◉ It is also available in the Azure Stack Marketplace and offers a 30-day free trial.

Microsoft migration options


The Storage Migration Service makes it easier to migrate servers to target VMs in Azure Stack. You can use the graphical tool to inventory data on servers and then transfer that data and configuration to VMs already deployed on Azure Stack. The service works without apps or users having to change anything. Depending on the assessment, some of these workloads might go to Azure IaaS or Azure Files instead.

Use Storage Migration Service when you have one server, or many servers, that you want to migrate to Azure Stack virtual machines. Storage Migration Service is designed to help by doing the following:

◈ Inventory multiple servers and their data.
◈ Rapidly transfer files, file shares, and security configuration from the source servers.
◈ Optionally take over the identity of the source servers, also known as cutting over, so that users and apps don't have to change anything to access existing data.
◈ Manage one or multiple migrations from the Windows Admin Center user interface.

Typically, in your migration journey, you will use a mixture of tools, so you will need to understand the options available in order to select the right tool for each specific workload.

Tuesday 12 March 2019

Azure Stack IaaS – part one

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform


When we discuss Azure Stack with our customers, they see the value in Azure Stack providing cloud-native capabilities to their datacenters. They see the opportunity to modernize their apps and address the unique solutions Azure Stack can deliver, but they often pause as they ponder where to begin. They wonder how to get value from the investments they have in apps currently running on virtual machines (VM). They wonder, “Does Azure Stack help me here? What if I am not quite ready for Platform-as-a-Service?” These questions are difficult, but the answers become more clear when they understand that Azure Stack at its core is an IaaS platform.

Azure Stack allows customers to run their own instance of Azure in their datacenter. Organizations pick Azure Stack as part of their cloud strategy because it helps them handle situations when the public cloud won’t work for them. The three most common reasons to use Azure Stack are poor network connectivity to the public cloud, regulatory or contractual requirements, or backend systems that cannot be exposed to the Internet.

Azure Stack has created a lot of excitement around new hybrid application patterns, consistent Azure APIs to simplify DevOps practices and processes, the extensive Azure ecosystem available through the Marketplace, and the option to run Azure PaaS services locally, such as App Services and IoT Hub. Underlying all of these are some exciting IaaS capabilities, and we are so excited to be kicking off a new blog series to show them off.

IaaS is more than virtual machines


People often think of IaaS as simply virtual machines, but IaaS is more. When you deploy a VM in Azure or Azure Stack, the machine comes with a software defined network including DNS, public IPs, firewall rules (also called network security groups), and many other capabilities. The VM deployment also creates disks for your VMs on software defined storage running in Blob Storage. In the Azure Stack portal image, you can see how this full software defined infrastructure is displayed after you have deployed a VM:


IaaS is the foundation for PaaS Services


Did you know that the Azure PaaS services are powered by IaaS VMs behind the scenes? As a user you don’t see these VMs, but they deliver capabilities like Event Hubs or Azure Kubernetes Service (AKS). This same Azure IaaS is the foundation of PaaS in Azure Stack. Not only can you use it to deliver your own applications, Azure PaaS services also use IaaS VMs to deliver their solutions on Azure Stack.

Take Event Hubs, currently in private preview, as an example. An Azure Stack administrator downloads the Event Hubs resource provider from the Marketplace and installs it. Installation creates a new admin subscription and a set of IaaS resources. The administrator sees things like virtual networks, DNS zones, and virtual machine scale sets in the administration portal:


However, when one of your developers deploys their Event Hub in Azure Stack, they don’t see the behind-the-scenes IaaS VMs and resources in their subscription, they just see the Event Hub:


Modernize your apps through operations


Often people think that application modernization involves writing or changing application code, or that modernization means rearchitecting the entire application. In most cases, the journey starts with small steps. When you run your VMs in Azure or Azure Stack, you can modernize your operations.

In addition to the underlying infrastructure, Azure and Azure Stack offer a full set of integrated and intelligent services. These services support management of your VMs, provide self-service capabilities, enhance deployment, and enable infrastructure-as-code. With Azure Stack, you empower your teams.

Over the next couple of blog posts we will go into more detail about these areas. Here is a chart of the cloud capabilities you can utilize to modernize your IaaS VM operations:
