Thursday, 30 June 2022

MLOps Blog Series Part 2: Testing robustness of secure machine learning systems using MLOps

Robustness is the ability of a closed-loop system to tolerate perturbations or anomalies while system parameters are varied over a wide range. There are three essential types of tests to ensure that a machine learning system is robust in production environments: unit testing, data and model testing, and integration testing.

Unit testing

Unit tests are performed on individual components that each have a single function within the bigger system (for example, a function that creates a new feature or a column in a DataFrame, or a function that adds two numbers). A recommended method for performing unit tests is the Arrange, Act, Assert (AAA) approach:

1. Arrange: Set up the schema, create object instances, and create test data/inputs.

2. Act: Execute code, call methods, set properties, and apply inputs to the components to test.

3. Assert: Check the results, validate that the outputs received are as expected, and clean up any test-related remains.
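As a minimal sketch of the AAA pattern (the feature function and column names here are hypothetical, invented for illustration), a unit test for a feature-engineering step might look like this:

```python
import pandas as pd

def add_price_per_unit(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical feature-engineering function: adds a derived column."""
    df = df.copy()
    df["price_per_unit"] = df["total_price"] / df["quantity"]
    return df

def test_add_price_per_unit():
    # Arrange: set up the schema and create test data/inputs.
    df = pd.DataFrame({"total_price": [10.0, 9.0], "quantity": [2, 3]})

    # Act: apply the inputs to the component under test.
    result = add_price_per_unit(df)

    # Assert: confirm that the outputs received are as expected.
    assert list(result["price_per_unit"]) == [5.0, 3.0]
    # Also confirm the input was not mutated (clean, side-effect-free test).
    assert "price_per_unit" not in df.columns

test_add_price_per_unit()
```

A test runner such as pytest would discover and run functions named `test_*` automatically; the explicit call at the end is only so the sketch is self-contained.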

Data and model testing

It is important to test the integrity of the data and models in operation. Tests can be performed in the MLOps pipeline to validate the integrity of data and the model robustness for training and inference. The following are some general tests that can be performed to validate the integrity of data and the robustness of the models:

1. Data testing: The integrity of the test data can be checked by inspecting the following five factors—accuracy, completeness, consistency, relevance, and timeliness. Some important aspects to consider when ingesting or exporting data for model training and inference include the following:

• Rows and columns: Check rows and columns to ensure no missing values or incorrect patterns are found.

• Individual values: Check whether individual values fall within the expected range or are missing, to ensure the correctness of the data.

• Aggregated values: Check statistical aggregations for columns or groups within the data to understand the correspondence, coherence, and accuracy of the data.
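The three checks above can be sketched as a small validation function (the `age` column and its [0, 120] bounds are illustrative assumptions, not prescribed by any framework):

```python
import pandas as pd

def validate_ingested_data(df: pd.DataFrame) -> list:
    """Hypothetical data-integrity checks for an ingested DataFrame."""
    issues = []

    # Rows and columns: ensure no missing values are found.
    if df.isnull().any().any():
        issues.append("missing values found")

    # Individual values: ensure each value falls within the expected range.
    if "age" in df.columns and not df["age"].between(0, 120).all():
        issues.append("age out of range")

    # Aggregated values: check a statistical aggregation for coherence.
    if "age" in df.columns and not (0 < df["age"].mean() < 120):
        issues.append("implausible mean age")

    return issues

clean = pd.DataFrame({"age": [25, 40, 33]})
dirty = pd.DataFrame({"age": [25, None, 999]})
print(validate_ingested_data(clean))  # []
```

In a real MLOps pipeline these checks would typically run as a gating step before model training, failing the pipeline when the returned issue list is non-empty.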

2. Model testing: The model should be tested both during training and after it has been trained to ensure that it is robust, scalable, and secure. The following are some aspects of model testing:

• Check the shape of the model input (for the serialized or non-serialized model).

• Check the shape of the model output.

• Behavioral testing (combinations of inputs and expected outputs).

• Load serialized or packaged model artifacts into memory and deployment targets. This will ensure that the model is de-serialized properly and is ready to be served in the memory and deployment targets.

• Evaluate the accuracy or key metrics of the ML model.
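Several of these checks can be combined into one sketch: serialize a model, load it back into memory, and verify input/output shapes and behavior. The `TinyModel` class below is a stand-in invented for illustration, not a real trained artifact:

```python
import pickle
import numpy as np

class TinyModel:
    """Stand-in for a trained model (illustrative only)."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights  # shape: (n_features,)

    def predict(self, X: np.ndarray) -> np.ndarray:
        return (X @ self.weights > 0).astype(int)

model = TinyModel(weights=np.array([1.0, -1.0]))

# Load the serialized model artifact back into memory to ensure it
# de-serializes properly and is ready to be served.
restored = pickle.loads(pickle.dumps(model))

# Check the shape of the model input and output.
X_test = np.array([[2.0, 1.0], [1.0, 2.0]])
preds = restored.predict(X_test)
assert X_test.shape == (2, 2)
assert preds.shape == (2,)

# Behavioral testing: known combinations of inputs and expected outputs.
assert preds.tolist() == [1, 0]
```

The same pattern extends to real frameworks: load the packaged artifact on the actual deployment target and run a fixed battery of input/output pairs before serving traffic.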

Integration testing

Integration testing is a process where individual software components are combined and tested as a group (for example, data processing, inference, or CI/CD).

Figure 1: Integration testing (two modules)

Let’s look at a simple hypothetical example of performing integration testing for two components of the MLOps workflow. In the Build module, data ingestion and model training steps have individual functionalities, but when integrated, they perform ML model training using data ingested to the training step. By integrating both module 1 (data ingestion) and module 2 (model training), we can perform data loading tests (to see whether the ingested data is going to the model training step), input and outputs tests (to confirm that expected formats are inputted and outputted from each step), as well as any other tests that are use case-specific.
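The hypothetical example above can be sketched as a single integration test that chains the two modules. Both functions are toy stand-ins (an in-memory dataset and a trivial "trainer"), invented purely to show the data-loading and input/output checks:

```python
import pandas as pd

def ingest_data() -> pd.DataFrame:
    """Module 1 (data ingestion): returns a tiny in-memory dataset."""
    return pd.DataFrame({"x": [0.0, 1.0, 2.0, 3.0], "y": [0, 0, 1, 1]})

def train_model(df: pd.DataFrame) -> float:
    """Module 2 (model training): a stand-in 'training' step that
    learns a decision threshold on x (illustrative, not a real trainer)."""
    return float(df.loc[df["y"] == 1, "x"].min())

def test_build_module_integration():
    # Data loading test: the ingested data reaches the training step intact.
    df = ingest_data()
    assert not df.empty and {"x", "y"} <= set(df.columns)

    # Input/output test: the integrated steps produce the expected format.
    threshold = train_model(df)
    assert isinstance(threshold, float)
    assert threshold == 2.0

test_build_module_integration()
```

The value of the test lies in exercising the hand-off between modules, which unit tests of either function alone would never cover.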

In general, integration testing can be done in two ways:

1. Big Bang testing: An approach in which all the components or modules are integrated simultaneously and then tested as a unit.

2. Incremental testing: Testing is carried out by merging two or more modules that are logically connected to one another and then testing the application's functionality. Incremental tests are conducted in three ways:

• Top-down approach

• Bottom-up approach

• Sandwich approach: a combination of top-down and bottom-up

Figure 2: Integration testing (incremental testing)

The top-down testing approach is a way of doing integration testing from the top to the bottom of the control flow of a software system. Higher-level modules are tested first, and then lower-level modules are evaluated and merged to ensure software operation. Stubs are used to test modules that aren't yet ready. The advantages of a top-down strategy include the ability to get an early prototype, test essential modules on a high-priority basis, and uncover and correct serious defects sooner. One downside is that it necessitates a large number of stubs, and lower-level components may be insufficiently tested in some cases.
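The stub idea from top-down testing can be sketched with Python's `unittest.mock` (the module and function names here are hypothetical):

```python
from unittest.mock import Mock

def high_level_report(feature_builder) -> dict:
    """Higher-level module under test; it depends on a lower-level
    feature-engineering module that may not be ready yet."""
    features = feature_builder.build(raw=[1, 2, 3])
    return {"n_features": len(features), "status": "ok"}

# The lower-level module isn't implemented yet, so stand in a stub for it.
stub = Mock()
stub.build.return_value = [0.1, 0.2, 0.3]

result = high_level_report(stub)
assert result == {"n_features": 3, "status": "ok"}

# Verify the higher-level module called its dependency as expected.
stub.build.assert_called_once_with(raw=[1, 2, 3])
```

Replacing the stub with the real module later, while keeping the same assertions, is what turns this top-down unit-style test into an incremental integration test.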

The bottom-up testing approach tests the lower-level modules first. The modules that have been tested are then used to assist in the testing of higher-level modules. This procedure is continued until all top-level modules have been thoroughly evaluated. When the lower-level modules have been tested and integrated, the next level of modules is created. With the bottom-up technique, you don’t have to wait for all the modules to be built. One downside is those essential modules (at the top level of the software architecture) that impact the program's flow are tested last and are thus more likely to have defects.

The sandwich testing approach tests top-level modules alongside lower-level modules, while lower-level components are merged with top-level modules and evaluated as a system. This is termed hybrid integration testing because it combines top-down and bottom-up methodologies.

Source: microsoft.com

Tuesday, 28 June 2022

MLOps Blog Series Part 1: The art of testing machine learning systems using MLOps

Testing is an important exercise in the life cycle of developing a machine learning system to ensure high-quality operations. We use tests to confirm that something functions as it should. Once tests are created, we can run them automatically whenever we make a change to our system and continue to improve them over time. It is good practice to implement tests and identify sources of error as early as possible in the development cycle to prevent rising downstream expenses and lost time.

In this blog, we will look at testing machine learning systems from a Machine Learning Operations (MLOps) perspective and learn about good practices and a testing framework that you can use to build robust, scalable, and secure machine learning systems. Before we delve into testing, let’s see what MLOps is and its value to developing machine learning systems.

Figure 1: MLOps = DevOps + Machine Learning.

Software development is interdisciplinary and is evolving to facilitate machine learning. MLOps is a process for fusing machine learning with software development by coupling machine learning and DevOps. MLOps aims to build, deploy, and maintain machine learning models in production reliably and efficiently. DevOps drives machine learning operations. Let’s look at how that works in practice. Below is an MLOps workflow illustration of how machine learning is enabled by DevOps to orchestrate robust, scalable, and secure machine learning solutions.

Figure 2: MLOps workflow.
 
The MLOps workflow is modular, flexible, and can be used to build proofs of concept or operationalize machine learning solutions in any business or industry. This workflow is segmented into three modules: Build, Deploy, and Monitor. The Build module is used to develop machine learning models using a machine learning pipeline. The Deploy module is used for deploying models in developer, test, and production environments. The Monitor module is used to monitor, analyze, and govern the machine learning system to achieve maximum business value. Tests are performed primarily in two modules: the Build and Deploy modules. In the Build module, data is ingested for training, the model is trained using ingested data, and then it is tested in the model testing step.

1. Model testing: In this step, we evaluate the performance of the trained model on a separated set of data points named test data (which was split and versioned in the data ingestion step). The inference of the trained model is evaluated according to selected metrics as per the use case. The output of this step is a report on the trained model's performance. In the Deploy module, we deploy the trained models to dev, test, and production environments, respectively. First, we start with application testing (done in dev and test environments).

2. Application testing: Before deploying a machine learning model to production, it is vital to test the robustness, scalability, and security of the model. Hence, we have the "application testing" phase, where we rigorously test all the trained models and the application in a production-like environment called a test, or staging, environment. In this phase, we may perform tests such as A/B tests, integration tests, user acceptance tests (UAT), shadow testing, or load testing.
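The model testing step can be sketched as a metric computation on the held-out test split. This is a minimal illustration using accuracy; in practice the metrics are selected per use case, and the label values below are invented:

```python
def evaluate(y_true, y_pred) -> dict:
    """Compute a simple evaluation report for the model testing step."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    return {"accuracy": accuracy, "n_samples": len(y_true)}

# Held-out test data, split and versioned in the data ingestion step
# (values are illustrative).
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0]
report = evaluate(y_true, y_pred)
print(report)  # {'accuracy': 0.8, 'n_samples': 5}
```

The returned report is exactly the kind of artifact the Build module would emit as "a report on the trained model's performance" before promotion to the Deploy module.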

Below is the framework for testing that reflects the hierarchy of needs for testing machine learning systems.

Figure 3: Hierarchy of needs for testing machine learning systems.

One way to think about machine learning systems is to consider Maslow's hierarchy of needs. Lower levels of a pyramid reflect “survival,” and the true human potential is unleashed only after basic survival and emotional needs are met. Likewise, tests that inspect robustness, scalability, and security ensure that the system not only performs at the basic level but reaches its true potential. One thing to note is that there are many additional forms of functional and nonfunctional testing, including smoke tests (rapid health checks) and performance tests (stress), but they may all be classified as system tests.

Source: microsoft.com

Saturday, 25 June 2022

Azure Orbital Ground Station as a Service extends life and reduces costs for satellite operators


How can Microsoft empower satellite operators to focus on their mission and enable them to continue the operation of their satellites, without making capital investments in their ground infrastructure?

To answer that question, Microsoft worked alongside the National Oceanic and Atmospheric Administration (NOAA), and our partner Xplore, to demonstrate how the commercial cloud can provide satellite mission management for NOAA’s legacy polar satellites (NOAA-18)—extending the mission life of these satellites while reducing the cost of operation through Azure Orbital Ground Station as-a-Service (GSaaS).

Partnering with the National Oceanic and Atmospheric Administration and Xplore

The initiative was part of a year-long cooperative research and development agreement (CRADA) with NOAA, where we worked together to determine the ability of the Azure Orbital platform to connect and downlink data from NOAA satellites. NOAA also tested the ability of Microsoft Azure to comply with specified security controls in a rapid and effective manner. Our cloud-based solutions performed successfully across all measures.

Partners are central to Microsoft’s approach to space, and they played a key role in this project. As part of the CRADA, we leveraged our partner network to bring together Azure Orbital with Xplore’s Major Tom mission control software platform. This approach enabled NOAA to transmit commands to the NOAA-18 spacecraft and verify the receipt of these commands. This test was conducted in real-time, and data was flowing bi-directionally with the NOAA-18 satellite.

Commercial technology enabled the rapid demonstration of these innovative capabilities. Xplore was able to move quickly to bring functions of NOAA’s heritage space system architecture to the Azure cloud through their Major Tom platform. This highlights the power of Azure as a platform to bring together Azure Orbital as the ground station, Major Tom to provide the mission control software for commanding and telemetry viewing, and the NOAA operators to monitor the scenarios.

This successful demonstration shows that the Azure Orbital GSaaS, and the partner network it brings together, enables sustainable outcomes for satellite operators. Our work with NOAA is just the beginning of the journey. We look forward to partnering with additional satellite operators to help them reduce their infrastructure management costs, lower latency, increase capacity and resiliency, and empower their missions through the power of Azure Orbital GSaaS and the Azure cloud.

Learn more about Azure Orbital and Azure Space

To learn more about Azure Orbital GSaaS, visit our product page, or take a look at the session with Microsoft Mechanics, which goes into more detail on how we connect space satellites around the world and bring earth observational data into Azure for analytics via Microsoft and partner ground stations. We demonstrate how it works and how it fits into Microsoft’s strategy with Azure Space to bring cloud connectivity everywhere on earth and to make space satellite data accessible for everyday use cases.

More broadly, Azure Space marks the convergence between global satellite constellations and the cloud. As the two join together, our purpose is to bring cloud connectivity to even the most remote corners of the earth, connect to satellites, and harness the vast amount of data collected from space. This can help solve both long-term trending issues affecting the earth like climate change, or short-term real-time issues such as connected agriculture, monitoring and controlling wildfires, or identifying supply chain bottlenecks.

Source: microsoft.com

Thursday, 23 June 2022

Simplify and centralize network security management with Azure Firewall Manager

We are excited to share that Azure Web Application Firewall (WAF) policy and Azure DDoS Protection plan management in Microsoft Azure Firewall Manager is now generally available.

With an increasing need to secure cloud deployments through a Zero Trust approach, the ability to manage network security policies and resources in one central place is a key security measure.

You can now centrally manage Azure Web Application Firewall (WAF) policies to provide Layer 7 application security to your application delivery platforms, Azure Front Door and Azure Application Gateway, in your networks and across subscriptions. You can also configure DDoS Protection Standard to protect your virtual networks from Layer 3 and Layer 4 attacks.

Azure Firewall Manager is a central network security policy and route management service that allows administrators and organizations to protect their networks and cloud platforms at scale, all in one central place.

Azure Web Application Firewall is a cloud-native web application firewall (WAF) service that provides powerful protection for web apps from common hacking techniques such as SQL injection and security vulnerabilities such as cross-site scripting.

Azure DDoS Protection Standard provides enhanced Distributed Denial-of-Service (DDoS) mitigation features to defend against DDoS attacks. It is automatically tuned to protect all public IP addresses in virtual networks. Protection is simple to enable on any new or existing virtual network and does not require any application or resource changes. 

Utilizing both WAF policies and DDoS protection in your network provides multi-layered protection across all your essential workloads and applications.

WAF policy and DDoS Protection plan management are additions to Azure Firewall management in Azure Firewall Manager.

Centrally protect your application delivery platforms using WAF policies 

In Azure Firewall Manager, you can now manage and protect your Azure Front Door or Application Gateway deployments by associating WAF policies, at scale. This allows you to view all your key deployments in one central place, alongside Azure Firewall deployments and DDoS Protection plans.


Upgrade from WAF configuration to WAF policy


In addition, the platform allows administrators to upgrade from a WAF configuration to WAF policies for Application Gateways, by selecting the service and then Upgrade from WAF configuration. This enables a more seamless migration to WAF policies, which support WAF policy settings, managed rule sets, exclusions, and disabled rule groups.

As a note, everything previously created as a WAF configuration in Application Gateway can now be done through a WAF policy.


Manage DDoS Protection plans for your virtual networks


You can enable DDoS Protection Standard on your virtual networks listed in Azure Firewall Manager, across subscriptions and regions. This allows you to see which virtual networks have Azure Firewall and/or DDoS protection in a single place.


View and create WAF policies and DDoS Protection Plans in Azure Firewall Manager


You can view and create WAF policies and DDoS Protection Plans from the Azure Firewall Manager experience, alongside Azure Firewall policies.

In addition, you can import existing WAF policies to create a new WAF policy, so you do not need to start from scratch if you want to maintain similar settings.


Monitor your overall network security posture


Azure Firewall Manager provides monitoring of your overall network security posture. Here, you can easily see which virtual networks and virtual hubs are protected by Azure Firewall, a third-party security provider, or DDoS Protection Standard. This overview can help you identify and prioritize any security gaps that are in your Azure environment, across subscriptions or for the whole tenant.


Coming soon, you’ll also be able to view your Application Gateway and Azure Front Door monitors, for a full network security overview.

Source: microsoft.com

Sunday, 12 June 2022

Learn what’s new in Azure Firewall

We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are happy to share several key Azure Firewall capabilities as well as an update on recent important releases into general availability and preview.

◉ Intrusion Detection and Prevention System (IDPS) signatures lookup now generally available.

◉ TLS inspection (TLSi) Certification Auto-Generation now generally available.

◉ Web categories lookup now generally available.

◉ Structured Firewall Logs now in preview.

◉ IDPS Private IP ranges now in preview.

Azure Firewall is a cloud-native firewall-as-a-service offering that enables customers to centrally govern and log all their traffic flows using a DevOps approach. The service supports both application and network-level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto-scaling.

IDPS signatures lookup

Azure Firewall Premium IDPS signature lookup is a great way to better understand the IDPS signatures applied on your network, as well as to fine-tune them according to your specific needs. IDPS signatures lookup allows you to:

◉ Customize one or more signatures and change their mode to Disabled, Alert, or Alert and Deny. For example, if you receive a false positive where a legitimate request is blocked by Azure Firewall due to a faulty signature, you can take the signature ID from the network rules logs and set its IDPS mode to Disabled. This causes the "faulty" signature to be ignored and resolves the false positive issue.

◉ Apply the same fine-tuning procedure for signatures that create too many low-priority alerts and therefore interfere with visibility into high-priority alerts.

◉ Get a holistic view of the entire set of 58,000 signatures.

◉ Use smart search to search through the entire signature database by any type of attribute. For example, you can search for a specific CVE-ID to discover which signatures cover that CVE by typing the ID in the search bar.


TLSi Certification Auto-Generation


For non-production deployments, you can use the Azure Firewall Premium TLS inspection Certification Auto-Generation mechanism, which automatically creates the following three resources for you:

◉ Managed Identity

◉ Key Vault

◉ Self-signed Root CA certificate

Just choose the new managed identity, and it ties the three resources together in your Premium policy and sets up TLS inspection.


Web categories lookup


Web Categories is a filtering feature that allows administrators to allow or deny web traffic based on categories, such as gambling, social media, and more. We added tools that help manage these web categories: Category Check and Mis-Categorization Request.

Using Category Check, an admin can determine which category a given FQDN or URL falls under. If an FQDN or URL fits better under a different category, an administrator can also report the incorrect classification; the request will then be evaluated and, if approved, the classification updated.


Structured Firewall Logs


Today, the following diagnostic log categories are available for Azure Firewall:

◉ Application rule log

◉ Network rule log

◉ DNS proxy log

These log categories use Azure diagnostics mode, in which all data from any diagnostic setting is collected in the AzureDiagnostics table.

With this new feature, customers can choose to use Resource Specific tables instead of the existing AzureDiagnostics table. If both sets of logs are required, at least two diagnostic settings need to be created per firewall.

In Resource Specific mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting.

This method is recommended because it makes it much easier to work with the data in log queries, provides better discoverability of schemas and their structure, improves performance across both ingestion latency and query times, and allows granting Azure role-based access control (RBAC) rights on a specific table.

New Resource Specific tables are now available in the diagnostic settings, allowing users to utilize the following newly added categories:

◉ Network rule log: contains all Network rule log data. Each match between data plane and network rule creates a log entry with the data plane packet and the matched rule's attributes.

◉ NAT rule log: contains all destination network address translation (DNAT) events log data. Each match between data plane and DNAT rule creates a log entry with the data plane packet and the matched rule's attributes.

◉ Application rule log: contains all Application rule log data. Each match between data plane and Application rule creates a log entry with the data plane packet and the matched rule's attributes.

◉ Threat Intelligence log: contains all Threat Intelligence events.

◉ IDPS log: contains all data plane packets that were matched with one or more IDPS signatures.

◉ DNS proxy log: contains all DNS Proxy events log data.

◉ Internal FQDN resolve failure log: contains all internal Firewall FQDN resolution requests that resulted in failure.

◉ Application rule aggregation log: contains aggregated Application rule log data for Policy Analytics.

◉ Network rule aggregation log: contains aggregated Network rule log data for Policy Analytics.

◉ NAT rule aggregation log: contains aggregated NAT rule log data for Policy Analytics.

Additional Kusto Query Language (KQL) log queries were added (as seen in the diagram below) to query structured firewall logs.


IDPS Private IP ranges


In Azure Firewall Premium IDPS, private IP address ranges are used to identify whether traffic is inbound or outbound. By default, only ranges defined by the Internet Assigned Numbers Authority (IANA) in RFC 1918 are considered private IP addresses. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.


Source: microsoft.com

Saturday, 11 June 2022

Achieve seamless observability with Dynatrace for Azure

As adoption of public cloud grows by leaps and bounds, organizations want to leverage software and services that they love and are familiar with as a part of their overall cloud solution. Microsoft Azure enables customers to host their apps on the globally trusted cloud platform and use the services of their choice by closely partnering with popular SaaS offerings. Dynatrace is one such partner that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities on Azure.

“Deep and broad observability, runtime application security, and advanced AI and automation are key for any successful cloud transformation. Through the Dynatrace platform’s integration with Microsoft Azure, customers will now have immediate access to these capabilities. This integration will deliver answers and intelligent automation from the massive amount of data generated by modern hybrid-cloud environments, enabling flawless and secure digital interactions.”—Steve Tack, SVP Product Management, Dynatrace.

Modern cloud-native environments are complex and dynamic. When failures occur, development teams need deep visibility into the systems to get to the root cause of the issues and understand the impact of potential fixes. Good observability solutions such as Dynatrace for Azure not only enable you to understand what is broken, but also provide the ability to proactively identify and resolve issues before they impact your customers. Currently, if you want to leverage Dynatrace for observability, you go through a complex process of setting up credentials, Event Hubs, and writing custom code to send monitoring data from Azure to Dynatrace. This is often time-consuming and hard to troubleshoot when issues occur. To alleviate this customer pain, we worked with Dynatrace to create a seamlessly integrated solution on Azure that’s now available on the Azure Marketplace.

Dynatrace’s integration provides a unified experience with which you can:

1. Create a new Dynatrace environment in the cloud with just a few clicks. Dynatrace SaaS on Azure is a fully managed offering that takes away the need to set up and operate infrastructure.

2. Seamlessly ship logs and metrics to Dynatrace. Using just a few clicks, configure auto-discovery of resources to monitor and set up automatic log forwarding. Configuring Event Hubs and writing custom code to get monitoring data is now a thing of the past.

3. Easily install Dynatrace OneAgent on virtual machines (VMs) and App Services through a single click. OneAgent continuously monitors the health of hosts and processes and automatically instruments any new processes.

4. Use single sign-on to access the Dynatrace SaaS portal—no need to remember multiple credentials and log in separately.

5. Get consolidated billing for the Dynatrace service through Azure Marketplace.

“Microsoft is committed to providing a complete and seamless experience for our customers on Azure. Enabling developers to use their most loved tools and services makes them more productive and efficient. Azure native integration of Dynatrace makes it effortless for developers and IT administrators to monitor their cloud applications with the best of Azure and Dynatrace together.”—Balan Subramanian, Partner Director of Product Management, Azure Developer Experiences.

Get started with Dynatrace for Azure

Let’s now look at how you can easily set up and configure Dynatrace for Azure:

Acquire the Dynatrace for Azure offering: You can find and acquire the solution from the Azure Marketplace.


Create a Dynatrace resource in Azure portal: Once the Dynatrace solution is acquired, you can seamlessly create a Dynatrace resource using the Azure portal. Using the Dynatrace resource, you can configure and manage your Dynatrace environments within the Azure portal.


Configure log forwarding: Configure which Azure resources send logs to Dynatrace, using the familiar concept of resource tags.


Install Dynatrace OneAgent: With a single click, you can install Dynatrace OneAgent on multiple VMs and App Services.


Access Dynatrace native service for Azure with single sign-on: Use the single sign-on experience to effortlessly access dashboards, Smartscape® topology visualization, log content, and more on the Dynatrace portal.


Source: microsoft.com

Thursday, 9 June 2022

Improve outbound connectivity with Azure Virtual Network NAT

For many customers, making outbound connections to the internet from their virtual networks is a fundamental requirement of their Azure solution architectures. Factors such as security, resiliency, and scalability are important to consider when designing how outbound connectivity will work for a given architecture. Luckily, Azure has just the solution for ensuring highly available and secure outbound connectivity to the internet: Virtual Network NAT. Virtual Network NAT, also known as NAT gateway, is a fully managed and highly resilient service that is easy to scale and specifically designed to handle large-scale and variable workloads.

NAT gateway provides outbound connectivity to the internet through its attachment to a subnet and public IP address. NAT stands for network address translation, and as the name implies, when NAT gateway is associated with a subnet, all of the private IPs of the subnet’s resources (such as virtual machines) are translated to NAT gateway’s public IP address. The NAT gateway public IP address then serves as the source IP address for the subnet’s resources. NAT gateway can be associated with up to 16 public IP addresses from any combination of individual public IP addresses and prefixes.
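To make the translation concrete, here is a toy sketch of what source NAT does conceptually: each private (IP, port) flow is mapped to the gateway’s public IP and a unique public source port. This is purely illustrative; the class name, port range, and naive sequential allocator are assumptions for the example, not Azure internals (NAT gateway actually selects ports at random from its inventory).

```python
import itertools

class ToySnat:
    """Illustrative source NAT: maps each private (ip, port) flow to the
    public IP plus a unique public source port, remembering the mapping."""
    def __init__(self, public_ip, port_range=range(1024, 65536)):
        self.public_ip = public_ip
        self._ports = itertools.cycle(port_range)  # naive sequential allocator
        self.table = {}  # (private_ip, private_port) -> (public_ip, public_port)

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = (self.public_ip, next(self._ports))
        return self.table[key]

nat = ToySnat("20.51.0.10")  # hypothetical public IP
print(nat.translate("10.0.0.4", 50000))  # ('20.51.0.10', 1024)
print(nat.translate("10.0.0.5", 50000))  # ('20.51.0.10', 1025)
print(nat.translate("10.0.0.4", 50000))  # same flow, same mapping: ('20.51.0.10', 1024)
```

The key point the sketch shows: many private addresses share one public IP, distinguished only by the translated source port.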

Figure 1: NAT gateway configuration with a subnet and a public IP address and prefix.

A customer halted by connection timeouts while making thousands of connections to the same destination endpoint


Customers in industries like finance, retail, or other scenarios that require leveraging large sets of data from the same source need a reliable and scalable method to connect to this data source.

In this blog, we’re going to walk through one such example that was made possible by leveraging NAT gateway.

Customer background


A customer collects a high volume of data to track, analyze, and ultimately make business decisions for one of their primary workloads. This data is collected over the internet from a service provider’s REST APIs, hosted in a data center they own. Because the data sets the customer is interested in may change daily, a recurring report can’t be relied on—they must request the data sets each day. Because of the volume of data, results are paginated and shared in chunks. This means that the customer must make tens of thousands of API requests for this one workload each day, typically taking from one to two hours. Each request correlates to its own separate HTTP connection, similar to their previous on-premises setup.

The starting architecture


In this scenario, the customer connects to REST APIs in the service provider’s on-premises network from their Azure virtual network. The service provider’s on-premises network sits behind a firewall. The customer started to notice that sometimes one or more virtual machines waited for long periods of time for responses from the REST API endpoint. These connections waiting for a response would eventually time out and result in connection failures.

Figure 2: The customer sends traffic from their virtual machine scale set (VMSS) in their Azure virtual network over the internet to an on-premises service provider’s data center server (REST API) that is fronted by a firewall.

The investigation


Upon deeper inspection with packet captures, it was found that the service provider’s firewall was silently dropping incoming connections from their Azure network. Since the customer’s architecture in Azure was specifically designed and scaled to handle the volume of connections going to the service provider’s REST APIs for collecting the data they required, this seemed puzzling. So, what exactly was causing the issue?

The customer, the service provider, and Microsoft support engineers collectively investigated why connections from the Azure network were being sporadically dropped, and made a key discovery: only connections coming from a source port and IP address that had been used recently (within roughly 20 seconds) were dropped by the service provider’s firewall. This is because the firewall enforces a 20-second cooldown period on new connections coming from the same source IP and port. Connections using a new source port on the same public IP were not affected by the cooldown timer. From these findings, it was concluded that source network address translation (SNAT) ports from the customer’s Azure virtual network were being reused too quickly to make new connections to the service provider’s REST API. When a port was reused before the cooldown timer completed, the connection would time out and ultimately fail. The customer then faced the question: how do we prevent ports from being reused too quickly? Since the firewall’s cooldown timer could not be changed, the customer had to work within its constraints.
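The firewall behavior described above can be modeled with a small sketch: a connection is dropped if its (source IP, source port) pair was seen within the cooldown window. The class name and the choice to refresh the timestamp on every attempt are assumptions for illustration, not details of the service provider’s firewall.

```python
COOLDOWN_SECONDS = 20.0

class CooldownFirewall:
    """Toy model: drops a new connection if the same (src_ip, src_port)
    pair was seen less than `cooldown` seconds ago."""
    def __init__(self, cooldown=COOLDOWN_SECONDS):
        self.cooldown = cooldown
        self.last_seen = {}  # (src_ip, src_port) -> time of last attempt

    def accept(self, src_ip, src_port, now):
        """Return True if the connection passes, False if dropped."""
        key = (src_ip, src_port)
        last = self.last_seen.get(key)
        self.last_seen[key] = now  # assumption: timer refreshes on every attempt
        return last is None or (now - last) >= self.cooldown

fw = CooldownFirewall()
print(fw.accept("1.2.3.4", 50000, now=0.0))   # True: first use of this port
print(fw.accept("1.2.3.4", 50000, now=5.0))   # False: reused within 20 s
print(fw.accept("1.2.3.4", 50001, now=5.0))   # True: different source port
print(fw.accept("1.2.3.4", 50000, now=30.0))  # True: cooldown has elapsed
```

The second and third calls capture the customer’s symptom exactly: the same port reused too soon fails, while a fresh port succeeds.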

NAT gateway to the rescue


Based on this data, NAT gateway was introduced into the customer’s setup in Azure as a proof of concept. With this one change, connection timeout issues became a thing of the past.

NAT gateway was able to resolve this customer’s outbound connectivity issue to the service provider’s REST APIs for two reasons. First, NAT gateway selects ports at random from a large inventory of ports, so the source port selected for a new connection has a high probability of being new and will therefore pass through the firewall without issue. This large inventory of ports is derived from the public IPs attached to NAT gateway: each public IP address provides 64,512 SNAT ports to a subnet’s resources, and up to 16 public IP addresses can be attached. That means a customer can have over 1 million SNAT ports available to a subnet for making outbound connections. Second, source ports reused by NAT gateway to connect to the service provider’s REST APIs are not impacted by the firewall’s 20-second cooldown timer, because NAT gateway places reused source ports on its own cooldown timer, which lasts at least as long as the firewall’s, before they can be reused.
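The port inventory math above checks out directly: 64,512 ports per public IP times a maximum of 16 attached public IPs exceeds one million SNAT ports.

```python
# SNAT port inventory, using the figures from the paragraph above.
ports_per_public_ip = 64_512   # SNAT ports contributed by each public IP
max_public_ips = 16            # maximum public IPs attachable to NAT gateway

total_snat_ports = ports_per_public_ip * max_public_ips
print(total_snat_ports)              # 1032192
print(total_snat_ports > 1_000_000)  # True: "over 1 million SNAT ports"
```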

Stay tuned for our next blog, where we’ll do a deep dive into how NAT gateway solves SNAT port exhaustion, not only through its SNAT port reuse behavior but also through how it dynamically allocates SNAT ports across a subnet’s resources.

Source: microsoft.com

Tuesday, 7 June 2022

Power hybrid and multicloud environments with Azure Arc data solutions

We know we are in the middle of a transformative time. At the heart of this transformation is the digitization of data. Data is the most strategic asset for organizations across all industries, and a new level of data agility is required to deal with dynamic changes in our world. Organizations that have embraced their data as a strategic asset are at a competitive advantage. In fact, a recent Gartner® report predicts that, through 2026, 90 percent of data management tools and platforms that fail to support multicloud and hybrid capabilities will be set for decommissioning.

But for all the powerful capabilities the cloud offers to help organizations on their transformation journey, not all data is created equal. The myriad of regulations to navigate, the need for data sovereignty, and a low tolerance for any form of disruption keep data from living in a single public cloud, expanding your data estate and its complexity. Managing the vast amounts of data that exist across siloed, disparate systems, applications, and locations, while also getting the most from existing investments, is not a balance many solutions can achieve.

This is why, back in November 2019, we debuted Azure Arc, a set of technologies that extends Azure innovations and cloud benefits to any infrastructure. Cloud-native databases like Azure SQL and PostgreSQL have been enabled by Azure Arc, delivering the much-needed consistency and cloud automation for all data workloads. We want you to focus less on managing data, and more on creating value and unlocking insights. Because true innovation starts at the data layer.

Innovate anywhere with Azure Arc

With Azure Arc-enabled data services, we can bring cloud data management to any infrastructure, across customer on-premises data centers, other third-party clouds, and the edge. Here’s how Azure Arc is delivering on that promise:

1. Always up to date with full automation. Benefit from evergreen SQL with the latest features via automated updates, so there is no more end-of-support. These in-place, rolling updates ensure close to zero downtime, so you can maximize efficiencies and minimize disruptions.

2. Get industry-leading, multi-layered security with built-in capabilities. With comprehensive encryption including Transparent Data Encryption and Always Encrypted, as well as Azure Role-Based Access Control and Policy, your data is protected both at the powerful database engine-level and by Azure Security capabilities from the cloud.

3. With elastic scale, you can scale up or down based on your resource needs without application downtime to optimize performance. You can also realize cost efficiencies by paying only for what you use without the need to overprovision.

4. Deliver a vastly simplified DevOps experience through full automation, with built-in capabilities like rapid deployment, high availability, and disaster recovery. You can deploy a three-replica SQL Managed Instance with full high availability (HA) in two minutes with a single command, and gain a unified view into your query performance, storage capacity, and error logs using dashboards from the built-in monitoring. Use the tools you are already familiar with.

All these capabilities can run in any environment, regardless of connectivity to Azure. When fully connected to Azure, the user experience is richer and real-time, but a connection is not required: data services keep running even without one. Azure portal deployment and other value-added management services are fully integrated under direct connection.

Built for mission-critical

And now, we’re making that solution available in an even more powerful way. We are excited to announce the general availability of Azure Arc-enabled data services Business Critical (BC) tier, designed to support the most demanding mission-critical data workloads.

◉ With feature parity with SQL Server Enterprise Edition, it delivers all the proven capabilities customers have trusted for decades. It runs online transaction processing (OLTP) and hybrid transaction/analytics processing (HTAP) with record-setting performance, advanced high availability, and top-rated security.

◉ It meets the most demanding business continuity requirements using Always On Availability Groups, so your app will have close to zero downtime in case of an automated failover.

◉ Failover to another instance within the same Kubernetes cluster for local high availability, or to a different cluster in a different datacenter or even a public cloud, delivering “cloud-level redundancy”.

◉ We also provide a free passive instance to run in your disaster recovery setup for even greater value.

◉ Choose the configuration that best suits your workloads, with no set limits on CPU and memory configuration. To further maximize the performance, we provide one free read-scale replica to offload any read-heavy workloads.

Our partner, Dell Technologies, conducted a series of OLTP benchmarks using Intel technology that took a closer look at the kind of performance possible with this new service tier. The results were remarkable: Arc-enabled SQL Managed Instance was shown to provide the same performance as SQL Server on Windows Server, so customers can run their workloads with confidence. The speed of provisioning and deprovisioning will dramatically accelerate your continuous integration (CI) test runs. SQL instances can now be deployed easily via automation and be available in 60 seconds, and Business Critical cuts multi-replica HA deployment time from hours or days to minutes with a single command. These out-of-the-box experiences allow you to realize time efficiencies and redirect resources to where they matter most.

Broad partner ecosystem

Our ambition to help you digitally transform your business with the cloud and edge is boundless, but we know we can’t do it alone. No single cloud provider can deliver all the infrastructure and as-a-service solutions you’ll need. That’s why we’re building an ecosystem of partners across service providers, platform providers (including OS and container platforms), and independent software vendors (ISVs) to help you envision, plan, and deploy the full stack of hybrid and multicloud solutions. Our history in both productivity and the datacenter is unique among cloud providers. Microsoft is at our best when our platforms fuel the growth of others, and I’m thrilled to see how energized the ecosystem is to evolve with us.

Source: microsoft.com

Thursday, 2 June 2022

Scale your cloud-native apps and accelerate app modernization with Azure, the best cloud for your apps

Developers are essential to the world we live in today, and the work you do is critical to powering organizations in every industry. Every developer and development team brings new ideas and innovation. Our ambition with the Microsoft Cloud and Azure is to be the platform for all of this innovation to really empower the entire community as they build what comes next.

Microsoft was founded as a developer tools company, and developers remain at the very center of our mission. Today, we have the most used and beloved developer tools with Visual Studio, .NET, and GitHub. We offer a trusted and comprehensive platform to build amazing apps and solutions that help enable people and organizations across the planet to achieve more.

Over 95 percent of the world’s largest companies choose Microsoft Azure to run their business, in addition to thousands of smaller and mid-size innovative organizations. The NBA uses Azure and AI capabilities to turn billions of in-game data points into customizable content for its fans. Stonehenge Technology Labs has increased developer velocity through its fast-growing commerce enhancement software, STOPWATCH, using Azure, Live Share, and Visual Studio.

With the Microsoft Cloud and Azure, we meet you where you are and make it easy for you to start your cloud-native journey—from anywhere. That means developers can use their favorite languages, open-source frameworks, and tools to code and deploy to the cloud and the edge, collaborating in a secure way and integrating different components in no time with low-code solutions.

Supporting all of this, here are some of the latest developments we’ll talk about at Microsoft Build this week. You can also view the Scaling cloud-native apps and accelerating app modernization session to learn more about these announcements. 

Build modern, cloud-native apps productively with serverless technologies and the best Kubernetes experience for developers

As new apps are built, you’ll want them to be cloud-native since they’re designed to take full advantage of everything the cloud offers. Using cloud-native design patterns helps achieve the agility, efficiency, and speed of innovation that you need to deliver for your businesses. The experience bar and what end users expect from apps is going up. Product launches, peak shopping seasons, and sporting events are just a few examples of highly dynamic usage demands that modern apps must be prepared to handle.

This is made possible through architectures and technologies like containers, serverless, microservices, APIs, and DevOps everywhere, which offer the shortest path to cloud value. With Azure, GitHub, and the Microsoft Cloud, we’re working to better enable you to easily leverage all these capabilities.

Azure Container Apps offers an ideal platform for application developers who want to run microservices in serverless containers without managing infrastructure. Today, Azure Container Apps is generally available and ready for you to use. It’s built on the strong open-source foundation of the Kubernetes ecosystem, which is core for cloud-native applications.

Azure Kubernetes Services (AKS) was built to be a destination for all developers and provide the best-managed experience for Kubernetes, whether it’s your first time trying it or you use it regularly for quick testing. It delivers elastic provisioning of capacity without the need to manage underlying compute infrastructure and is the fastest way to spin up managed Kubernetes clusters and configure a seamless DevSecOps workflow with CI/CD integration.

A great example of a customer taking advantage of AKS today is Adobe. Adobe evolved to cloud-native practices a few years ago and adopted a microservices architecture. They chose AKS because of its scalable, flexible, and multi-cloud capabilities, and it brought faster development, from onboarding to production, all while providing automated guardrails with DevSecOps practices.

Today, we have some great updates to enhance the developer and operator experience on AKS even further, making it faster and easier than ever before so you can spend more time writing code. We’re launching the Draft extension and CLI, the preview of a new integrated AKS web application routing add-on, and a KEDA extension (Kubernetes Event-driven Autoscaling extension). 

The power and scalability of a cloud-native platform

What makes the Microsoft Cloud particularly rich as a development platform and ecosystem is the services it delivers and the underlying cloud infrastructure that allows you to focus on writing and shipping code. You can build upon and leverage a complete cloud-native platform, from containers to cloud-native databases and AI services.

Azure Cosmos DB is a fully managed, serverless developer database, and the only database service in the market to offer service level agreements (SLAs) guaranteeing single-digit millisecond latency and 99.999 percent availability. These guarantees apply globally at any scale, even through traffic bursts.

Today, we’re improving Azure Cosmos DB elasticity with new burst capacity and serverless capacity increased to 1 TB, while only charging for the storage and throughput used. Now in preview, this capability is ideal for workloads with intermittent and unpredictable traffic, and allows developers to build scalable, cost-effective cloud-native applications.

We see customers innovating at a faster pace with cloud-native technologies. Azure Arc brings Azure security and cloud-native services to hybrid and multicloud environments, enabling you to secure and govern infrastructure and apps anywhere.

One example of a customer turning to Azure Arc is Canada’s largest bank, Royal Bank of Canada (RBC). As a Kubernetes-based deployment, Azure Arc enables the company to leverage existing infrastructure investments and skillsets to manage and automate database deployments. Arc-enabled data services allowed RBC to accelerate their time to market and product development, bringing more time and focus to the innovation and integration of their products and capabilities.

We continue to innovate and add new capabilities to Azure Arc to enable hybrid and multicloud scenarios. Today, we’re excited to announce several new Azure Arc capabilities including the landing zone accelerator for Azure Arc-enabled Kubernetes, offering customers greater agility for cloud-native apps and tools to simplify hybrid and multicloud deployments—all while strengthening security and compliance. The landing zone accelerator provides best practices, guidance, and automated reference implementations for a fast and easy deployment.

Azure Managed Grafana is part of our approach to provide customers with all the tools they need to manage, monitor, and secure their hybrid and multicloud investments. We recently launched this integration so you can easily deploy Grafana dashboards, complete with Azure’s built-in high availability and security.

I’m excited to share that the Business Critical tier of Azure Arc-enabled SQL Managed Instance is now generally available to meet the most demanding critical business continuity requirements. This allows developers to build scalable, cost-effective cloud-native apps and add the same top-rated security and automated update capabilities they’ve trusted for decades.

Modernize Java applications

Java continues to be one of the most important programming languages, and we’re committed to helping Java developers run their Spring applications more easily in the cloud. As part of a long-time collaboration with Pivotal, now VMware, Azure Spring Cloud was created as a fully managed service for Spring Boot applications to solve the challenges of running Spring at scale. Azure Spring Cloud is a fully-featured platform for all types of Spring applications; to better reflect this, the service is now called Azure Spring Apps.

Azure Spring Apps Enterprise will be generally available in June, bringing fully managed VMware Tanzu components running on Azure and advanced Spring Runtime support. Customers like FedEx are already leveraging this collaboration on Azure Spring Apps to deliver an impactful solution for their end-customers, helping predict estimated delivery times for millions of packages globally.

Build with Microsoft Cloud

Developing with the Microsoft Cloud puts the latest technologies in your hands and empowers you with both control and productivity. It offers a trusted and comprehensive platform so you can build great apps and solutions.

Microsoft Build is all about celebrating the work you do and helping you build what comes next. Be sure to view the session Scaling cloud-native apps and accelerating app modernization to learn more about these announcements. I also encourage you to view the Rapidly code, test, and ship from secure development environments session for more depth on Microsoft’s developer tools. There’s an exciting week planned, so join in throughout the entire digital event for more announcements, customer stories, breakout sessions, learning opportunities, and technical demos. Enjoy the event experience. I can’t wait to see what you build.

Source: microsoft.com