Saturday, 30 October 2021

Microsoft powers transformation at NVIDIA GTC Fall—GPU technology conference


Customers around the world rely on Microsoft Azure to drive innovations related to our environment, public health, energy sustainability, weather modeling, economic growth, and more. Finding solutions to these important challenges requires huge amounts of focused computing power. Customers are increasingly finding the best way to access such high-performance computing (HPC) through the agility, scale, security, and leading-edge performance of Azure’s purpose-built HPC and AI cloud services.

Azure’s market-leading vision for HPC and AI is based on a core of genuine and recognized HPC expertise, using proven HPC technology and design principles enhanced with the best features of the cloud. The result is a capability that delivers performance, scale, and value unlike any other cloud. This means applications scaling 12 times higher than on other public clouds. It means higher application performance per node. It means powering AI workloads for one customer with a supercomputer fit to be among the top five in the world. It also means delivering massive compute power into the hands of medical researchers over a weekend to prove out life-saving innovations in the fight against COVID-19.

This year during NVIDIA GTC 21, we’re spotlighting some of the most transformational applications powered by NVIDIA accelerated computing, highlighting our commitment to edge, on-premises, and cloud computing. Registration is free, so sign up to learn how Microsoft is powering transformation.

AI and supercomputing scale

The AI and machine learning space continues to be one of the most inspiring areas of technical evolution since the internet. The trend toward using massive AI models to power a large number of tasks is changing how AI is built. Training models at this scale requires large clusters of hundreds of machines with specialized AI accelerators interconnected by high-bandwidth networks inside and across the machines. We have been building such clusters in Azure to enable new natural language generation and understanding capabilities across Microsoft products.
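The data-parallel pattern behind such clusters can be sketched in miniature (pure Python with a simulated all-reduce; real systems shard far larger models and synchronize gradients over high-bandwidth interconnects using collectives such as all-reduce):

```python
# Toy data-parallel training step: each "worker" holds a model replica and a
# data shard; gradients are averaged (an all-reduce) so replicas stay in sync.
# This is an illustrative sketch, not Azure's actual training stack.

def local_gradient(weight, shard):
    # Gradient of mean squared error for y = w*x on this worker's shard.
    return sum(2 * (weight * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(values):
    # Stand-in for a collective all-reduce across the cluster network.
    return sum(values) / len(values)

def train_step(weights, shards, lr=0.01):
    grads = [local_gradient(w, s) for w, s in zip(weights, shards)]
    g = allreduce_mean(grads)              # one averaged gradient for all
    return [w - lr * g for w in weights]   # identical update on every replica

# Two workers, data drawn from y = 3x, replicas start in sync at w = 0.0.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
weights = [0.0, 0.0]
for _ in range(200):
    weights = train_step(weights, shards)
print(weights[0])  # converges toward 3.0, and all replicas agree
```

Averaging gradients keeps every replica bitwise identical after each step, which is why scaling out does not change the model being learned, only how fast it is learned.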

The work we have done on large-scale compute clusters, leading network design, and the software stack that manages them (including Azure Machine Learning, ONNX Runtime, and other Azure AI services) is directly aligned with our AI at Scale strategy.

Machine learning at the edge

Microsoft provides various solutions in the intelligent edge portfolio to ensure that customers can run machine learning not only in the cloud but also at the edge. These solutions include Azure Stack Hub, Azure Stack Edge, and IoT Edge.

Whether you are capturing sensor data and running inference at the edge, or performing end-to-end processing with model training in Azure and using the trained models at the edge for enhanced inference, Microsoft can support your needs however and wherever you need.
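A minimal sketch of that train-in-the-cloud, infer-at-the-edge split (pure Python with a toy linear model and a hypothetical JSON artifact, not a specific Azure API):

```python
import json

# "Cloud" side: fit y = a*x + b by least squares on training data,
# then export the parameters as a portable artifact.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # y = 2x + 1
n = len(data)
sx = sum(x for x, _ in data); sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data); sxy = sum(x * y for x, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
artifact = json.dumps({"a": a, "b": b})  # e.g. pushed to the device via IoT Edge

# "Edge" side: load the trained parameters and run local inference,
# even when disconnected from the cloud.
model = json.loads(artifact)
def infer(x):
    return model["a"] * x + model["b"]

print(infer(10.0))  # 21.0 for y = 2x + 1
```

The key design point is that only the serialized model crosses the boundary: training can use cloud-scale resources, while the edge device needs just enough compute to run inference.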

Visualization and GPU workstations

Azure enables a wide range of visualization workloads, which are critical for desktop virtualization as well as professional graphics such as computer-aided design, content creation, and interactive rendering. Visualization workloads on Azure are powered by NVIDIA’s world-class graphics processing units (GPUs) and RTX technology, the world’s preeminent visual computing platform.

With access to graphics workstations on Azure cloud, artists, designers, and technical professionals can work remotely, from anywhere, and from any connected device.

Recapping 2021 moments with Azure and NVIDIA technologies

Wildlife Protection Services

From deforestation to wildfire management to protecting endangered animals, studying wildlife populations is essential to a sustainable future. Learn how Wildlife Protection Services works with Microsoft AI for Earth to provide the monitoring technology that conservation groups need to keep watch over wild places and protect wildlife, using an infrastructure of Azure High Performance Computing virtual machines with NVIDIA V100 GPUs.

Van Gogh Museum

With tens of thousands of Chinese visitors each year, the Van Gogh Museum wanted to create something unique for this audience. Enter WeChat, an app that could transform portrait photos into digital paintings reminiscent of Van Gogh’s art. Users, able to see how the artist would have painted them, would ideally be drawn closer to his art through this unique, personal experience.

FLSmidth

FLSmidth has an ambitious goal of zero emissions by 2030, but it was hampered by the latency and performance limitations of its on-premises infrastructure. By moving to Microsoft Azure in collaboration with partner Ubercloud, FLSmidth found the perfect vehicle for optimizing the engineering simulation platforms that depend on high-performance computing. The switch has eliminated the latency issues, democratized their platform, and produced results 10 times faster than their previous infrastructure.

Previous 2021 Azure HPC and AI product launches

Azure announces general availability of scale-out NVIDIA A100 GPU clusters: the fastest public cloud supercomputer, the Azure ND A100 v4 virtual machine powered by NVIDIA A100 Tensor Core GPUs, is designed to let our most demanding customers scale up and scale out without slowing down.

In the June 2021 TOP500 list, Microsoft Azure took public cloud services to a new level, demonstrating work on systems that took four consecutive spots, No. 26 through No. 29, on the list. They are part of a global AI supercomputer, the ND A100 v4 cluster, available on demand in four global regions today. These rankings were achieved on a fraction of our overall cluster size. Each of the systems delivered 16.59 petaflops on the HPL benchmark, also known as Linpack, a traditional measure of HPC performance on 64-bit floating point math that is the basis for the TOP500 rankings.

Azure announces the DeepSpeed- and Megatron-powered Megatron-Turing Natural Language Generation model (MT-NLG), the largest and most powerful monolithic transformer language model trained to date, with 530 billion parameters. It is the result of a research collaboration between Microsoft and NVIDIA to further parallelize and optimize the training of very large AI models.

Join us at the NVIDIA GTC Fall 2021 conference

Microsoft Azure is sponsoring NVIDIA GTC 2021 conference workshops and training. The NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI, accelerated computing, and accelerated data science to help developers, data scientists, and other professionals solve their most challenging problems. These in-depth workshops are taught by experts in their respective fields, delivering industry-leading technical knowledge to drive breakthrough results for individuals and organizations.

On-demand Microsoft sessions with GTC

Microsoft session recordings will be available on the GTC site starting April 12, 2021.

Source: azure.microsoft.com

Thursday, 28 October 2021

Upgrade your network protection with Azure Firewall Premium


We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are excited to share several new Azure Firewall Premium capabilities based on your feedback:

◉ Azure Firewall Premium availability in more regions. In Azure, we make it our mission to be the world’s computer, and as part of fulfilling this mission we are available in more regions and datacenters than any other public cloud. The Azure Firewall team follows the same ethos by bringing Firewall Premium’s next-generation threat protection and security closer to applications and customers.

◉ Terraform support for Firewall Policy Premium is now available. Terraform is a popular open-source tool used by DevOps teams to implement infrastructure as code. Azure Firewall Premium supports a range of DevOps tools, including Azure CLI, PowerShell, and the REST API. Customers can now also use Terraform to manage their Azure Firewall Premium.

◉ Web categories Category Check is now available in preview. Web categories lets administrators allow or deny user access to website categories such as gambling websites, social media websites, and others. Customers often want to check which category a specific URL falls under. They can now use the Azure portal to determine a URL’s web category and share feedback if the category is not accurate.

◉ Migrate to the Premium SKU using the Stop/Start approach. Stop/Start is a convenient approach adopted by customers to stop firewall billing during an evaluation period. It deallocates the firewall virtual instances while preserving the underlying firewall resource. Customers looking for a simple migration to Azure Firewall Premium can now use the Stop/Start approach. This approach is supported for both Secure Hub and Hub VNET firewalls.

Azure Firewall Premium is now available in new regions

Azure Firewall Premium is now available in both Microsoft Government Cloud and Azure China 21Vianet, bringing the total to 44 Azure regions. The newly added regions are:

◉ USGov Texas

◉ USGov Arizona

◉ USGov Virginia

◉ China North 2

◉ China East 2

Terraform support for Firewall Policy Premium

The Terraform community recently added support for Firewall Policy Premium SKU properties. Customers using Firewall Policy can now leverage Terraform to manage both Standard and Premium SKU attributes.
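As a sketch, managing a Premium firewall policy from Terraform might look like the fragment below (resource names are hypothetical; `sku` and `intrusion_detection` are arguments of the `azurerm` provider’s `azurerm_firewall_policy` resource, so verify the schema against your provider version):

```hcl
resource "azurerm_firewall_policy" "premium_policy" {
  name                = "example-premium-policy"            # hypothetical name
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "Premium"                           # Premium SKU

  intrusion_detection {
    mode = "Alert"                                          # IDPS in alert mode
  }
}
```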

Web categories—Category Check and Mis-Categorization request

Web categories is a feature that allows you to filter application traffic based on defined categories, such as gambling, social media, and download sites. With Web Categories Category Check, administrators can now perform a fully qualified domain name (FQDN) or URL check to identify which category a website falls under. This is particularly useful when defining your rules. In addition, you can now provide feedback by submitting a Mis-Categorization request if you believe a website should fall under a different category.
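Conceptually, the check behaves like the toy lookup below (a made-up category table and pure Python, not the service’s implementation): extract the FQDN from the input and map it to a category.

```python
from urllib.parse import urlparse

# Toy category table; the real service resolves categories dynamically.
CATEGORIES = {
    "www.example-bets.test": "Gambling",
    "social.example.test": "Social networking",
}

def category_check(target):
    # Accept either a bare FQDN or a full URL, mirroring Category Check
    # accepting both forms.
    host = urlparse(target).hostname or target
    return CATEGORIES.get(host, "Uncategorized")

print(category_check("https://social.example.test/profile"))  # Social networking
print(category_check("www.example-bets.test"))                # Gambling
```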


Migrate to Premium SKU using Stop/Start approach


If you use Azure Firewall Standard SKU with Firewall Policy, you can use the Allocate/Deallocate method to upgrade your Firewall SKU to Premium. This migration approach is supported on both VNET Hub and Secure Hub Firewalls. Secure Hub deployments will be upgraded while preserving the public IP of the firewall.

Source: microsoft.com

Tuesday, 26 October 2021

Spot every Spring Boot application issue on the spot using AppDynamics

Today, we are excited to announce the integration of AppDynamics in Azure Spring Cloud for full stack observability.

Over the past 20 months, we worked with many enterprise customers to learn about the scenarios they face. Many of these customers have thousands of Spring Boot applications running in on-premises data centers. As they migrate these applications to the cloud, they need to instrument them for application performance monitoring (APM) using solutions their developers are familiar with and have been using for years. In addition, they must ensure continuity for desktop and mobile applications that are already pre-instrumented for end-to-end monitoring using agents like AppDynamics, which automatically discovers and maps all applications, microservices, and infrastructure as well as any dependencies in dynamic hybrid, multicloud environments. With the integration of AppDynamics in Azure Spring Cloud, you can continue your journey and easily instrument your Spring Boot applications with AppDynamics.

"Our deployment of Azure Spring Cloud is just one component of a much broader multi-region, multicloud architecture. We’ve used AppDynamics at Digital Realty for several years and had a strong desire to have this single pane of glass to capture performance metrics and monitor availability across the full stack. But not only that, AppDynamics is a powerful APM tool that is one component of adopting an event-driven architecture. We are experimenting with self-healing and AIOps and need to expand our capabilities in this area, which we can now do."—Devon Yost, Enterprise Architect, Digital Realty Trust

"As our customers look to modernize their applications and leverage the ease of use and scaling of cloud platforms like Azure Spring Cloud, the need for full stack observability is more important than ever. Many of Presidio’s customers leverage the instrumentation capabilities of AppDynamics to run their Spring Boot applications in the high performing, cost efficient, and highly available manner that their businesses demand. As they plan their migrations to Azure and the Azure Spring Cloud Platform, they will need more than ever consistency of insights and visibility into their applications. Azure Spring Cloud support for AppDynamics brings together the two platforms and gives Presidio and its customers a clear path forward."—Rich Gondek, Senior Technical Director, Digital Business Solutions and Services, Presidio.

“I’m excited by our continued partnership with Microsoft. AppDynamics will provide developers working within Azure Spring Cloud with real-time visibility, with correlated insights that enable them to isolate the root cause of any performance issues and optimize microservices with context to the business impact.”—Gregg Ostrowski, Executive CTO at AppDynamics

“Microsoft is committed to making it easier for Spring developers to run their apps at scale on Azure. AppDynamics is a critical part of full stack observability for many of our customers, and through our partnership they can now get this functionality integrated into Azure Spring Cloud. Supporting APM tools of choice for developers—like AppDynamics—minimizes change so they can focus on delivering new capabilities.”—Julia Liuson, Corporate Vice President, Developer Division, Microsoft

Watch the video above to learn about using AppDynamics to monitor Spring Boot applications.

The ability to observe and correlate applications with the infrastructure, network, and security throughout the entire IT stack is now essential in a digitally transformed world. Technologists demand full-stack observability so they can foresee issues before they affect customers and the business. At the same time, developers continue to innovate and evolve the IT stack by adopting the latest and most agile cloud platforms, like Azure Spring Cloud, to deploy and run applications at scale quickly and easily. As they do, it is critical that they can move rapidly toward full-stack visibility, insights, and action, transforming siloed data into actionable insights that give IT teams the shared context to deliver exceptional digital experiences, optimize for cost and performance, and maximize digital business revenue.

AppDynamics and Microsoft continue to partner together to ensure that Azure customers can maintain that level of business observability with cloud-native applications on Azure. Now, customers can observe what matters most across their services and workloads deployed on Azure Spring Cloud using AppDynamics—with no code changes required.


Activating the AppDynamics agent in an Azure Spring Cloud deployment is as simple as configuring the JVM options and environment variables when deploying a service via the Azure CLI.

Figure 1: Shows the Azure CLI command to activate AppDynamics as part of a deployment

The AppDynamics agent can also be activated within Azure Spring Cloud through the Azure Portal, by utilizing Azure Resource Manager (ARM) templates, Bicep templates, or via Terraform.

Upon activation, open the AppDynamics Portal to begin monitoring. In the view shown below, AppDynamics shows all the Spring Boot applications in Azure Spring Cloud.

Figure 2: Shows all the Spring Boot applications running in a production workload

You can also see automated flow maps that show the flow of data through your applications, revealing Spring Boot apps, their dependencies, and the exact locations of any issues to be addressed to mitigate business impact. In the view shown below, flow maps are available automatically, with no additional configuration required.

Figure 3: Shows automated app flow maps for Spring Boot applications and their dependencies

AppDynamics traces key business transactions based on production application behavior. It dynamically baselines performance and automatically discovers what is normal, so customers and developers get alerts only when thresholds are exceeded. The view shown below traces the flow of data from the api-gateway app to the customers-service app, and then to a MySQL database.

Figure 4: Shows flow of data from one Spring Boot app to another Spring Boot app, and then to a MySQL database

You can also get deep diagnostics such as full call stacks, granular method-level code details, and database queries. You can turn on development mode for occasional troubleshooting, in which certain default limits on the data that AppDynamics collects are turned off, giving you additional visibility into application activities. In the view shown below, we zoom into all the database calls: over the last few hours, there were about 5 million calls with good response times.

Figure 5: Zooms into all the database calls from a Spring Boot application

You can also look for the slowest database calls. In the view shown below for the customers-service app, these are the slowest database calls; in particular, plenty of time is spent setting up connections. As always, there is some room for improvement.

Figure 6: Shows slowest database calls from a Spring Boot application

AppDynamics also has a useful JVM metric browser where you can examine JVM metrics over time. In the view shown below, you can compare the number of JVM threads against JVM CPU usage. You can certainly draw many more insights from the metric browser.

Figure 7: Shows JVM Metrics across time in AppDynamics Metric Browser

By using AppDynamics, you can enable seamless collaboration among developers, operators, and business teams. Unified monitoring and real-time insights bring these teams together to maintain application performance, user experience, and business performance.

Build your solutions and monitor them today


Azure Spring Cloud is jointly built, operated, and supported by Microsoft and VMware. It is a fully managed service for Spring Boot applications that abstracts away the complexity of infrastructure and Spring Cloud middleware management, so you can focus on building your business logic and let Azure take care of dynamic scaling, patches, security, compliance, and high availability. With a few steps, you can provision Azure Spring Cloud, create applications, deploy and scale Spring Boot applications, and start monitoring in minutes. We will continue to bring more developer-friendly and enterprise-ready features to Azure Spring Cloud. We would love to hear how you are building impactful solutions using Azure Spring Cloud.

Source: microsoft.com

Saturday, 23 October 2021

Increase ROI and business impact with Azure Machine Learning


A growing number of organizations are taking advantage of machine learning to increase efficiency, enhance customer experiences, and drive innovation. With increasing use of machine learning, customers might wish to consider a cost-benefit and value analysis across infrastructure, operations, and processes throughout their teams and organizations.

Azure Machine Learning is the enterprise-grade service to build and deploy models faster and accelerate the machine learning lifecycle. Thousands of customers, including many Fortune 100 companies, are already realizing the impact and benefits of Azure Machine Learning. To provide a detailed analysis of the potential return on investment (ROI) with Azure Machine Learning, Microsoft commissioned the Forrester Consulting Total Economic Impact™ (TEI) study to examine the cost savings and business benefits enabled by Azure Machine Learning.

Key findings

Azure Machine Learning provides advanced machine learning operations (MLOps) capabilities for operationalizing the machine learning lifecycle. Forrester interviewed five customers that use Azure Machine Learning and surveyed 199 data science, machine learning, or AI decision makers. For the purposes of this study, Forrester aggregated the results from these customers into a single composite organization.

Forrester found that after investing in Azure Machine Learning, customers experienced many efficiencies in their ability to execute machine learning projects, drive greater revenue, and lower operating costs. Benefits were calculated for a three-year period and Forrester concluded that benefits scaled as the organization’s adoption of Azure Machine Learning increased.

Highlighted benefits of using Azure Machine Learning:

◉ Three-year projected ROI of 189 percent to 335 percent.

◉ Improved data scientist productivity by up to 25 percent and data engineering productivity by up to 40 percent.

◉ Up to 40 percent reduction in time to onboard new data scientists, resulting in reduced costs.

◉ Increase in operational efficiency by streamlining model development, training, validation, deployment, and monitoring.

◉ Improved time-to-value of machine learning initiatives and model accuracy resulting in increased revenue and cost savings.

◉ Cost savings from retiring legacy technology.
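For context, ROI here follows the standard formula of net benefit over cost; the sketch below uses made-up three-year figures (not numbers from the study) chosen only to land inside the projected range.

```python
def roi_percent(total_benefits, total_costs):
    # Standard ROI: net benefit relative to cost, over the analysis period.
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical three-year figures, for illustration only.
print(roi_percent(total_benefits=5_780_000, total_costs=2_000_000))  # ~189
```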

Source: microsoft.com

Thursday, 21 October 2021

The Enclave Device Blueprint for confidential computing at the edge

A major milestone towards making confidential computing in IoT mainstream for privacy and safety.

In collaboration with Arm® Technologies and Scalys BV, we are announcing the immediate availability of the Enclave Device Blueprint to help make confidential computing a mainstream computing paradigm at the edge. The exponential growth in intelligent processing at the edge and autonomous command and control in the internet of things (IoT) necessitates confidential computing to protect privacy and safety. Confidential computing at the edge demands the use of security-hardened, tamper-resistant computing isolations called Trusted Execution Environments (TEEs), or simply enclaves. Enclave devices are extremely complex to engineer and operate, and their absence is holding back the fullest potential of edge computing. The Enclave Device Blueprint is here to simplify the engineering of enclave devices and the deployment of confidential applications in IoT.

Figure 1: Enclave Device Blueprint Architecture

The rising need for confidential computing


Confidential computing augments traditional computing paradigms with additional protections for computing workloads and data while in use. Traditional computing applies cryptography to encrypt content, whether computing workloads, data, or AI models, when in storage or in transit, but must decrypt that content in memory when in use. This model continues to work reasonably well in air-gapped compute networks, where there is less concern about data exfiltration or malicious tampering. IoT and cloud computing, on the other hand, usher in an age of computing characterized by hyperconnectivity, multitenant compute infrastructure, and data-driven autonomous command and control of many systems, including critical infrastructure, thereby calling for a higher bar for privacy and safety. Confidential computing using TEEs offers the compute isolation necessary to deliver on both privacy and safety and to unleash the full power of digital transformation with IoT.

Special considerations for IoT


A major difference between confidential computing in the cloud and at the edge is that while cloud solution providers set up and operate the requisite infrastructure for their customers, IoT solution builders must produce enclave devices themselves, and the complexity of doing so holds them back.

In addition, while the infrastructure for confidential computing in the cloud resides in datacenters, where it benefits from additional facility and operational security controls, IoT and enclave devices are generally deemed to be under constant threat of malicious physical access. To minimize this threat, enclave devices tend to favor binding trust directly to, or very close to, the root of trust hardware (bare metal) over the use of supervisory software like hypervisors and container runtimes, keeping the Trusted Computing Base (TCB) at a minimum. Supervisory software, on the other hand, helps abstract away the hardware to enable scale across hardware technologies, at the cost of a larger TCB. Having to forgo the scale advantages of supervisory software for higher security is one source of complexity in building, operating, and maintaining enclave devices.
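The idea of binding trust close to the root of trust can be sketched as a measurement chain, in the spirit of a TPM PCR extend (component names and values here are made up; real enclave devices use hardware-backed measurement and remote attestation):

```python
import hashlib

def extend(register, component):
    # Fold the next boot component into the running measurement:
    # new = H(old || H(component)). Order matters, so the whole boot
    # chain is captured in one 32-byte value.
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

def measure(chain):
    register = bytes(32)  # measurement register starts at zero
    for component in chain:
        register = extend(register, component)
    return register

good = measure([b"boot-rom", b"bootloader", b"enclave-runtime", b"confidential-app"])
bad = measure([b"boot-rom", b"evil-bootloader", b"enclave-runtime", b"confidential-app"])

# A verifier that replays the expected components detects any tampering,
# because a single changed component changes the final measurement.
print(good != bad)  # True
```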

The Enclave Device Blueprint at its core seeks to solve these challenges in a manner that still upholds security at the highest levels possible.

The Enclave Device Blueprint


The Enclave Device Blueprint comprises projects, resources, and guidance to abstract and simplify the development of enclave devices and facilitate the deployment of confidential applications at scale for IoT. It strives to complement traditional computing by filling in the architectural and component gaps towards making confidential computing mainstream in IoT.

The blueprint is agnostic of hardware technologies, operating systems, or solution cloud. It invites a community approach to solve a very complex problem where participants can mitigate cost through collaboration and foster higher security through the transparency that comes with open source. All blueprint components are currently in open source and were developed with the vision for complete community ownership and governance.

A real-world realization


More than just a collection of projects, resources, and guidance, the Enclave Device Blueprint was validated with a real-world device and project. The blueprint was inspired by direct observation of the roadblocks holding back confidential computing at the edge. It was therefore important to ensure that it addresses obstacles from real experiences and is backed by real product truth.

While collaborating on the development of the Enclave Device Blueprint, original equipment manufacturer (OEM) and secured-device builder Scalys BV engineered the TrustBox Edge 201 from the blueprint to satisfy both Azure IoT Edge certification and tamper-resistant enclave device requirements for confidential computing in IoT.

Figure 2: TrustBox Edge 201 (Source: Scalys 2021)

Building on the Azure cloud and making use of fully managed services like Azure IoT Edge, Azure IoT Hub, Azure Functions, and Azure Key Vault, we incorporated the software components of the Enclave Device Blueprint to orchestrate an end-to-end, at-scale build-and-deploy pipeline for confidential applications in IoT. It is the modular nature of the Enclave Device Blueprint components that makes it agnostic to technology, operating system, and cloud. The full project is available on Azure Samples and ready to try out using the TrustBox Edge 201 from Scalys BV. The Enclave Device Blueprint whitepaper provides a detailed explanation of the blueprint components.

The journey continues


The Enclave Device Blueprint delivers a major milestone aimed at spurring the creation of enclave devices and deploying confidential applications at scale in IoT. Enclave devices help provide privacy and safety protections necessary for IoT to unleash its fullest potential. Based on history, one can safely assume that this is only one of many milestones, the most obvious next being seamless integrations with confidential computing services in the cloud for uniform and pervasive cloud-edge confidential computing experiences.

Source: microsoft.com

Tuesday, 19 October 2021

Business as usual for Azure customers despite 2.4 Tbps DDoS attack

In early August, we shared Azure’s Distributed Denial-of-Service (DDoS) attack trends for the first half of 2021. We reported a 25 percent increase in the number of attacks compared to Q4 of 2020, albeit a decline in maximum attack throughput, from one terabit per second (Tbps) in Q3 of 2020 to 625 Mbps in the first half of 2021.

In the last week of August, we observed a 2.4 Tbps DDoS attack targeting an Azure customer in Europe. This is 140 percent higher than 2020’s 1 Tbps attack and higher than any network volumetric event previously detected on Azure.
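The 140 percent figure is a plain relative increase over the 2020 peak:

```python
def percent_increase(new, old):
    # Relative growth of the new peak over the old one, in percent.
    return (new - old) / old * 100

print(percent_increase(2.4, 1.0))  # 2.4 Tbps vs the 1 Tbps 2020 peak -> ~140
```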

Figure 1—maximum attack bandwidth (terabit per second) in 2020 vs. August 2021 attack.

The attack traffic originated from approximately 70,000 sources and from multiple countries in the Asia-Pacific region, such as Malaysia, Vietnam, Taiwan, Japan, and China, as well as from the United States. The attack vector was a UDP reflection spanning more than 10 minutes with very short-lived bursts, each ramping up in seconds to terabit volumes. In total, we monitored three main peaks, the first at 2.4 Tbps, the second at 0.55 Tbps, and the third at 1.7 Tbps.

Figure 2—attack lifespan and progress.

Azure’s massive scale DDoS protection


Attacks of this size demonstrate the ability of bad actors to wreak havoc by flooding targets with gigantic traffic volumes that try to choke network capacity. However, Azure’s DDoS protection platform, built on distributed DDoS detection and mitigation pipelines, can absorb tens of terabits per second of DDoS attack traffic. This aggregated, distributed mitigation capacity can massively scale to absorb the highest volume of DDoS threats, providing our customers the protection they need.

The attack mitigation lifecycle is orchestrated by our control-plane logic, which dynamically allocates mitigation resources to the most optimal locations, closest to the attack sources. In this case, attack traffic that originated in the Asia-Pacific region and the United States did not reach the customer region but was instead mitigated at the source countries.

Azure provides additional protections beyond ample mitigation capacity. Azure’s DDoS mitigation employs fast detection and mitigation of large attacks by continuously monitoring our infrastructure at many points across the network. When deviations from baselines are extremely large, our DDoS control-plane logic cuts through the normal detection steps, needed for lower-volume floods, to immediately kick in mitigation. This ensures the fastest time-to-mitigation and prevents collateral damage from such large attacks.
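The fast-path behavior described above can be illustrated with a toy sketch: when observed traffic deviates from the learned baseline by an extreme factor, the slower per-flow analysis is skipped and mitigation starts immediately. All names and thresholds below are invented for illustration and are not Azure’s actual implementation.

```python
# Toy sketch of baseline-deviation fast-path detection (illustrative only;
# names and thresholds are invented, not Azure's implementation).

BASELINE_GBPS = 10.0     # learned "normal" traffic level for a target
EXTREME_FACTOR = 50.0    # deviation large enough to justify the fast path

def choose_mitigation_path(observed_gbps: float) -> str:
    """Decide which detection path a traffic sample would take."""
    if observed_gbps >= BASELINE_GBPS * EXTREME_FACTOR:
        # Deviation is so large that fine-grained analysis is skipped.
        return "fast-path: mitigate immediately"
    if observed_gbps >= BASELINE_GBPS * 2:
        # Moderate deviation: run the normal, slower detection steps.
        return "normal detection: analyze flows before mitigating"
    return "no action"

print(choose_mitigation_path(2400.0))  # a 2.4 Tbps spike takes the fast path
print(choose_mitigation_path(25.0))
```

The point of the two-tier check is exactly the trade-off the paragraph describes: accuracy-preserving analysis for borderline floods, and an immediate response when the deviation leaves no room for doubt.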

Whether in the cloud or on-premises, every organization with internet-exposed workloads is vulnerable to DDoS attacks. Because of Azure's global absorption scale and advanced mitigation logic, the customer did not suffer any impact or downtime. If the customer had been running in their own datacenter, they would most probably have incurred extensive financial damage, alongside any intangible costs.

How to protect your workloads from DDoS attacks


The pace of digital transformation has accelerated significantly during the COVID-19 pandemic, alongside the adoption of cloud services. Bad actors, now more than ever, continuously look for ways to take applications offline. Therefore, organizations should give their utmost attention to developing a robust DDoS response strategy with Azure.

Azure DDoS Protection Standard provides enhanced DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to protect all public IP addresses in virtual networks. Protection is simple to enable on any new or existing virtual network and does not require any application or resource changes.

Besides the timely protection against DDoS attacks, another key feature of Azure DDoS Protection Standard is cost protection, whereby customers enrolled in DDoS Protection Standard receive data-transfer and application scale-out service credit for resource costs incurred because of documented DDoS attacks. It is imperative to have such cost protection with large attacks that may incur significant costs.

Source: microsoft.com

Saturday, 16 October 2021

Humana leverages Microsoft Cloud for Healthcare to develop advanced predictive models


Using AI models to make real-world impact

Trips to the hospital happen. And while everyone in the industry strives to deliver world-class care for in-patient experiences, everyone (patients and care teams alike) would prefer to avoid those hospital stays. The teams at Humana believed they had enough data to explore the possibility of proactively identifying when patients were heading toward a high-risk event, and they put Microsoft Cloud for Healthcare and AI technology to the test.

Humana’s questions were straightforward: How do we take the data we have today and use it proactively? How do we use AI to identify signals in our existing ecosystem that tell us someone might be experiencing a scenario that puts them at risk? And most importantly, how do we engage proactively, meeting our members in their own environment before they end up in an emergency room?

Efforts to monitor chronically ill patients often focus on remote patient monitoring and IoT devices, but we wanted to take a different, and much bigger, approach with AI. By combining clinical data, key event triggers that might indicate deteriorating health, and a combination of predictive models, Microsoft Research and Humana data science teams collaborated on research to explore whether they could develop a system that identifies potential gaps in care and connects high-risk patients with care teams that can reach out and offer support.

The power of AI model refinement

The result of the research was a glimpse into the future of AI in health. Health organizations like Humana have spent the last several years developing powerful, single focus predictive models. Humana had existing models that predicted the likelihood of acute hospital admissions in the near future across their 4.9 million Humana Medicare Advantage members, as well as additional models that predict the cost of care and the likelihood of readmissions. Microsoft Research and Humana data science teams brought those models together with structured data to create and test a combination of neural networks and tree-based models with the Microsoft cloud technologies.

Cloud-scale tooling was critical to developing the multivariable model, as was technology in the Microsoft Cloud for Healthcare to unify the variety of patient data streams. Moreover, Microsoft Research designed an advanced deep-learning-based sequential modeling approach to capture the dynamics of a patient’s health status, which is crucial to accurately predicting the likelihood of readmissions. To further increase the robustness of the learned research model, Microsoft Research developed self-paced resampling techniques to address the sample-imbalance challenge in this readmission prediction scenario. The research demonstrated that by integrating all these technologies together, the model’s precision improved by over 20 percent. And most importantly, the advanced models were developed using de-identified data, protecting patient information.
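The sample-imbalance problem is common in readmission prediction: very few patients in a dataset are actually readmitted, so a naively trained model can ignore them. As a much simpler illustration than the self-paced resampling mentioned above, the sketch below oversamples the rare class until the two classes are balanced; the data, function name, and record format are all invented for the example.

```python
# Illustrative oversampling for class imbalance. This is a toy stand-in for
# the far more sophisticated self-paced resampling described in the article.
import random

def oversample_minority(data, seed=0):
    """data: list of (features, label) pairs with labels 0/1.
    Randomly duplicates the rarer class until both classes are equal in size."""
    rng = random.Random(seed)
    pos = [d for d in data if d[1] == 1]
    neg = [d for d in data if d[1] == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority + minority + extra

# Synthetic data: 90 non-readmitted (label 0) vs 10 readmitted (label 1).
data = [([0.1], 0)] * 90 + [([0.9], 1)] * 10
balanced = oversample_minority(data)
print(sum(1 for _, y in balanced if y == 1),
      sum(1 for _, y in balanced if y == 0))  # both classes now 90
```

Self-paced methods go further by choosing *which* samples to duplicate or drop as training progresses, rather than sampling uniformly at random as this sketch does.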

Empower care teams to help patients when they need it most

“Model precision is critical here in identifying at-risk members,” shares Mike Hardbarger, Director of Data Science at Humana and a contributor to this project’s research. “Our members deserve personalized, proactive care. Using this model in conjunction with others, not only can we help them avoid hospital readmission, but care teams can have the necessary data to follow up with a custom plan.” From effective prescription management to addressing food insecurity, a care manager can then work directly with the member to set the next best action into motion.

Proactive problem-solving like this relies on collaboration and innovation. Deep learning allowed research teams, including Sean Ma, Lead Data Scientist at Humana, to take in the full scope of both scientific and industry considerations. “Working directly with algorithm authors significantly accelerated progress. I am excited for what’s to come,” says Ma.

Using Microsoft Cloud for Healthcare to do more with your data

This research project is just one step in the evolution of the Humana analytics engine. Enhancements will continue over time as additional research is conducted and the model continues to be validated.


Thursday, 14 October 2021

Empowering cloud sustainability with the Microsoft Emissions Impact Dashboard


As cloud services continue to scale, their impact on the environment cannot be overlooked. That’s why today we’re announcing the general availability of the Microsoft Emissions Impact Dashboard, a tool that helps Microsoft cloud customers understand, track, report, analyze, and reduce carbon emissions associated with their cloud usage. The Emissions Impact Dashboard is part of our commitment to address our own environmental impact and help our customers do the same.


We introduced the Emissions Impact Dashboard in January 2020 as the Microsoft Sustainability Calculator. Designed to help customers gather meaningful, shareable information, the Emissions Impact Dashboard gleans critical insights to help organizations make more informed, data-driven decisions about sustainable computing.

The Emissions Impact Dashboard provides transparency into greenhouse gas emissions associated with using Microsoft cloud services and enables a better understanding of the root causes of emissions changes. Organizations can measure the impact of Microsoft cloud usage on their carbon footprint, and they can drill down into emissions by month, service, and datacenter region. The tool also enables customers to enter un-migrated workloads and get an estimate of emissions savings from migrating to the Microsoft cloud. Newly added data protection allows Emissions Impact Dashboard administrators within an organization to control who can see their company data in the tool.
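The kind of drill-down described above (emissions by month, service, or region) is, at its core, an aggregation over usage records. The sketch below is a hypothetical illustration of that idea; the record format, field names, and numbers are invented and are not the dashboard’s actual data model.

```python
# Toy illustration of drilling into emissions data by different dimensions.
# Record format and figures are invented for the example.
from collections import defaultdict

records = [
    {"month": "2021-07", "service": "Compute", "kg_co2e": 120.0},
    {"month": "2021-07", "service": "Storage", "kg_co2e": 30.0},
    {"month": "2021-08", "service": "Compute", "kg_co2e": 95.0},
]

def emissions_by(records, key):
    """Sum kg CO2-equivalent grouped by the chosen dimension."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r["kg_co2e"]
    return dict(totals)

print(emissions_by(records, "month"))    # {'2021-07': 150.0, '2021-08': 95.0}
print(emissions_by(records, "service"))  # {'Compute': 215.0, 'Storage': 30.0}
```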


Figure 1: The main dashboard focuses on showcasing overall emissions and usage over time, as well as carbon intensity, which is a metric of carbon efficiency specific to cloud usage.

Expanded visibility into the impact of cloud usage


Over the past year, we’ve strengthened features in the Emissions Impact Dashboard, such as the calculation of Scope 3 emissions associated with Microsoft cloud usage. These emissions are indirect and occur across the product value chain—from materials extraction and manufacturing to transport, usage, and hardware disposal.


Figure 2: The dashboard shows carbon emissions by scope, year over year, helping organizations track trends and make impactful decisions about their cloud services usage.

Scope 3 activities are often the largest source of emissions, but they can be difficult to measure. Using our third-party validated methodology for carbon accounting, the Emissions Impact Dashboard provides a broad understanding of the carbon impact of cloud workloads across scopes. This allows for more complete and accurate emissions reporting and better decisions about sustainability.

RELX, the information and analytics group, has been using the Emissions Impact Dashboard during preview for the last six months. Sustainability is a priority for RELX, and the tool now forms one of the key components of its CO2.Hub, the Azure-based solution it has built using Power BI to measure its Scope 1, 2, and 3 emissions.

Another organization that’s successfully using the Emissions Impact Dashboard is The Bühler Group, a leading developer of grains and food solutions, consumer foods, and advanced materials that contribute to the production of energy-efficient vehicles and buildings. To reach its goal of becoming carbon neutral by 2030, Bühler saw the need to track Scopes 1, 2, and 3 emissions. With the Emissions Impact Dashboard, Bühler has determined Scope 3 emissions associated with its Microsoft cloud services in a format that’s easy to track and compile, adding to a more accurate view of their overall Scope 3 emissions.


Sunday, 10 October 2021

Join Microsoft at ISC 2021 and drive innovations with Azure high-performance computing


Customers around the world rely on Microsoft Azure to drive innovations.

Whether it’s understanding our environment, public health, energy sustainability, weather modeling, or economic growth, finding solutions to these important challenges requires huge amounts of focused computing power. Customers are increasingly finding that the best way to access such high-performance computing (HPC) is through the agility, scale, security, and leading-edge performance of Azure’s purpose-built HPC and AI cloud services.


Azure’s market-leading vision for HPC and AI is based on a core of genuine and recognized HPC expertise, using proven HPC technology and design principles, enhanced with the best features of the cloud. The result is a complete platform that delivers performance, scalability, and value, unlike any other cloud. This means applications scaling 12 times higher than other public clouds. It means higher application performance per VM. It means powering AI workloads for one customer with a supercomputer fit to be among the top five in the world. And it means delivering massive computing power into the hands of medical researchers over a weekend to prove out life-saving innovations in the fight against COVID-19.

Join us at ISC 2021 to learn more about Azure HPC and AI Supercomputing and Quantum

Microsoft will deliver a keynote on quantum computing at the International Supercomputing Conference (ISC) 2021 and will participate throughout the ISC 2021 technical program, including panel sessions, talks, and presentations by our customers.

Additionally, we invite you to join us at an informal off-hour event at ISC on Monday, June 28, 2021, from 6:30 PM – 7:30 PM CEST for our inaugural “open community social.” We will have a range of HPC and Quantum experts from research, engineering, and customer-facing roles. We will also officially launch our Azure HPC community program. Learn additional details about the Azure HPC after-hour social event at ISC 2021 and join our Discord group prior to the off-hour event.

Announcements for Azure HPC and AI Supercomputing in 2021

Azure FX-series Virtual Machine general availability announcement

We are announcing the general availability of the Azure FX-series Virtual Machine, a purpose-built virtual machine architected to optimize electronic design automation (EDA) workload processing.

NVIDIA ND A100 v4 launch

Azure announced the general availability of scale-out NVIDIA A100 GPU Clusters with HDR 200 Gb/s InfiniBand networking: the fastest public cloud supercomputer. The ND A100 v4 VM is powered by NVIDIA A100 Tensor Core GPUs and is designed to let our most demanding customers scale up and scale out without ever slowing down.

Xilinx NP launch

Azure launched general availability of the Azure NP-Series Virtual Machine. The NP-series virtual machines are powered by Xilinx U250 FPGAs for accelerating workloads including machine learning inference, video transcoding, and database search and analytics. NP-series VMs are also powered by Intel Xeon 8171M (Skylake) CPUs with an all-core turbo clock speed of 3.2 GHz.

AMD HBv3 launch

Azure announced the general availability of the Azure HBv3 virtual machine for HPC workloads, advancing the velocity at which we bring the latest technologies to our HPC customers and the compute performance we put at their fingertips.

Azure HPC and AI Collaboration Centers launch

Microsoft announced the HPC and AI Collaboration Centers program with five inaugural partners. These partners will share best practices for unlocking customer innovation and productivity with HPC and AI.

Big moments for Azure HPC and AI Supercomputing in 2021

UK Met Office supercomputing announcement

The UK Met Office announced that Microsoft had been awarded a contract to deliver a ten-year managed supercomputing service, providing advanced supercomputing capabilities for weather and climate research and ensuring the continuation of the Met Office’s leadership in this area.

Jellyfish

Jellyfish Pictures needed to enable secure remote access to immense computing power to render visual effects and animation sequences. The studio gained burst rendering on up to 90,000 processor cores in the cloud with Microsoft Azure Virtual Machine Scale Sets, HBv2 virtual machines, and Avere vFXT for Azure. It cuts those rendering costs by 80 percent with Azure Spot Virtual Machines and uses an Azure ExpressRoute connection to minimize latency while more securely managing storage in one place, without replication.

Van Gogh Museum

In an average year, tens of thousands of Chinese visitors head to the Netherlands to tour the Van Gogh Museum. In recognition of this interest, the museum has created a mini-program for the social media platform WeChat. Through this app-like program, Chinese users can take selfies and have them digitally reworked in the style of Vincent Van Gogh. To remain compliant with GDPR legislation, and to scale the mini-program to meet demand, the museum has deployed it in Microsoft Azure with Azure Blob Storage, Azure Functions, and Azure Kubernetes Service (AKS).

Wildlife Protection Solutions

Wildlife Protection Solutions (WPS) sits at the nexus between the most remote places in the world and burgeoning animal conservation science. It provides the monitoring technology that conservation groups need to keep watch over wild places and protect wildlife. Conservationists use remote cameras to gather image data about the status of the species they protect, but the number of images that must be analyzed before action can be taken is overwhelming. WPS overcomes this barrier in collaboration with Microsoft AI for Earth, supported by Azure technologies, for a species-preserving solution.


Saturday, 9 October 2021

Advancing reliability through a resilient cloud supply chain

Microsoft’s cloud supply chain is essential to deliver the infrastructure—servers, storage, and networking gear—that enables cloud reliability and growth. Our vision is for cloud capacity to be available like a utility so that customers can seamlessly turn it on when and where they need it. With the Microsoft Cloud powering everything from mission-critical business applications to governments, life and safety services, financial services, and much more, it’s crucial that customers are able to scale out when they need to, even with unplanned spikes in demand. To deliver this experience, a resilient, predictable, and agile supply chain is key.

Our cloud is growing fast, and we’ll be adding 50 to 100 new datacenters each year for the foreseeable future to the more than 200 datacenters we currently have in operation across 34 countries. More regions were announced last year than at any time previously. The supply chain supports the scale of this growth through an end-to-end value chain of activities. We plan the products, source the materials, build the products, deliver and install them in the datacenters, manage the customer capacity experience, and finally service and decommission the hardware at end of life. Systems and data link the processes so that each activity informs the next.


How Microsoft is handling supply chain disruptions


The pandemic has disrupted supply chains globally, across many industries. We plan for the unexpected, and yet the pandemic created dramatic changes in demand and supply that we continue to learn and grow from. These changes have required us to respond with agility. Here are some ways the cloud supply chain is building on lessons learned over the past year to strengthen our resiliency and mitigate risks:

◉ End-to-end visibility: Near real-time visibility to supply, inventory and factory status aligned with demand is fundamental to managing supply disruptions and responding to exceptions. We’re investing in this area and launched a “control tower” initiative leveraging Azure and partner services to enable increased visibility and digitally transform our supply chain. Our control tower capability is based on a unified data model built on top of a digital twin of our extended supply chain that will give us finer-grained control over operations. We’ll see more progress in this area over the next year.

◉ Improving lead time for servers: The Azure customer experience is a core priority. To maintain the best customer experience while responding to increasing demand, we've improved lead times for building, delivering, and bringing new server capacity live in our datacenters. We have cut these times by 70 percent by automating, accelerating, and removing processes from the critical path. In addition, we shifted our fulfillment model from build-to-order to build-to-stock for high-volume products. With short lead times, we can respond to customer demand quickly and with agility.

◉ Increased supplier diversification: Over the past year, we have diversified our supply chain—instead of sourcing key subassemblies from one nation, we have diversified our sources to over five countries. This enhances Azure supply chain resiliency to provide stable services to customers during unforeseen circumstances such as the pandemic, natural disasters, or trade challenges.

◉ Intelligent buffering: We have top data scientists and software engineers at Microsoft who have helped us develop an intelligent buffering model, which has allowed us to “shift left” in the supply chain process. This means we carry buffer (inventory) in more agile or raw components rather than finished goods to give us more flexibility to respond to what we need in the datacenters. This has enabled us to better protect our business from demand surges and supply disruptions.

By applying all of these lessons over the last year and a half, we’ve come a long way in strengthening our resiliency to better respond to supply or demand volatility, in turn improving capacity fulfillment success on our platform and advancing reliability for our customers.
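The “shift left” buffering idea can be illustrated with a toy example: a buffer held as shared raw components can be turned into whichever finished product demand calls for, unlike a buffer of already-built servers. The bill-of-materials structure and all numbers below are invented for illustration.

```python
# Toy sketch of component-level ("shift left") buffering: one shared pool of
# components can cover demand surges for several finished products.
# Product names and bills of materials are invented.

component_buffer = {"cpu": 1000, "ssd": 2000}   # shared raw-component stock
bom = {                                          # components per finished unit
    "server_a": {"cpu": 2, "ssd": 4},
    "server_b": {"cpu": 1, "ssd": 8},
}

def buildable(product: str) -> int:
    """Max units of `product` the component buffer could be assembled into,
    limited by whichever component runs out first."""
    needs = bom[product]
    return min(component_buffer[c] // n for c, n in needs.items())

print(buildable("server_a"))   # 500 (limited equally by CPUs and SSDs)
print(buildable("server_b"))   # 250 (limited by SSDs)
```

Holding the same value as finished goods would lock the buffer into one product mix in advance; holding it as components defers that choice until demand is known.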

What makes the Microsoft cloud supply chain different?


In addition to our supply chain resilience, Microsoft’s cloud supply chain has unique features that enhance our competitive advantage and our customer experience.

Sustainability

Microsoft has made bold sustainability commitments to be carbon negative, zero waste, and water positive by 2030. The supply chain plays a major role in helping the company achieve these goals. To achieve our zero waste targets, we are building Circular Centers, which are facilities that reuse and repurpose servers and hardware that are being decommissioned. We have one Circular Center in operation, are currently building another, and have nine more on the roadmap. Our Circular Centers use intelligent routing to process and decommission assets on site, maximizing sustainability and value return. We expect the Microsoft Circular Centers to increase the reuse of our servers and components by up to 90 percent by 2025.

To support our carbon-negative goals, we’ve introduced the Sustainability Calculator to give our customers, as well as Microsoft, transparency into the carbon emissions of their cloud usage. We’re also serious about carbon accounting, measuring and monitoring across organizational, process, and product pathways. Every method we create is third-party verified and shared publicly.

Innovation

We seek to be an innovation engine that pushes the supply chain industry forward. We’re investing in our decision science to leverage machine learning, optimization algorithms, artificial intelligence, and digital twins for supply chain so we can make faster, smarter decisions. Our control tower will make it possible to manage all these processes end-to-end.

We’ve also developed a blockchain-based solution for the supply chain to improve traceability and create trust in data across our supplier partners by digitizing items in a shared data structure. We won the Gartner Power of the Profession Award for Supply Chain Breakthrough of the Year for our blockchain technology. The solution is currently in production for SSD and DRAM and is expanding to the full high-value-commodities supplier base. In the future, the technology will enable traceability all the way from mine to datacenter and beyond into recycling, reuse, and disposition.
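To illustrate why a blockchain-style ledger helps with traceability and trust, the toy sketch below chains each supply record to the previous one by hash, so tampering with any earlier record is detectable. This demonstrates the general technique only; it is not Microsoft’s actual solution, and the record fields are invented.

```python
# Minimal hash-chained ledger: each record commits to the previous record's
# hash, so altering history breaks verification. Illustrative only.
import hashlib
import json

def add_record(chain, payload):
    """Append a payload, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"payload": payload, "prev": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

def verify(chain):
    """Recompute every hash and check each link back to its predecessor."""
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256(
            json.dumps({"payload": rec["payload"], "prev": rec["prev"]},
                       sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"part": "SSD-123", "site": "factory-1"})
add_record(chain, {"part": "SSD-123", "site": "datacenter-7"})
print(verify(chain))                        # True
chain[0]["payload"]["site"] = "tampered"
print(verify(chain))                        # False: the chain detects the edit
```

Real deployments add distributed consensus and signatures on top of this linking, which is what creates shared trust across independent supplier partners.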

Trust

We want Azure to be known as the most trusted cloud on the planet. This includes customer trust in Azure as a platform and in Microsoft as a partner in their success. Earning that trust requires two capabilities—security and resilience. We’ve put in place the Azure Security and Resiliency Architecture (ASRA) as an approach to drive security and resiliency consistently and comprehensively throughout the lifecycle of Azure hardware, systems, infrastructure, and supply base.

Microsoft spends $1 billion a year on security, and our hardware and datacenters are designed with security top of mind. Microsoft is leading the way in confidential computing, deploying hardware that is physically and logically isolated from anyone who has access to the server while it’s operating.

Finally, we are building transparency into our supply chain so that we know we are sourcing ethically and are accountable partners along each step of the way. This includes risk management, anti-bribery and anti-corruption, human and labor rights, health and safety, and more.

Talent

Our strength is in our people, and we want to be known as a talent destination. At Microsoft we expect each of us to play a role in creating an inclusive environment where people of diverse backgrounds bring all of who they are to our work. Different perspectives help us achieve more and inclusive thinking drives our innovation.


We’re also investing in our people and capabilities through a new initiative called Supply Chain Academy, which develops our muscle around supply chain excellence by offering online courses on supply chain best practices and disciplines.

It’s all about our customers


At the end of the day, everything we do comes back to providing the best experience for our customers. By investing in our agility, resiliency, innovation, security, and talent, we are building a world-class supply chain that will make Azure the most reliable and trusted cloud platform. We’re grateful to all our partners, suppliers, and customers who have joined us in this journey as we power the world’s computers and empower every person and organization on the planet to achieve more.

Thursday, 7 October 2021

Microsoft named a Leader in 2021 Gartner® Magic Quadrant™ for data integration tools


We are announcing that Microsoft has been named a Leader in the 2021 Gartner Magic Quadrant for Data Integration Tools. We believe this recognition reflects our continued commitment to delivering comprehensive and cost-effective data integration solutions, including both code-first and low- or no-code tooling for on-premises, hybrid, and cloud workloads.


Data integration is an essential component of our customers’ end-to-end analytics strategy. To understand and act upon data, companies must first be able to pull that data together and prepare it for analysis in a central, secure environment. As a result, reliable data pipelines are the foundation of a trusted analytics solution. By leveraging Microsoft’s data integration tooling to combine disparate data sources, automate processes, and manage their data estates, our customers set the stage to deliver value to their teams through clear and actionable insights.

The Gartner Magic Quadrant for Data Integration Tools evaluated companies on a range of use cases across data engineering, cloud migration, and operational data integration tasks.


A comprehensive and cost-effective solution


Microsoft’s data integration tooling focuses on delivering a comprehensive set of relevant capabilities while managing ease of use and total cost of ownership. We believe this makes it possible for customers to be more productive across use cases such as low- or no-code ETL, data migration, enterprise business workflows, and metadata management.

Further, we seek to provide a broad set of resources to support customers, including community collaboration, trainings, and a wide range of third-party options for learning best practices around deployment.

The combination of relevant features, managed costs, and ample support resources ensures customers have access to both the technology and the technical knowledge necessary to successfully address their data integration requirements.

The future of cloud analytics


As data management grows more and more complex, we believe the future of analytics lies in a continuum where data integration occurs in the same environment as data lake management, data warehousing, and machine learning projects. This streamlines collaboration, simplifies compliance and security, and accelerates time to insight. In pursuit of these goals, Microsoft has invested in incorporating our data integration capabilities into a unified analytics platform: Azure Synapse Analytics.

Azure Synapse Analytics makes it possible to ingest, explore, prepare, transform, manage, and serve data for business intelligence and machine learning in a centralized, secure environment. This means customers can accomplish any analytics workload from a single workspace, eliminating data barriers and traditional siloes that can stall progress and reduce impact.

Azure Synapse delivers limitless scale and unmatched security as it unites the worlds of data integration, enterprise data warehousing, and big data analytics. As a result, customers can spend less time on data management and more time on identifying and acting on data insights.
