Saturday, 31 December 2022

Zero downtime migration for Azure Front Door—now in preview

In March of this year, we announced the general availability of two new Azure Front Door tiers. Azure Front Door Standard and Premium are our native, modern cloud content delivery network (CDN), catering to both dynamic and static content delivery acceleration with built-in turnkey security and a simple, predictable pricing model. The new tiers have already been widely adopted by many of our customers. We also promised to provide a zero-downtime migration tool to migrate from Azure Front Door (classic) and Azure CDN from Microsoft (classic) to the new Azure Front Door tiers.

Today, we are taking the next step in that journey, and we are excited to announce the preview of the Azure Front Door tier migration capability, along with several new features. The migration capability for Azure CDN from Microsoft (classic) is coming soon.

New features/capabilities on the new Front Door since general availability


Along with the migration feature, we added more capabilities and integrations to the new Front Door tiers to provide you with a better cloud CDN solution and a more integrated Azure cloud experience.

◉ Preview—Upgrade from Standard to Premium tier without downtime: To learn more about upgrading to Premium tier, see Azure Front Door Tier Upgrade. This capability is also supported during the migration from Azure Front Door (classic) to the new Front Door tier.

◉ Preview—Managed identities integration: Azure Front Door now supports managed identities generated by Azure Active Directory, allowing Front Door to easily and securely access other Azure AD–protected resources such as Azure Key Vault (a minimal client-side sketch of this pattern follows this list). This feature is in addition to the AAD application access to Key Vault that is currently supported.

◉ Integration with App Service: Front Door can now be deployed directly from the App Service resource with a few clicks. The previous deployment workflow only supported Azure Front Door (classic) and Azure CDN.

◉ Pre-validated domain integration with Static Web Apps: Static Web App (SWA) customers who have already validated custom domains at the SWA level can now skip domain validation on their Azure Front Door.

◉ Terraform support for Azure Front Door Standard and Premium, enabling the automation of Azure Front Door Standard and Premium provisioning using Terraform.

◉ Azure Advisor integration provides suggestions for best practices and configurations, including expired certificates, certificates about to expire, autorotation failures for managed certificates, domains still pending validation after 24 hours, and use of the latest secret version.
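Front Door performs the Key Vault lookup internally once it has been granted access, but the same managed-identity pattern applies to any Azure AD–protected resource. Below is a minimal client-side sketch of that pattern using the Azure SDK for Python; the vault URL and secret name are hypothetical placeholders.

```python
# Minimal sketch: reading a Key Vault secret with a managed identity.
# Runs on Azure-hosted compute that has a managed identity assigned;
# the vault URL and secret name below are hypothetical placeholders.
# pip install azure-identity azure-keyvault-secrets
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = ManagedIdentityCredential()  # no stored secrets; Azure AD issues the token
client = SecretClient(
    vault_url="https://contoso-vault.vault.azure.net",
    credential=credential,
)

secret = client.get_secret("origin-tls-password")
print(secret.name, secret.properties.version)
```

The identity only works once the vault's access policy (or RBAC role assignment) grants it secret-read permission, which mirrors the grant step described in the migration flow below.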

Migration overview


Azure Front Door enables you to perform a zero-downtime migration from Azure Front Door (classic) to Azure Front Door Standard or Premium in just three simple steps. The migration will take a few minutes to complete depending on the complexity of your Azure Front Door (classic) instance, such as the number of domains, backend pools, routes, and other configurations.


If your Azure Front Door (classic) instance has custom domains with your own certificates, there are two extra steps: enable a managed identity for the new Azure Front Door profile and grant that managed identity access to your key vault.


The classic instance will be migrated to the Standard or Premium tier by default based on the Azure Front Door (classic) WAF configurations. Upgrading from the Standard tier to Premium during the migration is also supported. If your Azure Front Door (classic) qualifies to migrate to Azure Front Door Standard, but the number of resources exceeds the standard quota limit, the Azure Front Door (classic) instances will be migrated to a Premium profile instead.

If you have Web Application Firewall (WAF) policies associated with the Front Door profile, the migration process will create copies of your WAF policies and configurations for the new Front Door profile tier. You can also use an existing WAF policy that matches the tier you're migrating to.

Azure Front Door tier migration is supported using the Azure portal. Azure PowerShell, Azure CLI, SDK, and REST API support is coming soon.

You’ll be charged the Azure Front Door Standard and Premium base fee from the moment the migration completes. Data transfer out from edge locations to clients, outbound data transfer from edge to origin, and requests will be charged based on the traffic flow after migration.

Notable changes after migration


◉ DevOps: Azure Front Door Standard and Premium uses a different resource provider namespace, Microsoft.Cdn, while Azure Front Door (classic) uses Microsoft.Network. After migrating from classic to the Standard or Premium tier, you’ll need to update your DevOps scripts and infrastructure code to use the new namespace and the updated ARM templates, Bicep, PowerShell module, Terraform, CLI commands, and API (a quick way to see the difference is sketched after this list).

◉ Endpoint: The new Front Door endpoint gets generated with a hash value to prevent domain takeover, in the format of endpointname-hashvalue.z01.azurefd.net. The Azure Front Door (classic) endpoint name will continue to work after migration. However, we recommend replacing it with the newly created endpoint in Azure Front Door Standard and Premium.

◉ Diagnostic logs and metrics won’t be migrated. We recommend you enable diagnostic logs and monitoring metrics in your Azure Front Door Standard or Premium profile after migration. Azure Front Door Standard and Premium tier also offers built-in reports and health probe logs.
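As a quick illustration of the namespace change called out in the DevOps item above, the sketch below lists Front Door resources under both provider namespaces with the Azure SDK for Python; the subscription ID is a placeholder.

```python
# Sketch: listing Front Door resources under the classic and the new
# resource provider namespaces. The subscription ID is a placeholder.
# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Azure Front Door (classic) resources live under Microsoft.Network/frontDoors.
for r in client.resources.list(filter="resourceType eq 'Microsoft.Network/frontDoors'"):
    print("classic:", r.name)

# Azure Front Door Standard/Premium profiles live under Microsoft.Cdn/profiles.
for r in client.resources.list(filter="resourceType eq 'Microsoft.Cdn/profiles'"):
    print("standard/premium:", r.name)
```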

Source: microsoft.com

Thursday, 29 December 2022

Delivering consistency and transparency for cloud hardware security

When it comes to building the Microsoft Cloud, our work to standardize designs for systems, boards, racks, and other parts of our datacenter infrastructure is paramount to facilitating forward progress and innovation across the computing industry. Microsoft has made a number of contributions to and collaborated with various members of the Open Compute Project (OCP) community, the leading industry group dedicated to open source hardware innovation. This year, we are excited to showcase some of our newest projects at the OCP Global Summit and share our learnings on the path of building a more reliable, trusted, and sustainable cloud. One of the key areas where we’ve seen continued focus and opportunity is driving industrywide standards around platform security. To dive deeper into our contributions in this area, I’ve invited Mark Russinovich, CTO and Technical Fellow, Azure, and Bryan Kelly, Partner Architect, Azure Hardware Systems and Infrastructure, to share more about Microsoft’s newest security contributions to OCP that standardize the foundations of trust, integrity, and reliability in computing.

Securing customer workloads from the cloud to the edge


Microsoft Azure is a leader in cloud security and privacy offering a broad range of confidential computing services to help organizations run workloads that keep business and customer data private with advanced levels of security. As the demand for confidential computing grows from cloud to edge, so do the requirements for consistency and transparency of the security mechanisms that protect workloads. With the rise of edge computing, the resultant growth in the exposed attack surface also presents a need for stronger physical security solutions. In this context, there is an increased need for greater transparency in the infrastructure that underpins these technologies and upholds hardware security promises.

Caliptra: Integrating trust into every chip


At the Open Compute Project (OCP) Summit, we are jointly announcing Caliptra, an open source root of trust (RoT) that produces cryptographic proofs about the hardware protections in place for confidential workloads. Designed with security experts and industry leaders in confidential computing across AMD, Google, Microsoft, and NVIDIA, Caliptra is a forward-looking approach casting transparency into hardware security. As a reusable open source, silicon-level block for integration into systems on a chip (SoCs)—such as CPUs, GPUs, and accelerators—Caliptra provides trustworthy and easily verifiable attestation.

At its core, Caliptra provides foundational security properties that underpin the integrity of higher-level security protection for confidential workloads. The Caliptra RoT has the following essential security properties:

◉ Identity: A unique device manufacturer’s cryptographic identity for attestation endorsement. The identity is consistent with TCG DICE and includes intrinsic attestation of the Caliptra firmware.

◉ Compartmentalization: Hardware protection barriers that isolate Caliptra’s security assets.

◉ Measurement: Cryptographic digests that represent the SoC security configuration in a concise, cryptographically verifiable manner.

◉ Renewable security: The security state of the hardware can be brought forward to a known secure state, and life cycle management extends through manufacturing, field operations, the circular economy, and eventually end of life.

◉ Ownership: Provides verification of both authenticity and integrity of all mutable firmware loaded into an SoC. This property uses hardware-enforced digital signatures to ensure integrity policies are upheld.

◉ Attestation: The endorsement of measurement claims with Caliptra’s cryptographic identity. These measurements provide precise information about the security state of the trusted computing base.
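To make the interplay of identity, measurement, and attestation concrete, the following is a deliberately simplified toy model of a DICE-style derivation chain. It is illustrative only and is not Caliptra's implementation: a real root of trust derives asymmetric identity keys in hardware per TCG DICE, whereas this sketch substitutes HMAC to stay self-contained.

```python
# Toy model of a DICE-style measured-boot chain (illustrative only; not
# Caliptra's implementation). Real roots of trust derive asymmetric
# identity keys; HMAC stands in here to keep the sketch self-contained.
import hashlib
import hmac

UDS = b"unique-device-secret"  # fused at manufacturing; never leaves the RoT

def measure(firmware: bytes) -> bytes:
    """Cryptographic digest of a firmware image (the 'measurement')."""
    return hashlib.sha384(firmware).digest()

def derive(parent_secret: bytes, measurement: bytes) -> bytes:
    """Fold a layer's measurement into the next compound device identifier."""
    return hmac.new(parent_secret, measurement, hashlib.sha384).digest()

rot_fw = b"caliptra-firmware-image"
soc_fw = b"soc-boot-firmware-image"

cdi0 = derive(UDS, measure(rot_fw))   # identity now reflects the RoT firmware
cdi1 = derive(cdi0, measure(soc_fw))  # ...and the next boot layer

# Attestation: endorse the measurement claims with the derived identity.
attestation = hmac.new(cdi1, measure(soc_fw), hashlib.sha384).hexdigest()
print("attested measurement:", attestation)
```

Because the attestation key is reachable only through the exact firmware measurements that produced it, a verifier who trusts the device secret can trust the reported security state; changing any firmware layer changes the identity.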

The initial Caliptra 0.5 contribution release to OCP contains a series of specifications describing architecture, integration, and implementation. An open-source register-transfer level (RTL) implementation of Caliptra that can be synthesized into current SoC designs will be made available, along with the cloud-designed firmware written entirely in Rust. With this trusted foundation designed for confidential cloud devices, Caliptra supports the consistent scaling of confidential workloads across distributed systems.

With deep ecosystem collaboration at the heart of Microsoft’s open source philosophy, we look forward to continuing to work closely with our partners and engaging the industry to advance Caliptra. Caliptra RTL and firmware project collaboration will be done under the auspices of the CHIPS Alliance.

Hydra: A new secure Baseboard Management Controller (BMC)


We are also introducing Hydra, a new secure BMC in partnership with Nuvoton. A BMC is typically designed into every server system and expansion chassis—for example, JBOD or GPU. As a diagnostic and recovery controller, the BMC has special privileged hardware interfaces for acquiring debug data and telemetry from CPUs. These interfaces present security concerns, as they are targets for attacks that bypass conventional security defenses.

Azure uses Cerberus, a contribution we made to OCP in 2017 for hardware security, to improve BMC security by enforcing firmware integrity and preventing the persistence of malware in the BMC. However, as threat models evolve to remove trust in administrators with physical access to hardware, the BMC needs security properties that establish secure links to an external RoT.

Microsoft collaborated with Nuvoton to design a new security-focused BMC, with enhanced hardware security throughout the BMC SoC. The silicon-integrated root of trust supports TCG DICE identity flows with hardware engines for fast cryptographic operations and hardware-managed keys. The RoT has a one-way bridge for monitoring activity and controlling the BMC security configuration, including which internal security peripherals the BMC can access. This unique feature allows fine-grained BMC interface authorization, enabling scenarios whereby temporary access to a debug interface is granted to the BMC only after it attests its trustworthiness.

Kirkland: A secure Trusted Platform Module (TPM)


While Microsoft provides multilayered security across our datacenters, infrastructure, and operations, we believe in defense-in-depth and that all interconnects should be cryptographically secured from interposer-based attack vectors. In partnership with Google, Infineon, and Intel, we are announcing Project Kirkland at OCP. Project Kirkland demonstrates how, using firmware-only updates to the TPM stack and CPU RoT, the interconnect between the TPM and CPU can be secured in a way that prevents substitution attacks, interposing, and eavesdropping. We are open sourcing this methodology and plan to work with the Trusted Computing Group on standardizing this approach while working with other TPM manufacturers to adopt the same methodology, so these techniques become available to all.

A discrete TPM is a chip typically used to protect secrets for the software running on the CPU, releasing them conditionally based on the CPU's boot measurements. Historically, the bus between the CPU and the TPM is susceptible to attack from physical adversaries wishing to falsify attested measurements or obtain TPM-bound secrets. The standards-based firmware techniques used in Project Kirkland defend against such attacks by using cryptography to authenticate the caller and protect the transmission of secrets over the bus.
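Conceptually, the defense binds every command and response to a session secret shared by the CPU RoT and the TPM, so an interposer on the bus can neither forge traffic nor read secrets in transit. The sketch below is a toy model of such a protected exchange; real TPM 2.0 sessions use standardized HMAC authorization and parameter encryption, and the key exchange and keystream here are simplified stand-ins.

```python
# Toy model of an authenticated, encrypted CPU<->TPM bus exchange
# (simplified; real TPM 2.0 sessions use standardized HMAC authorization
# and parameter encryption, and the session key comes from a key exchange).
import hashlib
import hmac
import os
import secrets

session_key = secrets.token_bytes(32)  # would be established at session start

def protect(command: bytes, payload: bytes):
    """Encrypt the payload and authenticate command || payload for the bus."""
    nonce = os.urandom(16)
    keystream = hashlib.sha256(session_key + nonce).digest()  # toy keystream
    ciphertext = bytes(p ^ k for p, k in zip(payload, keystream))
    tag = hmac.new(session_key, command + nonce + ciphertext, hashlib.sha256).digest()
    return nonce, ciphertext, tag

def verify(command: bytes, nonce: bytes, ciphertext: bytes, tag: bytes) -> bool:
    expected = hmac.new(session_key, command + nonce + ciphertext, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)  # interposed or altered traffic fails

nonce, ct, tag = protect(b"TPM2_Unseal", b"disk-encryption-secret")
assert verify(b"TPM2_Unseal", nonce, ct, tag)                       # genuine traffic
assert not verify(b"TPM2_Unseal", nonce, bytes([ct[0] ^ 1]) + ct[1:], tag)  # tampered
```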
 

Open hardware innovation at cloud scale


A community-driven approach to infrastructure innovation is vital—not just for continued advancements in trust, efficiency, and scalability, but in service of a larger vision of empowering the ecosystem to build for the computing needs of tomorrow.

We are also contributing several new hardware designs, such as Mt. Shasta, a new modular chassis with a converged architecture that brings form factor, power, and management interfaces into a modular design—optimized for advanced workloads like high-performance computing, artificial intelligence, and video codecs. Designed in partnership with Quanta and Molex, Mt. Shasta is fully compatible with Open Rack V3, with flexibility in changing module-to-module connectivity. Earlier this year, we also collaborated with Intel and contributed the Scalable I/O Virtualization (SIOV) specification to OCP. SIOV gives device and platform manufacturers an industry standard for hyperscale virtualization of PCI Express and Compute Express Link devices in cloud servers, enabling more scalable, efficient, and cost-effective hardware designs for datacenters.

As the demand for cloud-scale computing and digital services continues to grow, Microsoft is committed to deep ecosystem collaboration with OCP and industry partners to deliver the systems and infrastructure that maximize performance, trust, and resiliency for cloud customers.

Source: microsoft.com

Tuesday, 27 December 2022

Microsoft Azure CLX: A personalized program to learn Azure


The rise of cloud computing has created demand for proven cloud experts. That’s why we’ve launched the Microsoft Azure Connected Learning Experience (CLX) program, designed to help aspiring learners and IT professionals become Microsoft Azure cloud pros. CLX is a personalized and self-paced journey that culminates in a certificate of completion—allowing you to maximize learning while minimizing time invested.

What is the CLX program?


The CLX program prepares you for the Microsoft Azure certification exams while optimizing your learning experience and minimizing time invested. Curated to meet every learner’s unique needs, it consists of four steps:

◉ A knowledge assessment
◉ A Microsoft Learn study materials review
◉ A practice test
◉ A cram session

At the start of the program, you’ll take a knowledge assessment to test your skills and create a personalized learning path. You’ll then take only the Microsoft Learn courses that are useful to you—saving you time and ensuring that you learn the skills you need to accelerate your career.


What courses will I take?

The courses you take are up to you. The self-paced program is catered to your skillset, and you can embark on six tracks: Microsoft Azure Fundamentals, Microsoft Azure AI Fundamentals, Microsoft Azure Data Fundamentals, Microsoft Azure Administrator, Administering Windows Server Hybrid Core Infrastructure, and Windows Server Hybrid Advanced Series—with more on the way. Learn more about these tracks below.

◉ Microsoft Azure Fundamentals (for administrators, business users, developers, students, and technology managers): This course strengthens your knowledge of cloud concepts and Azure services, workloads, security, privacy, pricing, and support. It’s designed for learners with an understanding of general technology concepts, such as networking, computing, and storage.

◉ Microsoft Azure AI Fundamentals (for AI engineers, developers, and data scientists): This course, designed for both technical and non-technical professionals, bolsters your understanding of typical machine learning and artificial intelligence workloads and how to implement them on Azure.

◉ Microsoft Azure Data Fundamentals (for database administrators, data analysts, data engineers, and developers): The Data Fundamentals course instructs you on Azure core data concepts, Azure SQL, Azure Cosmos DB, and modern data warehouse analytics. It’s designed for learners with a basic knowledge of core data concepts and how they’re implemented in Azure.

◉ Microsoft Azure Administrator (for Azure cloud administrators, VDI administrators, and IT operations analysts): In Azure Administrator, you’ll learn to implement cloud infrastructure, develop applications, and perform networking, security, and database tasks. It’s designed for learners with a robust understanding of operating systems, networking, servers, and virtualization.

◉ Administering Windows Server Hybrid Core Infrastructure (for systems administrators, infrastructure deployment engineers, senior system administrators, and senior site reliability engineers): In this course, you’ll learn to configure on-premises Windows Server, hybrid, and Infrastructure as a Service (IaaS) platform workloads. It’s geared toward those with the knowledge to configure, maintain, and deploy those workloads.

◉ Windows Server Hybrid Advanced Series (for system administrators, infrastructure deployment engineers, and associate database administrators): This advanced series, designed for those with deep administration and deployment knowledge, strengthens your ability to configure and manage Windows Server on-premises, hybrid, and IaaS platform workloads.

How do I get certified?


After you finish your personalized curriculum, you’ll complete a two-hour practice test that mimics the final certification exam. Next, you’ll attend a virtual, instructor-led cram session that dives deeply into the Microsoft Azure Certification Exam content. The four-hour session covers the entire course syllabus to ensure you’re well-prepared to pass with ease.

Once you’ve sharpened your understanding of the Azure platform and its solutions, you’ll receive your certificate of completion. You’ll also walk away with the skills to confidently pass the Microsoft Azure Certification Exams—and the proven expertise to advance your career and exceed your cloud computing goals today and in the future.

Source: microsoft.com

Saturday, 24 December 2022

Improve speech-to-text accuracy with Azure Custom Speech


With Microsoft Azure Cognitive Services for Speech, customers can build voice-enabled apps confidently and quickly in more than 140 languages. We make it easy for customers to transcribe speech to text (STT) with high accuracy, produce natural-sounding text-to-speech (TTS) voices, and translate spoken audio. In the past few years, we have been inspired by the ways customers use our customization features to fine-tune speech recognition to their use cases.

As our speech technology continues to change and evolve, we want to introduce four custom speech-to-text capabilities and their respective customer use cases. With these features, you can evaluate and improve the speech-to-text accuracy for your applications and products. A custom speech model is trained on top of a base model. With a custom model, you can improve recognition of domain-specific vocabulary by providing text data to train the model. You can also improve recognition based on the specific audio conditions of the application by providing audio data with reference transcriptions.

Custom Speech data types and use cases


Our Custom Speech features will let you customize Microsoft's speech-to-text engine. You will be able to customize the language model by tailoring it to the vocabulary of the application and customize the acoustic model to adapt to the speaking style of your users. By uploading text and/or audio data through Custom Speech, you'll be able to create these custom models, combine them with Microsoft's state-of-the-art speech models, and deploy them to a custom speech-to-text endpoint that can be accessed from any device.
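For illustration, here is a minimal sketch of consuming a deployed custom endpoint with the Speech SDK for Python; the subscription key, region, and endpoint ID are placeholders.

```python
# Minimal sketch: using a deployed Custom Speech endpoint from the Speech
# SDK. The key, region, and endpoint ID below are placeholders.
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
speech_config.endpoint_id = "<custom-model-endpoint-id>"  # from your model deployment

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()  # recognizes from the default microphone
print(result.text)
```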

Phrase list: A real-time accuracy enhancement feature that does not need model training. For example, in a meeting or podcast scenario, you can add a list of participant names, products, and uncommon jargon using phrase list to boost their recognition.
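A phrase list is applied at recognition time, with no training step. A short sketch with hypothetical names and jargon:

```python
# Minimal sketch: boosting recognition of names and jargon with a phrase
# list (no model training). The key, region, and phrases are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

phrase_list = speechsdk.PhraseListGrammar.from_recognizer(recognizer)
for phrase in ("Contoso", "Jalen Okafor", "hyperconverged failover"):
    phrase_list.addPhrase(phrase)  # biases recognition toward these phrases

print(recognizer.recognize_once().text)
```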

Plain text: Our simplest custom speech model can be made using just text data. Customers in the media industry use this in scenarios such as commentary on sporting events. Because each sporting event’s vocabulary differs significantly from others, building a custom model specific to a sport increases accuracy by biasing toward the vocabulary of the event.

Structured text: This is text data that boosts patterns of sentences in speech. These patterns could be utterances that differ only by individual words or phrases, for example, “May I speak with name” where name is a list of possible names of individuals. The pattern can link to this list of entities (name in this case), and you can also provide their unique pronunciations.

Audio: You can train a custom speech model using audio data, with or without human-labeled transcripts. With human-labeled transcripts, you can improve recognition accuracy on speaking styles, accents, or specific background noises. For American English, you can now train without needing a labeled transcript to improve acoustic aspects such as slight accents, speaking styles, and background noises.

Research milestones


Microsoft’s speech and dialog research group achieved a milestone in 2016 by reaching human parity on the Switchboard conversational speech recognition task, meaning we had created technology that recognized words in a conversation as well as professional human transcribers. After further experimentation, we followed up with a 5.1 percent word error rate, exceeding human parity in 2017. A published technical report outlines the details of our system. Today, Custom Speech helps enterprises and developers build upon the milestones achieved by Microsoft Research.

Customer inspiration


Peloton: In the past, Peloton provided subtitles only for its on-demand classes. But that meant that the signature live experience so valued by members was not accessible to those who are deaf or hard of hearing. While the decision to introduce live subtitles was clear, executing on that vision proved a bit murkier. A primary challenge was determining how automated speech recognition software could facilitate Peloton’s specific vocabulary, including the numerical phrases used for class countdowns and to set resistance and cadence levels. Latency was another issue—subtitles wouldn’t be very useful, after all, if they lagged behind what instructors were saying. Peloton chose Azure Cognitive Services because it was cost-effective and allowed Peloton to customize its own machine learning model for converting speech to text—and was significantly faster than other solutions on the market. Microsoft also provided a team of engineers that worked alongside Peloton throughout the development process.

Speech Services and Responsible AI


We are excited about the future of Azure Speech with human-like, diverse, and delightful quality under the high-level architecture of the XYZ-code AI framework. Our technology advancements are also guided by Microsoft’s Responsible AI process, and our principles of fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. We put these ethical standards into practice through the Office of Responsible AI (ORA)—which sets our rules and governance processes, the AI Ethics and Effects in Engineering and Research (Aether) Committee—which advises our leadership on the challenges and opportunities presented by AI innovations, and Responsible AI Strategy in Engineering (RAISE)—a team that enables the implementation of Microsoft Responsible AI rules across engineering groups.

Source: microsoft.com

Friday, 23 December 2022

How to Begin Preparation for the Microsoft AZ-500 Certification Exam?

If you are involved in security or working in information security across Azure and would like to enhance your Azure security skills and pass the AZ-500 exam, this article is for you. It focuses on all the vital highlights, tips, and crucial points for Microsoft Azure Security Technologies AZ-500 exam preparation.

Overview of Azure AZ-500 Certification

AZ-500 is an associate-level certification exam. Passing this Microsoft exam establishes you as a subject matter expert in the field of Azure cloud. The Microsoft AZ-500 certification proves your skills and expertise in working in the Azure cloud with identity control and security.

The AZ-500 exam demonstrates that the applicant understands the identity control and security process, and it confirms your skills and expertise in the field of cloud computing.

You can go for this certification exam if you have knowledge or experience of Microsoft Azure fundamentals or some prior knowledge of advanced security.

Microsoft AZ-500 Exam Details

The number of questions varies across Microsoft exams but is approximately 40 to 60. The time allotted for this exam is 120 minutes, and you must score 700 out of 1000 to pass the AZ-500 exam.

Microsoft Azure AZ-500 Exam Domains:

  • Manage identity and access (30–35%)
  • Implement platform protection (15–20%)
  • Manage security operations (20–25%)
  • Secure data and applications (20–25%)
How to Begin Preparation for the Microsoft AZ-500 Certification Exam?

    Let’s dive into the phases of Microsoft Azure Security Technologies AZ-500 exam preparation. These are a few proven steps to ensure top-notch preparation for the AZ-500 certification exam.

    1. Visit the Microsoft Official website

    Once you have decided to begin AZ-500 exam preparation, your most fundamental step is to explore the official webpage. There you will find all the essential details about the exam, such as prerequisites, exam cost, and exam objectives. Most importantly, you will find timely updates about the exam, which will keep your preparation on the right track.

    2. Have Detailed Knowledge of the Exam Objectives

    Another crucial step while preparing for the exam is understanding the AZ-500 syllabus in a straightforward way. Use the study guide to consistently work through every exam objective and understand the subtopics within each topic. Most importantly, your understanding of the exam syllabus won’t only smooth your preparation but will also give you the assurance to qualify for the exam easily.

    3. Enroll in Training Courses

    The importance of enrolling in an official online training course is also apparent in AZ-500 exam preparation. The official training course, AZ-500T00: Microsoft Azure Security Technologies, offers a further advantage in exam preparation. With the instructor-led course, it will be simpler for you to acquire technical skills and conceptual knowledge.

    4. Microsoft AZ-500 Practice Test

    Practice tests are excellent study resources you can utilize to prepare for the AZ-500 exam. If you take practice tests from an authentic website, such as edusum, you will find them beneficial and productive. Practice tests give you an opportunity to gauge your skills in a simulated environment. Utilize this opportunity to familiarize yourself with the question types you will confront in the actual exam and to boost your confidence.

    5. Online Community

    You can join an online community at any time while studying for the exam. Group study ensures that you are in touch with others following the same path as you. A study group includes many people who can assist you with any questions you have regarding the AZ-500 exam.

    Is the Microsoft Azure Security Technologies AZ-500 Certification Worth It?

    There are always advantages to achieving IT certifications for professionals who work in the IT field. Here are some of the reasons that earning the AZ-500 Azure Security Technologies certification is worth it:

  • Career Advancement: Earning Microsoft certifications often brings about advanced job opportunities and promotions. More and more organizations are looking at certifications when making employment decisions. Along with lucrative job opportunities come great salaries; certifications can greatly boost annual salaries.
  • Cloud Computing Is Growing Like Crazy: Cloud computing is growing like crazy and doesn’t look like it will decelerate or be replaced anytime soon. IT professionals will increasingly be expected to possess cloud computing knowledge and skills. Careers in cloud computing and cloud security are booming, and certifications will only improve applicants’ opportunities.
  • AZ-500 Certification Validates Your Skills: Passing the AZ-500 certification exam confirms an individual’s skills and knowledge of specific Azure concepts, and the evidence comes straight from Microsoft. There isn’t a better validation than that.
    Conclusion

    According to experts, Azure certification programs like the Microsoft Azure AZ-500 are more than just exams. The AZ-500 certification will improve your confidence while also raising your profile among renowned figures in the IT industry. Moreover, it has been categorized as one of the best cloud security certifications available.

    Thursday, 22 December 2022

    Microsoft Innovation in RAN Analytics and Control

    Currently, Microsoft is working on RAN Analytics and Control technologies for virtualized RAN running on Microsoft edge platforms. Our goal is to empower any virtualized RAN solution provider and operator to realize the full potential of disaggregated and programmable networks. We aim to develop platform technologies that virtualized RAN vendors can leverage to gain analytics insights into their RAN software operations, and to use these insights for operational automation, machine learning, and AI-driven optimizations.

    Microsoft has recently made important progress in RAN analytics and control technology. Microsoft Azure for Operators is introducing flexible, dynamically loaded service models to both the RAN software stack and cloud/edge platforms hosting the RAN, to accelerate the pace of innovation in Open RAN.

    The goal of Open RAN is to accelerate innovation in the RAN space through the disaggregation of functions and the exposure of internal interfaces for interoperability, controllability, and programmability. The current O-RAN standardization effort by the O-RAN Alliance specifies the RAN Intelligent Controller (RIC) architecture, which exposes a set of telemetry and control interfaces with predefined service models (known as the E2 interface). Open RAN vendors are expected to implement all E2 service models specified in the standard. Near-real-time RAN controls are made possible by xApp applications accessing these service models.

    Microsoft’s innovation extends this standard-yet-static interface. It introduces the capability of getting detailed internal states and real-time telemetric data out of the live RAN software in a dynamic fashion for new RAN control applications. With this technology, together with detailed platform telemetry, operators can achieve better network monitoring and performance optimization for their 5G networks, and enable new AI, analytics, and automation capabilities that were not possible before.

    This year, Microsoft, with contributions from Intel and Capgemini, developed an analytics and control approach that was recognized with the Light Reading Editor’s Choice award in the category Outstanding Use Case: Service Provider AI. This innovation calls for dynamic service models for Open RAN.

    Dynamic service models for real-time RAN control


    There are many RAN control use cases that require dynamic service models beyond those specified in O-RAN today, such as access to IQ samples, RLC and MAC queue sizes, and packet retransmission information. This high-volume, real-time data needs to be aggregated and compressed before being delivered to the xApp. Also, detailed data from RAN modules across different layers, such as L1, L2, and L3, may need to be collected and correlated in real time before any useful insight can be derived and shared with the xApp. Further, a virtualized RAN offers so many more possibilities that any static interface or service model may be ineffective in meeting more advanced real-time control needs.

    One such example occurs with interference detection. Today, operators typically need to do a drive test to detect external interference in a macro cell. But now, Open RAN has the potential to replace the expensive truck roll with a software program that detects interference signals at the RAN’s L1 layer. However, this will require a new data service model with direct access to raw IQ samples at the physical layer. Another example exists in dynamic power saving. If a RAN power controller can see the number of packets queued at various places in the live RAN system, then it can estimate the pending process loads and optimize the CPU frequency at a very high pace, in order to reduce the RAN server power consumption. Our study has shown that we can reduce the RAN power consumption by 30 percent through this method—even during busy periods. To support this in Open RAN, we will need a new service model that exposes packet queuing information.

    These new use cases are envisioned for the time after the current E2 interface has been standardized. To achieve them, though, we need new RAN platform technologies to quickly extend this interface to support these and future advanced RAN control applications.

    The Microsoft RAN analytics and control framework


    The Microsoft RAN analytics and control framework extends the current RIC service models in O-RAN architecture to be both flexible and dynamic. In the process, the framework allows RAN solution providers and operators to define their own service models for dynamic RAN monitoring and control. Here, the underlying technology is a runtime system that can dynamically load and execute third-party code in a trusted and safe manner.

    This system enables operators and trusted third-party developers to write their own telemetry, control, and inference pieces of code (called “codelets”) that can be deployed at runtime at various points in the RAN software stack, without disrupting the RAN operations. The codelets are executed inline in the live RAN system and on its critical paths, allowing them to get direct access to all important internal raw RAN data structures, to collect statistics, and to make real-time inference and control decisions.

    To ensure security and safety, the codelets are statically checked with verification tools before they can be loaded, and they are automatically pre-empted if they run longer than their predefined execution budgets. The dynamic code extension system follows the same approach as the Extended Berkeley Packet Filter (eBPF), a proven technology that has been entrusted to run custom code in Linux kernels on millions of mission-critical servers around the globe. The inline execution is also extremely fast, typically incurring less than one percent overhead on existing RAN operations.
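    As a rough analogy in ordinary Python (the real framework verifies compiled codelets eBPF-style and preempts them on microsecond timescales), the sketch below shows the shape of budget-enforced hooks at an instrumentation point; all names and the budget value are hypothetical.

```python
# Rough Python analogy of budget-enforced codelets at one instrumentation
# point. The real framework statically verifies compiled codelets (eBPF-
# style) and preempts them on microsecond timescales; names and the budget
# value here are hypothetical.
import time

BUDGET_SECONDS = 50e-6  # per-invocation execution budget (hypothetical)
codelets = []           # telemetry/control hooks loaded at runtime

def register(codelet):
    codelets.append(codelet)
    return codelet

@register
def queue_telemetry(ran_state: dict) -> None:
    # Aggregate queue depths in place instead of exporting raw data.
    ran_state["telemetry"] = sum(ran_state["mac_queues"])

def run_instrumentation_point(ran_state: dict) -> None:
    for codelet in list(codelets):
        start = time.perf_counter()
        codelet(ran_state)
        if time.perf_counter() - start > BUDGET_SECONDS:
            codelets.remove(codelet)  # evict codelets that exceed their budget

run_instrumentation_point({"mac_queues": [3, 0, 7]})
```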

    The following image illustrates the overall framework and the dynamic service model denoted by the star circle with the letter D.


    The benefit of the dynamic extension framework with low-latency control is that it opens the opportunity for third-party real-time control algorithms. Traditionally, due to tight timing constraints, a real-time control algorithm had to be implemented and tightly integrated inside the RAN system. The Microsoft RAN analytics framework allows RAN software to delegate certain real-time controls to the RIC, potentially leading to a future marketplace for real-time control algorithms and machine learning and AI models for optimization.

    Microsoft, Intel, and Capgemini have jointly prototyped this technology in Intel’s FlexRAN™ reference software and Capgemini’s 5G RAN. We have also identified standard instrumentation points aligned with the standard 3GPP RAN architecture to achieve higher visibility into the RAN’s internal state. We have further developed 17 dynamic service models, and enabled many new and exciting applications that were previously not thought possible.

    Examples of new applications of RAN analytics


    With this new Analytics and Control Framework, applications of dynamic power savings and interference detection described earlier can now be realized.

    RAN-agnostic dynamic power saving

    5G RAN energy consumption is a major OPEX item for any mobile operator. As a result, it is paramount for a RAN platform provider to find any opportunity to save power when running the RAN software. One such opportunity can be found by stepping down the RAN server CPU frequency when the RAN processing load is not at full capacity. This is indeed promising because internet traffic is intrinsically “bursty”; even during peak hours, the network is rarely operated at full capacity.

    However, any dynamic RAN power controller must have accurate load prediction and fast reaction on a millisecond timescale. Otherwise, if one part of the RAN is in hibernation, an instant traffic burst will cause serious performance issues, or even crashes. The Microsoft RAN analytics framework, with dynamic service models and a low-latency control loop, makes it possible to write a novel CPU frequency prediction algorithm based on the number of active users and changes in different queue sizes. We have implemented this algorithm on top of Capgemini 5G RAN and Intel FlexRAN™ reference software, and we achieved up to 30 percent energy savings—even during busy periods.
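    To show the shape of such a millisecond control loop, here is a toy frequency controller driven by active-user counts and queue depths; the frequency levels, thresholds, and load model are made up for illustration and are not the algorithm used in the study.

```python
# Toy CPU-frequency controller driven by RAN load signals (active users,
# queue depths). Frequency levels, thresholds, and the load model are
# made up for illustration; they are not the algorithm from the study.
FREQ_LEVELS_MHZ = [1200, 1800, 2400, 3000]

def predict_load(active_users: int, queue_depths: list[int]) -> float:
    """Crude 0..1 load estimate from user count and queued work."""
    return min(1.0, active_users / 64 + sum(queue_depths) / 512)

def control_tick(active_users: int, queue_depths: list[int]) -> int:
    # Runs every millisecond; reacting fast avoids crashes on traffic bursts.
    load = predict_load(active_users, queue_depths)
    idx = min(int(load * len(FREQ_LEVELS_MHZ)), len(FREQ_LEVELS_MHZ) - 1)
    return FREQ_LEVELS_MHZ[idx]

print(control_tick(active_users=5, queue_depths=[2, 0, 1]))   # light load -> 1200
print(control_tick(active_users=60, queue_depths=[90, 120]))  # burst -> 3000
```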

    Interference detection

    External wireless interference has long been a source of performance issues in cellular networks. Detecting it is difficult and often requires a truck roll with specialized equipment and experts. With dynamic service models, we can turn an O-RAN 5G base station into a software-defined radio that can detect and characterize external wireless interference without affecting radio performance. We have developed a dynamic service model that averages the received IQ samples across frequency chunks and time windows inside the L1 of the FlexRAN™ reference software stack. The service model in turn reports the averages to an application that runs an AI and machine learning model for anomaly detection, in order to detect when the noise floor increases.
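    In outline, the detection reduces to averaging received power per frequency chunk and flagging chunks that rise well above the noise floor. A sketch with synthetic data standing in for L1 IQ samples (the 6 dB threshold is arbitrary):

```python
# Outline of noise-floor anomaly detection on time-averaged IQ power,
# with synthetic data standing in for L1 IQ samples.
import numpy as np

rng = np.random.default_rng(0)
iq = rng.normal(0, 1, (1000, 64)) + 1j * rng.normal(0, 1, (1000, 64))  # time x chunks
iq[:, 40] += 4  # inject an interferer into one frequency chunk

power_db = 10 * np.log10(np.mean(np.abs(iq) ** 2, axis=0))  # average over time
baseline = np.median(power_db)

suspects = np.where(power_db > baseline + 6)[0]  # 6 dB over the noise floor (arbitrary)
print("interference suspected in frequency chunks:", suspects)
```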

    Virtualized and software-based RAN solutions offer the immense potential of programmable networks that can leverage AI, machine learning, and analytics to improve network efficiency. Dynamic service models for O-RAN interfaces further enhance the pace of innovation with added flexibility and security.

    Source: microsoft.com

    Saturday, 17 December 2022

    Learn how Microsoft datacenter operations prepare for energy issues


    The war in Ukraine and the resultant shortage of natural gas has forced the European Union (EU) and European countries to proactively prepare for the possibility of more volatile energy supplies—both this winter and beyond. Microsoft is working with customers, governments, and other stakeholders throughout the region to bring clarity, continuity, and compliance in the face of possible energy-saving strategies at the local and national level. In solidarity with Europe, where even essential services are likely to be asked to find energy savings, we have validated plans and contingencies in place to responsibly reduce energy use in our operations across Europe, and we will do so in a way that minimizes risk to customer workloads running in the Microsoft Cloud.

    We want to share some of the contingencies and mitigations that our teams have put in place to responsibly operate our cloud services.

    Supporting grid stability by responsibly managing our energy consumption


    The power that is consumed by Microsoft from the utilities is primarily used to power our network and servers, cooling systems, and other datacenter operations. We have contingency plans to contribute to energy grid stability, while working to ensure minimal disruption to our customers and their workloads, including:

    1. The scale and distribution of the Microsoft datacenters give us the ability to reposition non-regional platform as a service (PaaS) services, internal infrastructure, and many of our internal non-customer research and development (R&D) workloads to other nearby regions, while still meeting our data residency and EU Data Boundary commitments.

    2. We are actively working with local governments and large organizations to closely monitor and respond to power consumption, to ensure grid stability and minimal disruption to our customers’ critical workloads. We are working with local utility providers to ensure our systems are ready for a range of circumstances.

    3. Our datacenter regions are planned and built to withstand grid emergencies. When needed, we quickly transition to backup power sources to reduce impact on the grid without impacting customer workloads.

    Resilient infrastructure investment


    Microsoft is responsible for providing our customers with a resilient foundation in the Microsoft Cloud—in how it is designed, operated, and monitored to ensure availability. We make considerable investments in the platform itself—physical things like our datacenters, as well as software things like our deployment and maintenance processes.

    We strive to provide our cloud customers with “five-nines” of service availability, meaning that services are operational 99.999 percent of the time. However, knowing that service interruptions and failures happen for a myriad of reasons, we build systems designed with failure in mind.

    We have Azure Availability Zones (AZs) in every country in which we operate datacenter regions. AZs consist of a minimum of three zone locations, each with independent power, cooling, and networking, allowing customers to spread their infrastructure and applications across discrete, dispersed datacenters for added resiliency and availability.

    Battery backups and backup generators are additional resiliency capabilities we implement; they are utilized during power grid outages and other service interruptions so we can meet service levels and maintain operational reliability. We have contracted access to additional fuel supplies to maintain generator operations, and we also hold critical spares to maintain generator health. We are ready to use backup generators across Europe, when necessary, to keep our services running in case of a serious grid emergency.

    Across our global infrastructure, it’s not unusual for us to operate with heightened operational awareness due to external factors. For instance, severe winter weather events in Texas in 2021 put substantial pressure on the Texas energy grid. Microsoft was able to take its San Antonio datacenter off grid power. Although Microsoft’s onsite substations were designed with redundancy, we quickly transitioned to our tertiary redundant systems—generators. These systems kept the datacenters running, with zero impact to our cloud customers, while the utility grid ensured residential homes stayed warm. During this event, we maintained 100 percent uptime for our customers while removing our workloads from the grid.

    Resiliency recommendations for cloud architectures


    This is a challenging time for organizations monitoring the growing energy concerns in Europe. We are providing important infrastructure for the communities where we operate, and our customers are counting on us to provide reliable cloud services to run their critical workloads. We recognize the importance of continuity of service for our customers, including those providing essential services: health care providers, police and emergency responders, financial institutions, manufacturers of critical supplies, grocery stores and health agencies. Organizations wondering what more they can do to improve the reliability of their applications, or wondering how they can reduce their own energy consumption, can consider the following:

    1. Customers who have availed themselves of high availability tools, including geo-redundancy, should be unaffected by impacts to a single datacenter region. For software as a service (SaaS) services like Microsoft 365, Microsoft Dynamics 365, and Microsoft Power Platform, the business continuity and resiliency are managed by Microsoft. For Microsoft Azure, customers should always consider designing their Azure workloads with high availability in mind.

    We always encourage customers to have a Business Continuity and Disaster Recovery (BCDR) plan in place as part of the Microsoft Well-Architected Framework, which you can read more about. Customers who want to proactively migrate their Azure resources from one region to another can do so at any time.

    2. On-premises customers can reduce their own energy consumption by moving their applications, workloads, and databases to the cloud. The Microsoft Cloud can be up to 93 percent more energy efficient than traditional enterprise datacenters, depending on the specific comparison being made. 

    3. Energy use in our datacenters is driven by customer use. Customers can play a part in reducing energy consumption by following green software development guidelines, shutting down unused server instances, and designing sustainable applications.

    We continue to improve the energy efficiency of our datacenters, in our ongoing commitment to make our global infrastructure more sustainable and efficient. As countries and energy providers consider options to reduce their consumption of electricity in the event of an energy capacity shortage, we are working with grid operators on this evolving situation. With the scale, expertise, and partnerships that we operate, we are confident that our risk mitigation activities will offset any potential disruption to our customers running their critical workloads in the cloud.

    Source: microsoft.com

    Wednesday, 14 December 2022

    Your Best Help to Clear Microsoft DP-900 Exam

    Cloud computing professionals have been in great demand in recent years. Are you excited about working with data in the cloud? Then passing the Microsoft Azure Data Fundamentals DP-900 exam will be a smart move. This exam demonstrates that you have a concrete understanding of the core data concepts and can work with relational and non-relational data on Azure. Moreover, it proves your familiarity with the analytics workload on Azure.

    You may find this article helpful if you are considering taking the Microsoft DP-900 exam soon. In this post, you will learn the essential details of this exam and how to prepare for it. Let’s get started!

    All About Microsoft DP-900 Certification Exam

    Microsoft DP-900 is an entry-level exam that familiarizes applicants with essential data concepts and equips them with an understanding of how these concepts are implemented using Microsoft data services.

    Although Microsoft doesn’t reveal details about the duration and number of questions, you can anticipate 40 to 60 questions in your actual exam that you will need to answer within 60 minutes. The exam is available in multiple languages and costs $99.

    Once you have passed the Microsoft DP-900, you will be granted the Microsoft Certified: Azure Data Fundamentals certification, a fundamentals certification that provides the jump-start you need to establish your IT career in Azure. After you have learned the fundamentals, this Microsoft certification will be your stepping stone to Microsoft Azure role-based certifications such as Azure Database Administrator Associate or Azure Data Engineer Associate.

    Preparation for the Microsoft DP-900 Certification Exam

    If you plan to take Microsoft Azure Data Fundamentals DP-900, you don’t have to stress. With the right preparation strategy and the appropriate study materials, you can ace your exam with a high score. Below are some of the top preparation tips that will help you achieve your desired result.

    1. Know the DP-900 Syllabus Topics

    You first ought to explore the official webpage and understand the DP-900 syllabus topics. Become thoroughly familiar with each topic and its subtopics, and comprehend what you are expected to know. After reading all the syllabus topics, craft a study plan that incorporates them all. Make sure that you are practical about your study schedule: don’t squeeze your exam preparation into the few days before your scheduled exam date. Take time to learn and comprehend everything attentively so you don’t miss a spot before taking the DP-900 exam.

    2. Choose the Best Study Resources

    There are lots of study guides and other learning materials available for DP-900 exam preparation, so you will have plenty to choose from to pass the exam with a good score. Don’t get baffled by the wealth of resources out there. To ensure you use only reliable ones, it is advised that you limit your search to reputable platforms such as the official Microsoft website. Microsoft provides training courses for all Microsoft exams, which means you will find enough material on that site.

    3. Learn Time Management with DP-900 Practice Test

    Before taking the Microsoft Azure Data Fundamentals exam, it is crucial to acquire exam-taking and time-management skills. The ideal way to go about this is to take DP-900 practice tests in a mock environment. This will help you become proficient at managing the time limit. You may feel you have sufficient time, but if you are not careful, you may find that the 60-minute duration is not enough to answer all the questions. Thus, take time to answer practice questions in a simulated environment.

    4. Give Yourself A Break

    When you have studied and done everything you ought to, the next thing is to unwind and believe in your preparation. Take small breaks in between your studies and take a good rest the day before the exam. Being fresh and peaceful will help you a lot on the actual exam day.

    Don’t get nervous. Relax and read the questions thoroughly on the day of your scheduled exam. If you come across questions you are not familiar with, mark them for review and skip them; you can answer them when you are done with the rest of the exam. Don’t leave any question unanswered, since you will not be penalized for answering incorrectly, and sometimes your guesses might be right, considerably improving your overall score.


    Benefits of Microsoft Azure Data Fundamentals DP-900 Certification

  • This certification will offer you a broad understanding of how to work with data in the cloud and prove your practical expertise with data in Microsoft Azure services.
  • After passing this Microsoft Azure certification, you can pursue various Azure role-based certifications, including Azure Database Administrator Associate or Azure Data Engineer Associate.
  • There is great demand today for people who can work with data and employ Microsoft Azure services inside the enterprise, and a resume with this shining certification holds extraordinary weight.
  • In terms of salary and job opportunities, a certification leads to an increase in both.
    Conclusion

    Microsoft DP-900 is an excellent step toward boosting a career in Azure. With the Microsoft Azure Data Fundamentals certification, you advance your career and salary potential in the IT field.

    Remember, passing any exam demands persistence and thorough preparation. Taking advantage of the available resources makes it simpler to pass the exam without much ado.

    Tuesday, 13 December 2022

    Microsoft sponsors OWASP ModSecurity CRS to improve application security


    Increased cloud adoption and the shift to hybrid work have resulted in increased usage of digital assets. While moving web applications and APIs to the cloud provides many advantages for organizations, including transforming business models and enhancing the customer experience, it also presents new security challenges. We have seen attackers come up with sophisticated new attack patterns, and we see new vulnerabilities (for example, Log4j, SpringShell, and Text4Shell) emerging constantly. Vulnerabilities in these applications could lead to breaches and allow cybercriminals to gain access to valuable and sensitive data.

    At Microsoft, we are committed to making Microsoft Azure the most secure and trusted cloud for all workloads. We are continuously innovating and seeking ways to enhance our products to help our customers protect against evolving threats. This includes supporting organizations and communities that share a commitment like ours. We are pleased to announce our sponsorship of the Open Web Application Security Project (OWASP) ModSecurity Core Rule Set (CRS) project. We value the contributions of the CRS community and look forward to contributing to the success of the community and the OWASP ModSecurity CRS open source project.

    Intelligent protection from edge to cloud


    Azure Web Application Firewall (Azure WAF) is our cloud-native service for protecting your applications and APIs, in Azure or anywhere else, from web attacks and vulnerabilities. Azure WAF provides built-in managed rules, based on the OWASP ModSecurity CRS, that offer application protection from a wide range of attacks, including the OWASP Top Ten, with minimal false positives. These managed rules protect against many common attack categories, including SQL injection, cross-site scripting, local file inclusion, and much more.

    Azure WAF offers Microsoft Managed Rule Sets, proprietary rule sets that extend the protection of OWASP ModSecurity CRS 3.x and include additional proprietary rules and updated signatures developed by the Microsoft Threat Intelligence Center to provide increased security coverage, patches for specific vulnerabilities, and fewer false positives. Azure WAF also includes a richer set of features, including IP reputation, bot protection, rate limiting, IP restriction, and geo-filtering, that further strengthens the security posture of your web applications and APIs. Native integration with Azure Monitor, Microsoft Sentinel, and Azure Firewall Manager provides ease of management and advanced analytics capabilities to detect and respond to security threats in a timely manner.

    Better together


    Microsoft has invested heavily in building security-focused products and ensuring security is built into our core technologies. As a gold sponsor of the OWASP ModSecurity CRS project, we are furthering our commitment to contributing to a strong and vibrant security community. We are excited to join efforts to help advance the CRS open source project that serves as a first line of defense for many applications. The collaboration between the Microsoft and OWASP CRS teams will help improve signature patterns, reduce false positives, and address critical zero-day vulnerabilities quickly. This is an important step toward ensuring we provide the best security possible for all.

    Source: microsoft.com

    Saturday, 10 December 2022

    Azure Storage Mover–A managed migration service for Azure Storage

    File storage is a critical part of any organization’s on-premises IT infrastructure. As organizations migrate more of their applications and user shares to the cloud, they often face challenges in migrating the associated file data. Having the right tools and services is essential to successful migrations.

    Across workloads, there can be a wide range of file sizes, counts, types, and access patterns. In addition to supporting a variety of file data, migration services must minimize downtime, especially on mission-critical file shares.

    In February 2022, we launched the Azure file migration program, which provides no-cost migration to our customers via a choice of storage migration partners.

    Today, we are adding another choice for file migration with the preview launch of Azure Storage Mover, which is a fully managed, hybrid migration service that makes migrating files and folders into Azure a breeze.

    The key capabilities of the Azure Storage Mover preview are:

    NFS share to Azure blob container


    With this preview release, we focus on the migration of an on-premises network file system (NFS) share to an Azure blob container. Storage Mover will support many additional source and target combinations over the coming months.
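
    For a sense of what this source/target pairing involves, here is a bare-bones Python sketch of copying a locally mounted NFS share into a blob container using the public azure-storage-blob SDK. The mount path, container name, and environment variable are assumptions for illustration; the managed service adds the scale, retry, and progress-tracking logic this toy loop lacks.

```python
# A minimal sketch of the copy an agent performs (pip install azure-storage-blob).
import os
from azure.storage.blob import ContainerClient

SOURCE_MOUNT = "/mnt/nfs-share"  # the NFS share, mounted locally (assumption)
container = ContainerClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],  # placeholder credential
    container_name="migrated-files",
)

for root, _dirs, files in os.walk(SOURCE_MOUNT):
    for name in files:
        local_path = os.path.join(root, name)
        # Preserve the share's folder layout as virtual directories in the container.
        blob_name = os.path.relpath(local_path, SOURCE_MOUNT)
        with open(local_path, "rb") as data:
            container.upload_blob(name=blob_name, data=data, overwrite=True)
```

    Note how the data goes straight from the source to the target container; the sections below on cloud-driven migrations and performance explain why that direct path matters.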


    Cloud-driven migrations


    Managing copy jobs at scale without a coordinating service can be time-consuming and error-prone. Individual jobs have to be monitored and any errors resolved. It’s hard to maintain comprehensive oversight to ensure a complete and successful migration of your data.

    With Azure Storage Mover, you can express your migration plan in Azure and, when you are ready, conveniently start and track migrations right from the Azure portal, PowerShell, or the CLI. This lets you use Azure Storage Mover for a one-time migration project or for any repeated data movement needs.
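
    To illustrate the coordination problem the service takes off your hands, here is a small, hypothetical Python sketch that fans out copy jobs from a declared plan and records every outcome in one place. The plan entries and run_copy_job are placeholders, not Storage Mover's actual API.

```python
# A hypothetical sketch of plan-driven job coordination, not the service's API.
from concurrent.futures import ThreadPoolExecutor, as_completed

MIGRATION_PLAN = [
    ("nfs://fileserver1/projects", "https://acct.blob.core.windows.net/projects"),
    ("nfs://fileserver2/archive", "https://acct.blob.core.windows.net/archive"),
]

def run_copy_job(source: str, target: str) -> str:
    # Stand-in for the data movement an agent performs near the source.
    return f"{source} -> {target}: succeeded"

def run_plan(plan):
    results = {}
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(run_copy_job, s, t): (s, t) for s, t in plan}
        for fut in as_completed(futures):
            job = futures[fut]
            try:
                results[job] = fut.result()
            except Exception as err:  # a real coordinator would log and retry
                results[job] = f"failed: {err}"
    return results

for job, outcome in run_plan(MIGRATION_PLAN).items():
    print(outcome)
```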

    Azure Storage Mover is a hybrid service with migration agents that you’ll deploy close to your source storage. All agents can be managed from the same place in Azure, even if they are deployed across the globe.


    Scale and performance


    Many aspects contribute to a high-performance migration service. Fast data movement through the Azure Storage REST protocol and a clear separation of the management path from the data path are among the most important. Each agent will send your files and folders directly to the target storage in Azure.

    Sending the data directly to the target optimizes the performance of your migration because the data doesn’t need to be processed through a cloud service or routed through a different Azure region from the one where the target storage is deployed. This optimization is key, for example, for migrations across geographically diverse branch offices that will likely target Azure Storage in their own region.


    What’s next for Storage Mover?


    There are many steps in a cloud migration that need to happen before the first byte can be copied. A deep understanding of your data estate is essential to a balanced cloud solution design for your workloads.

    When we combine that with a strategy to minimize downtime and to manage and monitor migration jobs at scale, we arrive at our vision for the Storage Mover service. The roadmap for this vision includes:

    ◉ Support for more sources and Azure Storage targets.
    ◉ More options to tailor a migration to your needs.
    ◉ Automatically loading possible sources into the service. That’s more than just convenience; it enables large-scale migrations and reduces mistakes from manual input.
    ◉ Deep insights about selected sources for a sound cloud solution design.
    ◉ Provisioning target storage automatically based on your migration plan.
    ◉ Running post-migration tasks such as data validation, enabling data protection, and completing the migration of the rest of the workload.


    Source: microsoft.com

    Thursday, 8 December 2022

    Microsoft Azure's defense in depth approach to cloud vulnerabilities


    Our digital world is changing, with more persistent, sophisticated, and driven cybercriminals. As risks increase and threats compound, trust is more important than ever. Customers need to be able to trust the technology platforms they invest in to build and run their organizations. As one of the largest cloud service providers, we build trust by helping our customers be secure from the start and do more with security that’s built in, embedded, and available out of the box in our cloud platforms.

    Our security approach focuses on defense in depth, with layers of protection built throughout all phases of design, development, and deployment of our platforms and technologies. We also focus on transparency, making sure customers are aware of how we’re constantly working to learn and improve our offerings to help mitigate the cyberthreats of today and prepare for the cyberthreats of tomorrow.

    In this blog, we highlight the extensive security commitments from our past, present, and into the future, as well as where we see opportunities for continued learning and growth. This piece kicks off a 4-part Azure Built-In Security series intended to share lessons we’ve learned from recent cloud vulnerabilities and how we're applying these learnings to ensure our technologies and processes are secure for customers. Transparently sharing our learnings and changes is part of our commitment to building trust with our customers, and we hope it encourages other cloud providers to do the same.

    Past, present, and future of our security commitments 


    For decades, Microsoft has been, and continues to be, deeply focused on customer security and on improving the security of our platforms. This commitment is evident in our long history of leading security best practices, from our on-premises and software days to today’s cloud-first environments. A shining example: in 2004, we pioneered the Security Development Lifecycle (SDL), a framework for building security into applications and services from the ground up, whose influence has been far-reaching. The SDL is currently used as the basis for built-in security in key initiatives, including the international application security standard ISO/IEC 27034-1 and the White House’s Executive Order on Cybersecurity.

    As security leaders and practitioners know though, security’s job is never done. Constant vigilance is vital. This is why Microsoft currently invests heavily in internal security research as well as a comprehensive bug bounty program. Internally, Microsoft boasts more than 8,500 security experts constantly focused on vulnerability discovery, understanding attack trends and addressing patterns of security issues. Our world-class security research and threat intelligence helps protect customers, Microsoft, open-source software, and our industry partners alike.

    We also invest in one of the industry’s most proactive Bug Bounty Programs. In 2021 alone, Microsoft awarded $13.7 million in bug bounties across a broad range of technologies. An emerging trend over the last year has been an uptick in externally reported vulnerabilities impacting several cloud providers, including Azure. While vulnerabilities are not uncommon across the industry, as a leading cloud provider and the number one security vendor, Microsoft is of greater interest to researchers and security competitors alike. This is why our public bounty program was the first to include cloud services, beginning in 2014, and in 2021 we further expanded the program to include higher rewards for cross-tenant bug reports. As anticipated, this clearly drew even more external security researcher interest in Azure, culminating in multiple cross-tenant bug bounties being awarded. Regardless of the reasons, these findings helped further secure specific Azure services and our customers.

    Finally, we firmly believe that security is a team sport, and our focus on collaboration is evidenced in our contributions to the security ecosystem, such as our involvement in the NIST Secure Software Development Framework (SSDF), and improving the security posture of Open Source Software (OSS) through our $5 million investment in the OpenSSF Alpha-Omega project.

    Our commitment to security is unwavering, as seen in our journey from decades of SDL leadership to present-day vulnerability discovery, bug bounty programs, and collaboration contributions, and it continues well into the future with our commitment to invest more than $20 billion in cybersecurity over five years. While building in security from the start is not new at Microsoft, we understand that the security landscape continually changes and evolves, and with it so should our learnings.

    Our latest learnings and improvements for a more secure cloud


    At Microsoft, a core part of our culture is a growth mindset. Findings from internal and external security researchers are critical to our ability to further secure all our platforms and products. For each report of a vulnerability in Azure, we perform in-depth root cause analysis and post-incident reviews whether discovered internally or externally. These reviews help us reflect and apply lessons learned, at all levels of the organization, and are paramount to ensuring that we constantly evolve and build in security at Microsoft.

    Based on the insights we’ve gained from recent Azure vulnerability reports, we are improving in three key dimensions. These developments enhance our response process, extend our internal security research, and continually improve how we secure multitenant services.

    1. Integrated response

    Several lessons from the past year focused our attention on areas where we recognize the need to improve, such as accelerating response timelines. We are addressing this throughout our Integrated Response processes by unifying internal and external response mechanisms. We started by increasing both the frequency and scope of our Security LiveSite Reviews at the executive level and below. We are also improving the integration of our external security case management with our internal incident communication and management systems. These changes reduce the mean time to engagement and remediation of reported vulnerabilities, further refining our rapid response.

    2. Cloud Variant Hunting

    In response to cloud security trends, we have expanded our variant hunting program to include a global, dedicated Cloud Variant Hunting function. Variant hunting identifies additional and similar vulnerabilities in the impacted service, as well as similar vulnerabilities across other services, to make discovery and remediation more thorough. It also builds a deeper understanding of vulnerability patterns, which in turn drives holistic mitigations and fixes. Below are a few highlights from our Cloud Variant Hunting efforts:

    ◉ In Azure Automation we identified variants and fixed more than two dozen unique issues.

    ◉ In Azure Data Factory/Synapse we identified significant design improvements that further harden the service and address variants. We also worked with our supplier, and other cloud providers, to ensure that risks were addressed more broadly.

    ◉ In Azure Open Management Infrastructure we identified multiple variants, our researchers published CVE-2022-29149, and we drove the creation of Automatic Extension Upgrade capabilities to reduce time to remediate for customers. Our Automatic Extension Upgrade feature is already benefiting Azure Log Analytics, Azure Diagnostics, and Azure Desired State Configuration customers.

    Additionally, Cloud Variant Hunting proactively identifies and fixes potential issues across all our services. This includes many known as well as novel classes of vulnerabilities, and in the coming months we will share more details of our research to benefit our customers and the community at large.
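
    To make one building block of variant hunting concrete, here is an illustrative Python sketch: once a vulnerable code pattern is confirmed in one service, sweep other code bases for the same pattern. The regex and directory are hypothetical; real variant hunting layers static analysis, fuzzing, and expert review on top of this simple idea.

```python
# Illustrative pattern sweep; not Microsoft's variant hunting tooling.
import re
from pathlib import Path

# Example pattern: shell commands assembled by string concatenation,
# a common injection-prone shape.
SUSPECT = re.compile(r"subprocess\.(run|Popen|call)\([^)]*\+")

def hunt_variants(repo_root: str) -> list[tuple[str, int, str]]:
    findings = []
    for path in Path(repo_root).rglob("*.py"):
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if SUSPECT.search(line):
                findings.append((str(path), lineno, line.strip()))
    return findings

for file, lineno, snippet in hunt_variants("./services"):  # hypothetical tree
    print(f"{file}:{lineno}: {snippet}")
```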

    3. Secure multitenancy

    Based on learnings from all our security intelligence sources, we continue to evolve our Secure Multitenancy requirements, as well as the automation we use at Microsoft to provide early detection and remediation of potential security risks. As we analyzed Azure and other cloud security cases over the last couple of years, both our internal and external security researchers found unique ways to break through some isolation barriers. Microsoft invests heavily in proactive security measures to prevent this, so these new findings helped us determine the most common causes and commit to addressing them within Azure through a small number of highly leveraged changes.

    We are also doubling down on our defense in depth approach by requiring and applying even more stringent standards for Compute, Network, and Credential isolation across all Azure services, especially when consuming third-party or OSS components. We are continuing to collaborate with the OSS community, such as PostgreSQL, as well as with other cloud providers, on features that are highly desirable in multitenant cloud environments.

    This work has already resulted in dozens of distinct findings and fixes with the majority (86 percent) attributed to our specific improvements in Compute, Network, or Credential isolation. Among our automation improvements, we are extending internal Dynamic Application Security Tests (DAST) to include more checks for validating Compute and Network isolation as well as adding net new runtime Credential isolation check capabilities. In parallel, our security experts continue to scrutinize our cloud services, validate they meet our standards, and innovate new automated controls for the benefit of our customers and Microsoft.
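
    As a toy illustration of what one such runtime isolation check might look like, here is a hedged Python sketch that presents one tenant's credential against another tenant's resource and requires a denial. The endpoint, token, and accepted status codes are assumptions; this is not Azure's internal DAST tooling.

```python
# A toy cross-tenant isolation check, sketched under assumed names and URLs.
import requests

TENANT_B_RESOURCE = "https://service.example.com/tenants/tenant-b/data"  # placeholder
TENANT_A_TOKEN = "<token scoped to tenant A only>"  # placeholder credential

def cross_tenant_access_denied() -> bool:
    resp = requests.get(
        TENANT_B_RESOURCE,
        headers={"Authorization": f"Bearer {TENANT_A_TOKEN}"},
        timeout=10,
    )
    # Any response other than an explicit denial indicates an isolation bug.
    return resp.status_code in (401, 403)

assert cross_tenant_access_denied(), "isolation violation: cross-tenant access allowed"
```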

    Under cloud security’s shared responsibility model, we recommend that our customers use the Microsoft cloud security benchmark to improve their cloud security posture. We are developing a set of new recommendations focused on multitenancy security best practices and will publish them in our next release.

    In short, while Microsoft has a long and continued commitment to security, we are continually growing and evolving our learnings as the security landscape also evolves and shifts. In this spirit of constant learning, Microsoft is addressing recent Azure cloud security issues by enhancing secure multitenancy standards, expanding our cloud variant hunting capacity, and developing integrated response mechanisms. Our enhancements, and the scale of our security efforts, further demonstrate our leadership and decades-long commitment to continual improvement of our security programs and raising the bar for security industry-wide. We continue to be committed to integrating security into every phase of design, development, and operations so that our customers, and the world, can build on our cloud with confidence.

    Source: microsoft.com