Tuesday, 30 May 2023

Microsoft PaaS: Your Ticket to Effortless Scalability and Rapid Innovation!


In the fast-paced digital era, businesses are constantly seeking innovative solutions to stay ahead of the competition. One of the key aspects of success lies in the ability to scale seamlessly and foster rapid innovation. This is where Microsoft PaaS (Platform as a Service) comes into play, offering a powerful and efficient platform that empowers organizations to achieve their goals with utmost ease. In this article, we delve into the world of Microsoft PaaS and explore how it can become your ticket to effortless scalability and rapid innovation!

Unveiling Microsoft PaaS: A Paradigm Shift in Cloud Computing


What is Microsoft PaaS? Microsoft PaaS is a cloud computing service that provides a complete platform for developers to build, deploy, and manage applications without the need to worry about underlying infrastructure. It eliminates the complexities associated with infrastructure management and allows organizations to focus solely on their application development and innovation.

Seamless Scalability: Scalability is a crucial factor for businesses to accommodate the growing demands of users and changing market dynamics. With Microsoft PaaS, scalability becomes effortless. The platform offers automatic scaling features, enabling applications to seamlessly handle increased traffic and workload without any disruption. This means your business can quickly adapt to spikes in user activity and ensure a smooth user experience, even during peak times.

Rapid Innovation: Innovation is the driving force behind sustainable business growth. Microsoft PaaS empowers organizations to unleash their creative potential and bring ideas to life faster than ever before. By leveraging pre-built services and tools, developers can accelerate the development cycle and focus on building unique features and functionalities. With Microsoft PaaS, your team can embrace an agile mindset, experiment with new ideas, and iterate rapidly, enabling you to stay ahead of the curve and deliver cutting-edge solutions to your customers.

Key Benefits of Microsoft PaaS


1. Increased Productivity: Microsoft PaaS offers a range of developer-friendly features and tools that streamline the development process. With pre-configured infrastructure and automated deployment, developers can focus on writing code and delivering value, rather than dealing with infrastructure complexities. This results in increased productivity, shorter time-to-market, and higher efficiency.

2. Cost Efficiency: Traditional on-premises infrastructure comes with substantial costs, including hardware, maintenance, and operational expenses. Microsoft PaaS eliminates the need for such investments, allowing businesses to leverage a pay-as-you-go model. You only pay for the resources and services you use, optimizing cost efficiency and ensuring that your IT budget is allocated strategically.

3. Enhanced Security: Security is a top priority for any organization, and Microsoft takes it seriously. Microsoft PaaS offers robust security measures, including data encryption, identity management, and threat detection, to safeguard your applications and data. By leveraging Microsoft's expertise and advanced security capabilities, you can protect your assets and build trust with your customers.

4. Integration and Extensibility: Microsoft PaaS provides seamless integration with other Microsoft services and tools, such as Azure Active Directory, Azure Functions, and Azure DevOps. This enables you to extend the functionality of your applications and integrate them into your existing ecosystem effortlessly. With a wide range of APIs and connectors, you can connect with external systems, databases, and services, enhancing the capabilities of your applications.

5. Global Availability and Reliability: Microsoft operates a vast network of data centers worldwide, ensuring high availability and reliability for your applications. With PaaS, you can deploy your applications in multiple regions, ensuring low latency and a great user experience for your global audience. Additionally, Microsoft PaaS provides built-in disaster recovery mechanisms, offering peace of mind in the face of unforeseen events.

Real-World Applications of Microsoft PaaS


1. E-commerce Solutions: In the highly competitive e-commerce industry, businesses require robust and scalable platforms to handle a large volume of transactions, provide personalized shopping experiences, and ensure secure payment processing. Microsoft PaaS offers a range of services, such as Azure App Service and Azure Functions, which enable businesses to rapidly develop and deploy e-commerce applications with ease. With built-in scalability and integration capabilities, Microsoft PaaS empowers businesses to deliver seamless online shopping experiences and drive customer satisfaction.

2. Internet of Things (IoT) Solutions: As the Internet of Things continues to expand, organizations are seeking efficient ways to process and analyze the massive amounts of data generated by connected devices. Microsoft PaaS provides a comprehensive set of services, including Azure IoT Hub and Azure Stream Analytics, that enable businesses to develop and deploy IoT solutions quickly. By leveraging these services, organizations can collect, process, and gain valuable insights from IoT data, leading to improved operational efficiency and innovative product offerings.

3. Business Intelligence and Analytics: In today's data-driven world, businesses rely on actionable insights to make informed decisions. Microsoft PaaS offers a range of services, such as Azure Synapse Analytics and Power BI, that empower organizations to gather, analyze, and visualize data effectively. These services enable businesses to uncover hidden patterns, identify trends, and gain valuable business insights, ultimately driving data-led decision-making and competitive advantage.

4. Mobile App Development: Mobile apps have become an integral part of the digital landscape, and organizations need to deliver exceptional user experiences across various devices and platforms. Microsoft PaaS provides services like Azure Mobile Apps and Visual Studio App Center, which simplify the development, testing, and deployment of mobile applications. With cross-platform support, seamless integration with backend services, and automated testing capabilities, organizations can accelerate their mobile app development process and deliver high-quality apps to their customers.

5. DevOps and Continuous Integration/Continuous Deployment (CI/CD): In the era of rapid software development and deployment, organizations need robust DevOps practices to streamline the development lifecycle and ensure efficient collaboration between development and operations teams. Microsoft PaaS offers services like Azure DevOps and Azure Pipelines, which enable organizations to adopt CI/CD practices seamlessly. By automating build, test, and deployment processes, organizations can achieve faster time-to-market, improved software quality, and increased development efficiency.
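To make the CI/CD point concrete, here is a minimal, hedged sketch that queues a run of an existing Azure Pipelines definition through the Azure DevOps REST API. The organization, project, pipeline ID, branch, and personal access token are placeholders, and the api-version shown should be checked against what your organization supports.

```python
import requests

# Placeholders: substitute your own Azure DevOps organization, project,
# pipeline ID, and personal access token (PAT).
ORGANIZATION = "my-org"
PROJECT = "my-project"
PIPELINE_ID = 42
PAT = "<personal-access-token>"

# "Runs - Run Pipeline" REST endpoint; the api-version may need adjusting
# to match the versions your organization supports.
url = (
    f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}"
    f"/_apis/pipelines/{PIPELINE_ID}/runs?api-version=7.1-preview.1"
)

# Queue a run against the main branch of the pipeline's own repository.
payload = {
    "resources": {"repositories": {"self": {"refName": "refs/heads/main"}}}
}

# Azure DevOps PATs are passed via HTTP Basic auth with an empty username.
response = requests.post(url, json=payload, auth=("", PAT), timeout=30)
response.raise_for_status()

run = response.json()
print(f"Queued pipeline run {run['id']} (state: {run['state']})")
```

In practice the same call is typically made by a webhook, a release gate, or another pipeline rather than an ad hoc script, but the request shape is the same.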

The Power of Microsoft PaaS: Success Stories


Numerous organizations have harnessed the power of Microsoft PaaS to drive their digital transformation and achieve remarkable success. Here are a few examples:

1. Alaska Airlines: Alaska Airlines leveraged Microsoft PaaS to develop and deploy their customer-facing mobile app. By using Azure App Service, they achieved seamless scalability, enabling them to handle millions of transactions during peak times. The app's success resulted in increased customer satisfaction and streamlined flight bookings.

2. Schneider Electric: Schneider Electric, a global leader in energy management and automation, used Microsoft PaaS to build an IoT platform for their customers. By utilizing Azure IoT Hub and Azure Stream Analytics, they successfully connected and managed thousands of devices, gaining valuable insights to optimize energy consumption and improve operational efficiency.

3. The Coca-Cola Company: The Coca-Cola Company adopted Microsoft PaaS for their data analytics needs. By leveraging Azure Synapse Analytics and Power BI, they gained real-time visibility into sales and marketing data, allowing them to make data-driven decisions and enhance their marketing strategies.

Embrace Microsoft PaaS and Transform Your Business Today!


Microsoft PaaS opens a world of opportunities for businesses seeking effortless scalability and rapid innovation. By eliminating infrastructure management complexities, enabling seamless scalability, and providing a comprehensive suite of services, Microsoft PaaS empowers organizations to focus on what truly matters—building exceptional applications and driving business growth.

By leveraging Microsoft PaaS, businesses can:

1. Accelerate Time-to-Market: With the pre-built services and tools offered by Microsoft PaaS, development teams can streamline the development process and bring new applications to market faster. This increased speed-to-market gives businesses a competitive edge by enabling them to deliver innovative solutions ahead of their competitors.

2. Enhance Customer Experience: Microsoft PaaS enables businesses to create applications that deliver exceptional user experiences. With built-in scalability, reliability, and security features, organizations can ensure a seamless and secure experience for their customers, fostering trust and loyalty.

3. Scale on Demand: One of the standout advantages of Microsoft PaaS is its ability to seamlessly scale applications based on demand. Whether your business experiences sudden spikes in traffic or plans for future growth, Microsoft PaaS provides automatic scaling capabilities that allow applications to handle increased workloads without interruptions. This ensures that your applications can grow alongside your business needs, providing a consistent and reliable experience for users.

4. Foster Innovation: By removing the burden of infrastructure management, Microsoft PaaS empowers developers to focus on what they do best—innovating. With a wide range of services and tools at their disposal, developers can experiment, iterate, and bring new ideas to life more rapidly. This culture of innovation can drive breakthrough solutions and propel your business forward.

5. Optimize Cost Efficiency: Microsoft PaaS follows a pay-as-you-go pricing model, allowing businesses to optimize their costs. Instead of investing in expensive infrastructure upfront, organizations can leverage the scalability and flexibility of Microsoft PaaS to pay only for the resources they use. This cost-efficient approach enables businesses to allocate their IT budget strategically and direct resources where they provide the most value.

In summary, Microsoft PaaS offers a powerful platform that unlocks the potential for effortless scalability and rapid innovation. With its extensive range of services, seamless scalability, enhanced security, and global availability, Microsoft PaaS provides organizations with the tools they need to drive digital transformation, remain competitive, and deliver exceptional experiences to their customers.

Saturday, 27 May 2023

Defend against DDoS attacks with Azure DDoS IP Protection

Distributed denial of service (DDoS) attacks continue to rise as new threats and attack techniques emerge. With DDoS attacks becoming more frequent, it’s important for organizations of all sizes to be proactive and stay protected all year round. Small and medium businesses (SMBs) face the same risks as larger organizations but are often more vulnerable because they lack the resources and specialized expertise.

We are committed to providing security solutions to all our customers. We are announcing the general availability of Azure DDoS IP Protection SKU, a new SKU of Azure DDoS Protection designed to meet the needs of SMBs.

Enterprise-grade DDoS protection at an affordable price point


Azure DDoS IP Protection provides enterprise-grade DDoS protection at an affordable price point. It offers the same essential capabilities as Azure DDoS Network Protection (previously known as Azure DDoS Protection Standard) to protect your resources and applications against evolving DDoS attacks. Customers also have the flexibility to enable protection on individual public IP addresses.

“DDoS protection is a must have today for critical websites. Azure DDoS Protection provides comprehensive protection though the existing DDoS Network Protection SKU did not fit the price point for smaller organizations. We are happy that the DDoS IP Protection SKU provides the same level of protection as the Network Protection SKU at an affordable price point and the flexibility to protect individual public IPs.”—Derk van der Woude, CTO, Nedscaper.

“We are excited that the DDoS IP Protection SKU provides enterprise-grade, cost effective DDoS protection for customers with smaller cloud environments with only a few public IP endpoints in the cloud.”—Markus Lintuala, Senior Technical Consultant, Elisa.

Key features of Azure DDoS IP Protection


◉ Massive mitigation capacity and scale—Defend your workloads against the largest and most sophisticated attacks with cloud scale DDoS protection backed by Azure’s global network. This ensures that we can mitigate the largest attacks reported in history and thousands of attacks daily.

◉ Protection against attack vectors—DDoS IP Protection mitigates volumetric attacks that flood the network with a substantial amount of seemingly legitimate traffic, including UDP floods, amplification floods, and other spoofed-packet floods. It automatically absorbs and scrubs these potentially multi-gigabyte attacks using Azure’s global network scale. It also protects against protocol attacks that render a target inaccessible by exploiting weaknesses in the layer 3 and layer 4 protocol stack, including SYN flood attacks, reflection attacks, and other protocol attacks; DDoS IP Protection mitigates these by interacting with the client to differentiate between malicious and legitimate traffic and blocking the malicious traffic. Resource (application) layer attacks target web applications and include HTTP/S floods and low-and-slow attacks; use Azure Web Application Firewall to defend against these attacks.

◉ Native integration into Azure portal—DDoS IP Protection is natively integrated into the Azure portal for easy setup and deployment. This level of integration enables DDoS IP Protection to identify your Azure resources and their configuration automatically.

◉ Seamless protection—DDoS IP Protection seamlessly safeguards your resources. There’s no need to deploy anything in your Azure Virtual Network (VNet), or to change your current networking architecture. DDoS is deployed as an overlay on top of your current networking services.

◉ Adaptive tuning—Protect your apps and resources while minimizing false negatives with adaptive tuning based on the scale and actual traffic patterns of your application. Applications running in Azure are inherently protected by the default infrastructure-level DDoS protection. However, the protection that safeguards the infrastructure has a much higher threshold than most applications have the capacity to handle, so a traffic volume that the Azure platform perceives as harmless can be devastating to the application that receives it. Adaptive tuning helps ensure your applications are protected when application-targeted attacks go undetected by the infrastructure-level DDoS protection offered to all Azure customers.

◉ Attack analytics, metrics, and logging—Monitor DDoS attacks in near real-time and respond quickly with visibility into the attack lifecycle, vectors, and mitigation. With DDoS IP Protection, customers can monitor when an attack is taking place, collect statistics on mitigation, and view the detection thresholds assigned by the adaptive tuning engine to make sure they align with expected traffic baselines. Diagnostic logs offer a deep-dive view of attack insights, allowing customers to investigate attack vectors, traffic flows, and mitigations to support their DDoS response strategy (a metrics query sketch follows this list).

◉ Integration with Microsoft Sentinel and Microsoft Defender for Cloud—Strengthen your security posture with rich attack analytics and telemetry integrated with Microsoft Sentinel. We offer a Sentinel solution that includes comprehensive analytics and alert rules to support customers in their Security Orchestration, Automation, and Response (SOAR) strategy. Customers can set up and view security alerts and recommendations provided by Defender for Cloud.
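As a concrete example of the monitoring capabilities above, the sketch below polls the public IP metric that reports whether a DDoS mitigation is in progress, using the azure-monitor-query package. The resource ID is a placeholder, and the metric name and aggregation reflect the documented public IP metrics at the time of writing; verify them against the current metric definitions for your resource.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder resource ID of a DDoS-protected public IP address.
PUBLIC_IP_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Network/publicIPAddresses/<pip-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Query the "under DDoS attack" signal for the last hour in 5-minute buckets.
result = client.query_resource(
    PUBLIC_IP_ID,
    metric_names=["IfUnderDDoSAttack"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=["Maximum"],
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            status = "mitigation active" if point.maximum else "no attack detected"
            print(f"{point.timestamp}: {status}")
```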


Choosing the right Azure DDoS protection SKU for your needs


Azure DDoS protection is available in two SKUs:

◉ DDoS IP Protection is recommended for SMB customers with a few public IP resources who need a comprehensive DDoS protection solution that is fully managed and easy to deploy and monitor.

◉ DDoS Network Protection is recommended for larger enterprises and organizations looking to protect their entire deployment that spans multiple virtual networks and includes many public IP addresses. It also offers additional features like cost protection, DDoS Rapid Response, and discounts on Azure Web Application Firewall.

Let’s see a detailed comparison between these two SKUs:


Get started


DDoS IP Protection can be enabled from the public IP address resource Overview blade.
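The same setting can be applied programmatically. The sketch below is a minimal example using the azure-mgmt-network package, assuming a recent SDK version in which the public IP’s DdosSettings model accepts a protection_mode value; the subscription, resource group, and public IP names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholders for your environment.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
PUBLIC_IP_NAME = "<public-ip-name>"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Read the existing public IP, switch DDoS protection to IP Protection,
# and write the change back. The protection_mode field assumes a recent
# API version; older SDK versions expose different DdosSettings fields.
public_ip = client.public_ip_addresses.get(RESOURCE_GROUP, PUBLIC_IP_NAME)
public_ip.ddos_settings = {"protection_mode": "Enabled"}  # maps to DdosSettings

poller = client.public_ip_addresses.begin_create_or_update(
    RESOURCE_GROUP, PUBLIC_IP_NAME, public_ip
)
updated = poller.result()
print(f"{updated.name}: ddos_settings = {updated.ddos_settings}")
```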


Protection status in the Properties tab shows whether the resource is DDoS protected and which protection type is applied (Network or IP Protection).


Azure DDoS IP Protection pricing


With DDoS IP Protection, you only pay for the public IP resources you protect. The cost is a fixed monthly amount per protected public IP resource, with no additional variable costs.

Source: microsoft.com

Thursday, 25 May 2023

The Net Zero journey: Why digital twins are a powerful ally

Climate impacts raise stakes for Net Zero transition


Following weeks of vital discussions at COP27 in Egypt, the urgency to bring the world to a more sustainable path has never been greater. Scientists have warned that the world needs to cut global emissions by 5 percent to 7 percent per year to limit the damage caused by climate change. At present, however, emissions are rising by 1 percent to 2 percent per year. Discovering new routes to a Net Zero economy is critical if we are to limit the economic and social damage of a rapidly changing climate. And that means we all have a part to play in ensuring we strike the optimal balance between greenhouse gas production and the amount of greenhouse gas that gets removed from the atmosphere.


A Microsoft and PwC blueprint for the transition to Net Zero highlights the importance of innovation and of harnessing new technologies that enable organizations to deliver on their Net Zero ambitions at pace. A key innovation that aims to accelerate organizations’ journey to Net Zero is digital twin technology supported by AI infrastructure capabilities. A digital twin can be thought of as a virtual working representation of assets, products, and production plants. Powered by Microsoft Azure AI-optimized infrastructure that leverages NVIDIA accelerated computing and networking technologies, digital twins allow organizations to visualize, simulate, and predict operations, whether at a manufacturing plant, a wind farm, a mining operation, or any other type of operation.

Adoption of digital twin technology offers early adopters the potential for accelerated and differentiated business value. Innovative companies can leverage this potent toolset to accelerate their innovation journeys and drive strategic business outcomes powered by technology innovation at scale. A recent study by Microsoft and Intel found that globally, only 28 percent of manufacturers have started rolling out a digital twin solution, and of those, only one in seven have fully deployed it at their manufacturing plants. A key finding of the study is that when digital twins are utilized effectively, they can deliver major efficiency, optimization, and cost-saving gains while unlocking mission-critical insights that drive innovation and improve decision-making.

Maximizing wind energy production with digital twins


Digital twins have emerged as a powerful tool for renewable energy producers seeking optimization gains in their production processes too. Take South Korea’s Doosan Heavy Industries & Construction as an example. As a leader in engineering, procurement, heavy manufacturing, power generation and desalination services, Doosan Heavy Industries & Construction was appointed by the South Korean government to help it meet the goals of its Green New Deal plan, which includes a target of generating 20 percent of the country’s electricity needs through renewables by 2030.

Seeking improvements in the efficiency of its wind turbines, Doosan Heavy Industries & Construction partnered with Microsoft and Bentley Systems to develop a digital twin of its wind farms that helps it maximize energy production and reduce maintenance costs. The company currently has 16 South Korean wind farms in operation, which generate enough electricity to power as many as 35,000 homes per year. Its innovative digital controls and operations enable Doosan to remotely monitor wind farm operations, predict maintenance before failures occur, and limit the need for maintenance teams to physically inspect the wind turbines.

Leveraging Azure Digital Twins and Azure IoT Hub powered by NVIDIA-accelerated Azure AI Infrastructure capabilities, Doosan can simulate, visualize, and optimize every aspect of its infrastructure planning, deployment, and ongoing monitoring. This has led to greater energy efficiency, boosted employee safety, and improved asset resilience. And with Bentley seeing their Azure-powered digital twin technology reduce operational and maintenance costs by 15 percent at other facilities, Doosan is well-positioned to continue benefiting from their digital twin solution and unlocking new efficiency gains by leveraging the power of cloud-based AI infrastructure capabilities.
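As a rough illustration of what querying such a twin graph can look like, the sketch below uses the Azure Digital Twins SDK for Python. The instance URL, model ID, and property names are hypothetical stand-ins for illustration and are not Doosan’s actual twin models.

```python
from azure.digitaltwins.core import DigitalTwinsClient
from azure.identity import DefaultAzureCredential

# Placeholder Azure Digital Twins instance endpoint.
ADT_URL = "https://<your-instance>.api.<region>.digitaltwins.azure.net"

client = DigitalTwinsClient(ADT_URL, DefaultAzureCredential())

# Hypothetical query: find turbine twins reporting low power output.
# The model ID and property names are illustrative only.
query = (
    "SELECT T.$dtId, T.powerOutputKw, T.rotorSpeedRpm "
    "FROM digitaltwins T "
    "WHERE IS_OF_MODEL(T, 'dtmi:example:WindTurbine;1') "
    "AND T.powerOutputKw < 100"
)

for twin in client.query_twins(query):
    print(twin["$dtId"], twin.get("powerOutputKw"), twin.get("rotorSpeedRpm"))
```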

Leveraging digital twins to power Net Zero transition


In the oil and gas sector, digital twin technology is helping one of the world’s leading carbon-emitting industries to identify opportunities for optimization and carbon reduction. A noteworthy showcase comes from Tata Consultancy Services, which delivered its Clever Energy solution to a global consumer goods giant. Using digital twins, real-time data, and cognitive intelligence to improve energy savings at the customer’s production plants, the solution helped reduce energy use by up to 15 percent, with an equivalent reduction in CO2 emissions. Considering that buildings consume nearly 40 percent of the world’s energy and account for one third of greenhouse gas emissions, the solution also helps the customer alleviate some of the pressure from significant energy cost increases in Europe.

In another example, a large multinational supplier that aims to achieve Net Zero carbon status by no later than 2050 is today leveraging the power of digital twins to support its sustainability goals.

From the vast global network of complex assets this company manages, a digital twin of one of its facilities was developed to calculate real-time carbon intensity and energy efficiency. Microsoft Azure provided the perfect platform: Azure IoT Hub receives more than 250 billion data signals per month from the company’s global operating assets, AI provides key insights into how the company could become a safer and more efficient business, and Azure AI Infrastructure and high-performance computing enable seamless processing of huge volumes of data.

With long-term plans in place to scale the digital twin solution to all of the company’s global facilities, Microsoft Azure’s security, scalability, and powerful high-performance computing capabilities will be key supporting factors in how successfully they could transition to more carbon-aware operations.

Powering the Next Era of Industrial Digitalization


At NVIDIA GTC, a global AI conference, NVIDIA and Microsoft announced a collaboration to connect the NVIDIA Omniverse platform for developing and operating industrial metaverse applications with Azure Cloud Services. Enterprises of every scale will soon be able to use the Omniverse Cloud platform-as-a-service on Microsoft Azure to fast-track development and deployment of physically accurate, connected, secure, AI-enabled digital twin simulations.

Key takeaways about a Net Zero economy and digital twins


Shifting to a Net Zero economy is one of the defining challenges of our time. As the devastating impact of climate change continues to disrupt global economies, businesses will need novel ways of reducing their carbon footprint and helping bring the world to a more sustainable path.

Considering the vast complexity of modern businesses—especially resource-intensive industries such as oil and gas, and manufacturing—finding ways to optimize processes, reduce waste, and accelerate time to value can be extremely cumbersome unless novel technology solutions are found to help provide differentiated strategic capabilities.

Digital twin technology offers organizations a powerful option to run detailed simulations that generate vast amounts of data. By combining that data with the power and scalability of Azure high-performance computing (HPC) and leveraging the visualization power of NVIDIA’s GPU-accelerated computing capabilities, organizations can discover new opportunities for greater efficiency, optimization, and carbon-neutrality gains.

Source: microsoft.com

Tuesday, 23 May 2023

Announcing Project Health Insights Preview: Advancing AI for health data

We live in an era of unprecedented growth in health data. Digitization of medical records, medical imaging, genomic data, clinical notes, and more has contributed to an exponential increase in the amount of medical data. The potential benefit of leveraging this health data is enormous. However, this growth brings new challenges, including data privacy and security concerns and the need for data standardization and interoperability. Effective tools are needed to extract the information buried in this data and use it to derive valuable insights, inferences, and deep analytics that can make sense of the data and support clinicians.

Today, I’m excited to announce Project Health Insights Preview. Project Health Insights is a service that derives insights based on patient data and includes pre-built models that aim to power key high value scenarios in the health domain. The models receive patient data in different modalities, perform analysis, and enable clinicians to obtain inferences and insights with evidence from the input data. These insights can assist healthcare professionals in understanding clinical data, like patient profiling, clinical trials matching, and more.


Project Health Insights—leveraging patient data to power actionable insights


Project Health Insights supports pre-built models that receive patient data in multiple modalities as their input, and produce insights and inferences that include:

◉ Confidence scores: The higher the confidence score is, the more certain the model was about the inference value provided.

◉ Evidence: Links between model output and specific evidence within the input provided, such as references to spans of text reflecting the data that led to an insight.

Project Health Insights Preview includes two enterprise-grade AI models that can be provisioned and deployed in a matter of minutes: Oncology Phenotype and Clinical Trial Matcher.

Oncology Phenotype is a model that enables healthcare providers to rapidly identify key cancer attributes within their patient populations with an existing cancer diagnosis. The model identifies cancer attributes such as tumor site, histology, clinical stage tumor, nodes, and metastasis (TNM) categories, and pathologic stage TNM categories from unstructured clinical documents.

Key features of the Oncology Phenotype model include:

◉ Cancer case finding.
◉ Clinical text extraction for solid tumors.
◉ Importance ranking of evidence.

Clinical Trial Matcher is a model that matches patients to potentially suitable clinical trials, according to each trial’s eligibility criteria and the patient’s data. The model helps find relevant clinical trials that a patient could qualify for, as well as a cohort of potentially eligible patients for a list of clinical trials (a request sketch follows the feature list below).

Key Features of the Clinical Trial Matcher model include:

◉ Support for scenarios that are:
    ◉ Patient Centric: Helping patients find potentially suitable clinical trials and assess their eligibility against the trials’ criteria.
    ◉ Trial Centric: Matching a trial with a database of patients to locate a cohort of potentially suitable patients.
◉ Interactive Matching where the model provides insights into missing information that is needed to further narrow down the potential clinical trial list via an interactive experience.
◉ Support for various modalities of patient data such as unstructured clinical notes, structured patient data, and Fast Healthcare Interoperability Resources (FHIR®) bundles.
◉ Support for search across built-in knowledge graphs for clinical trials from clinicaltrials.gov as well as against a custom trial protocol with specific eligibility criteria.
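For orientation, here is a heavily hedged sketch of submitting a Trial Matcher job over REST. The endpoint path, api-version, and payload shape are approximations of the preview interface and should be checked against the current Project Health Insights documentation before use; the patient details and registry filter are illustrative only.

```python
import requests

# Placeholders: your Project Health Insights (preview) endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<api-key>"

# NOTE: the path, api-version, and body shape below are approximations of
# the preview REST interface; confirm them against the service docs.
url = f"{ENDPOINT}/healthinsights/trialmatcher/jobs?api-version=2023-03-01-preview"

body = {
    "patients": [
        {
            "id": "patient-1",
            "info": {
                "sex": "female",
                "birthDate": "1972-03-01",
                # Structured clinical info and/or clinical note text goes here.
            },
        }
    ],
    "configuration": {
        "clinicalTrials": {
            "registryFilters": [
                {
                    "conditions": ["non-small cell lung cancer"],
                    "sources": ["clinicaltrials.gov"],
                }
            ]
        }
    },
}

response = requests.post(
    url,
    json=body,
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    timeout=30,
)
response.raise_for_status()

# The service processes jobs asynchronously; poll the returned operation URL.
print("Poll for results at:", response.headers.get("Operation-Location"))
```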

Streamlining clinical trial matching and cancer research


According to the World Health Organization, the number of registered clinical trials increased by more than 4,800 percent from 1999 to 2021. Today there are more than 82,000 clinical trials actively recruiting participants worldwide (based on clinicaltrials.gov), with increasingly complicated trial eligibility criteria. However, enrollment in clinical trials relies on manual screening of millions of patients, each with up to hundreds of clinical notes requiring review and analysis by a healthcare professional, making it an unsustainable process. Given this, it is not surprising that up to 80 percent of clinical trials miss their enrollment timelines, and up to 48 percent fail to meet enrollment targets, according to data provided by Tufts University. The Clinical Trial Matcher model aims to solve this exact problem by effectively matching patients with diverse conditions to clinical trials for which they are potentially eligible, through analysis of patient data and the complex eligibility criteria of clinical trials.

The Oncology Phenotype model allows physicians to effectively analyze cancer patients’ data based on their tumor site, tumor histology, and cancer staging. These models deliver crucial building blocks to realize the goals set out by the White House Cancer Moonshot initiative: to develop and test new treatments, to share more data and knowledge, to collaborate on tools that can benefit all, and to make progress towards ending cancer as we know it.

Providing value across the health and life sciences industry


Johns Hopkins University Medical Center is an early user of Project Health Insights. Dr. Srinivasan Yegnasubramanian is using the Oncology Phenotype model to leverage unstructured data to accelerate Cancer Registry curation efforts for patients with solid tumors.

Pangaea Data is a Microsoft partner working in health AI. “At Pangaea Data we help companies discover 22 times more undiagnosed, misdiagnosed, and miscoded patients by characterizing them through unlocking and summarization of clinically valid actionable intelligence from patient records in a federated privacy-preserving, scalable, and evolving manner. We are exploring using Project Health Insights to augment our own advanced capabilities for characterizing patients.”—Vibhor Gupta, Director and Founder, Pangaea Data.

Akkure Genomics helps patients utilize their own genomic data or DNA to improve their chances of finding a clinical trial. “At AKKURE GENOMICS we leverage Project Health Insights, which empowers our own AI and digital DNA platform capabilities, to help patients get matched to clinical trials based on their individual medical diagnoses, thus boosting enrollment, improving the chances of finding a precision-matched trial and accelerating discovery of new therapeutics and cures.”—Professor Oran Rigby, Chief Engineering Officer and Founder, Akkure.

Built with the end user in mind


Initial models were validated in a research setting through a strategic partnership between Microsoft and Providence to accelerate digital transformation in health and life sciences. These models can enable oncologists to substantially scale up their precision oncology capabilities and generate intelligence and insights useful to clinicians as well as beneficial to patients.

“Microsoft’s ability to structure complex concepts with their natural language processing tools for cancer has contributed significantly to our ability to build research cohorts and discuss cancer treatment options.”—Dr. Carlo Bifulco, Chief Medical Officer, Providence Genomics.

Microsoft will continue to expand capabilities within Project Health Insights to support additional health workloads and enable insights that will guide key decision-making in healthcare.

Microsoft continues to grow its portfolio of AI services for health


Microsoft continues to invest in AI services for the health and life sciences industry. Along with other new offerings in the Microsoft Cloud for Healthcare, we are pleased to announce new enhancements to Text Analytics for Health (TA4H).

The new enhancements include:

◉ Social Determinants of Health (SDoH) and Ethnicity information extraction. The newly introduced SDoH and Ethnicity features enable extraction of social, environmental, and demographic factors from unstructured text. These factors will empower the development of more inclusive healthcare applications.

◉ Temporal assertions—past, present, and future. The ability to identify the temporal context of TA4H entities, whether past, present, or future.


◉ Custom entities. Customers can now extend TA4H to support custom entities based on their own data, expanding the set of entities extracted by the service (a usage sketch follows this list).
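As a brief illustration of calling Text Analytics for health from Python, the sketch below runs healthcare entity analysis on a short note using the azure-ai-textanalytics package. The endpoint and key are placeholders, and the newer SDoH, ethnicity, and temporal-assertion outputs may require a newer model or API version than your SDK defaults to.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder Language resource endpoint and key.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com/"
KEY = "<api-key>"

client = TextAnalyticsClient(ENDPOINT, AzureKeyCredential(KEY))

documents = [
    "Patient is a 54-year-old who was prescribed 100mg ibuprofen twice daily "
    "and reports living alone with limited transportation to appointments."
]

# Healthcare entity analysis runs as a long-running operation.
poller = client.begin_analyze_healthcare_entities(documents)
result = poller.result()

for doc in result:
    if doc.is_error:
        print("Error:", doc.error)
        continue
    for entity in doc.entities:
        # Assertions (when present) carry details such as certainty and conditionality.
        print(entity.text, "|", entity.category, "|", entity.assertion)
```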

We are also excited to share that Azure Health Bot now has a new Azure OpenAI template in preview. The Azure Health Bot OpenAI template allows customers to extend their Azure Health Bot instance with Azure OpenAI Service to answer unrecognized utterances in a more intelligent way. This feature is enabled through the Azure Health Bot template catalog. Customers can choose to import this template into their bot instance using their Azure OpenAI resource endpoint and key, enabling fallback answers generated by GPT from trusted, medically viable sources that can be provisioned by customers. This feature provides a mechanism for customers to experiment with this capability in preview.

We look forward to what the coming years will bring for the health and life sciences industry, empowered by these new capabilities and the continued innovation we are seeing across AI and machine learning. The potential for improved precision care, quicker and more efficient clinical trials, and, in turn, greater drug and therapy availability and faster medical research is unparalleled. Microsoft looks forward to partnering with you and your organizations on this journey to improve the health of humankind.

Source: microsoft.com

Saturday, 20 May 2023

Transforming containerized applications with Azure Container Storage—now in preview


Containers are the new virtual machines (VMs). Whether you are a CTO, an enterprise architect, leading DevOps teams, or building applications, you have already embarked on the journey to containerize your applications or are raring to go—to maximize the benefits of scale, flexibility, and cost. With Kubernetes at the helm, containers have rapidly become a hotbed of innovation and a critical area of transformation for enterprises and startups alike. After the initial focus on stateless containers, running high-scale stateful workloads on containers has now become the norm. To run business-critical, enterprise-grade applications on Kubernetes in the cloud, customers need highly scalable, cost-efficient, and performant storage—built for containers and with intrinsic support for them. Today, we are excited to announce the preview of Azure Container Storage, the industry’s first platform-managed container-native storage service in the public cloud, providing end-to-end storage management and orchestration for stateful applications to run efficiently at scale on Azure.

Why Azure Container Storage?


With the rapid adoption of Kubernetes, we see a surge of production workloads, both cloud-first and app modernization, that need container-native persistent storage for databases (such as MySQL), big data (such as ElasticSearch), messaging applications (such as Kafka), and continuous integration and continuous delivery (CI/CD) systems (such as Jenkins). To run these stateful applications, customers need operational simplicity to deploy and scale storage tightly coupled with their containerized applications. Today, however, customers must choose between VM-centric cloud storage options retrofitted to containers, or deploying and self-managing open-source container storage solutions in the cloud—leading to high operational overhead, scaling bottlenecks, and high cost.

To provide customers with a seamless end-to-end experience, container native storage needs to enable:

◉ Seamless volume mobility across the cluster to maximize pod availability without bottlenecks on volume attaches and deletes.
◉ Rapid scaling of a large number of volumes as application pods scale up or scale out as needed.
◉ Optimal price-performance for any volume sizes, especially small volumes that require higher input/output operations per second (IOPS).
◉ Simple and consistent volume management experience across backing storage types to match workload requirements, such as extremely low latency ephemeral disks versus persistent or scalable remote storage.

Azure Container Storage addresses these requirements by enabling customers to focus their attention on running workloads and applications rather than managing storage. Azure Container Storage is our first step towards providing a transformative storage experience. As a critical addition to Azure’s suite of container services, it will help organizations of all sizes to streamline their containerization efforts and improve their overall storage management capabilities.

Leveraging Azure Container Storage



Azure Container Storage is a purpose-built, software-defined storage solution that delivers a consistent control plane across multiple backing storage options to meet the needs of stateful container applications. This fully managed service provides a volume management layer for stateful container applications enabling storage orchestration, data management, Kubernetes-aware data protection, and rule-based performance scaling.

Aligning with open-source container native storage approaches, Azure Container Storage runs microservices-based storage controllers in Kubernetes, to abstract the storage management layer from pods and backing storage, enabling portability across Kubernetes nodes and ability to mount different storage options as shown in the figure.

Azure Container Storage components include:

◉ A Storage Pool, which is a collection of storage resources grouped and presented as a unified storage entity for your AKS cluster.
◉ A data services layer, responsible for replication, encryption, and other add-on functionality absent in the underlying storage provider.
◉ A protocol layer, which exposes provisioned volumes via NVMe-oF protocol to application pods.

With this approach Azure Container Storage offers several differentiated experiences to customers on Azure, including:

Lowering the total cost of ownership (TCO) by providing the ability to scale IOPS on smaller volume sizes, to support containerized applications that have dynamic and fluctuating input/output (IO) requirements. This is enabled using shared provisioning of capacity and performance on a storage pool, which can be leveraged by multiple volumes. With shared provisioning, customers can now maximize performance across application containers while keeping TCO down. Instead of allocating capacity and IOPS per persistent volume (PV), which commonly leads to overprovisioning, customers can now create PVs and dynamically share resources from a Storage Pool.

Rapid scale-out of stateful pods, achieved using remote network protocols like NVMe-oF and iSCSI to mount PVs, enabling effortless scaling on AKS across compute and storage. This is especially beneficial for container deployments that start small and iteratively add resources. Responsiveness is key to ensuring that applications are not starved or disrupted, either during initialization or while scaling in production. Application resiliency also matters: pod respawns across the cluster require rapid PV movement. Leveraging remote network protocols allows us to couple tightly with the pod lifecycle to support highly resilient, high-scale stateful applications on AKS.

Simplified consistent volume management interface backed by local and remote storage options enabling customers to allocate and use storage via the Kubernetes control plane. This means that customers can leverage ephemeral disks, Azure Disks as well as Azure Elastic SAN via a unified management interface to meet workload needs. For instance, ephemeral storage may be preferable for Cassandra to achieve the lowest latency, while Azure Disks is suitable for PostgreSQL or other database solutions. This unified experience provided by Azure Container Storage simplifies the management of persistent volumes, while delivering a comprehensive solution to address the broad range of performance requirements of various containerized workloads.
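As a hedged sketch of that Kubernetes-native experience, the example below creates a persistent volume claim against a storage class assumed to be provisioned by Azure Container Storage. The class name acstor-azuredisk-example is a placeholder; substitute whatever class your storage pool actually exposes.

```python
from kubernetes import client, config

# Load credentials for the AKS cluster from the local kubeconfig.
config.load_kube_config()
core_v1 = client.CoreV1Api()

# Placeholder storage class name; Azure Container Storage creates classes
# per storage pool, so substitute the class your pool actually exposes.
STORAGE_CLASS = "acstor-azuredisk-example"

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name=STORAGE_CLASS,
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

created = core_v1.create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
print("Created PVC:", created.metadata.name)
```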

Fully integrated day-2 experiences, including data protection, cross-cluster recovery, and observability, providing operational simplicity for customers who today need to create custom scripts or stitch together disparate tools. Customers can orchestrate Kubernetes-aware backup of the persistent volumes integrated with AKS to streamline the end-to-end experience of running stateful container workloads on Azure.

Source: microsoft.com

Thursday, 18 May 2023

Optimize performance and strengthen security with Policy Analytics for Azure Firewall

Network security policies are a critical component of modern IT environments, especially with the increasing adoption of cloud workloads. As workloads move to the cloud, network security policies like Azure Firewall policies evolve and adapt to the changing demands of the infrastructure. These policies can be updated multiple times a week, making it challenging for IT security teams to optimize the Firewall rules.

As the number of network and application rules grows over time, rule sets can become suboptimal, resulting in degraded firewall performance and security. For instance, high-volume and frequently hit rules may be unintentionally deprioritized, leading to potential performance gaps. Similarly, after migrating an application to a different network, firewall rules referencing the older networks may not be deleted, creating security risks.

Optimizing Azure Firewall policies is a challenging task for any IT team, particularly for large, geographically dispersed organizations. It can be a manual and complex process, involving multiple teams across the world. Any updates to these policies can be risky and potentially impact critical production workloads, causing serious downtime. At Microsoft, we strive to help enterprises to manage and secure their environments at scale.

Today, we are excited to announce the general availability of Policy Analytics for Azure Firewall to help IT teams manage the rules in the Azure Firewall policy over time. This feature provides critical insights and surfaces recommendations for optimizing Azure Firewall policies to strengthen security posture. Policy Analytics can detect suboptimal rules and suggest changes to improve performance and security. It can also detect and recommend the deletion of rules referencing older networks that are no longer in use.

Optimize Azure Firewall rules with Policy Analytics


Policy Analytics helps IT teams address these challenges by providing visibility into traffic flowing through the Azure Firewall. Key capabilities available in the Azure portal include:

Policy insight panel: Aggregates policy insights and highlights policy recommendations to optimize your Azure Firewall policies.

Firewall flow logs: Displays all traffic flowing through the Azure Firewall, alongside hit counts and the matching network and application rules. This view helps identify top flows across all rules. You can filter flows matching specific sources, destinations, ports, and protocols.

Rule analytics: Displays traffic flows mapped to destination network address translation (DNAT), network, and application rules. This provides enhanced visibility of all the flows matching a rule over time. You can analyze rules across both parent and child policies.

Single-rule analysis: The single-rule analysis experience analyzes traffic flows matching the selected rule and recommends optimizations based on those observed traffic flows.

Deep dive into network rule hits


Let’s look at network rule hits. Here we have chosen to analyze the hits on our network rules. The time granularity on the right-hand side (highlighted in red) can be set from one day to 30 days. We can expand the rules to see the top 10 flows based on hit count, or drill down on the number of matching flows to see all the flows.

In the example below, we see that rule "DefendTheFlag" had 1,500 unique flows in the last seven days, with a total of 152,167 hits. To get visibility into the top flows that generated the traffic, we can expand the rule and keep drilling down to uncover additional insights. You can review the flows to decide whether they should continue to be allowed or should be blocked, and update the rules appropriately.

Figure 1: Network rule hits.
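For teams that want to reproduce a similar view outside the portal, the sketch below runs a comparable aggregation over structured Azure Firewall logs in a Log Analytics workspace using the azure-monitor-query package. The workspace ID is a placeholder, and the AZFWNetworkRule table and column names assume resource-specific firewall logs are enabled; adjust the query if you use the legacy AzureDiagnostics schema.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder Log Analytics workspace ID receiving Azure Firewall logs.
WORKSPACE_ID = "<workspace-guid>"

client = LogsQueryClient(DefaultAzureCredential())

# Assumes resource-specific (structured) firewall logs; table and column
# names may differ if you use the legacy AzureDiagnostics schema.
query = """
AZFWNetworkRule
| summarize Hits = count() by Policy, RuleCollection, Rule
| top 10 by Hits desc
"""

response = client.query_workspace(
    WORKSPACE_ID, query, timespan=timedelta(days=7)
)

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```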

Deep dive into single-rule analysis


Let’s investigate single-rule analysis. Here we select a rule of interest, analyze its matching flows, and optimize the rule accordingly. Users can analyze Azure Firewall rules with a few easy clicks.

Figure 2: Start by selecting single-rule analysis.

With Policy Analytics for Azure Firewall, you can perform rule analysis by picking the rule of interest. You might pick a rule to optimize because, for instance, it has a wide range of open ports or a large number of sources and destinations.

Figure 3: Select a rule and Run analysis.

Policy Analytics surfaces recommendations based on the actual traffic flows. You can review and apply the recommendations, including deleting rules that don’t match any traffic or giving them lower priority. Alternatively, you can lock down the rules to the specific ports, IPs, fully qualified domain names (FQDNs), or URLs matching traffic.

Figure 4: Review the results and apply selected changes.

Pricing


Policy Analytics is a paid feature, with new pricing in effect at general availability. The number of firewalls attached to a policy does not affect the pricing for Policy Analytics.

Source: microsoft.com

Tuesday, 16 May 2023

Microsoft Azure security evolution: Embrace secure multitenancy, Confidential Compute, and Rust


In the first blog of our series on Azure Security, we delved into our defense-in-depth approach for tackling cloud vulnerabilities. The second blog highlighted our use of variant hunting to detect patterns of vulnerabilities across our services. In this installment, we will introduce the game-changing bets that will enable us to deliver industry-leading security architectures with built-in security for years to come, ensuring a secure cloud experience for our customers. We will discuss our focus on secure multitenancy and share our vision for harnessing the power of Confidential Compute and the Rust programming language to protect our customers’ data from cyber threats. By investing in these strategies, Azure gives customers robust, built-in security measures that protect their data, enhance the overall cloud experience, and provide the confidence to innovate and grow their businesses securely.

Secure multitenancy with robust compute, network, and credential isolation


We touched on the benefits we’ve seen from improvements in compute, network, and credential isolation. Now, we want to dive deeper into what this means. For compute isolation, we’re investing heavily in hardware-based virtualization (HBV), the foundation of running untrusted code in Azure. Traditional virtual machines are at the core of many Azure services hosting customer workloads. Our current bounty of up to USD 250,000 on Microsoft Hyper-V vulnerabilities demonstrates the strength of this defense and highlights the importance of this boundary.

Our innovations with HBV extend beyond traditional virtual machines (VMs). Azure Container Instances (ACI) serves as our platform for running container workloads, utilizing HBV to isolate container groups from each other. ACI container groups take advantage of the same HBV that powers Azure Virtual Machines, but they offer a platform tailored for modern container-based applications. Numerous new and existing services are moving to ACI as a simple, high-performance model for secure multitenancy. Building services atop secure foundations like ACI enables us to address many isolation problems centrally, allowing multiple services to benefit from fixes simultaneously. Furthermore, we’re excited to introduce HBV to Kubernetes workloads via industry-standard Kata Containers support in Azure Kubernetes Service. Similar to ACI container groups, Kata Containers pods utilize HBV for robust isolation of untrusted workloads. In the coming months, we’ll share more about our efforts to bring this approach to WebAssembly hosting, with single-millisecond overhead compared to hosting WebAssembly without HBV.

For network isolation, we’re shifting services towards dedicated virtual networks per tenant and ensuring support for Private Link, which enables our services to communicate directly with customer-managed virtual networks. Shared networks have proven error-prone, with mistakes in network Access Control Lists or subnets leading to inadequate network isolation between tenants. Dedicated virtual networks make it difficult to accidentally enable connectivity between tenants that should remain separate.
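To make the Kata Containers point above concrete, here is a minimal sketch of scheduling a pod with VM-level isolation on an AKS cluster that has pod sandboxing enabled. The runtime class name kata-mshv-vm-isolation reflects the preview documentation at the time of writing and should be verified against the runtime classes your cluster actually exposes.

```python
from kubernetes import client, config

# Load credentials for the AKS cluster from the local kubeconfig.
config.load_kube_config()
core_v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="untrusted-workload"),
    spec=client.V1PodSpec(
        # Assumed runtime class for AKS pod sandboxing (Kata Containers);
        # confirm the exact name exposed by your cluster.
        runtime_class_name="kata-mshv-vm-isolation",
        containers=[
            client.V1Container(
                name="app",
                image="mcr.microsoft.com/azuredocs/aks-helloworld:v1",
            )
        ],
    ),
)

created = core_v1.create_namespaced_pod(namespace="default", body=pod)
print("Scheduled sandboxed pod:", created.metadata.name)
```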

Credential isolation, on the other hand, involves using credentials scoped to the resources of a single tenant whenever possible. Employing credentials with minimal permissions ensures that even if vulnerabilities are discovered, credentials providing access to other tenants’ data aren’t readily available.

Through significant investments in HBV and a focus on compute, network, and credential isolation, Azure is providing customers with enhanced security and isolation for their workloads. By developing innovative solutions such as Azure Container Instances, and bringing HBV to Kubernetes and WebAssembly hosting, we are creating a robust and secure multitenancy environment that protects data and improves the overall cloud experience. As we continue to strengthen Azure’s security foundation, we are also exploring new opportunities to further enhance our defense-in-depth approach. In the next section, we will discuss the role of Confidential Compute in adding an extra layer of protection to our customers’ data and workloads.  

Confidential Compute: A new layer of defense


Since the dawn of cloud computing in Azure, we’ve recognized the crucial role of HBV in running customer workloads on VMs. However, VMs only protect the host machine from malicious activity within the VM. In many cases, a vulnerability in the VM interface could allow a bad actor to escape to the host, and from there fully access other customers’ VMs. Confidential Compute presents a new layer of defense against these attacks by preventing bad actors with hosting-environment access from accessing the content running in a VM. Our goal is to leverage Confidential VMs and Confidential Containers broadly across Azure services, adding this extra layer of defense to the VMs and containers our services use. This has the potential to reduce the blast radius of a compromise at any level in Azure. While ambitious, our aim is for Confidential Compute to one day be as ubiquitous as other best practices, such as encryption in transit and encryption at rest, have become.

Rust as the path forward over C/C++


Decades of vulnerabilities have proven how difficult it is to prevent memory-corrupting bugs when using C/C++. While garbage-collected languages like C# or Java have proven more resilient to these issues, there are scenarios where they cannot be used. For such cases, we’re betting on Rust as the alternative to C/C++. Rust is a modern language designed to compete with the performance of C/C++, but with memory-safety and thread-safety guarantees built into the language. While we cannot rewrite everything in Rust overnight, we’ve already adopted Rust in some of the most critical components of Azure’s infrastructure. We expect our adoption of Rust to expand substantially over time.

Our unwavering commitment


Our commitment to secure multitenancy, Confidential Compute, and Rust represents a major investment that we’ll be making in the coming years. Fortunately, Microsoft’s security culture is among the strongest in the industry, empowering us to deliver on these ambitious bets. By prioritizing security as an integral component of our services, we are helping our customers to build and maintain secure, reliable, and scalable applications in the cloud, while ensuring their trust in our platform remains steadfast.

Source: microsoft.com

Saturday, 13 May 2023

Announcing the general availability of Azure CNI Overlay in Azure Kubernetes Service


Today, we are thrilled to announce the general availability of Azure CNI Overlay. This is a big step forward in addressing the networking performance and scaling needs of our customers.

As cloud-native workloads continue to grow, customers are constantly pushing the scale and performance boundaries of our existing networking solutions in Azure Kubernetes Service (AKS). For instance, the traditional Azure Container Networking Interface (CNI) approach assigns every pod an IP address from the virtual network, which requires planning IP address space in advance and can lead to IP address exhaustion as demand grows; as a rough illustration, a 1,000-node cluster configured for 30 pods per node needs more than 30,000 VNET addresses reserved up front. In response to this demand, we have developed a new networking solution called “Azure CNI Overlay”.

In this blog post, we will discuss why we needed to create a new solution, the scale it achieves, and how its performance compares to the existing solutions in AKS.

Solving for performance and scale


In AKS, customers have several network plugin options to choose from when creating a cluster. However, each of these options has its own challenges when it comes to large-scale clusters.

The “kubenet” plugin, an existing overlay network solution, is built on Azure route tables and the bridge plugin. Since kubenet (or host IPAM) leverages route tables for cross-node communication, it was designed for clusters of no more than 400 nodes (or 200 nodes in dual-stack clusters).

The Azure CNI VNET plugin provides IPs from the virtual network (VNET) address space. This can be difficult to implement because it requires a large, unique, and contiguous Classless Inter-Domain Routing (CIDR) space, and customers may not have the available IPs to assign to a cluster.

Bring Your Own Container Network Interface (BYOCNI) brings a lot of flexibility to AKS. Customers can use encapsulation, such as Virtual Extensible Local Area Network (VXLAN), to create an overlay network as well. However, the additional encapsulation increases latency and instability as the cluster size increases.
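
As a rough back-of-the-envelope illustration of where that encapsulation cost comes from (standard IPv4 header sizes, not measurements from these benchmarks), every VXLAN-encapsulated packet carries roughly 50 extra bytes of headers and pays encapsulation and decapsulation work on both nodes:

```rust
// Illustrative arithmetic only: the per-packet header overhead added by VXLAN
// encapsulation in the common IPv4 case, and the resulting drop in usable MTU.

const OUTER_IPV4: u32 = 20;     // outer IPv4 header
const OUTER_UDP: u32 = 8;       // outer UDP header
const VXLAN_HEADER: u32 = 8;    // VXLAN header
const INNER_ETHERNET: u32 = 14; // encapsulated inner Ethernet header

fn vxlan_overhead() -> u32 {
    OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER + INNER_ETHERNET // 50 bytes per packet
}

fn main() {
    let underlay_mtu: u32 = 1500;
    let inner_mtu = underlay_mtu - vxlan_overhead();
    println!("VXLAN overhead: {} bytes per packet", vxlan_overhead());
    println!("Effective pod-to-pod MTU drops from {underlay_mtu} to {inner_mtu}");
}
```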

To address these challenges, and to support customers who want to run large clusters with many nodes and pods without compromising performance or scale and without the risk of IP exhaustion, we have introduced a new solution: Azure CNI Overlay.

Azure CNI Overlay


Azure CNI Overlay assigns IP addresses from a user-defined overlay address space instead of using IP addresses from the VNET. Routing for this private address space is handled as a native virtual network feature, which means that cluster nodes do not need to perform any extra encapsulation to make the overlay container network work. It also allows the overlay address space to be reused across different AKS clusters, even when they are connected to the same VNET.

When a node joins the AKS cluster, we assign it a /24 IP address block (256 IPs) from the Pod CIDR. Azure CNI assigns IPs to Pods on that node from the block, and under the hood the VNET maintains a mapping of each Pod CIDR block to its node. This way, when Pod traffic leaves the node, the VNET platform knows where to send it. This allows the Pod overlay network to achieve the same performance as native VNET traffic and paves the way to supporting millions of pods across thousands of nodes.
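
To make the bookkeeping concrete, here is a rough sketch of the scheme described above (illustrative only, not the actual AKS or Azure CNI implementation; the function names are made up, and it assumes an IPv4 Pod CIDR with a prefix of /24 or shorter):

```rust
// Carve /24 blocks out of a user-defined Pod CIDR and hand one block to each
// node as it joins; pods on that node then get IPs from the node's block.

use std::net::Ipv4Addr;

/// Return the /24 block (as its network address) at `index` within the Pod CIDR,
/// given as a network address plus prefix length, e.g. 192.168.0.0/16.
/// Assumes prefix_len <= 24.
fn block_for_node(pod_cidr: Ipv4Addr, prefix_len: u32, index: u32) -> Option<Ipv4Addr> {
    let total_blocks = 1u32 << (24 - prefix_len); // e.g. a /16 holds 256 /24 blocks
    if index >= total_blocks {
        return None; // Pod CIDR exhausted
    }
    let base = u32::from(pod_cidr);
    Some(Ipv4Addr::from(base + (index << 8))) // each /24 spans 256 addresses
}

/// Pick the `host`-th usable address inside a node's /24 block.
fn pod_ip(block: Ipv4Addr, host: u32) -> Option<Ipv4Addr> {
    if host == 0 || host > 254 {
        return None; // skip the network and broadcast addresses
    }
    Some(Ipv4Addr::from(u32::from(block) + host))
}

fn main() {
    let pod_cidr = Ipv4Addr::new(192, 168, 0, 0); // 192.168.0.0/16
    // Node 0 gets 192.168.0.0/24, node 1 gets 192.168.1.0/24, and so on.
    let node1_block = block_for_node(pod_cidr, 16, 1).unwrap();
    assert_eq!(node1_block, Ipv4Addr::new(192, 168, 1, 0));
    // The 10th pod scheduled on node 1 gets 192.168.1.10.
    assert_eq!(pod_ip(node1_block, 10), Some(Ipv4Addr::new(192, 168, 1, 10)));
}
```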

Datapath performance comparison


This section looks at some of the datapath performance comparisons we have been running for Azure CNI Overlay.

Note: We used the Kubernetes benchmarking tools available at kubernetes/perf-tests for this exercise. Comparisons can vary based on multiple factors, such as the underlying hardware and node proximity within a datacenter, among others; actual results might vary.

Azure CNI Overlay vs. VXLAN-based Overlay


As mentioned before, the only options for large clusters with many Nodes and many Pods are Azure CNI Overlay and BYOCNI. Here we compare Azure CNI Overlay with a VXLAN-based overlay implementation using BYOCNI.

TCP Throughput – Higher is Better (19% gain in TCP Throughput)


Azure CNI Overlay showed a significant performance improvement over the VXLAN-based overlay implementation. We found that the overhead of encapsulating CNIs was a significant factor in performance degradation, especially as the cluster grows. In contrast, Azure CNI Overlay’s native Layer 3 implementation of overlay routing eliminated the extra encapsulation overhead and showed consistent performance across various cluster sizes. In summary, Azure CNI Overlay is the most viable solution for running production-grade workloads in Kubernetes.

Azure CNI Overlay vs. Host Network


This section covers how Pod networking performs compared with node (host) networking, and shows how the native L3 routing of Pod traffic benefits the Azure CNI Overlay implementation.

Azure CNI Overlay and Host Network show similar throughput and CPU usage results, which reinforces that routing Pod traffic across nodes using the native VNET feature is as efficient as native VNET traffic.

TCP Throughput – Higher is Better (Similar to HostNetwork)


Azure CNI Overlay powered by Cilium: eBPF dataplane


Up to this point, we’ve only looked at the benefits of Azure CNI Overlay on its own. However, through a partnership with Isovalent, the next generation of Azure CNI is powered by Cilium. Some of the benefits of this approach include better resource utilization with Cilium’s extended Berkeley Packet Filter (eBPF) dataplane, more efficient intra-cluster load balancing, Network Policy enforcement that leverages eBPF instead of iptables, and more.
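
The efficiency argument is easiest to see with a toy model (a conceptual sketch only; these data structures are stand-ins, not Cilium or kube-proxy code): iptables-style service handling walks a rule list whose cost grows roughly with the number of Services, while an eBPF dataplane can resolve the same forwarding decision with a constant-time map lookup.

```rust
// Toy comparison of the two lookup strategies: linear rule traversal versus a
// hash-map lookup. Purely illustrative; real dataplanes are far more involved.

use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ServiceKey {
    ip: u32,   // service cluster IP as a raw IPv4 value
    port: u16, // service port
}

/// iptables-style: scan rules one by one until something matches.
fn lookup_linear(rules: &[(ServiceKey, u32)], key: ServiceKey) -> Option<u32> {
    rules.iter().find(|(k, _)| *k == key).map(|(_, backend)| *backend)
}

/// eBPF-style: one hash-map lookup regardless of how many Services exist.
fn lookup_map(map: &HashMap<ServiceKey, u32>, key: ServiceKey) -> Option<u32> {
    map.get(&key).copied()
}

fn main() {
    // Build 10,000 fake Services; the linear path degrades as this grows,
    // while the map path stays constant.
    let rules: Vec<(ServiceKey, u32)> = (0..10_000u32)
        .map(|i| (ServiceKey { ip: 0x0A00_0000 + i, port: 80 }, i))
        .collect();
    let map: HashMap<ServiceKey, u32> = rules.iter().copied().collect();

    let key = ServiceKey { ip: 0x0A00_0000 + 9_999, port: 80 };
    assert_eq!(lookup_linear(&rules, key), Some(9_999));
    assert_eq!(lookup_map(&map, key), Some(9_999));
}
```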

In Azure CNI Overlay powered by Cilium, Azure CNI Overlay sets up IP address management (IPAM) and Pod routing, while Cilium provisions Service routing and Network Policy programming. In other words, Azure CNI Overlay powered by Cilium gives us the same overlay networking performance gains we’ve seen thus far in this blog post, plus more efficient Service routing and Network Policy implementation.

It’s great to see that Azure CNI Overlay powered by Cilium provides even better performance than Azure CNI Overlay without Cilium. The higher pod-to-service throughput achieved with the Cilium eBPF dataplane is a promising improvement, and the added benefits of increased observability and more efficient Network Policy implementation are also important for those looking to optimize their AKS clusters.

TCP Throughput – Higher is better


To wrap up, Azure CNI Overlay is now generally available in Azure Kubernetes Service (AKS) and offers significant improvements over other networking options in AKS, with performance comparable to Host Network configurations and support for linearly scaling the cluster. And pairing Azure CNI Overlay with Cilium brings even more performance benefits to your clusters. We are excited to invite you to try Azure CNI Overlay and experience the benefits in your AKS environment.

Source: microsoft.com