Saturday, 30 March 2024

Introducing modern data governance for the era of AI


The era of generative AI has arrived, offering new possibilities for every person, business, and industry. At the same time, the speed, scale, and sophistication of cyberattacks, increasing regulations, an ever-expanding data estate, and business demand for data insights are all converging. This convergence puts pressure on business leaders to adopt a modern data governance and security strategy so they can confidently ensure AI readiness.

A modern data governance and security solution unifies data protection and governance capabilities, simplifies actions through business-friendly profiles and terminology with AI-powered business efficiency, and enables federated governance across a disparate multi-cloud data estate.

Microsoft Purview is a comprehensive set of solutions that can help your organization govern, protect, and manage data, wherever it lives. Microsoft Purview provides integrated coverage and helps address the fragmentation of data across organizations, the lack of visibility that hampers data protection and governance, and the blurring of traditional IT management roles. 

Today, we are excited to announce a reimagined data governance experience within Microsoft Purview, available in preview April 8, 2024. This new software-as-a-service (SaaS) experience offers sophisticated yet simple business-friendly interaction, integration across data sources, AI-enabled business efficiency, and actions and insights to help you put the ‘practice’ into your data governance practice.

Modern data governance with Microsoft Purview 


Over the past several years, I led Microsoft through our own modern data governance journey, an experience that exposed the realities, challenges, and key ingredients of modern data governance.

Our new Microsoft Purview data governance solution is grounded in years of applied learning and proven practices from navigating this data transformation journey along with the transformation journeys of our enterprise customers. To that end, our vision for a modern data governance solution is based on the following design principles: 

Anchored on durable business concepts 

The practice of data governance should enable an organization to accelerate the creation of responsible value from their data. By anchoring data governance investments to measurable business objectives and key results (OKRs), organizations can align their data governance practice to business priorities and demonstrate business value outcomes.

A unified, integrated, and extensible experience 

A modern data governance solution should offer a single-pane-of-glass experience that integrates across multi-cloud data estate sources for data curation, management, health controls, discovery, and understanding, backed with compliant, self-serve data access. The unified experience reduces the need for laborious and costly custom-built or multiple-point solutions. This enables a focus on accelerating data governance practices, activating federated data governance across business units, and ensuring leaders have real-time insights into governance health. 

Scale success with AI-enabled experiences 

An ever-growing and changing data estate demands simplicity in how it is governed, both to ensure business adoption and to drive implementation efficiency. Natural language interactions and machine learning (ML)-based recommendations across governance capabilities are critical to this simplification and to accelerating data governance adoption.

A culture of data governance and protection 

Data governance solutions must be built for the practice of federated data governance, unique to each organization. Just as adopting cloud solutions requires one to become a cloud company, adopting data governance requires one to become a data governance company. Modern data governance success requires C-Suite alignment and support, and must be simple, efficient, customizable, and flexible to activate your unique practice. 

Introducing data governance for the business, by the business 


We are thrilled to introduce the new Microsoft Purview data governance experience. Our new data governance capabilities will help any organization of any size to accelerate business value creation in the era of AI.

A business-friendly approach to govern multi-cloud data estates 

Designed with the business in mind, the new governance experience supports different functions across the business with clear role definitions for governance administrators, business domain creators, data health owners, and data health readers.

Within Data Management, customers can easily define and assign business-friendly terminology (such as Finance and Claims). Business-friendly language follows the data governance experience through Data Products (a collection of data assets used for a business function), Business Domains (ownership of Data Products), Data Quality (assessment of quality), Data Access, Actions, and Data Estate Health (reports and insights). 
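As a purely illustrative sketch (not Microsoft Purview's API or data model), the relationships among these concepts can be pictured in a few lines of Python: a business domain owns data products, and each data product groups the data assets used for one business function.

```python
# Illustrative only: how Business Domains, Data Products, and data assets nest.
from dataclasses import dataclass, field


@dataclass
class DataAsset:
    name: str    # e.g., a table, file, or report
    source: str  # e.g., "AzureSQL" or "Fabric"


@dataclass
class DataProduct:
    name: str                                        # e.g., "Claims Settlement"
    assets: list[DataAsset] = field(default_factory=list)


@dataclass
class BusinessDomain:
    name: str                                        # e.g., "Finance" or "Claims"
    products: list[DataProduct] = field(default_factory=list)


claims = BusinessDomain("Claims", [
    DataProduct("Claims Settlement", [DataAsset("claims_2024", "AzureSQL")]),
])
```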

This new data governance experience allows you to scan and search data across your data estate assets.


Built-in data quality capabilities and rules which follow the data 

The new data quality model enables your organization to set rules top-down across business domains, data products, and the data assets themselves. Policies set on a term or rule flow down through the hierarchy, saving data stewards hours to days of manual work depending on the scale of your estate. Once rules and policies are applied, the data quality model generates data quality scores at the asset, data product, or business domain level, giving you snapshot insights into your data quality relative to your business rules.

Within the data quality model, there are two metadata analysis capabilities: 1) profiling, which provides quick sample-set insights, and 2) data quality scans, which are in-depth scans of full data sets. These capabilities use your defined rules or built-in templates to reason over your metadata and give you data quality insights and recommendations.
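To make the two analysis modes concrete, here is a minimal sketch, assuming a toy tabular dataset and an illustrative rule (none of this is Purview code): profiling samples rows for quick insight, a data quality scan evaluates every row, and the score is simply the fraction of rule checks that pass.

```python
# Minimal sketch of rule-based quality scoring with profiling vs. full scans.
import random

def completeness(value):
    """Rule: the field must not be empty."""
    return value not in (None, "")

def apply_rules(rows, rules, sample_size=None):
    if sample_size is not None and len(rows) > sample_size:
        rows = random.sample(rows, sample_size)   # profiling: quick sample-set insight
    checks = [rule(row[col]) for row in rows for col, rule in rules.items()]
    return sum(checks) / len(checks)              # data quality score in [0, 1]

rows = [{"claim_id": "C1", "payee": "Ann"}, {"claim_id": "C2", "payee": ""}]
rules = {"payee": completeness}
print(apply_rules(rows, rules, sample_size=100))  # profile over a sample
print(apply_rules(rows, rules))                   # full data quality scan
```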


Apply industry standard controls in data estate health management

In partnership with the EDM Council, the new data health controls include a set of 14 standards for cloud data management. The standards govern how data is to be managed, while the controls measure how data assets are actually used and accessed. Examples include metadata completeness, cataloging, classification, access entitlement, and data quality. A data office can configure the rules that determine each score and define what constitutes a red, yellow, or green indicator, ensuring your rules and indicators reflect the unique standards of your organization.
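A hedged sketch of how such configurable indicators might work, with illustrative control names and thresholds (not the actual Purview rule engine):

```python
# Each control maps to (red_below, green_at) thresholds set by the data office,
# so the red/yellow/green indicators reflect the organization's own standards.
THRESHOLDS = {"metadata completeness": (0.6, 0.85), "classification": (0.5, 0.9)}

def indicator(control: str, score: float) -> str:
    red_below, green_at = THRESHOLDS[control]
    if score < red_below:
        return "red"
    return "green" if score >= green_at else "yellow"

print(indicator("metadata completeness", 0.7))  # -> "yellow"
```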


Summarized insights help activate and sustain your practice 

Data governance is a practice which is nurtured over time. Aggregated insights help you put the “practice” into your data governance practice by showcasing the overall health of your governed data estate. Built-in reports surface deep insight across a variety of dimensions: assets, catalog adoption, classifications, data governance, data stewardship, glossary, and sensitivity labels.

The image below is the Data Governance report which can be filtered by business domain, data product, and status for deeper insights.


Stay on top of data governance health with aggregated actions

The new Actions center aggregates and summarizes governance-related actions by role, data product, or business domain. Actions stem from usage or implementation being out of alignment from defined controls. This interactive summary makes it easy for teams to manage and track actions—simply click on the action to make the change required. Cleaning up outstanding actions helps improve the overall posture of your data governance practice—key to making governance a team sport. 


Announcing technology partnerships for even greater customer value 


We are excited to announce a solution initiative with Ernst & Young LLP (EY US), which will bring its extensive experience in financial services data solutions to a collaboration with Microsoft on data governance reports and playbooks purpose-built for US financial services customers. These reports and playbooks aim to accelerate customers' time to value in activating a governance practice that adheres to the unique regulatory needs of the financial sector. The assets will be made available in Azure Marketplace over the course of the preview, and the learnings will also help inform the future product roadmap.

Additionally, a modern data governance solution integrates and extends across your technology estate. With this new data governance experience, we are also excited to announce technology partnerships that will help seamlessly extend the value of Microsoft Purview to customers through pre-built integration. Integrations will light up over the course of preview and be available in Azure Marketplace.

Master Data Management

◉ CluedIn brings native Master Data Management and Data Quality functionality to Microsoft Fabric, Microsoft Purview, and the Azure stack.
◉ Profisee Master Data Management is a complementary and necessary piece of your data governance strategy.
◉ Semarchy combines master data management, data intelligence, and data integration into a singular application in any environment.

Data Lineage

◉ Solidatus empowers data-rich enterprises to visualize, understand, and govern data like never before.

Source: microsoft.com

Thursday, 28 March 2024

Microsoft Azure delivers game-changing performance for generative AI Inference


Microsoft Azure has delivered industry-leading results for AI inference workloads among cloud service providers in the most recent MLPerf Inference results published publicly by MLCommons. The Azure results were achieved using the new NC H100 v5 series virtual machines (VMs) powered by NVIDIA H100 NVL Tensor Core GPUs and reinforced the commitment from Azure to designing AI infrastructure that is optimized for training and inferencing in the cloud.

The evolution of generative AI models


Models for generative AI are rapidly expanding in size and complexity, reflecting a prevailing trend in the industry toward ever-larger architectures. Industry-standard benchmarks and cloud-native workloads consistently push the boundaries, with models now reaching billions and even trillions of parameters. A prime example of this trend is the recent unveiling of Llama 2, which boasts a staggering 70 billion parameters, marking it as MLPerf's most significant test of generative AI to date (figure 1). This monumental leap in model size is evident when comparing it to previous industry standards such as GPT-J, which pales in comparison with roughly 10x fewer parameters. Such exponential growth underscores the evolving demands and ambitions within the AI industry, as customers strive to tackle increasingly complex tasks and generate more sophisticated outputs.

Tailored specifically to address the dense or generative inferencing needs of models like Llama 2, the Azure NC H100 v5 VMs mark a significant leap forward in performance for generative AI applications. The series' purpose-driven design ensures optimized performance, making it an ideal choice for organizations seeking to harness the power of AI with reliability and efficiency. With the NC H100 v5-series, customers can expect enhanced capabilities and new standards for their AI infrastructure, empowering them to tackle complex tasks with ease and efficiency.

Figure 1: Evolution of the size of the models in the MLPerf Inference benchmarking suite.

However, the transition to larger model sizes necessitates a shift toward a different class of hardware that is capable of accommodating the large models on fewer GPUs. This paradigm shift presents a unique opportunity for high-end systems, highlighting the capabilities of advanced solutions like the NC H100 v5 series. As the industry continues to embrace the era of mega-models, the NC H100 v5 series stands ready to meet the challenges of tomorrow’s AI workloads, offering unparalleled performance and scalability in the face of ever-expanding model sizes.

Enhanced performance with purpose-built AI infrastructure


The NC H100 v5-series shines with purpose-built infrastructure, featuring a superior hardware configuration that yields remarkable performance gains compared to its predecessors. Each GPU within this series is equipped with 94GB of HBM3 memory. This substantial increase in memory capacity and bandwidth translates into a 17.5% boost in memory size and a 64% boost in memory bandwidth over the previous generation. Powered by NVIDIA H100 NVL PCIe GPUs and 4th-generation AMD EPYC™ Genoa processors, these virtual machines feature up to 2 GPUs, alongside up to 96 non-multithreaded AMD EPYC Genoa processor cores and 640 GiB of system memory.

In today's announcement from MLCommons, the NC H100 v5 series premiered its performance results in the MLPerf Inference v4.0 benchmark suite. Noteworthy among these achievements is a 46% performance gain over competing products equipped with GPUs of 80GB of memory (figure 2), based solely on the 17.5% increase in memory size (94GB) of the NC H100 v5-series. This leap in performance is attributed to the series' ability to fit large models into fewer GPUs efficiently. For smaller models like GPT-J, with 6 billion parameters, there is a notable 1.6x speedup from the previous generation (NC A100 v4) to the new NC H100 v5. This enhancement is particularly advantageous for customers with dense inferencing jobs, as it enables them to run multiple tasks in parallel with greater speed and efficiency while utilizing fewer resources.

Figure 2: Azure results on the model Llama 2 (70 billion parameters) from MLPerf Inference v4.0 in March 2024 (4.0-0004) and (4.0-0068).

Performance delivering a competitive edge


The increase in performance is important not just compared to previous generations of comparable infrastructure solutions. In the MLPerf benchmark results, Azure's NC H100 v5 series virtual machine results stand out against the other cloud computing submissions. Notably, when compared to cloud offerings with smaller memory capacities per accelerator, such as those with 16GB of memory per accelerator, the NC H100 v5 series VMs exhibit a substantial performance boost. With nearly six times the memory per accelerator, Azure's purpose-built AI infrastructure series demonstrates a performance speedup of 8.6x to 11.6x (figure 3). This represents a performance increase of 50% to 100% for every byte of GPU memory, showcasing the capacity of the NC H100 v5 series. These results underscore the series' capacity to lead the performance standards in cloud computing, offering organizations a robust solution for their evolving computational requirements.
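The sketch below simply redoes the per-byte arithmetic with the numbers stated above (94GB vs. 16GB per accelerator, 8.6x to 11.6x speedups), which lands close to the quoted range:

```python
# Normalize the reported speedups by the memory-capacity ratio.
memory_ratio = 94 / 16                 # "nearly six times the memory": ~5.9x
for speedup in (8.6, 11.6):
    per_byte = speedup / memory_ratio  # speedup per byte of GPU memory
    print(f"{speedup}x overall -> {per_byte:.2f}x per byte "
          f"(~{(per_byte - 1) * 100:.0f}% more performance per byte)")
```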

Figure 3: Performance results on the model GPT-J (6 billion parameters) from MLPerf Inference v4.0 in March 2024 on Azure NC H100 v5 (4.0-0004) and an offering with 16GB of memory per accelerator (4.0-0045), with one accelerator each.

In conclusion, the launch of the NC H100 v5 series marks a significant milestone in Azure’s relentless pursuit of innovation in cloud computing. With its outstanding performance, advanced hardware capabilities, and seamless integration with Azure’s ecosystem, the NC H100 v5 series is revolutionizing the landscape of AI infrastructure, enabling organizations to fully leverage the potential of generative AI Inference workloads. The latest MLPerf Inference v4.0 results underscore the NC H100 v5 series’ unparalleled capacity to excel in the most demanding AI workloads, setting a new standard for performance in the industry. With its exceptional performance metrics and enhanced efficiency, the NC H100 v5 series reaffirms its position as a frontrunner in the realm of AI infrastructure, empowering organizations to unlock new possibilities and achieve greater success in their AI initiatives. Furthermore, Microsoft’s commitment, as announced during the NVIDIA GPU Technology Conference (GTC), to continue innovating by introducing even more powerful GPUs to the cloud, such as the NVIDIA Grace Blackwell GB200 Tensor Core GPUs, further enhances the prospects for advancing AI capabilities and driving transformative change in the cloud computing landscape.

Source: microsoft.com

Tuesday, 26 March 2024

Using Microsoft Azure Virtual Network Manager to enhance network security

Challenges in enforcing company security policy at scale


As one of the largest and most diverse technology companies in the world, Microsoft faces a unique challenge in securing its network. With over 160,000 employees, thousands of devices, and hundreds of applications, the company needs to ensure that its network security policy is consistent, compliant, and effective across the entire organization. This also means that network security policy must be applied across all services, with the ability to modify rules to meet special needs across the entire organization.

However, enforcing network security policy at scale isn’t an easy task. Traditional models of network security administration rely on manual processes, complex configurations, and rigid hierarchies that can introduce human errors, inconsistencies, and bottlenecks. Moreover, these models are not designed to cope with the dynamic and heterogeneous nature of modern networks, where devices, users, and applications can change frequently and unpredictably.

What are the traditional models?


Network security groups (NSGs) are a core component of Microsoft Azure network security, allowing users to define and apply granular rules for inbound and outbound traffic. However, managing NSGs across multiple applications and teams can be challenging, especially when there is a need to enforce some common security policies across the organization. There are three traditional models for managing NSGs:

  • Centralized model—A central governance team manages all the NSGs and their security rules. This ensures consistent and effective security enforcement, but also adds operational overhead and reduces agility.
  • Decentralized model—Individual application teams manage their own NSGs and security rules. This gives them flexibility and autonomy, but also introduces security risks, as the central governance team cannot enforce any critical security rules or audit the compliance of the NSGs.
  • Hybrid model—Individual application teams manage their own NSGs, but with some guidance and oversight from the central governance team. The central team can use Microsoft Azure Policy to create standard rules for the NSGs and monitor the changes made by the application teams. This combines some of the benefits of the centralized and decentralized models, but also has some drawbacks. For example, there is still no hard enforcement of the security policies, and the notifications can be overwhelming and hard to manage.

A new approach to network security with Azure Virtual Network Manager


In the past, Microsoft used a hybrid model of network security, where some NSGs were centrally managed by the governance team, and some were locally managed by the application teams. This model had some drawbacks, such as inconsistency, complexity, and lack of enforceability. To overcome these challenges, Microsoft is moving to a new model based on Azure Virtual Network Manager, which allows the governance team to create and apply admin rules across multiple NSGs, while still enabling the application teams to manage their own NSG rules.

To make managing security rules easier, Azure Virtual Network Manager introduces the concept of a network group: a collection of network resources defined by logical conditions. With Azure Policy, you can define membership rules for your network groups conditionally. Azure Virtual Network Manager integrates with Azure Policy to automatically apply security admin rules to virtual networks that land in these network groups. In the example below, Azure adds to the network group any virtual network carrying the key-value pair environment=production, and the security admin rules automatically apply to those virtual networks.
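For illustration, the policy rule behind this example looks roughly like the following, shown here as a Python dict. The addToNetworkGroup effect reflects Azure Policy's network group support, but treat the exact field names and the placeholder networkGroupId as assumptions to verify against current documentation.

```python
# Sketch of a dynamic network group membership rule (Azure Policy style).
policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Network/virtualNetworks"},
            {"field": "tags['environment']", "equals": "production"},
        ]
    },
    "then": {
        # Matching virtual networks are added to the target network group,
        # where the security admin rules then apply automatically.
        "effect": "addToNetworkGroup",
        "details": {"networkGroupId": "/subscriptions/.../networkGroups/prod-vnets"},
    },
}
```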


This way, security policies are consistently enforced across network groups and resources, without manual intervention.

Using Azure Virtual Network Manager coupled with Azure Policy, Microsoft defines security policies for different units as below and manages them cohesively to make sure both Microsoft and our customers are secured by default.


One of the main use cases of Azure Virtual Network Manager is to create network baselines (policies) for blocking high-risk ports and implementing zero-trust principles. These baselines are important for customers’ security because:

◉ High-risk ports are a list of network applications and the standard Transmission Control Protocol/User Datagram Protocol (TCP/UDP) ports they use that are considered to present a very high security risk to Microsoft and its customers. These ports are often associated with malware, ransomware, or unauthorized access, and should be blocked by default on all NSGs.

◉ Zero-trust baseline is a policy that assumes that all network traffic poses some level of risk, and therefore only allows the minimum required traffic for each service. This is the concept of network security by least privilege. In the past, when new services were released on the physical network, a security review was performed to determine what ports and protocols were absolutely required to be exposed and to what addresses. The routers that the physical computers were behind were then configured to only allow the traffic that was approved by the security review. With the development of Azure Virtual Network Manager, this process can be automated and applied to the entire organization.

By using Azure Virtual Network Manager, the governance team can create and update these network baselines at the network manager level and apply them to multiple NSGs at once, ensuring that critical security policies are enforced across the organization. At the same time, the application teams can still manage their own NSG rules, as long as they do not conflict with the admin rules, allowing them to adapt to their specific needs and scenarios without waiting for the approval or intervention of the central team. This way, Azure Virtual Network Manager provides security for Microsoft and its customers.
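As a hedged sketch of what such a baseline might contain, the following security admin rule denies inbound TCP 3389 (RDP, a commonly cited high-risk port) across a network group. The property names mirror the Azure Virtual Network Manager rule model, but the exact schema and IDs are placeholders to check against current documentation.

```python
# Sketch of a security admin rule that blocks a high-risk port by default.
block_rdp = {
    "name": "deny-rdp-inbound",
    "properties": {
        "priority": 100,
        "direction": "Inbound",
        "access": "Deny",            # admin rules take precedence over NSG allows
        "protocol": "Tcp",
        "destinationPortRanges": ["3389"],
        "sources": [{"addressPrefixType": "IPPrefix", "addressPrefix": "*"}],
        "destinations": [{"addressPrefixType": "IPPrefix", "addressPrefix": "*"}],
    },
}
```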

Source: microsoft.com

Saturday, 23 March 2024

The business impact of migrating Red Hat Enterprise Linux to Azure

Red Hat Enterprise Linux (RHEL) is a leading enterprise Linux platform. Microsoft Azure is trusted by enterprise-class customers with their business-critical applications, and it offers a range of services and solutions to help them innovate and modernize in the cloud. Together, RHEL and Azure deliver a powerful combination of stability, security, scalability, agility, and cost-efficiency.

But don't just take our word for it. Microsoft and Red Hat commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study of Red Hat Enterprise Linux on Azure, based on interviews with customers who have migrated their RHEL workloads to Azure. Using the data derived from these interviews, Forrester analysts created a composite organization representative of actual customers and their economic realities.

Grounded in a composite organization, trusted financial model, and interviews with decision-makers, the new Forrester study shows how Red Hat Enterprise Linux on Microsoft Azure resulted in a 192% return on investment and $11.94 million in benefits present value over three years. Forrester breaks this big number down into tangible value by category, calculating $3.9 million from business continuity savings and $4.4 million in data center cost savings.


Boost performance and business resilience


One of the most significant benefits highlighted in the study is the improved business continuity that results from a reduction in both the frequency and duration of outages. Prior to deploying Red Hat Enterprise Linux on Microsoft Azure, the composite organization experienced 12 outages per year due to hardware failures and natural disasters, with each outage lasting an average of 4 hours. After migrating to Azure, the organization saw a 50% reduction in outage frequency and an 85% reduction in outage downtime. The quantified benefits include a 60% reduction in legacy solution costs and a 40% reallocation of full-time equivalents (FTEs) toward value-added business initiatives.
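One way to read those outage figures, shown as simple arithmetic (our interpretation of the study's percentages, not Forrester's model):

```python
# Annual downtime implied by the study's figures.
outages_before = 12
hours_per_outage = 4
downtime_before = outages_before * hours_per_outage  # 48 hours/year

outages_after = outages_before * (1 - 0.50)          # 50% fewer outages: 6/year
downtime_after = downtime_before * (1 - 0.85)        # 85% less downtime overall
print(f"{downtime_before} h/yr before, {downtime_after:.1f} h/yr after")
```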

Remember, this isn’t theoretical modeling based on unrealistic assumptions: these numbers are derived from data provided by Red Hat and Microsoft customers, based on their actual experiences, direct to Forrester’s seasoned analysts. 

“There are a number of reasons that sold us. First, we have flexibility to use our Red Hat subscriptions on Azure so we can mix and match, which is huge for us. Second, we have guaranteed compatibility with Microsoft running Red Hat Enterprise Linux on Azure. Third, we have access in the different markets, so our facilities can get access to core systems that are locally deployed.” 

—Global director of IT, wholesale retail

Mitigate security risks

Security is table stakes: a top concern, if not a potential barrier, to cloud adoption. Azure provides comprehensive security from code to cloud. Migrating Red Hat Enterprise Linux to Microsoft Azure gave the interviewees' organizations seamless workload transition, efficient data recovery, resource scalability, proactive issue detection, and faster deployment and updates. Furthermore, integrated support for RHEL on Azure between Red Hat and Microsoft ensures a single point of contact for comprehensive support and issue resolution.

“Azure provides all the tools to make your life easier. Microsoft goes out of their way to provide all the security tools and dashboards so you can see what your servers are. We use Defender for Linux, Automanage, and Azure Update Management, which is the patching mechanism that we use.” 

—Product owner for Operating Systems, Energy
 

Simplify procurement

Forrester’s interviews with executive decision makers also highlighted the advantages of Azure Marketplace, which simplifies the procurement process by providing a range of prebuilt solutions and services that can be easily deployed. You can quickly find and purchase the necessary tools and applications to meet specific business needs, accelerating time to market and reducing procurement complexities. 

“With Azure Marketplace, the actual time to market is much faster because you’re not trying to come up with solutions. The solutions are already there for you. You just use the native tools to be able to migrate and build out and those are tools we are already familiar with.”

—Global director of IT, wholesale retail

The road ahead with RHEL

We often talk about technology as a journey, and in this journey Red Hat on Azure propels organizations forward in their digital transformation initiatives. This collaboration offers a holistic solution for modern businesses. Munich Re realized increased flexibility with its SAP on Azure migration, which includes Red Hat Enterprise Linux for SAP Apps: it used to take four to six months to add resources, but with SAP on Azure and infrastructure automation, they can now do it within an hour. AIA, a Hong Kong-based insurance and financial services provider, delivered more than 100 major AI and analytics projects, enhancing multiple areas of its business, by moving to Azure and running its Enterprise Service Bus solution, which is core for the entire company, on Red Hat Enterprise Linux on Azure.

Source: microsoft.com

Thursday, 21 March 2024

Microsoft open sources Retina: A cloud-native container networking observability platform


The Microsoft Azure Container Networking team is excited to announce Retina, a cloud-native container networking observability platform that enables Kubernetes users, admins, and developers to visualize, observe, debug, and analyze Kubernetes workload traffic irrespective of Container Network Interface (CNI), operating system (OS), and cloud. We are releasing Retina as an open-source repository that helps with DevOps- and SecOps-related networking use cases for your Kubernetes clusters, and we invite the open-source community to innovate along with us.

Embracing and advancing open-source software


Cloud-native technologies like Kubernetes have made it easier to build applications that can run anywhere. At the same time, many applications have become more complex, and managing them in the cloud is increasingly difficult. As companies build cloud-native applications composed of interconnected services and deploy them to multiple public clouds as well as their private infrastructure, network-related observability, troubleshooting, and debugging have become increasingly difficult.

With the power of extended Berkeley Packet Filter (eBPF), it is now possible to offer actionable network insights, including how containerized microservices interact, in non-intrusive ways without any changes to the applications themselves. That is exactly what Retina sets out to achieve. Retina will help democratize network observability and troubleshooting by bringing new focus to the experience of application developers. Retina provides developers with simple ways to observe and troubleshoot their applications for issues such as packet drops and latency without worrying about the complexities of the underlying network infrastructure and transformations.

Based on our positive experience in the community with eBPF and Cilium, we are excited to build on this relationship and engage more closely with more communities. We believe that by opening Retina to the community, we can benefit from informed feedback, innovative ideas, and collaborative efforts that will help enhance and expand Retina's capabilities.

Retina solutions and capabilities


Drawing from our extensive experience managing multiple container networking services for Azure Kubernetes Service (AKS), we identified critical gaps in network monitoring: the collection of network metrics and traces from Kubernetes clusters. Retina is a cutting-edge solution that closes these gaps. It is designed to tackle the complex challenges of managing and supporting Kubernetes networks, providing infrastructure and site-reliability engineers with comprehensive insights into cluster networking. Retina also provides deep traffic analysis with Kubernetes-specific context, translating metrics into either industry-standard Prometheus metrics or network flow logs.

Existing open-source solutions are often tightly coupled with specific CNIs, operating systems, or data planes, limiting their versatility and use. For this reason, Retina has been designed and developed as a highly versatile, adaptable, and extensible framework of plugins capable of working seamlessly with any CNI, OS, or cloud provider, making it a valuable addition to any existing toolset. Retina supports both Linux and Windows data planes, ensuring it meets the diverse needs of infrastructure and site-reliability engineers while maintaining a minimal memory and CPU footprint on the cluster, even at scale. Retina's pluggable design ethos lets us easily extend and adapt it to address new use cases without depending on any specific CNI, OS, or data plane.

Figure 1: Architecture overview of Retina

One of Retina's key features is deep network traffic insight, including Layer 4 (L4) metrics, Domain Name System (DNS) metrics, and distributed packet captures. It integrates seamlessly with the Kubernetes app model, offering pod-level metrics with detailed context. It emits actionable networking observability data as industry-standard Prometheus metrics, providing node-level metrics (for example, forward, drop, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Linux utility) and pod-level metrics (such as basic metrics, DNS, and API server latency).

Retina’s distributed packet captures are label-driven—allowing users to specify what, where, and who to capture packets from. Additionally, it provides historical context of network flow logs and advanced debugging capabilities that enhance network troubleshooting and performance optimization.

Our vision for Retina


Many enterprises are multi-cloud and want solutions that work well not just on Microsoft Azure, but on other clouds as well as on-premises. Retina is open-source and multi-cloud from day one. By open-sourcing Retina, we aim to share our knowledge and vision for Kubernetes networking observability with the broader cloud-native ecosystem. Our hope is that Retina will evolve and grow through collaboration with other developers and organizations who share similar experiences and goals in this field.

In terms of architecture, extensibility was key from the outset and will remain so going forward. Retina offers extensibility in data collection, allowing users to easily add new metrics and insights. It also offers extensibility in exporters, enabling users to integrate with other monitoring systems and tools. This flexibility ensures that Retina can adapt to different use cases and environments, making it a versatile and powerful platform for Kubernetes networking observability. In conclusion, we envision Retina as a platform that anyone can contribute to, extend, and innovate on, ultimately creating a robust, purpose-built, and comprehensive solution for Kubernetes networking observability.

Source: microsoft.com

Tuesday, 19 March 2024

Azure AI Health Bot helps create copilot experiences with healthcare safeguards


The generative AI era is driving demand for chatbots and copilots for health that assist patients and medical professionals with various administrative and clinical tasks. These chatbots would potentially use large language models (LLMs) to generate conversational AI chat experiences that can provide accurate and reliable information based on large amounts of medical literature and data.

As a result of the growing demand, many healthcare organizations are striving to build their own healthcare copilot experiences that deliver intelligent and engaging chat experiences leveraging LLMs and generative AI.

In the process, healthcare organizations have realized that as part of healthcare’s unique needs, they need a way to combine the benefits of using generative AI for engaging chat experiences with the benefits of protocol-based flows and custom workflows to provide accurate and relevant information. A hybrid approach that combines both would allow them to offer a more personalized and comprehensive service to their customers and end users.

Moreover, healthcare chat experiences need to leverage domain-specialized models and health-specific safeguards to meet the healthcare industry quality bar.

To address these needs, we're adding new healthcare-specific safeguards for generative AI, now in private preview within the Azure AI Health Bot service. Preview customers can experience an integration with Microsoft Copilot Studio, allowing healthcare organizations to build their own copilot experiences. Customers can sign up for the private preview here. The new experience focuses on:

  • Providing reusable healthcare-specific functionality: providing healthcare-specific, pre-built capabilities, use cases and scenarios—including pre-packaged healthcare intelligence plugins, templates, content, and healthcare-specialized skills and connectors.
  • Answering the unique needs of healthcare: enabling customers to build copilots for their patients and doctors, supporting protocol-based workflows side-by-side with generative AI-based answers, and allowing customers to keep alignment with up-to-date industry standards, guidelines, and protocols.
  • Applying healthcare-specific safeguards: allowing customers to build copilots responsibly adapted to healthcare needs, apply health-adapted compliance controls, and implement health-specific safeguards and quality measures that are specialized for healthcare.

Generative AI capabilities 


In April 2023, we announced the preview of Azure AI Health Bot with Azure OpenAI Service, enabling fallback answers based on generative AI.

Today, we are expanding those capabilities beyond fallback answers, enabling our healthcare customers to further enrich their copilot experiences with the following capabilities in private preview:  

  • Power generative answers grounded on the customer's own sources. The sources are incorporated during the copilot experience, alongside authored descriptive scenarios, protocol-based pre-built flows, and skills. Customers can bring their own Azure OpenAI Service endpoint and index to enable generative answers grounded on their desired sources.
  • Generative answers grounded on the customer's websites. These sources are queried in real time and can include medical guidelines, health articles, patient treatments, frequently asked questions, appointment scheduling information, and more. This approach ensures that patients receive not only medical guidance but also support for the many aspects of their healthcare journey.
  • New healthcare intelligence capabilities to incorporate generative answers grounded on credible healthcare sources, including the National Institutes of Health (NIH), the Food and Drug Administration (FDA), and others.
  • Seamless use of pre-built, protocol-based healthcare intelligence capabilities such as symptom checkers and triage, and a rich gallery of pre-built protocol templates side-by-side with generative AI-based answers.
  • Credible generative AI fallback that ensures reliable and accurate responses in healthcare-related scenarios. In cases where answers are not available, this feature leverages credible content to enhance responses, providing users with reliable guidance backed by clinical Retrieval-Augmented Generation (RAG) support. This helps mitigate potential errors and ensures the delivery of trusted information in healthcare settings.

Built-in safeguards


Azure AI Health Bot with generative AI technology provides built-in healthcare safeguards, now in private preview, for building copilot experiences that fit healthcare's unique requirements and needs. These include:

◉ Clinical safeguards include healthcare-adapted filters and quality checks to allow verification of clinical evidence associated with answers, identifying hallucinations and omissions in generative answers, credible sources enforcement, and more.
◉ Healthcare chat safeguards include customizable AI-related disclaimers that are incorporated into the chat experience presented to users, enabling the collection of end-user feedback, and analyzing the engagement through built-in dedicated reporting, as well as healthcare-adapted abuse monitoring, among other things.
◉ Healthcare-adapted compliance controls include built-in Data Subject Rights (DSRs), pre-built consent management, out-of-the-box audit trails, and more.


Source: microsoft.com

Saturday, 16 March 2024

Foster a more data-driven culture with generative AI in Microsoft Fabric


From the invention of steam power to putting the world's knowledge at our fingertips with the advent of smartphones, the right tools have always sparked transformation. And now we are seeing the potential of the next great shift: the new era of AI. It is one of the most exciting shifts of our generation, and we, along with other leaders, are seeing this impact take shape across individuals, entire teams, and every industry: everything from enterprise chat for better knowledge mining, to content generation and speech analytics, to data analysis that uncovers more insights and makes data more accessible.

Take PricewaterhouseCoopers (PwC) for example, a leader in the professional services sector and a long-time technology innovator. PwC is applying generative AI to acquire, transform, and analyze data faster to better support its employees and provide better audit experiences for clients.

“We’re implementing Microsoft infrastructure to help future-proof NGA design, increasing the potential future adaptability of our assurance services and processes. Generative AI capabilities within the Azure OpenAI Service open up possibilities for us to enable natural language interfaces for enterprise data.” 

—Winnie Cheng, Director of AI in Products and Technology at PwC

Connect your data with Fabric


PwC is not alone. Increasingly, organizations are turning to AI to transform their data cultures for better business outcomes. Traditionally, building this culture requires a few key ingredients:

1. Organizing your data into a logical data mesh to make it easier for users to discover, reuse, and enhance the best data available.
2. Creating a seamless analytics engine that can meet the demands of the business to uncover insights in real-time.
3. Infusing those insights into the applications your people use every day so they can make data-driven decisions.  

These steps are still vital, but now you can employ generative AI to accelerate the path to a data-rich culture by enhancing the productivity of your data teams and making analytics tools more accessible to everyone. In my webinar, Infusing AI into your Data Culture: A Guide for Data Leaders, I walk through exactly how Microsoft can help you accomplish each step along this journey.

First, we’ll explore the blockers preventing users from discovering, accessing, and using data to innovate and make better decisions. I will also show you the promise of Microsoft Fabric’s single, SaaS, multi-cloud data lake, OneLake, designed to connect to any data across the organization and serve everyone who needs access to data in an organized, intuitive data hub. Your data teams can use the OneLake data hub to manage your data, endorse high-quality data to encourage use, and manage access. Users can easily find, explore, and use the data items that they have access to—from inside data tools like Fabric or even applications like Teams and Excel. For more on OneLake, read our e-book, Lakehouse Analytics with Microsoft Fabric and Azure Databricks. 

With your data accessible to those who need it, you also need to equip them with powerful analytics tools that can help them scale to the needs of the business. That’s where Microsoft Fabric further shines. With Fabric, data teams can use a single product with a unified experience and architecture that provides all the capabilities required for analysts to extract insights from data and present them to the business user. Each role in the analytics process has the tools they need, so data engineers, data scientists, data analysts, business users, and data stewards feel right at home. By delivering the experience as a SaaS platform, everything is automatically integrated and optimized, and users can sign up within seconds and unlock significant business value within minutes.  

With your data in a single place and your data teams empowered to uncover insights faster than ever, the next step is to get insights into the hands of everyone in your organization. I’ll show you how Power BI can infuse reports and insights into your apps like Microsoft 365, Dynamics 365, Power Platform, and even third-party apps like Salesforce and SAP.  

And now, as we enter a future built on AI, I’ll walk you through three key ways generative AI can help foster a more data-rich culture: 

1. Take advantage of out-of-the-box experiences like Copilot in Fabric which helps you accelerate the productivity of your data teams.  
2. Employ powerful AI models right from Fabric to draw deeper insights from your data. 
3. Create custom AI experiences, grounded on your data, with native integration between Microsoft Fabric and Azure AI Studio.

Source: microsoft.com

Thursday, 14 March 2024

Accelerate your productivity with the Whisper model in Azure AI now generally available


Human speech remains one of the most complex things for computers to process. With thousands of spoken languages in the world, enterprises often struggle to choose the right technologies to understand and analyze audio conversations while keeping the right data security and privacy guardrails in place. Thanks to generative AI, it has become easier for enterprises to analyze every customer interaction and derive actionable insights from these interactions.

Azure AI offers an industry-leading portfolio of AI services to help customers make sense of their voice data. Our speech-to-text service in particular offers a variety of differentiated features through Azure OpenAI Service and Azure AI Speech. These features have been instrumental in helping customers develop multilingual speech transcription and translation, both for long audio files and for near-real-time and real-time assistance for customer service representatives.


Today, we are excited to announce that OpenAI Whisper on Azure is generally available. Whisper is a speech-to-text model from OpenAI that developers can use to transcribe audio files. Starting today, developers can use the generally available Whisper API in both Azure OpenAI Service and Azure AI Speech on production workloads, knowing that it is backed by Azure's enterprise-readiness promise. With all our speech-to-text models generally available, customers have greater choice and flexibility to enable AI-powered transcription and other speech scenarios.


Since the public preview of the Whisper API in Azure, thousands of customers across industries including healthcare, education, finance, manufacturing, media, and agriculture have been using it to translate and transcribe audio into text across many of the 57 supported languages. They use Whisper to process call center conversations, add captions to audio and video content for accessibility, and mine audio and video data for actionable insights.

We continue to bring OpenAI models to Azure to enrich our portfolio and address the next generation of use-cases and workflows customers are looking to build with speech technologies and LLMs. For instance, imagine building an end-to-end contact center workflow—with a self-service copilot carrying out human-like conversations with end users through voice or text; an automated call routing solution; real-time agent assistance copilots; and automated post-call analytics. This end-to-end workflow, powered by generative AI, has the potential to bring a new era in productivity to call centers around the world.

Whisper in Azure OpenAI Service 


Azure OpenAI Service enables developers to run OpenAI's Whisper model in Azure, mirroring the OpenAI Whisper model's functionality, including fast processing time, multilingual support, and transcription and translation capabilities. OpenAI Whisper in Azure OpenAI Service is ideal for processing smaller files in time-sensitive workloads and use cases.

Lightbulb.ai, an AI innovator looking to transform call center workflows, has been using Whisper in Azure OpenAI Service.

“By merging our call center expertise with tools like Whisper and a combination of LLMs, our product is proven to be 500X more scalable, 90X faster, and 20X more cost-effective than manual call reviews and enables third-party administrators, brokerages, and insurance companies to not only eliminate compliance risk; but also to significantly improve service and boost revenue. We are grateful for our partnership with Azure, which has been instrumental in our success, and we’re enthusiastic about continuing to leverage Whisper to create unprecedented outcomes for our customers.”

Tyler Amundsen, CEO and Co-Founder, Lightbulb.AI

Try out the Whisper REST (representational state transfer) API in the Azure OpenAI Studio. The API supports translation services from a growing list of languages to English, producing English-only output. 
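For developers who prefer to try the API programmatically, here is a minimal sketch using the openai Python package (v1+) against Azure OpenAI; the endpoint, API version, and deployment name are placeholders to replace with your own resource's values:

```python
# Transcribe an audio file with a Whisper deployment in Azure OpenAI Service.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                           # check your resource's supported versions
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
)

with open("meeting.mp3", "rb") as audio_file:
    result = client.audio.transcriptions.create(
        model="whisper",  # the *deployment* name of your Whisper model
        file=audio_file,
    )
print(result.text)
```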

OpenAI Whisper model in Azure AI Speech


Users of Azure AI Speech can leverage OpenAI’s Whisper model in conjunction with the Azure AI Speech batch transcription API. This enables customers to easily transcribe large volumes of audio content at scale for non-time-sensitive batch workloads.

Developers using Whisper in Azure AI Speech also benefit from the following additional capabilities:

  • Processing of large files, up to 1GB in size, with up to 1,000 files in a single request, processed simultaneously.
  • Speaker diarization, which allows developers to distinguish between different speakers, accurately transcribe their words, and create a more organized and structured transcription of audio files.
  • And lastly, developers can use Custom Speech in Speech Studio or via API to fine-tune the Whisper model using audio plus human-labeled transcripts.

Customers are using Whisper in Azure AI Speech for post-call analysis, deriving insights from audio and video recordings, and many more such applications.
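For batch workloads, here is a hedged sketch of submitting a job to the Azure AI Speech batch transcription REST API (v3.1). The request shape reflects the public API surface, but verify property names against current documentation; the Whisper model self-link is a placeholder.

```python
# Submit a batch transcription job to the Azure AI Speech service.
import os
import requests

region = "eastus"
key = os.environ["SPEECH_KEY"]

body = {
    "displayName": "whisper-batch-example",
    "locale": "en-US",
    "contentUrls": ["https://example.com/recordings/call1.wav"],  # up to 1,000 files
    "model": {"self": "<self-link-of-a-Whisper-model>"},          # placeholder
    "properties": {
        "diarizationEnabled": True,  # distinguish speakers in the transcript
    },
}

resp = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
resp.raise_for_status()
print(resp.json()["self"])  # poll this URL for job status and result files
```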

Source: microsoft.com

Tuesday, 12 March 2024

Modernize and build intelligent apps with support from Microsoft partner solutions


AI transformation drives significant business value, as a recent study of over 2000 business leaders and decision-makers found:

  • For every USD1 a company invests in AI, it realizes an average return of USD3.50.
  • Organizations realize a return on their AI investments within 14 months.
  • 92% of AI deployments take 12 months or less. 
  • 71% of respondents say their companies are already using AI.

Clearly, we're witnessing a rapid expansion of AI in which organizations globally are not only unlocking productivity within their businesses but also bending the curve on innovation by building on an open AI platform and partner ecosystem. These organizations are engaging Microsoft experts to build differentiated, intelligent applications and modernize existing, business-critical applications. These intelligent applications use real-time and historical data to deliver personalized and adaptable digital experiences with meaningful outcomes that close the gap between the user's current state and the desired outcome. New or modernized, when built on Microsoft Azure, these applications benefit from one of the largest interconnected networks on the planet, high availability, and trusted security and compliance.

Azure brings together capabilities for modern app development, cloud-scale data, and leading generative AI in one place. Customers see great value using these services together. In a recent Forrester Total Economic Impact of Microsoft Azure App Innovation report, customers saved one to one and a half months when delivering new applications to market, increased developer efficiency by up to 25%, and reduced app downtime by up to 25%. This leads to compelling business benefits such as beating competitors in the innovation race, capturing incremental revenue, minimizing lost revenue and fines from downtime, and increasing the engagement and retention of key talent.

ISV solutions help accelerate your AI transformation


While Azure provides the tools to build and modernize intelligent applications, it's important to consider the broader tech stack. Independent Software Vendor (ISV) solutions complement Azure services by allowing you to meet specific use-case requirements, modernize existing tech stacks onto Azure, and mitigate the need to build new skillsets. If your organization routinely uses ISV solutions as part of the app infrastructure or development process, chances are that you can continue to use them even as you build new apps or modernize existing apps onto Azure. Examples include apps built on Azure Spring Apps or Azure Red Hat OpenShift.

1. Azure Spring Apps Enterprise

Azure Spring Apps Enterprise is a fully managed service for the Spring Framework, built in collaboration with VMware. Building upon the Spring Framework and incorporating features from VMware Tanzu, Azure Spring Apps Enterprise helps accelerate development with ready-made, enterprise-conformant templates. Azure Spring Apps Enterprise offers full integration into Azure’s ecosystem and services, including fully managed infrastructure, built-in app lifecycle management, and ease of monitoring for app development and modernization. If you have existing apps in the Spring Framework, you can efficiently modernize them onto Azure while managing costs and enhancing the apps with AI. Here’s how to get started: Migrate Spring Boot applications to Azure Spring Apps.

2. Azure Red Hat OpenShift

Azure Red Hat OpenShift is a turnkey application platform. It is jointly engineered, operated, and supported by Red Hat and Microsoft. With Azure Red Hat OpenShift, you can deploy fully managed Red Hat OpenShift clusters without worrying about building and managing the infrastructure, and you get ready access to and integration with Azure tools, singular billing, integrated support, and access to committed-spend and discount programs. This increases operational efficiency and time to value, and allows developers to refocus on innovation to quickly build, deploy, and scale applications.

Microsoft also supports pure third-party solutions as part of its ISV ecosystem, to complement native Azure services. While these solutions meet a diverse set of use-cases, ranging from analytics to storage, here’s one that’s likely common to many app development or modernization projects—HashiCorp Terraform.

3. HashiCorp Terraform on Azure

An infrastructure as code tool for provisioning and managing cloud infrastructure, HashiCorp Terraform on Azure allows you to define infrastructure as code with declarative configuration files that can be used to create, manage, and update infrastructure. If your organization currently uses Terraform, developers can use their familiarity with the tool to deploy and manage Azure infrastructure using familiar and consistent syntax and tooling. To support this, HashiCorp offers a library of pre-built modules for Azure services, including Azure AI, Azure Kubernetes Service, and Azure Cosmos DB. And as your developers build new modules, perhaps with GitHub Copilot, those modules can be templatized using HashiCorp Terraform for reuse within your organization, setting up your developer teams for greater productivity and velocity.

Build and modernize apps with Azure and our partner ecosystem


So, as you look through your app infrastructure and decide to modernize your existing apps, any Spring apps or Red Hat OpenShift apps can easily be moved to Azure, with HashiCorp Terraform on Azure to assist. While we have only looked at three solutions in this blog, your preferred vendors are likely part of the Azure ISV ecosystem. Microsoft’s ecosystem of partners also includes partners that specialize in offering services to build custom intelligent apps, with industry-specific experience.

Connect with experts from Azure who can guide you toward an app architecture that utilizes the appropriate technology and services (Microsoft or partner) for your needs.

Source: microsoft.com

Saturday, 9 March 2024

Tackle large volumes of data with new solution from SLB for Azure Data Manager for Energy

Tackle large volumes of data with new solution from SLB for Azure Data Manager for Energy

As the energy industry continues to innovate, companies find themselves balancing the ever-growing demand for energy with the desire to work toward more efficient, sustainable operations. Decision makers spread across the globe rely on accurate data to improve confidence and minimize risk, so harnessing the power of data has become a central tenet of energy companies’ success as they push to evolve. However, the different types of data and the variety of file types that energy companies manage daily make it difficult to access and analyze that data efficiently.

That is why energy companies around the world are using Microsoft technologies, including Azure Data Manager for Energy. This cloud-based OSDU® Data Platform maximizes customer performance by being truly interoperable across the exploration and production data landscape. It can integrate with industry datasets, applications, and other cloud services, making it easier for employees throughout an organization to access the data they need quickly and efficiently. These sources may include other data platforms for asset performance, the Internet of Things (IoT), or production, as well as databases that may still be deployed on-premises before migration to the cloud can take place. Data silos slow productivity and complicate workflows, so optimizing access to high-quality data is crucial for success.
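To illustrate that interoperability, the following Python sketch queries an Azure Data Manager for Energy instance through the standard OSDU® search API. The instance URL, data partition ID, token, and kind value are placeholders, and the endpoint path and payload should be checked against your instance’s OSDU API version.

import requests

# Placeholders: assumes a provisioned Azure Data Manager for Energy instance
# and a valid Microsoft Entra ID bearer token authorized for data access.
INSTANCE = "https://<your-instance>.energy.azure.com"
PARTITION = "<your-data-partition-id>"
TOKEN = "<bearer-token>"

# A standard OSDU search call: list wellbore master-data records.
response = requests.post(
    f"{INSTANCE}/api/search/v2/query",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "data-partition-id": PARTITION,
    },
    json={
        "kind": "osdu:wks:master-data--Wellbore:*",  # assumed well-known schema kind
        "query": "*",
        "limit": 10,
    },
    timeout=30,
)
response.raise_for_status()
for record in response.json().get("results", []):
    print(record.get("id"))

The same API surface is shared by any OSDU-compliant application, which is what lets domain tools and data platforms interoperate over one copy of the data.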

The Microsoft partner ecosystem is a key component in how Microsoft delivers technology, services, and cloud-to-edge solutions for our customers. SLB—one of Microsoft’s strategic partners in the energy sector—designed the Enterprise Data Solution to work seamlessly with Azure Data Manager for Energy, easing data ingestion, management, and discoverability for domain applications.

Enterprise Data Solution from SLB facilitates day-to-day workflows


Designed to reduce the friction of dealing with large volumes of data traditionally stored in separate silos, Enterprise Data Solution from SLB features a user-friendly, inclusive interface that simplifies data ingestion, curation, and access. It allows customers to unlock data with a single click instead of running multiple steps to get the same results, making it easier for a variety of end users to ingest different data types and formats and to access data faster. In addition, the curation of data along this pipeline is aided by several AI-enhanced tools, from streamlined unstructured data ingestion to data quality control, contextualization, and data mastering, reducing the need for manual intervention.


Both traditional and new energy technical workflows benefit when data and application interoperability is achieved. For example, handling large volumes of seismic data (access, ingestion, and streaming) improves dramatically with Enterprise Data Solution from SLB. The typical cost of handling these workloads also drops significantly, speeding up workflows and allowing users to incorporate as much diverse domain data as possible. Geoscientists and engineers can run more scenarios in different applications, compare results with colleagues, and ultimately liberate the most accurate data quickly.

“Having the Enterprise Data Solution from SLB working with ADME is transformative. Workflow efficiency is vastly increased, and decision-making accelerated to levels that are impossible to achieve without it. Pairing the two technologies enables AI to work at full effect in the energy sector. AI plays a dual role: it enhances data quality, accessibility, and utilization, which in turn enables it to drive innovation across the entire planning-to-operations life cycle.”

—Jamie Cruise, Product Leader, SLB

Key features of Enterprise Data Solution from SLB


The key features of Enterprise Data Solution from SLB include:

Ingest: Collect, ingest, and stage data easily from desktop or other file sources for document transcription and parsing. For example, you can drop changed files into a cloud-style file drop and get automated translation into the correct schema (see the sketch after this list). This step is increasingly aided by AI to reduce the manual workload.

Curate: Data quality is at the heart of many of the industry’s challenges and causes a lot of wasted time. Enterprise Data Solution from SLB accelerates the curation and refinement of datasets into trusted data products, liberating them with streamlined data quality tools so they’re ready for consumption. These tools include AI for quality control and mastering.

Discover: Ready-to-use data products enable quick online analysis and screening of data assets, so collections are quick to create and consume from domain workflows. Locating and accessing quality data is critical for decision-making. Enterprise Data Solution from SLB enables workflows that make the right data easy for stakeholders to discover, visualize, and use.

Consume: Enterprise Data Solution from SLB makes it easy for analysts and others to access trusted data products from within their familiar applications and workflows, whether that is the Delfi™ digital platform, Petrel™ subsurface software, or Techlog™ wellbore software. With an intuitive user interface accessible from a web browser, full data lineage and project versioning are always available in an optimized format. Data can be shared and consumed in workflows such as seismic processing, machine learning seismic interpretation, and machine learning property modeling. In addition, this data is now easily consumable in machine learning workbenches such as Delfi Data Science or tooling available in Microsoft Azure. This cuts time to decision and action, critical components of a smooth production workflow.
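As a sketch of the file-drop pattern from the Ingest step above, the Python snippet below stages a file in an Azure Blob Storage container. The connection string, container name, and the assumption that an ingestion pipeline watches the container are illustrative; the actual ingestion flow is configured inside Enterprise Data Solution from SLB.

from azure.storage.blob import BlobServiceClient

# Assumptions for illustration: a storage connection string and a container
# that a downstream ingestion pipeline watches for newly dropped files.
CONNECTION_STRING = "<storage-connection-string>"
service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client("ingest-drop")

# Dropping the file stands in for the "cloud-style file drop"; downstream
# automation would parse it and translate it into the correct schema.
with open("wellbore_logs.csv", "rb") as data:
    container.upload_blob(
        name="incoming/wellbore_logs.csv", data=data, overwrite=True
    )

print("Staged for ingestion: incoming/wellbore_logs.csv")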

Unlock downstream advantages within the Microsoft ecosystem


Once data is in Azure Data Manager for Energy, it conforms to the OSDU® Technical Standard, giving other applications a reliable platform from which to consume and re-ingest data iteratively. The interoperability of Azure Data Manager for Energy improves the accessibility, traceability, and validity of data, allowing domain users and data scientists to deliver business outcomes faster.

Enterprise Data Solution from SLB helps customers take full advantage of Azure Data Manager for Energy by making it more efficient to ingest large amounts of quality, trusted, and traceable data into the platform. Ultimately, Azure Data Manager for Energy’s interoperability empowers customers by harmonizing data that can be leveraged across the Microsoft ecosystem.

Once this data is in the platform, there are many opportunities to take advantage of Microsoft Azure OpenAI Service to drive additional insights and efficiencies further downstream in Microsoft 365 applications. For example, end users can extract tables directly from ingested documents and generate Open XML documents that are ready to use in Microsoft Excel, where they can be more easily visualized.
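As a minimal sketch of that last example, once a table has been extracted from an ingested document, a few lines of Python with the openpyxl library produce an Open XML workbook ready for Microsoft Excel; the table contents here are placeholder values.

from openpyxl import Workbook

# Placeholder rows standing in for a table extracted from an ingested document.
extracted_table = [
    ["Well", "Depth (m)", "Status"],
    ["W-101", 2450, "Producing"],
    ["W-102", 3120, "Suspended"],
]

wb = Workbook()
ws = wb.active
ws.title = "Extracted table"
for row in extracted_table:
    ws.append(row)

# .xlsx is an Open XML format, so the file opens directly in Excel for
# visualization.
wb.save("extracted_table.xlsx")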

Source: microsoft.com