Thursday, 30 November 2023

Democratizing FinOps: Transform your practice with FOCUS and Microsoft Fabric

Cloud computing has revolutionized the way you build, deploy, and scale applications and services. While you have unprecedented flexibility, agility, and scalability, you also face greater challenges in managing cost, security, and compliance. While IT security and compliance are often managed by central teams, cost is a shared responsibility across executive, finance, product, and engineering teams, which is what makes managing cloud cost such a challenge. Having the right tools to enable cross-group collaboration and make data-driven decisions is critical.

Fortunately, you have everything you need in the Microsoft Cloud to implement a streamlined FinOps practice that brings people together and connects them to the data they need to make business decisions. And with new developments like Copilot in Microsoft Cost Management and Microsoft Fabric, there couldn’t be a better time to take a fresh look at how you manage cost within your organization and how you can leverage the FinOps Framework and the FinOps Open Cost and Usage Specification (FOCUS) to accelerate your FinOps efforts.

There’s a lot to cover in this space, so I’ll split this across a series of blog posts. In this first blog post, I’ll introduce the core elements of Cost Management and Fabric that you’ll need to lay the foundation for the rest of the series, including how to export data, how FOCUS can help, and a few quick options that anyone can use to set up reports and alerts in Fabric with just a few clicks.

No-code extensibility with Cost Management exports


As your FinOps team grows to cover new services, endpoints, and datasets, you may find they spend more time integrating disparate APIs and schemas than driving business goals. This complexity also keeps simple reports and alerts just out of reach for executive, finance, and product teams. And when your stakeholders can’t get the answers they need, they push more work onto engineering teams to fill those gaps, which, again, takes away from driving business goals.

We envision a future where FinOps teams can empower all stakeholders to stay informed and get the answers they need through turn-key integration and AI-assisted tooling on top of structured guidance and open specifications. And this all starts with Cost Management exports—a no-code extensibility feature that brings data to you.

As of today, you can sign up for a limited preview of Cost Management exports where you can export five new datasets directly into your storage account without a single line of code. In addition to the actual and amortized cost and usage details you get today, you’ll also see:

◉ Cost and usage details aligned to FOCUS
◉ Price sheets
◉ Reservation details
◉ Reservation recommendations
◉ Reservation transactions

Of note, the FOCUS dataset includes both actual and amortized costs in a single dataset, which can drive additional efficiencies in your data ingestion process. You’ll benefit from reduced data processing times and more timely reporting on top of reduced storage and compute costs due to fewer rows and less duplication of data.

Beyond the new datasets, you’ll also discover optimizations that deliver large datasets more efficiently, reduced storage costs by updating rather than creating new files each day, and more. All exports are scheduled at the same time to ensure scheduled refreshes of your reports will stay in sync with the latest data. Coupled with file partitioning, which is already available and recommended today, and data compression, which you’ll see in the coming months, the exports preview removes the need to write complex code to extract, transform, and load large datasets reliably via APIs. This better enables all FinOps stakeholders to build custom reports to get the answers they need without having to learn a single API or write a single line of code.
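
If you want a feel for how approachable the exported data is, here’s a minimal sketch that loads a partitioned FOCUS export into a pandas DataFrame. The storage account, container, and folder path are hypothetical placeholders, and it assumes a CSV-format export plus the azure-identity and azure-storage-blob packages:

```python
# A minimal sketch: load a partitioned FOCUS cost export from a storage
# account into pandas. The account, container, and folder path below are
# hypothetical placeholders for wherever your export is configured to land.
from io import BytesIO

import pandas as pd
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("cost-exports")

# Exports with file partitioning enabled land as multiple files;
# read each partition and concatenate.
frames = [
    pd.read_csv(BytesIO(container.download_blob(blob.name).readall()))
    for blob in container.list_blobs(name_starts_with="focus/20231101-20231130/")
]
focus = pd.concat(frames, ignore_index=True)

print(focus[["ChargePeriodStart", "ServiceCategory", "EffectiveCost"]].head())
```

From there, stakeholders can pivot, chart, or hand the DataFrame to whatever reporting tool they already know.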

FOCUS democratizes cloud cost analytics


In case you’re not familiar, FOCUS is a groundbreaking initiative to establish a common provider and service-agnostic format for billing data that empowers organizations to better understand cost and usage patterns and optimize spending and performance across multiple cloud, software as a service (SaaS), and even on-premises service offerings. FOCUS provides a consistent, clear, and accessible view of cost data, explicitly designed for FinOps needs. As the new “language” of FinOps, FOCUS enables practitioners to collaborate more efficiently and effectively with peers throughout the organization and even maximize transferability and onboarding for new team members, getting people up and running quicker.

FOCUS 0.5 was originally announced in June 2023, and we’re excited to be leading the industry with our announcement of native support for the FOCUS 1.0 preview as part of Cost Management exports on November 13, 2023. We believe FOCUS is an important step forward for our industry, and we look forward to our industry partners joining us and collaboratively evolving the specification alongside FinOps practitioners from our collective customers and partners.

FOCUS 1.0 preview adds new columns for pricing, discounts, resources, and usage along with prescribed behaviors around how discounts are applied. Soon, you’ll also have a powerful new use case library, which offers a rich set of problems and prebuilt queries to help you get the answers you need without the guesswork. Armed with FOCUS and the FinOps Framework, you have a playbook for understanding and extracting answers from your data, enabling you to empower FinOps stakeholders, regardless of how much knowledge or experience they have, to get the answers they need to maximize business value with the Microsoft Cloud.

Microsoft Fabric and Copilot enable self-service analytics


So far, I’ve talked about how you can leverage Cost Management exports as a turn-key solution to extract critical details about your costs, prices, and reservations using FOCUS as a consistent, open billing data format with its use case library that is a veritable treasure map for finding answers to your FinOps questions. While these are all amazing tools that will accelerate your FinOps efforts, the true power of democratizing FinOps lies at the intersection of Cost Management and FOCUS with a platform that enables you to provide your stakeholders with self-service analytics and alerts. And this is exactly what Microsoft Fabric brings to the table.

Microsoft Fabric is an all-in-one analytics solution that encompasses data ingestion, normalization, cleansing, analysis, reporting, alerting, and more. I could write a separate blog post about how to implement each FinOps capability in Microsoft Fabric, but to get you acclimated, let me introduce the basics.

Your first step to leveraging Microsoft Fabric starts in Cost Management, which has done much of the work for you by exporting details about your prices, reservations, and cost and usage data aligned to FOCUS.

Once exported, you’ll ingest your data into a Fabric lakehouse, SQL, or KQL database table and create a semantic model to bring data together for any reports and alerts you’ll want to create. The database option you use will depend on how much data you have and your reporting needs. Below is an example using a KQL database, which uses Azure Data Explorer under the covers, to take advantage of the performance and scale benefits as well as the powerful query language.
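
To make that concrete, here’s a minimal sketch of querying the ingested FOCUS table from Python with the azure-kusto-data package. The cluster URI, database, and table names are hypothetical placeholders, and it assumes you’re signed in with the Azure CLI:

```python
# A minimal sketch: query FOCUS data in a KQL database.
# Cluster URI, database, and table name are hypothetical placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://<your-eventhouse>.kusto.fabric.microsoft.com"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# Monthly effective cost by service category.
query = """
FocusCosts
| summarize EffectiveCost = sum(EffectiveCost)
    by ServiceCategory, Month = startofmonth(ChargePeriodStart)
| order by Month asc, EffectiveCost desc
"""
response = client.execute("FinOpsDb", query)
for row in response.primary_results[0]:
    print(row["Month"], row["ServiceCategory"], row["EffectiveCost"])
```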

Fabric offers several ways to quickly explore data from a semantic model. You can explore by simply selecting the columns you want to see, but I recommend trying the auto-create a report option, which takes that one step further by generating a quick summary based on the columns you select. As an example, here’s an auto-generated summary of the FOCUS EffectiveCost broken down by ChargePeriodStart, ServiceCategory, SubAccountName, Region, PricingCategory, and CommitmentDiscountType. You can apply quick tweaks to any visual or switch to the full edit experience to take it even further.

Those with a keen eye may notice the Copilot button at the top right. If we switch to edit mode, we can take full advantage of Copilot and even ask it to create the same summary:

Copilot starts to get a little fancier with the visuals and offers summarized numbers and a helpful filter. I can also go further with more specific questions about commitment-based discounts:

Of course, this is barely scratching the surface. With a richer semantic model including relationships and additional details, Copilot can go even further and save you time by giving you the answers you need and building reports with less time and hassle.

In addition to having unparalleled flexibility in reporting on the data in the way you want, you can also create fine-grained alerts in a more flexible way than ever before with very little effort. Simply select the visual you want to measure and specify when and how you want to be alerted:

This gets even more powerful when you add custom visuals, measures, and materialized views that offer deeper insights.

This is just a glimpse of what you can do with Cost Management and Microsoft Fabric together. I haven’t even touched on the data flows, machine learning capabilities, and the potential of ingesting data from multiple cloud providers or SaaS vendors also using FOCUS to give you a full, single pane of glass for your FinOps efforts. You can imagine the possibilities of how Copilot and Fabric can impact every FinOps capability, especially when paired with rich collaboration and automation tools like Microsoft Teams, Power Automate, and Power Apps that can help every stakeholder accomplish more together.

Source: microsoft.com

Thursday, 23 November 2023

Unleashing the Power of Microsoft Azure AI: A Comprehensive Guide

Introduction


In the ever-evolving landscape of technology, businesses strive to stay ahead, leveraging cutting-edge solutions to enhance efficiency and productivity. One such revolutionary tool that has been a game-changer in the realm of artificial intelligence is Microsoft Azure AI. In this comprehensive guide, we delve into the intricacies of Microsoft Azure AI, exploring its functionalities, applications, and how it stands out in the competitive tech ecosystem.

Understanding Microsoft Azure AI


What Sets Microsoft Azure AI Apart?

Microsoft Azure AI is not just another artificial intelligence platform; it's a robust ecosystem designed to empower businesses with intelligent, data-driven insights. What sets it apart is its seamless integration with other Microsoft services, creating a unified environment for comprehensive AI development and deployment.

Key Features and Capabilities

1. Machine Learning at Scale

Microsoft Azure AI boasts unparalleled machine learning capabilities, allowing businesses to scale their AI projects effortlessly. The platform provides a diverse range of pre-built models and tools, enabling developers to build and deploy models with efficiency and speed.

2. Natural Language Processing (NLP)

With NLP capabilities, Microsoft Azure AI excels in understanding and interpreting human language. This feature proves invaluable in developing applications for sentiment analysis, language translation, and chatbots, elevating user experiences to new heights.
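
To ground that in something concrete, here’s a minimal sketch of sentiment analysis with the Azure AI Language SDK for Python (azure-ai-textanalytics). The endpoint and key are hypothetical placeholders:

```python
# A minimal sketch: sentiment analysis with the Azure AI Language service.
# Endpoint and key are hypothetical placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

documents = ["The new dashboard is fantastic, but setup took too long."]
result = client.analyze_sentiment(documents)[0]

print(result.sentiment)  # e.g. "mixed"
for sentence in result.sentences:
    print(sentence.text, sentence.sentiment, sentence.confidence_scores)
```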

3. Computer Vision

The platform's computer vision capabilities empower businesses to extract valuable insights from images and videos. From object recognition to image classification, Microsoft Azure AI opens up possibilities for innovative applications in industries such as healthcare, manufacturing, and retail.
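
As a rough illustration, here’s a minimal sketch of image tagging with the classic Azure AI Vision SDK for Python (azure-cognitiveservices-vision-computervision). The endpoint, key, and image URL are hypothetical placeholders:

```python
# A minimal sketch: image tagging with Azure AI Vision (classic SDK).
# Endpoint, key, and image URL are hypothetical placeholders.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    endpoint="https://<your-vision-resource>.cognitiveservices.azure.com",
    credentials=CognitiveServicesCredentials("<your-api-key>"),
)

analysis = client.analyze_image(
    "https://example.com/factory-floor.jpg",
    visual_features=[VisualFeatureTypes.tags, VisualFeatureTypes.objects],
)
for tag in analysis.tags:
    print(tag.name, round(tag.confidence, 2))
```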

Applications Across Industries


Transforming Healthcare with Microsoft Azure AI

In the healthcare sector, the integration of Microsoft Azure AI has been a game-changer. From predictive analytics for disease outbreaks to personalized treatment plans based on patient data, the platform's capabilities are reshaping the future of healthcare.


Revolutionizing Manufacturing Processes

Microsoft Azure AI plays a pivotal role in revolutionizing manufacturing processes. Predictive maintenance, quality control through computer vision, and supply chain optimization are just a few areas where the platform is making significant strides.

Enhancing Customer Experiences in Retail

Retailers leverage Microsoft Azure AI to enhance customer experiences. From personalized recommendations to efficient inventory management, the platform enables retailers to stay agile in the dynamic world of e-commerce.

Implementing Microsoft Azure AI in Your Business


Getting Started: A Step-by-Step Guide

1. Setting Up Your Azure Account

The first step in harnessing the power of Microsoft Azure AI is setting up your Azure account. Navigate through the user-friendly interface, and in no time, you'll be ready to explore the myriad possibilities.

2. Choosing the Right AI Service

Microsoft Azure AI offers a plethora of AI services. Whether you're diving into machine learning, natural language processing, or computer vision, selecting the right service for your business goals is crucial. Our guide helps you navigate these choices with ease.

3. Developing Your First AI Model

Embark on your AI journey by developing your first model. With comprehensive documentation and a supportive community, Microsoft Azure AI ensures that even those new to AI can navigate the development process seamlessly.
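
To make this step less abstract, here’s a minimal sketch of submitting a training job with the Azure Machine Learning Python SDK (azure-ai-ml). The subscription, resource group, workspace, compute cluster, and training script are hypothetical placeholders, and the curated environment name is an assumption:

```python
# A minimal sketch: submit your first training job to Azure Machine Learning
# with the azure-ai-ml SDK. All names below are hypothetical placeholders.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Wrap a local training script as a command job on a compute cluster.
job = command(
    code="./src",  # folder containing train.py
    command="python train.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",  # assumed curated environment
    compute="cpu-cluster",
    display_name="my-first-model",
)

returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)  # follow training progress in the studio UI
```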

Overcoming Challenges with Microsoft Azure AI


Addressing Concerns and Misconceptions

As with any transformative technology, there are often concerns and misconceptions. In this section, we address common apprehensions about Microsoft Azure AI, providing clarity on security, data privacy, and the integration process.

Conclusion

In conclusion, Microsoft Azure AI stands as a beacon of innovation in the AI landscape. Its robust features, seamless integration, and diverse applications make it a go-to choice for businesses aiming to thrive in the era of digital transformation. Embrace the power of Microsoft Azure AI and propel your business into a future where intelligent insights drive success.

Tuesday, 21 November 2023

Come build with us: Microsoft and OpenAI partnership unveils new AI opportunities

At OpenAI’s first DevDay Conference on November 6, 2023, Microsoft Chairman and CEO Satya Nadella made a surprise appearance during OpenAI CEO Sam Altman’s keynote to deliver a powerful message: “Our job number one is to build the best systems, so you can build the best models and deliver those to developers.” This was a testament to the deep partnership between Microsoft and OpenAI. We’re excited about the latest announcements from OpenAI’s first DevDay event and want to highlight the opportunities it presents for all AI builders.

New models: GPT-4 Turbo on Azure OpenAI Service


We are very enthusiastic about all the new models introduced, including GPT-3.5 Turbo, and updates to existing models such as DALL-E 3 and Whisper 3. Among them, the eagerly awaited GPT-4 Turbo offers lower pricing, extended prompt length, and structured JSON formatting with improved efficiency and control. We’re looking forward to making these great Turbo models available on Azure OpenAI Service by the end of this year in keeping with our standard practice of bringing new model innovation from our partners at OpenAI to the Azure OpenAI Service.
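
For a sense of what the structured JSON formatting looks like in practice, here’s a minimal sketch using the openai Python package against an Azure OpenAI deployment. The endpoint, key, API version, and deployment name are hypothetical placeholders, and JSON mode assumes a model that supports it, such as GPT-4 Turbo:

```python
# A minimal sketch: requesting structured JSON output from a GPT-4 Turbo
# deployment on Azure OpenAI Service. Endpoint, key, deployment name, and
# API version are hypothetical placeholders.
import json

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2023-12-01-preview",
)

response = client.chat.completions.create(
    model="<gpt-4-turbo-deployment>",
    response_format={"type": "json_object"},  # JSON mode
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'sentiment' and 'summary'."},
        {"role": "user", "content": "The keynote announcements were exciting!"},
    ],
)
print(json.loads(response.choices[0].message.content))
```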

Increasing access for all AI Builders


OpenAI’s announcement of lower pricing is significant. It will make the models more accessible and increase their utilization, allowing a broader range of applications to harness their power and ushering in a new era of generative AI. On Azure OpenAI Service, token pricing for the new models will be at parity with OpenAI’s prices.

And in an exciting development, Microsoft made GitHub Enterprise available to all DevDay conference in-person attendees to use for free for 90 days. GitHub Enterprise is a powerful tool for developers, assisting in code completion and development. Its integration with Microsoft’s ecosystem aligns with the mission of helping developers easily bring ideas to life on Azure.

GPTs: New ways to create and monetize


GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others. No coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Just like with plug-ins, we are looking forward to building deep ecosystem support for GPTs, which we’ll share more on next week at our Microsoft Ignite conference.

Microsoft and OpenAI partnership


OpenAI’s introduction of a Custom Models program will be of particular interest to enterprises. Microsoft will continue to offer the convenience of integrating OpenAI’s services seamlessly within Microsoft’s existing ecosystem and support infrastructure, providing a comprehensive solution for all enterprise needs.

Sam Altman, OpenAI’s CEO, echoed the sentiment of a strong and productive partnership with Microsoft. “I think we have the best partnership in tech,” Altman told Nadella onstage.

Nadella went on to talk about the companies’ alignment. “Our mission is to empower every person and every organization on the planet to achieve more. And to me, ultimately, AI is only going to be useful if it truly does empower…it’s about being able to get the benefits of AI broadly disseminated to everyone,” Nadella said.

With these announcements, developers and enterprises are now poised to explore new horizons, empowered by the combined strengths of Microsoft and OpenAI, and the limitless possibilities of generative AI.

Source: microsoft.com

Saturday, 18 November 2023

Azure sets a scale record in large language model training

Azure empowers intelligent services like Microsoft Copilot, Bing, and Azure OpenAI Service that have captured our imagination in recent days. These services, facilitating various applications like Microsoft Office 365, chatbots, and search engines with generative AI, owe their magic to large language models (LLMs). While the latest LLMs are transcendental, bringing a generational change in how we apply artificial intelligence in our daily lives and reason about its evolution, we have merely scratched the surface. Creating more capable, fair, foundational LLMs that consume and present information more accurately is necessary.

How Microsoft maximizes the power of LLMs


However, creating new LLMs or improving the accuracy of existing ones is no easy feat. To create and train improved versions of LLMs, supercomputers with massive computational capabilities are required. It is paramount that both the hardware and software in these supercomputers are utilized efficiently at scale, not leaving performance on the table. This is where the sheer scale of the supercomputing infrastructure in the Azure cloud shines, and why setting a new scale record in LLM training matters.

Figure 1: Scale records on the model GPT-3 (175 billion parameters) from MLPerf Training v3.0 in June 2023 (3.0-2003) and Azure on MLPerf Training v3.1 in November 2023 (3.1-2002). 

Customers need reliable and performant infrastructure to bring the most sophisticated AI use cases to market in record time. Our objective is to build state-of-the-art infrastructure and meet these demands. The latest MLPerf™ 3.1 Training results are a testament to our unwavering commitment to building high-quality and high-performance systems in the cloud to achieve unparalleled efficiency in training LLMs at scale. The idea here is to use massive workloads to stress every component of the system and accelerate our build process to achieve high quality.

The GPT-3 LLM with its 175 billion parameters was trained to completion in four minutes on 1,344 ND H100 v5 virtual machines (VMs), which represent 10,752 NVIDIA H100 Tensor Core GPUs, connected by the NVIDIA Quantum-2 InfiniBand networking platform (as shown in Figure 1). This training workload uses close to real-world datasets and restarts from 2.4 terabytes of checkpoints, closely mirroring a production LLM training scenario. The workload stresses the H100 GPUs' Tensor Cores, direct-attached Non-Volatile Memory Express disks, the NVLink interconnect that provides fast communication to the high-bandwidth memory in the GPUs, and the cross-node 400 Gb/s InfiniBand fabric.

“Azure’s submission, the largest in the history of MLPerf Training, demonstrates the extraordinary progress we have made in optimizing the scale of training. MLCommons’ benchmarks showcase the prowess of modern AI infrastructure and software, underlining the continuous advancements that have been achieved, ultimately propelling us toward even more powerful and efficient AI systems.”—David Kanter, Executive Director of MLCommons 

Microsoft’s commitment to performance


In March 2023, Microsoft introduced the ND H100 v5-series, which completed training a 350 million parameter Bidirectional Encoder Representations from Transformers (BERT) language model in 5.4 minutes, beating our existing record. This resulted in a four times improvement in time to train BERT within just 18 months, highlighting our continuous endeavor to bring the best performance to our users.

Figure 2: Relative size of the models BERT (350 million parameters) and GPT-3 (175 billion parameters) from MLPerf Training v3.1.  

Today’s results are with GPT-3, a large language model in the MLPerf Training benchmarking suite, featuring 175 billion parameters, a remarkable 500 times larger than the previously benchmarked BERT model (figure 2). The latest training time from Azure reached a 2.7x improvement compared to the previous record from MLPerf Training v3.0. The v3.1 submission underscores the ability to decrease training time and cost by optimizing a model that accurately represents current AI workloads.

The power of virtualization


NVIDIA’s submission to the MLPerf Training v3.1 LLM benchmark on 10,752 NVIDIA H100 Tensor Core GPUs achieved a training time of 3.92 minutes. Azure’s result amounts to just a 2 percent increase in training time on virtual machines compared to the NVIDIA bare-metal submission, demonstrating best-in-class virtual machine performance across all offerings of HPC instances in the cloud (figure 3).
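
As a quick sanity check (my arithmetic, not a figure from the submission), a 2 percent overhead on the 3.92-minute bare-metal time lines up with the roughly four-minute Azure result cited earlier:

$$3.92\ \text{minutes} \times 1.02 \approx 4.0\ \text{minutes}$$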

Figure 3: Relative training times on the model GPT-3 (175 billion parameters) from MLPerf Training v3.1 between the NVIDIA submission on the bare-metal platform (3.1-2007) and Azure on virtual machines (3.1-2002). 

The latest results in AI inferencing on Azure ND H100 v5 VMs, from MLPerf Inference v3.1, demonstrate leadership as well. The ND H100 v5-series delivered 0.99x-1.05x relative performance compared to the bare-metal submissions on the same NVIDIA H100 Tensor Core GPUs (figure 4), echoing the efficiency of virtual machines.

Figure 4: Performance of the ND H100 v5-series (3.1-0003) compared to on-premises and bare metal offerings of the same NVIDIA H100 Tensor Core GPUs (3.1-0107 and 3.1-0121). All the results were obtained with the GPT-J benchmark from MLPerf Inference v3.1, scenarios: Offline and Server, accuracy: 99 percent.

In conclusion, the Azure ND H100 v5-series, created for performance, scalability, and adaptability, offers exceptional throughput and minimal latency for both training and inferencing tasks in the cloud, and offers the highest quality infrastructure for AI.

Source: microsoft.com

Thursday, 16 November 2023

Microsoft Azure delivers purpose-built cloud infrastructure in the era of AI

This year’s Microsoft Ignite brings us together to experience AI transformation in action. AI is driving a new wave of innovation, rapidly changing what applications look like, how they’re designed and built, and how they’re delivered. At the same time, business leaders continue to face challenges, needing to juggle various priorities to offset rising costs, be sustainable, and outmaneuver economic uncertainty. Today’s customers are looking for AI solutions that will meet all their needs.

At Ignite, we’re announcing innovation in Microsoft Azure that is powering more AI capabilities for our customers and helping enterprises with their cloud management and operations. We’re committed to bringing your AI ambitions to production and meeting you where you are. Whether you choose to build hybrid, cloud-native, or open source solutions, we’re rapidly expanding our infrastructure and adding intuitive tools for customers to help take your ideas to production safely and responsibly in this new era of AI. 

With Azure, you can trust that you are on a secure and well-managed foundation to utilize the latest advancements in AI and cloud-native services. Azure is adaptive and purpose-built for all your workloads, helping you seamlessly unify and manage all your infrastructure, data, analytics, and AI solutions. 

Powering groundbreaking AI solutions


The era of AI has largely been shaped by an exponential growth in the sophistication of large language models like OpenAI’s GPT trained on trillions of parameters and groundbreaking generative AI services like Bing Chat Enterprise and Microsoft Copilot used by millions of people globally. Azure’s leadership in optimizing infrastructure for AI workloads in the cloud is pioneering this innovation and is why customers like OpenAI, Inflection, and Adept are choosing Azure to build and run AI solutions.

In this new era of AI, we are redefining cloud infrastructure, from silicon to systems, to prepare for AI in every business, in every app, for everyone. At Ignite, we’re introducing our first custom AI accelerator series, Azure Maia, designed to run cloud-based training and inferencing for AI workloads such as OpenAI models, Bing, GitHub Copilot, and ChatGPT. Maia 100 is the first generation in the series, with 105 billion transistors, making it one of the largest chips on 5nm process technology. The innovations for Maia 100 span across the silicon, software, network, racks, and cooling capabilities. This equips the Azure AI infrastructure with end-to-end systems optimization tailored to meet the needs of groundbreaking AI such as GPT.

Alongside the Maia 100, we’re introducing our first custom in-house central processing unit series, Azure Cobalt, built on Arm architecture for optimal performance per watt, powering common cloud workloads for the Microsoft Cloud. From in-house silicon to systems, Microsoft now optimizes and innovates at every layer in the infrastructure stack. Cobalt 100, the first generation in the series, is a 64-bit 128-core chip that delivers up to 40 percent performance improvement over current generations of Azure Arm chips and is powering services such as Microsoft Teams and Azure SQL.

Networking innovation runs across our first-generation Maia 100 and Cobalt 100 chips. From hollow core fiber technology to the general availability of Azure Boost, we’re enabling faster networking and storage solutions in the cloud. You can now achieve up to 12.5 GB/s of throughput and 650K input/output operations per second (IOPS) in remote storage performance to run data-intensive workloads, and up to 200 Gbps of networking bandwidth for network-intensive workloads.

We continue to build our AI infrastructure in close collaboration with silicon providers and industry leaders, incorporating the latest innovations in software, power, models, and silicon. Azure works closely with NVIDIA to provide NVIDIA H100 Tensor Core graphics processing unit (GPU)-based virtual machines (VMs) for mid to large-scale AI workloads, including Azure Confidential VMs. On top of that, we are adding the latest NVIDIA H200 Tensor Core GPU to our fleet next year to support larger model inferencing with no reduction in latency.

As we expand our partnership with AMD, customers can access AI-optimized VMs powered by AMD’s new MI300 accelerator early next year. This demonstrates our commitment to adding optionality for customers in price, performance, and power for all of their unique business needs. 

These investments have allowed Azure to pioneer performance for AI supercomputing in the cloud and have consistently ranked us as the number one cloud on the TOP500 list of the world’s supercomputers. With these additions to the Azure infrastructure hardware portfolio, our platform enables us to deliver the best performance and efficiency across all workloads.

Being adaptive and purpose-built for your workloads


We’ve heard about your challenges in migrating workloads to the public cloud, especially for mission-critical workloads. We continue to work with the technology vendors you’ve relied on to run your workloads, such as SAP, VMware, NetApp, Red Hat, Citrix, and Oracle, and to ensure Azure supports your needs. We’re excited about our recent partnership to bring Oracle Database Services into Azure to help keep your business efficient and resilient.

At Ignite, we’re announcing the general availability of Oracle Database@Azure in the US East Azure region as of December 2023. Customers will now have direct access to Oracle database services running on Oracle Cloud Infrastructure (OCI) deployed in Azure data centers. The new service will deliver all the performance, scale, and workload availability advantages of Oracle Exadata Database Service on OCI combined with the security, flexibility, and best-in-class services of Azure. Microsoft is the only other hyperscaler to offer OCI Database Services to simplify cloud migration, multicloud deployment, and management.

As we’ve observed through our interactions with customers, the durable state of the cloud is one where workloads are supported wherever they’re needed. We realize that cloud migration is not a one-size-fits-all approach, and that’s why we’re committed to meeting you where you are on your cloud journey. An adaptive cloud enables you to thrive in dynamic environments by unifying siloed teams, distributed sites, and sprawling systems into a single operations, application, and data model in Azure.

Our vision for adaptive cloud builds on the work we’ve already started through Azure Arc. With Azure Arc, customers can project their on-premises, edge, and multicloud resources to Azure, deploy Azure native services on those resources, and extend Azure services to the edge.  

We’re excited to make some new announcements that will help customers implement their adaptive cloud strategies. For VMware customers, we’re announcing the general availability of VMware vSphere enabled by Azure Arc. Azure Arc brings together Azure and the VMware vSphere infrastructure enabling VM administrators to empower their developers to use Azure technologies with their existing server-based workloads and new Kubernetes workloads all from Azure. Additionally, we’re delighted to share the preview of Azure IoT Operations enabled by Azure Arc. By using Azure IoT Operations, customers can greatly reduce the complexity and time it takes to build an end-to-end solution that empowers them to make near real-time decisions backed by AI-driven insights to run agile, resilient, and sustainable operations with both Microsoft and partner technologies.

Amplifying your impact with AI-enhanced operations


Every day, cloud administrators and IT professionals are being asked to do more. We consistently hear from customers that they’re tasked with a wider range of operations: collaborating with and managing more users, supporting more complex needs to deliver on increasing customer demand, and integrating more workloads into their cloud environment.

That’s why we’re excited to introduce the public preview of Microsoft Copilot for Azure, a new solution built into Azure that helps simplify how you design, operate, or troubleshoot apps and infrastructure from cloud to edge.

Enabling limitless innovation in the era of AI


Delivering on the promise of advanced AI for our customers requires high-performance computing infrastructure, services, and expertise—things that can only be addressed with the scale and agility of the Microsoft Cloud. Our unique equipment and system designs help us and customers like you meet the challenges of the ever-changing technological landscape. From extending the lifecycle of our hardware and running efficient supply chain operations to providing purpose-built infrastructure in this new era of AI, we can ensure we’re always here to bring your ideas to life in a safe and responsible way.

Source: microsoft.com

Wednesday, 8 November 2023

Microsoft is now a FinOps Certified Service Provider

In an era where cloud computing has become the backbone of modern business operations, efficient financial management is the linchpin that keeps organizations agile and cost-effective. The FinOps Framework has emerged as a powerful approach to optimize cloud costs, allowing organizations to efficiently manage their cloud expenditure. Today, we are thrilled to announce that Microsoft has achieved a milestone that reaffirms our commitment to empowering our customers and partners in their journey towards optimized cloud spending. We are now a FinOps Certified Service Provider. This certification is a testament to our unwavering dedication to providing you with the best-in-class solutions for managing your cloud finances and ensuring that your organization thrives in this era of digital transformation.

FinOps consulting journey at Microsoft


Our journey in FinOps consulting dates back to the early days of Microsoft Azure, where we embarked on a mission to assist organizations in navigating the complex landscape of cloud cost management. Over the years, we have had the privilege of collaborating with countless organizations, ensuring they unlock the full potential of their cloud investments. What truly excites us, however, is the remarkable momentum that the FinOps Foundation has generated. This foundation has played a pivotal role in cultivating a vibrant and inclusive community of FinOps professionals, united by a shared passion for optimizing cloud expenditures.

Together with this dynamic community, we are poised to take the world of FinOps to the next level. Our continued collaboration, knowledge-sharing, and dedication to the cause will not only enhance our collective understanding of cloud financial management but also drive innovation and excellence in this critical domain. With the power of collaboration and the momentum of the FinOps community, we are prepared to shape the future of FinOps, making it more accessible, efficient, and beneficial for all.

At Microsoft, our commitment to you extends throughout the entire service lifecycle. Whether you are a Unified Enterprise Support customer receiving Proactive Services or a Microsoft Industry Solutions Delivery (ISD) customer receiving modernization and innovation enablement for the Microsoft Cloud, we are here to provide the expertise and guidance you need to meet your FinOps goals.

Your goals may be focused on enablement or long-term optimization. We receive many questions from our customers that correspond to each of these goal categories:

Enablement:

  • “I’m looking to improve our financial forecast for the upcoming year.”—Chief Financial Officer.
  • “I’ve been meaning to make our cloud spending more efficient but haven’t had the time.”—Chief Technical Officer.
  • “I’m setting our unit’s KPIs and want to make our operations for the coming quarter leaner.”—Business Unit Lead.
  • “I need to support our leadership in achieving our quarterly goals and make operations more efficient.”—Product/Application Owner.

Long-term optimization:

  • “I’m concerned about the economic downturn and need to improve our bottom line.”—Chief Financial Officer.
  • “I need to reduce my operational cost so that I can free up money for innovative projects.”—Chief Technology Officer.
  • “I need to make sure our group’s strategy is aligned to company goals.”—Business Unit Lead.
  • “I work closely with the product and am responsible for the changes.”—Product/Application Owner.

With these questions and requirements in mind, we have developed a series of offerings that provide the solutions.

FinOps solution offerings at Microsoft


Our Unified Enterprise Support currently has three FinOps offerings:

  • FinOps Introduction
  • FinOps Assessment
  • FinOps Operations for Azure
    • Azure Cost Management Tool Chain
    • Azure Billing Mechanics
    • Azure Cost Management Mechanics
    • Azure Cost Optimization Opportunities

Our Industry Solutions Delivery, Journey to FinOps offering helps our customers optimize their existing landscape, establish a cost-conscious culture, and put supporting governance controls in place to maximize the value of their Azure spend. This offer helps our customers:

  • Understand the underlying drivers of cost by cloud resource types.
  • Uncover the link between current cost and performance and cost optimization levers.
  • Achieve tangible impact/savings through a systematic and continuous cost optimization process while aligning with performance, scalability, and stability goals.
  • Develop or accelerate a cost-conscious organization.

The Microsoft vision for the future of FinOps


Looking forward, we are excited to share our vision for the future of FinOps. Engaging with the FinOps community through the FinOps Foundation Slack and active participation in working groups is a vital part of our strategy. Some of the working groups we actively contribute to include the FinOps Open Cost and Usage Specification (FOCUS) working group, aimed at building and maintaining a common specification for cloud cost, usage, and billing data, and the FinOps Champion working group, focused on creating a FinOps Champion Program. These initiatives demonstrate our commitment to shaping the future of cloud cost management.

We are continuously exploring new ways to enhance your FinOps experience. Our goal is to bring you the latest tools, best practices, and thought leadership that will empower you in the ever-evolving cloud ecosystem. As the cloud landscape continues to evolve, we are dedicated to ensuring that your organization remains at the forefront of FinOps, equipped with the knowledge and tools needed to thrive in this dynamic environment.

Microsoft as a FinOps Certified Service Provider


Microsoft’s certification as a FinOps Certified Service Provider is a significant milestone in the realm of cloud cost management. It highlights the growing importance of FinOps in the cloud industry and underscores the critical role of financial discipline in optimizing cloud spending. By achieving this certification, Microsoft has positioned itself as a leader in cloud cost management, benefiting its customers, the industry, and cloud users around the world.

As organizations continue to embrace cloud computing, the need for effective cloud cost management will only grow. With Microsoft’s commitment to FinOps, businesses can expect greater control over their cloud expenses, ensuring that their cloud investments align with their financial goals and operational needs. The FinOps Foundation recently launched the State of FinOps Survey for 2024 to collect industry data to help organizations better understand FinOps trends and common challenges faced. Please consider taking the time to complete this survey and check out past years’ results.

Your success is our utmost priority, and we encourage you to take action today. Reach out to your dedicated account representative to discover how we can help you achieve your FinOps objectives. Additionally, we invite you to review our publicly available FinOps documentation, which is a valuable resource for in-depth insights into optimizing your cloud finances. You can also actively engage with us in the FinOps Foundation community, where you can connect with fellow professionals, share valuable insights, and stay updated on the latest industry trends.

What’s next for FinOps?


We look forward to the opportunity to meet you in person at Microsoft Ignite and the FinOps Foundation Seattle Roadshow on November 15, 2023. These events provide an excellent platform to network, share experiences, and continue building a brighter and more cost-efficient cloud future together. Your journey to optimized cloud spending starts here, and we are here to support you every step of the way.

Tuesday, 7 November 2023

Building for the future: The enterprise generative AI application lifecycle with Azure AI

In our previous blog, we explored the emerging practice of large language model operations (LLMOps) and the nuances that set it apart from traditional machine learning operations (MLOps). We discussed the challenges of scaling large language model-powered applications and how Microsoft Azure AI uniquely helps organizations manage this complexity. We touched on the importance of considering the development journey as an iterative process to achieve a quality application.

In this blog, we’ll explore these concepts in more detail. The enterprise development process requires collaboration, diligent evaluation, risk management, and scaled deployment. By providing a robust suite of capabilities supporting these challenges, Azure AI affords a clear and efficient path to generating value in your products for your customers.

Enterprise LLM Lifecycle


Ideating and exploring loop


The first loop typically involves a single developer searching a model catalog for large language models (LLMs) that align with their specific business requirements. Working with a subset of data and prompts, the developer will try to understand the capabilities and limitations of each model with prototyping and evaluation. Developers usually explore altering prompts to the models, different chunk sizes and vector indexing methods, and basic interactions while trying to validate or refute business hypotheses. For instance, in a customer support scenario, they might input sample customer queries to see if the model generates appropriate and helpful responses. They can validate this first by typing in examples, but quickly move to bulk testing with files and automated metrics.

Beyond Azure OpenAI Service, Azure AI offers a comprehensive model catalog, which empowers users to discover, customize, evaluate, and deploy foundation models from leading providers such as Hugging Face, Meta, and OpenAI. This helps developers find and select optimal foundation models for their specific use case. Developers can quickly test and evaluate models using their own data to see how the pre-trained model would perform for their desired scenarios.  

Building and augmenting loop 


Once a developer discovers and evaluates the core capabilities of their preferred LLM, they advance to the next loop, which focuses on guiding and enhancing the LLM to better meet their specific needs. Traditionally, a base model is trained with point-in-time data. However, the scenario often requires either enterprise-local data, real-time data, or more fundamental alterations.

For reasoning over enterprise data, Retrieval Augmented Generation (RAG) is preferred: it injects information from internal data sources into the prompt based on the specific user request. Common sources are document search systems, structured databases, and NoSQL stores. With RAG, a developer can “ground” their solution using the capabilities of their LLMs to process and generate responses based on this injected data. This helps developers achieve customized solutions while maintaining relevance and optimizing costs. RAG also facilitates continuous data updates without the need for fine-tuning, as the data comes from other sources.
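
To make the pattern tangible, here’s a minimal RAG sketch, not Azure AI’s implementation: it uses the openai package against hypothetical Azure OpenAI deployments and a toy in-memory document store in place of a real search system:

```python
# A minimal RAG sketch: embed a question, retrieve the closest internal
# document, and inject it into the prompt. Deployment names, endpoint, and
# key are hypothetical placeholders; real systems use a vector index.
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2023-12-01-preview",
)

documents = [
    "Refunds are processed within 5 business days of approval.",
    "Premium support is available 24/7 for enterprise customers.",
]

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="<embedding-deployment>", input=text)
    return np.array(resp.data[0].embedding)

doc_vectors = [embed(d) for d in documents]

question = "How long do refunds take?"
q = embed(question)
# Cosine similarity against each stored document; keep the best match.
scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vectors]
context = documents[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="<chat-deployment>",
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```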

During this loop, the developer may find cases where the output accuracy doesn’t meet desired thresholds. Another method to alter the outcome of an LLM is fine-tuning, which helps most when the nature of the system itself needs to be altered. Generally, an LLM will answer any prompt in a similar tone and format; if the use case requires a consistent change or restriction in the output, such as code or JSON, fine-tuning can be employed to better align the system’s responses with the specific requirements of the task at hand. By adjusting the parameters of the LLM during fine-tuning, the developer can significantly improve output accuracy and relevance, making the system more useful and efficient for the intended use case.

It is also feasible to combine prompt engineering, RAG augmentation, and a fine-tuned LLM. Since fine-tuning necessitates additional data, most users initiate with prompt engineering and modifications to data retrieval before proceeding to fine-tune the model. 

Most importantly, continuous evaluation is an essential element of this loop. During this phase, developers assess the quality and overall groundedness of their LLMs. The end goal is to facilitate safe, responsible, and data-driven insights to inform decision-making while ensuring the AI solutions are primed for production. 

Azure AI prompt flow is a pivotal component in this loop. Prompt flow helps teams streamline the development and evaluation of LLM applications by providing tools for systematic experimentation and a rich array of built-in templates and metrics. This ensures a structured and informed approach to LLM refinement. Developers can also effortlessly integrate with frameworks like LangChain or Semantic Kernel, tailoring their LLM flows based on their business requirements. The addition of reusable Python tools enhances data processing capabilities, while simplified and secure connections to APIs and external data sources afford flexible augmentation of the solution. Developers can also use multiple LLMs as part of their workflow, applied dynamically or conditionally to work on specific tasks and manage costs.  
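
As a small example of the reusable Python tools mentioned above, here’s a minimal sketch of a custom prompt flow tool. It assumes the promptflow package’s @tool decorator, and the cleanup logic is a made-up example:

```python
# A minimal sketch of a reusable Python tool for Azure AI prompt flow.
# The @tool decorator marks this function as a node a flow can call;
# the normalization logic below is a made-up example.
from promptflow import tool

@tool
def normalize_query(question: str) -> str:
    """Trim whitespace and cap length before passing to the LLM node."""
    cleaned = " ".join(question.split())
    return cleaned[:500]
```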

With Azure AI, evaluating the effectiveness of different development approaches becomes straightforward. Developers can easily craft and compare the performance of prompt variants against sample data, using insightful metrics such as groundedness, fluency, and coherence. In essence, throughout this loop, prompt flow is the linchpin, bridging the gap between innovative ideas and tangible AI solutions. 

Operationalizing loop 


The third loop captures the transition of LLMs from development to production. This loop primarily involves deployment, monitoring, incorporating content safety systems, and integrating with CI/CD (continuous integration and continuous deployment) processes. This stage of the process is often managed by production engineers who have existing processes for application deployment. Central to this stage is collaboration, facilitating a smooth handoff of assets between application developers and data scientists building on the LLMs, and production engineers tasked with deploying them.

Deployment allows for a seamless transfer of LLMs and prompt flows to endpoints for inference without the need for a complex infrastructure setup. Monitoring helps teams track and optimize their LLM application’s safety and quality in production. Content safety systems help detect and mitigate misuse and unwanted content, both on the ingress and egress of the application. Combined, these systems fortify the application against potential risks, improving alignment with risk, governance, and compliance standards.  

Unlike traditional machine learning models that might classify content, LLMs fundamentally generate content. This content often powers end-user-facing experiences like chatbots, with the integration often falling on developers who may not have experience managing probabilistic models. LLM-based applications often incorporate agents and plugins to enhance the capabilities of models to trigger some actions, which could also amplify the risk. These factors, combined with the inherent variability of LLM outputs, underscore that risk management is critical in LLMOps.

Azure AI prompt flow ensures a smooth deployment process to managed online endpoints in Azure Machine Learning. Because prompt flows are well-defined files that adhere to published schemas, they are easily incorporated into existing productization pipelines. Upon deployment, Azure Machine Learning invokes the model data collector, which autonomously gathers production data. That way, monitoring capabilities in Azure AI can provide a granular understanding of resource utilization, ensuring optimal performance and cost-effectiveness through token usage and cost monitoring. More importantly, customers can monitor their generative AI applications for quality and safety in production, with scheduled drift detection based on either built-in or customer-defined metrics. Developers can also use Azure AI Content Safety to detect and mitigate harmful content or use the built-in content safety filters provided with Azure OpenAI Service models. Together, these systems provide greater control, quality, and transparency, delivering AI solutions that are safer and more efficient and that more easily meet the organization’s compliance standards.

Azure AI also helps to foster closer collaboration among diverse roles by facilitating the seamless sharing of assets like models, prompts, data, and experiment results using registries. Assets crafted in one workspace can be effortlessly discovered in another, ensuring a fluid handoff of LLMs and prompts. This not only enables a smoother development process but also preserves the lineage across both development and production environments. This integrated approach ensures that LLM applications are not only effective and insightful but also deeply ingrained within the business fabric, delivering unmatched value.

Managing loop


The final loop in the Enterprise Lifecycle LLM process lays down a structured framework for ongoing governance, management, and security. AI governance can help organizations accelerate their AI adoption and innovation by providing clear and consistent guidelines, processes, and standards for their AI projects.

Azure AI provides built-in AI governance capabilities for privacy, security, compliance, and responsible AI, as well as extensive connectors and integrations to simplify AI governance across your data estate. For example, administrators can set policies to allow or enforce specific security configurations, such as whether your Azure Machine Learning workspace uses a private endpoint. Or, organizations can integrate Azure Machine Learning workspaces with Microsoft Purview to publish metadata on AI assets automatically to the Purview Data Map for easier lineage tracking. This helps risk and compliance professionals understand what data is used to train AI models, how base models are fine-tuned or extended, and where models are used across different production applications. This information is crucial for supporting responsible AI practices and providing evidence for compliance reports and audits.

Whether building generative AI applications with open-source models, Azure’s managed OpenAI models, or your own pre-trained custom models, Azure AI facilitates safe, secure, and reliable AI solutions with greater ease on purpose-built, scalable infrastructure.

Explore the harmonized journey of LLMOps at Microsoft Ignite


As organizations delve deeper into LLMOps to streamline processes, one truth becomes abundantly clear: the journey is multifaceted and requires a diverse range of skills. While tools and technologies like Azure AI prompt flow play a crucial role, the human element—and diverse expertise—is indispensable. It’s the harmonious collaboration of cross-functional teams that creates real magic. Together, they ensure the transformation of a promising idea into a proof of concept and then a game-changing LLM application.

Source: microsoft.com

Saturday, 4 November 2023

Introducing Azure Bastion Developer: Secure and cost-effective access to your Azure Virtual Machines

Microsoft Azure is constantly evolving to meet the needs of its growing user base. In response to the feedback and requirements of developers, we have announced a new SKU for Azure Bastion: Bastion Developer. This service, now in public preview, will be a game-changer for developers seeking secure, cost-effective, and hassle-free connectivity to their Azure Virtual Machines. In this blog post, we’ll explore what Azure Bastion Developer is, the problems this new SKU addresses, and why it’s a must-try solution for developers.

What is Azure Bastion Developer?


Azure Bastion Developer is a new low-cost, zero-configuration, always-on SKU of the Azure Bastion service. Its primary mission is to provide secure-by-default Remote Desktop Protocol (RDP) and Secure Shell (SSH) access to Azure Virtual Machines, allowing users to establish secure connections to a single Virtual Machine at a time without the need for additional network configurations or public IP addresses on Virtual Machines. This service is designed to simplify and enhance the process of accessing your Azure Virtual Machines by eliminating the complexities, high costs, and security concerns often associated with alternative methods.

Addressing developer pain points


Azure Bastion Developer has been developed with the aim of addressing three common issues that developers encounter when connecting to Azure Virtual Machines:

1. Discovery

When developers create standalone Virtual Machines, they may not actively seek out Azure Bastion, and it might not be readily apparent during the Virtual Machine creation process. While IT professionals are familiar with the concept of a bastion host or jump-box server, the average Azure user may not be. This could lead to the use of less secure public IP-based access methods. Azure Bastion Developer solves this problem by providing secure and seamless access directly in the Virtual Machine blade. In the coming months, Bastion Developer will populate as the recommended connectivity option in the Virtual Machine connect experience for available regions.

2. Usability

Setting up Azure Bastion has traditionally required users to deploy a new resource and follow a series of configuration steps, including the creation of a dedicated subnet. While these steps might be manageable for technically savvy users, they can be complex and time-consuming for many. Azure Bastion Developer simplifies the process by offering an easy-to-use, zero-configuration solution. Users can opt-in to use it during Virtual Machine connection, making secure access a breeze.

3. Cost

Azure Bastion Basic, while a powerful tool, can be an expensive choice for developers who spend a few hundred dollars or less in Azure each month, leading them to connect with less secure public IP-based options. Azure Bastion Developer addresses this concern by providing an option that comes at a more affordable price point than a public IP. This cost-effective pricing will make Azure Bastion Developer the default private connectivity option in Azure, enabling developers to enjoy secure access without breaking the bank. The public preview of Bastion Developer will be free, with more details on pricing when generally available.

Connectivity Options with Azure Bastion Developer


1. Portal-based access (public preview). Bastion Developer will offer support for RDP connections for Windows Virtual Machines and SSH connections for Linux Virtual Machines in the Azure portal.

2. Native client-based access for SSH (roadmap). Bastion Developer will offer support for SSH connections for Linux Virtual Machines via Azure Command Line Interface (CLI) in the coming months.

Feature comparison of Azure Bastion offerings


Bastion Developer will be a lightweight SKU of the Bastion service, allowing a single connection per user directly through the Virtual Machines connect experience. Bastion Developer is ideal for Dev/Test users who want to securely connect to their Virtual Machines without the need for additional features or scaling. The feature matrix below outlines the differences between Bastion Developer and Bastion Basic and Standard SKUs.

Features                                                      Developer  Basic    Standard
Private connectivity to Virtual Machines                      Yes        Yes      Yes
Dedicated host agent                                          No         Yes      Yes
Support for multiple connections per user                     No         Yes      Yes
Linux Virtual Machine private key in Azure Key Vault (AKV)    No         Yes      Yes
Support for Network Security Groups                           No         Yes      Yes
Audit logging                                                 No         Yes      Yes
Kerberos support                                              No         Yes      Yes
VNet peering support                                          No         Yes      Yes
Host scaling (2-50 instances)                                 No         No       Yes
Custom port and protocol                                      No         No       Yes
Native SSH support via Azure CLI                              Roadmap    Roadmap  Yes
Native RDP support via Azure CLI                              No         No       Yes
Azure Active Directory login for RDP/SSH via native client    No         No       Yes
IP-based connection                                           No         No       Yes
Shareable links                                               No         No       Yes

How to get started


We invite you to preview Azure Bastion Developer in your cloud environment.

1. Navigate to the Azure portal.
2. Deploy a Windows or Linux Virtual Machine in one of the regions below. Note that Bastion Developer is currently only available in the following regions:
  1. Central US EUAP
  2. East US 2 EUAP
  3. West Central US
  4. North Central US
  5. West US
  6. North Europe
3. Navigate to the Bastion tab in the Virtual Machine blade and click Deploy Bastion Developer. (Bastion Basic and Standard deployments will be moved under “Dedicated Deployment Options”).
4. Once your Bastion Developer resource is deployed, enter your Virtual Machine username and password and select Connect to securely connect to your Virtual Machine in the browser.
