Sunday, 30 January 2022

Microsoft launches landing zone accelerator for Azure Arc-enabled servers

We continue to innovate and add new capabilities to Azure Arc to enable new scenarios in hybrid and multicloud. We also want to provide our customers with the right guidance and best practices to adopt hybrid and multicloud technologies to meet their business needs. Today we’re launching the Azure Arc-enabled servers landing zone accelerator within the Azure Cloud Adoption Framework. The landing zone accelerator provides best practices, guidance, and automated reference implementations so that customers can get started with their deployments quickly and easily.

Azure Arc-enabled servers landing zone accelerator makes it easier for customers to increase security, governance, and compliance posture on servers that are deployed outside of Azure. Along with Azure Arc, services such as Microsoft Defender for Cloud, Azure Sentinel, Azure Monitor, Azure Log Analytics, Azure Policy, and many others are included in the reference implementations that can then be extended to production environments.

Design areas within the landing zone accelerator

The Azure Arc-enabled servers landing zone accelerator supports customers’ cloud adoption journey with the considerations, recommendations, and architecture patterns that matter most to them. To guide deployments of Azure Arc-enabled servers along recommended practices, we created a set of seven critical design areas. Each area walks customers through a set of design considerations, recommendations, architectures, and next steps:

◉ Identity and access management

◉ Network topology and connectivity

◉ Resource organization

◉ Governance and security disciplines

◉ Management disciplines

◉ Cost governance

◉ Automation disciplines

Automation for landing zone accelerator

The Azure Arc landing zone accelerator uses sandbox automation powered by Azure Arc Jumpstart for its reference implementations. Since launching 18 months ago, Azure Arc Jumpstart has grown tremendously, with more than 90 automated scenarios, thousands of visitors a month, and a vibrant open-source community sharing their learnings on Azure Arc. As part of Jumpstart, we developed ArcBox, an automated sandbox environment for all things Azure Arc, deployed in customers’ Azure subscriptions.

Here’s what Kevin Booth, Principal Cloud Architect at Insight, a technology provider, had to say about Jumpstart: “The Azure Arc Jumpstarts have proven invaluable to us at Insight in familiarizing our people and our clients with Azure Arc’s use cases, feature set, and capabilities. We at Insight have taken the Jumpstart scenarios and integrated them into our own IP to help accelerate implementation to more rapidly onboard customers, in a best practice manner.”

For the Azure Arc-enabled servers landing zone accelerator, we developed the new ArcBox for IT Pros, which will act as the sandbox automation solution for Azure Arc-enabled servers with services like Azure Policy, Azure Monitor, Microsoft Defender for Cloud, Microsoft Sentinel, and more.

This gives customers a comprehensive experience: deploy the solution and get a fully operational Azure Arc-enabled servers environment.

The sandbox automation supports Bicep, Terraform, and ARM templates, so customers can choose what makes sense to them and their organizations’ automation practices. This is also part of our new ArcBox 2.0 release.

Source: microsoft.com

Thursday, 27 January 2022

Rightsize to maximize your cloud investment with Microsoft Azure

If you are running on-premises servers, chances are that you utilize a fraction of your overall server cores most of the time but are forced to over-provision to handle peak loads. Moving those workloads to the cloud can greatly reduce cost by “rightsizing” server capacity as needed.

Rightsizing is one of the key levers you have for controlling costs and optimizing resources. By understanding cloud economics, and using what Azure provides, you can identify the smallest virtual server instances that support your requirements and realize immediate savings by eliminating unused capacity.

Many industries experience spikes in server usage. When you rightsize with Azure, you are no longer compelled to buy and provision capacity based on peak demand, which results in excess capacity and excess spending.

For instance, H&R Block found that its servers were used most at specific times of the year, namely tax season, and that maintaining expensive on-premises infrastructure year-round was driving up costs. Once the tax preparer migrated the first 20 percent of its apps and platforms to Azure, the contrast between the variable cost model of the cloud and the fixed model of its on-premises datacenters became clear, and the company reevaluated its architecture.

Rightsizing in the cloud will mean different things to different organizations. One of the first questions to ask is how much of your environment is elastic versus static, to get an idea of savings based on the reduction in footprint. In one example, static usage never rose above 30 percent of capacity, indicating a huge opportunity for savings.
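
The footprint question above lends itself to a quick estimate. Here is a minimal sketch with purely hypothetical numbers (the core count, utilization, and per-core cost are illustrative assumptions, not Azure pricing):

```python
# Back-of-the-envelope rightsizing estimate (hypothetical numbers, not Azure pricing).
# If static usage never exceeds 30% of provisioned capacity, the unused 70%
# is a candidate for savings when moving to rightsized cloud instances.

def rightsizing_savings(provisioned_cores: int, peak_static_utilization: float,
                        cost_per_core_month: float) -> dict:
    """Estimate monthly savings from provisioning only the capacity actually used."""
    needed_cores = provisioned_cores * peak_static_utilization  # capacity actually used
    current_cost = provisioned_cores * cost_per_core_month
    rightsized_cost = needed_cores * cost_per_core_month
    return {
        "current_cost": current_cost,
        "rightsized_cost": rightsized_cost,
        "monthly_savings": current_cost - rightsized_cost,
    }

# 100 provisioned cores, static usage peaking at 30%, $50 per core-month (illustrative)
estimate = rightsizing_savings(100, 0.30, 50.0)
print(round(estimate["monthly_savings"], 2))  # 3500.0
```

The point of the sketch is only that the savings scale with the unused fraction of capacity; a real assessment would use utilization data from a tool such as Azure Advisor or Azure Migrate.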

What does rightsizing look like for you?


Turning off workloads can obviously have an immediate impact on your budget. But how aggressively should you look to trim? Do you always know what is driving the consumption? Are there situations where you cannot immediately rightsize? For workloads still needed, what can be done to optimize those resources?

That optimization can take several forms:

◉ Resizing virtual machines: Business and application requirements evolve, so committing to a specific virtual machine size ahead of time can be limiting.

◉ Shutting down underutilized instances: With workloads in the cloud, use Azure Advisor to find underutilized resources and get recommendations for resource optimization. This tool can also help determine the cost savings from rightsizing or shutting down underutilized virtual machines.

◉ Interrupting workloads with Azure Spot Virtual Machines: You can get deep discounts for interruptible workloads that do not need to be completed within a specific timeframe.

◉ Identifying workloads that need extra capacity: With Azure, it is easier to meet consumption demands. In fact, the process can be largely automated.
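
The interruptible-workload pattern mentioned above can be sketched as a checkpointing batch job that survives an eviction. This is a minimal illustration with a simulated eviction and hypothetical names throughout; a real job on Azure Spot Virtual Machines would instead watch the Scheduled Events metadata endpoint for an eviction notice:

```python
# Sketch of an interruption-tolerant batch job suitable for Azure Spot Virtual
# Machines. The eviction is simulated via the evict_after parameter; everything
# here (checkpoint file, doubling "work") is a hypothetical stand-in.

import json
import os
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "job_checkpoint.json")

def load_checkpoint() -> int:
    """Return the index of the next unprocessed item (0 if no checkpoint exists)."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_item"]
    return 0

def save_checkpoint(next_item: int) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_item": next_item}, f)

def run_batch(items, evict_after=None):
    """Process items, checkpointing progress so an evicted run can resume."""
    start = load_checkpoint()
    processed = []
    for i in range(start, len(items)):
        if evict_after is not None and i - start >= evict_after:
            return processed  # simulated spot eviction mid-run
        processed.append(items[i] * 2)  # placeholder "work"
        save_checkpoint(i + 1)
    return processed

items = list(range(10))
first = run_batch(items, evict_after=4)  # this run is "evicted" after 4 items
second = run_batch(items)                # a new instance resumes from the checkpoint
print(len(first), len(second))           # 4 6
os.remove(CHECKPOINT)
```

Because progress is durable, an eviction costs only the work since the last checkpoint, which is what makes deeply discounted spot capacity practical for such jobs.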

When migrating workloads to Azure, do not treat it as a one-to-one migration of server cores. The cloud is far more flexible, accommodating unpredictable workloads while you pay only for the resources you need. Plan for the peak, but know that you do not have to hold on to that capacity. Under consistently high usage, consumption-based pricing can be less cost-efficient than the equivalent provisioned pricing.

Be sure to consider tradeoffs between cost optimization and other aspects of the design, such as security, scalability, resilience, and operability. When using tools like Azure Advisor, understand that they can only give a snapshot of usage during their discovery period. If your organization experiences large seasonal fluctuations, you can save on provisioning your base workloads, typically your line-of-business applications, by reserving virtual machine instances and capacity with a discount. And when those seasonal patterns and occasional bursts drive up usage, pay-as-you-go pricing kicks in.

For consistent workloads, like a batch process that runs every day using the same resources, you can get reduced pricing by taking advantage of Azure Reservations, which offer discounts of up to 72 percent for reserving your resources in advance.
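
The reservation math can be sketched quickly. This is a minimal illustration with hypothetical rates (the $0.10/hour price and the 730-hour month are assumptions, not Azure pricing; 72 percent is the maximum discount mentioned above):

```python
# Illustrative comparison of pay-as-you-go vs. reserved pricing for a steady
# workload (hypothetical rates; actual Azure prices vary by region and SKU).

def monthly_cost_payg(hourly_rate: float, hours_used: float) -> float:
    """Consumption pricing: pay only for hours actually used."""
    return hourly_rate * hours_used

def monthly_cost_reserved(hourly_rate: float, discount: float,
                          hours_in_month: float = 730) -> float:
    """A reservation bills for every hour of the term, but at a discounted rate."""
    return hourly_rate * (1 - discount) * hours_in_month

rate = 0.10  # $/hour, illustrative

# A batch job running 24x7 at a 72% reservation discount:
payg = monthly_cost_payg(rate, 730)
reserved = monthly_cost_reserved(rate, 0.72)
print(round(payg, 2), round(reserved, 2))  # 73.0 20.44

# Breakeven: the reservation wins once utilization exceeds (1 - discount)
# of the hours in the month.
breakeven_hours = (1 - 0.72) * 730
print(round(breakeven_hours, 1))  # 204.4
```

In this sketch, any workload running more than about 204 hours a month comes out ahead on the reservation, which is why steady line-of-business workloads are the natural candidates.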

And speaking of cost optimization tools, use the Azure Well-Architected Framework to optimize the quality of your Azure workloads. Read the overview of cost optimization to dive deeper into the tools and processes for creating cost-effective workloads. These tools really can help. According to an IDC assessment, Azure customer enablement tools can lower the three-year cost of operations by 24 percent.

Planning for growth no longer means overprovisioning for fear of hitting capacity. When you understand cloud economics and follow the key financial and technical guidance from Azure, your workloads will be much more cost-effective in Azure.

Source: microsoft.com

Tuesday, 25 January 2022

Elastic and Microsoft Azure: Unified Observability for Spring Boot applications

Today, we are announcing the availability of Elastic integrations for unified observability of Spring Boot applications on Azure. You can seamlessly ship Microsoft Azure Spring Cloud logs and metrics into Elastic, instrument Spring Boot applications, and monitor every step of your cloud journey. You also get a holistic view across Spring Boot applications and other logs and metrics in your cloud and on-premises environments.

Over the past two years, we worked with many enterprise customers to learn about the scenarios they face. Many of these customers have thousands of Spring Boot applications running in on-premises data centers. As they migrate these applications to the cloud, they need to aggregate logs and metrics from these applications and instrument them for application performance monitoring (APM) using solutions their developers are familiar with and have been using for years. In addition, they must ensure continuity for existing server-side software that is already shipping logs and metrics and is pre-instrumented for end-to-end monitoring using systems like Elastic. You can gain deeper application visibility, reduce the time spent on root cause analysis, and provide a consistent customer experience in your web and mobile applications. Learnings from a 2021 survey also indicated that “end-to-end monitoring” is the second biggest challenge DevOps and IT managers face as they migrate Spring Boot applications to the cloud. With the integration of Azure Spring Cloud logs and metrics in Elastic, you can streamline your journey and easily instrument your Spring Boot applications for unified observability.

Figure 1: Video screenshot with dashboards showing observability for Spring Boot applications using Elastic

Elastic can now seamlessly ingest logs and metrics from Microsoft Azure Spring Cloud so that DevOps and SREs can monitor their Spring Boot applications—whether in the cloud, on-premises, or in hybrid environments—and remove monitoring silos to increase operational efficiency and developer productivity. DevOps and SREs can monitor both Spring Boot logs and application metrics across all environments by automatically shipping them to Elastic. Teams can also correlate and add context to their Spring data with other observability data sets across their environments for holistic end-to-end monitoring.

"Microsoft is committed to making it easier for customers to modernize their Java applications in the cloud. The expanded native integration of Elastic on Azure includes support for Azure Spring Cloud that enables customers to simply achieve end-to-end observability of their Spring Boot applications."—Julia Liuson, President, Developer Division, Microsoft

Shipping Azure Spring Cloud logs to Elastic


Native integration makes it seamless to ship logs to the Elastic Observability solution. Once data is shipped into Elastic, you can jumpstart analysis in minutes with turnkey visualizations and dashboards.

Figure 2: Diagnostic Setting blade for Azure Spring Cloud Service

Instrumenting Spring Boot applications


Run a "provisioning" automation for a completely hands-off experience for instrumenting and monitoring any new applications that you create and deploy using Elastic’s Terraform or ARM template. Or you can run it on-demand using the Azure command-line interface (CLI) for greater flexibility and control.

Figure 3: Example of setting up Elastic instrumentation using the Azure CLI (command-line interface)

Analyzing Spring Boot application performance


After instrumenting your Spring Boot application and shipping its logs, you can use Elastic Observability to monitor, analyze, and search your data to keep your applications performing well.

You can also use the out-of-the-box Azure Spring Cloud dashboards within Elastic Observability to gain instant insights from your Azure Spring Cloud logs.

Figure 4: Out-of-the-box dashboards for Azure Spring Cloud Service logs in Elastic Observability

With Elastic application performance monitoring (APM), you can analyze and search your application performance data.

Figure 5: Service Map for customer-service from Spring Petclinic Application

You can also use Elastic APM to look at Java Virtual Machine (JVM) metrics to understand heap memory usage, system memory usage, CPU usage, and other metrics that help you make evidence-based capacity planning decisions.

Figure 6: JVM metrics in Elastic APM for customers-service

Utilizing the distributed tracing capabilities of Elastic APM, you can drill further into the performance of your applications to understand which transactions impact your customers the most and where in those transactions the most time is being spent.

Figure 7: Distributed tracing in Elastic APM for customers-service's GET owners operation

Machine learning automatically detects anomalies


With Elastic’s built-in machine learning capabilities, extracting new insights from your Azure Spring Cloud data is as simple as clicking a button—making machine learning truly operational.

The Elastic Stack processes data upon ingestion, ensuring that you have the metadata you need to identify root causes or add context to any event. Built-in tools like Data Visualizer help you find the jobs you're looking for and identify fields in your data that would pair well with machine learning.

You can derive actionable insights from your logs and APM data with the rich alerting and anomaly detection capabilities within Elastic Observability. For example, you can uncover irregular log rates or unusual behavior in the response times of your applications in Azure Spring Cloud.

Figure 8: Anomaly Explorer in Elastic

Use the same logs to stop threats at cloud scale


Since Elastic offers a common schema and single repository, the same observability data from Spring applications and other data sets can also be used for extended detection and response (XDR) to drive mean time to detection towards zero. Elastic Security brings together security information and event management (SIEM) and endpoint security, allowing organizations to ingest and retain large volumes of data from diverse sources, store and search data for longer, and augment threat hunting with detections and machine learning.

Focus on customer value while we keep the lights on


The native integration of Elastic in Azure is the easiest and fastest way to get started with Elastic on Azure. Quickly ship your Azure Spring Cloud logs and APM data to Elastic and access any type of data, from anywhere, when you need it most—whether your data is on-premises, or in multicloud or hybrid cloud environments. This holistic view helps streamline migrating your Spring Boot applications to the cloud. Plus, Elastic Cloud handles the security, maintenance, and upkeep, allowing you to drive your business forward with increased visibility, improved operations, and cyber resilience.

Build your solutions and monitor them today


Azure Spring Cloud is jointly built, operated, and supported by Microsoft and VMware. It is a fully managed service for Spring Boot applications that abstracts away the complexity of infrastructure and Spring Cloud middleware management, so you can focus on building your business logic and let Azure take care of dynamic scaling, patches, security, compliance, and high availability. With a few steps, you can provision Azure Spring Cloud, create, deploy, and scale Spring Boot applications, and start monitoring in minutes. We will continue to bring more developer-friendly and enterprise-ready features to Azure Spring Cloud. We would love to hear how you are building impactful solutions using Azure Spring Cloud.

Source: microsoft.com

Saturday, 22 January 2022

What does “open” really mean to you?

In the world we live in today, almost every announcement has the word “open” in it. Unfortunately, it feels like there are as many definitions of what “open” means as there are announcements mentioning it. Sometimes it means “we have documented our closed-source SDK from our 100 percent proprietary product”, but nothing more. Allow me to give you my definition.

When we started the Industrial IoT team eight years ago, our team had already been working in the automotive and manufacturing spaces for a long time. Back then, we were called the “embedded” team. The replacement term “IoT” came much later. During these early years, we learned that customers in the manufacturing space were tired—very tired. They were tired of getting locked into proprietary ecosystems dominated by a single vendor who was charging them big bucks and yet they couldn’t go elsewhere as their data and interfaces were locked to that vendor’s systems.

We knew we had to do things differently. Satya Nadella had taken over the company and empowered everyone working for Microsoft to embrace openness and focus on doing what’s right for the customer instead of our bottom line. Increasing our bottom line then became a byproduct. This is a subtle but important distinction. The share price of Microsoft today is an indication of the success of this strategy and has proven him, and by extension us, right.

Giving customers the freedom to choose

So what, exactly, did we do? The .NET Framework had just been made open-source and cross-platform, so we wanted to do that, too. But we wanted to go even further. We wanted to really give our customers the chance to leave at any time. Yes, you read that right—allowing your customer to leave builds trust and increases the value of your product since you are not trying to limit them but are convinced that they will stay simply because the product you are selling is so good.

Building trust through contribution

Enabling them to leave meant enabling them to take their applications and their data with them. That meant using open interfaces throughout, rather than building SDKs customers had to integrate, and using open data models everywhere. With that in mind, the obvious choice of who to work with was the very organizations responsible for the open data models and interfaces we wanted to build into our products. As it turned out, these organizations really needed help with their open-source reference implementations anyway and welcomed us with open arms. We committed to contributing as much as possible to these organizations and, in turn, to building the reference implementations we contributed back into our products, thereby endorsing their quality. This strategy was the foundation of great partnerships with many consortia that last to this day and in many cases keep getting better and better.

Eight years later, Microsoft has contributed over 4.6 million lines of code to the OPC Foundation and another 174 thousand lines of code to the Digital Twin Consortium, making us by far the largest contributor of open-source software to both organizations. The over 50 thousand lines of code we contributed to the recently launched UA Cloud Library are a small but important part of that. Analysts all over the world are taking note, and it is a big reason they put us on top of everyone else time and time again in their Magic Quadrant publications.

Perhaps it is time for you to take note, too, and figure out for yourself what “open” really means to you.

Source: microsoft.com

Thursday, 20 January 2022

Delivering the connected shopping experience: How Microsoft and Avanade are reimagining retail

Consumers today demand flexibility and convenience in how they purchase everything from groceries to home goods. In addition to shopping online, they are increasingly using mobile devices and other digital services to make in-store shopping more efficient and convenient. In response, retailers are striving to reimagine their physical store footprints to drive new and improved shopping experiences.

Leading retailers are rapidly developing new solutions to deliver a retail experience that drives loyalty, both online and in stores.

Common challenges to implementing successful solutions include long deployment times for custom applications, integration of disparate point solutions, and scaling new solutions across physical stores in a timely manner.

Optimizing in-store experiences with Avanade Intelligent Store


We recently collaborated with Avanade to create a modular solution called Avanade Intelligent Store that helps retailers overcome these hurdles. Avanade specializes in providing digital and cloud services, business solutions, and design-led experiences using Microsoft technologies. This solution, built on Microsoft Azure, integrates physical and digital in-store use cases and allows retailers to optimize the customer experience across channels.

Retailers can select from pre-defined Azure offerings, including out-of-stock detection, digital signage, and virtual queue management. Avanade’s Intelligent Store enables repeatable, scalable in-store experiences delivered in a modular way and also offers a managed services option. With this model, retailers can switch services on or off as their needs fluctuate over time across different store locations.

The heart of the Avanade solution is a digital twin—a digital representation of a physical store environment, updated in real time—that enables up-to-date visibility, seamless automation, and the use of data to enhance shopper experiences throughout the buying journey. As a result, retailers can meet multiple omnichannel use cases, including traditional shopping, online shopping and delivery, or online shopping and curbside pickup.

Using Azure for omni-channel optimization helps retailers deliver compelling brand experiences across devices and channels, understand preferences more deeply, and craft sales offers based on needs. Data across all physical and virtual apps and channels becomes an input for analytics, with services such as Azure Synapse Analytics and managed databases that help you eliminate data silos to accelerate insights. You can use this data to modernize apps and web experiences, bringing the power of AI to build advertisements that resonate with customers. You can also optimize in-store experiences with a centralized management platform, and deploy web applications with high availability and auto-scaling, without the need to manage physical infrastructure.

Swedish grocery store ICA-Roslagstull is using Avanade Intelligent Store to collect shopping data from sensors on cameras and on cooler and freezer compressors. That data, collected in Azure, is then sent in real time to store employees’ smartwatches or other mobile devices.

Armed with this information, store workers can immediately see if customers need assistance or if shelves need stocking. This presents a better overall experience by shortening wait times and ensuring accurate inventory, while also encouraging shoppers to spend more time in the store and engage with the brand both digitally and physically.

Driving a personalized, connected shopping experience


By implementing Intelligent Store solutions, retailers can create a connected and curated experience for customers, offering personalization and product recommendations while gaining real-time inventory insights and inventory optimization. With these capabilities, retail companies can differentiate the shopping experience, bring their brand to life, and increase efficiency and profitability.

Source: microsoft.com

Tuesday, 18 January 2022

Powering all your applications from cloud to edge with Azure infrastructure

Organizations are transforming from cloud to edge, migrating and optimizing existing workloads, building new cloud-native apps, unlocking new scenarios at the edge, and combining these strategies to meet a diverse set of business needs. Microsoft is committed to helping at every step of the way with continuous technology innovation.

Today, we’re announcing product updates and enhancements across the Azure infrastructure portfolio that provide better performance, scalability, and security. We're enabling new scenarios to meet your business needs, offering more flexibility and choices for long-term success.

Achieve performance, scalability, and security

No matter what application workloads you run in the cloud, the performance, scalability, and security of the underlying cloud infrastructure are critical to success. To that end, Azure continues to innovate with new infrastructure as a service (IaaS) capabilities to empower you.

Gaining better price-performance with new Azure Virtual Machines (VMs). New Intel-based Dv5 and Ev5 VMs are generally available and deliver better price-performance compared to prior generations. New AMD-based Dasv5 and Easv5 VMs are also available, delivering better price-performance over previous generations and providing options without local disk storage for lower price points. And we are also excited to announce the preview of the new memory-optimized Ebsv5 VM series, which delivers higher remote storage performance (up to 4,000 MB/s) compared to prior VM generations. As our VM portfolio expands, use the VM selector to easily identify the right VM and disk storage options for different workloads.

Increasing application availability with Azure Virtual Machine Scale Sets flexible orchestration mode. Generally available, this new capability gives you full control of the individual VMs within a scale set while increasing application resiliency at scale across thousands of VMs.

Enhancing scalability and performance with new Azure storage capabilities. On-demand disk bursting, now generally available, lets you boost disk performance above the provisioned limit when needed to handle increased demand. With Azure Disk Storage live resize capability in preview, you can dynamically increase capacity without downtime to easily adapt to changes in demand.

Transparent network appliance insertion at scale. With the new Gateway Load Balancer in preview, you can scale and deploy third-party network appliances with ease and automatically reroute traffic to the healthy appliance instance to ensure the high availability and reliability of applications.

Managing virtual networks at scale. Azure Virtual Network Manager, also in preview, is the one-stop-shop for centralized network management. The highly scalable and available network management solution enables you to easily create and manage virtual network topologies and protect your network resources at scale. 

Continued innovation to deliver unmatched security. With Trusted Launch for all second-generation Azure VMs, it’s easier than ever to harden workloads and protect against malicious attacks. To further ensure privacy and security, you can lift and shift existing workloads into new confidential VMs, available in preview. Also in preview, Azure Kubernetes Service (AKS) on Intel SGX VMs and AMD SEV-SNP VMs helps secure orchestration of confidential containers. To ensure maximum network security, a new Azure Bastion Standard SKU, IPv6 support for private peering, and advanced MACsec support are all generally available. Additional network security enhancements include the preview of expanded ExpressRoute FastPath support and a new Application Gateway WAF engine that provides better performance.

Migrate, modernize, or optimize your workloads

Most organizations run a diverse set of applications—ranging from traditional and specialized workloads to modern applications—each with different requirements and a different cloud adoption strategy. Our mission is to provide the platform capabilities to power all your applications. Below are highlights of our latest product updates that address our customers' varied workload needs.

Simplifying IT operations for Windows Server and Linux workloads. Azure Automanage is used by IT and DevOps to automate and optimize IT management. Now generally available, extended network for Azure and SMB over QUIC—capabilities of Azure Automanage for Windows Server—help simplify Windows Server workload migration. In preview, new enhancements such as custom configuration profiles and support for Azure Arc-enabled servers provide you more flexibility in managing Windows and Linux VMs.

More regional coverage and capabilities for migration with Azure VMware Solution. Today, we’re announcing the expansion of Azure VMware Solution to the Germany West region, making the solution now available in 18 Azure regions globally. We also recently launched Azure Disk pool integration for Azure VMware Solution in preview, allowing you to scale Azure Disk Storage for data-intensive workloads. Additionally, newly expanded workload scenarios include supporting Citrix Virtual Desktop Infrastructure on Azure VMware Solution.

Optimizing remote work with new Azure Virtual Desktop enhancements. Azure Virtual Desktop is the only cloud VDI solution that supports the full Windows 10 and Windows 11 experience, along with multi-session support to host multiple users per Azure VM. To support further cost optimization, we’re announcing the preview of autoscale support, which will help organizations schedule availability around peak workloads and pay only for the resources they need. For data and application workloads you want to run locally, Azure Virtual Desktop for Azure Stack HCI in preview can help.

Modernizing with cloud-native technology on Azure. Azure Kubernetes Service (AKS), one of the fastest-growing services on Azure, lets you deploy and manage containerized applications more easily with a fully-managed Kubernetes service. Attend Microsoft Into Focus: Digital and app innovation and learn about the latest AKS and cloud-native updates including the preview of Azure Container Apps and the general availability of the Open Service Mesh integration for AKS.

More options for migration and modernization. We recently announced a new app containerization tool, improved discovery and assessment capabilities for SQL and .NET web modernization, and the general availability of agentless software inventory and dependency analysis in Azure Migrate. We’ve also further simplified the Azure Migrate appliance onboarding experience. Learn more about the newly announced support for additional scenarios in Azure Migration and Modernization Program (AMMP) including SAP, hybrid with Azure Arc, security, and increased application modernization support.

Unlock new scenarios across hybrid, multicloud, and edge

This week, we shared the latest features and innovations for Azure Arc and Azure Stack HCI in our Secure, develop, and innovate in hybrid and multicloud with Microsoft Azure session and provided a single resource to learn about and stay current on the full Azure hybrid portfolio.

Additionally, we continue to expand our IoT capabilities to support more edge scenarios. Azure IoT Central introduces support for organizations. This highly requested capability simplifies IoT for mainstream adoption. You can now build an IoT solution once and onboard multiple tenants into it—keeping costs down and reducing management overhead. In addition, Windows Server IoT 2022 and Windows 11 IoT Enterprise are now generally available. Dive deeper into IoT strategies, business growth, and considerations around sustainability and cybersecurity to power your edge strategy with Microsoft IoT Signals.

Source: microsoft.com

Sunday, 16 January 2022

Microsoft expands its AI-supercomputer lineup with general availability of the latest 80GB NVIDIA A100 GPUs in Azure, claims 4 spots on TOP500 supercomputers list

Azure Exam Prep, Azure Tutorial and Materials, Azure Certification, Azure AI

Today, Microsoft announced the general availability of a brand-new virtual machine (VM) series in Azure, the NDm A100 v4 series, featuring NVIDIA A100 Tensor Core 80 GB GPUs. This expands Azure's leadership-class AI supercomputing scalability in the public cloud, building on the June general availability of the original ND A100 v4 instances, and adds another public cloud first, with the Azure ND A100 v4 VMs claiming four official places on the TOP500 supercomputing list. This milestone is thanks to a class-leading design with NVIDIA Quantum InfiniBand networking, featuring In-Network Computing, 200 Gb/s of bandwidth per GPU with GPUDirect RDMA, and an all-new PCIe Gen 4.0-based architecture.

We live in the era of large-scale AI models, and the demand for large-scale computing keeps growing. The original ND A100 v4 series features NVIDIA A100 Tensor Core GPUs, each equipped with 40 GB of HBM2 memory; the new NDm A100 v4 series doubles that to 80 GB, along with a 30 percent increase in GPU memory bandwidth for today’s most data-intensive workloads. RAM available to the virtual machine has also increased to 1,900 GB per VM, giving customers with large datasets and models a proportional increase in memory capacity to support novel data management techniques, faster checkpointing, and more.
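The per-VM memory math is easy to check; a quick sketch using the published figures above:

```python
# Illustrative arithmetic for a single NDm A100 v4 VM, using the
# figures quoted in the announcement above.
GPUS_PER_VM = 8
HBM_PER_GPU_GB = 80   # doubled from 40 GB on the original ND A100 v4
VM_RAM_GB = 1900      # host RAM available to the VM

total_hbm_gb = GPUS_PER_VM * HBM_PER_GPU_GB
ram_to_hbm_ratio = VM_RAM_GB / total_hbm_gb

print(f"Total GPU HBM per VM: {total_hbm_gb} GB")          # 640 GB
print(f"Host RAM to GPU HBM ratio: {ram_to_hbm_ratio:.1f}x")  # 3.0x
```

That roughly 3:1 ratio of host RAM to GPU memory is what gives large-model workloads headroom for staging data and checkpoints.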

The high-memory NDm A100 v4 series brings AI-supercomputer power to the masses, creating opportunities for all businesses to use it as a competitive advantage. Cutting-edge AI customers are using both 40 GB ND A100 v4 VMs and 80 GB NDm A100 v4 VMs at scale for large-scale production AI and machine learning workloads, and are seeing impressive performance and scalability. They include OpenAI for research and products, Meta for their leading AI research, Nuance for their comprehensive AI-powered voice-enabled solution, numerous Microsoft internal teams for large-scale cognitive science model training, and many more.

“Some of our research models can take dozens, or even hundreds of NVIDIA GPUs to train optimally, and Azure’s ND A100 v4 product helps address the growing training demands of large AI models. Modern training techniques require not only powerful accelerators, but also a communication fabric between them, and Azure’s implementation of NVIDIA Quantum InfiniBand 200 GB/s networking with GPUDirect RDMA between each NVIDIA A100 GPU has allowed us to use PyTorch and the communication libraries we’re already familiar with, without modification.”—Myle Ott, Research Engineer, Meta AI Research

“The pace of innovation in conversational AI is gated in part by experimental throughput and turnaround time. With the ND A100 v4, we are able to not only complete experiments in half the time vs the NDv2  but also benefit from significant per-experiment PAYG cost savings. This will be a critical accelerant for the advancement of our Dragon Ambient eXperience technologies.”—Paul Vozila, VP, Central Research at Nuance Communications 

"We live in the era of large-scale AI models, like the recently announced MT-NLG 530B. Training state-of-the-art Turing models at this size presented unprecedented challenges to the underlying training infrastructure, at the same time significantly raised the bar for acceleration, networking, stability, and availability. Similar to the collaborative research effort with NVIDIA Selene supercomputing infrastructure, Azure NDm A100 v4 with 80 GB of high bandwidth memory can remove many existing limits in scaling up models, such as increasing the maximum number of parameters and reducing the number of nodes required. Its performance and agility can provide a serious competitive edge to Azure customers in the race of advancing AI."—Microsoft Turing

The new high-memory NDm A100 v4 for data-intensive GPU compute workloads reaffirms Microsoft’s commitment to rapidly adopting and shipping the latest scale-up and scale-out GPU accelerator technologies to the public cloud.

We can’t wait to see what you’ll build, analyze, and discover with the new Azure NDm A100 v4 platform.

Size: Standard_ND96amsr_A100_v4

Physical CPU Cores: 96
Host Memory: 1,900 GB
GPUs: 8 x 80 GB NVIDIA A100
Local NVMe Temporary Disk: 6,400 GB
NVIDIA Quantum InfiniBand Network: 8 x 200 Gb/s
Azure Network: 40 Gbps

Source: microsoft.com

Thursday, 13 January 2022

Accelerate your websites with Azure Static Web Apps enterprise-edge

Site performance is crucial when creating applications on the modern web. It has a direct impact on user experience, search engine rankings, and user conversion rates. With Azure Static Web Apps, we're committed to building a platform that helps you deliver the best results for your users at a global scale. It provides streamlined full-stack development and hosting across static content and serverless APIs to power high productivity from source code to global high availability.

Today, we are announcing the preview of Azure Static Web Apps enterprise-grade edge, powered by Azure Front Door, which enables faster page loads, enhanced security, and increased reliability for your global apps, with no additional configuration or code required.

Figure 1: Azure Front Door's Global Points of Presence (POP) and global network.

“Static Web Apps has helped us deliver our solution EasyLife 365 in a fast and reliable way to our customers worldwide. I can recommend this service to everyone looking for enterprise-grade static content hosting on Azure.”—Patrick Lamber, CEO, EasyLife 365

This new feature includes enhanced protection against Distributed Denial of Service (DDoS) attacks, significantly reduces latency, and helps increase throughput for your global users.

Improve page speed


Today, with Azure Static Web Apps, when a user visits your site, our HTTP load balancer determines the node closest to their location and serves your application from there. If any of our global regions is down, we intelligently re-route your users' requests to the closest healthy node in the network. This enables you to build your application resiliently across regions, fail over instantly, and offer your users "always-on" website availability. Enabling the enterprise-grade edge further increases your global presence by caching your website's static content across 118+ points of presence (POPs) in 100 metro cities. As a result, your app runs with lower latency and higher throughput for your end users.
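The "closest healthy node" behavior described above can be sketched as a simple selection over candidate nodes. This is a toy illustration only, not Front Door's actual routing algorithm; the node names and latencies are made up:

```python
# Toy sketch of closest-healthy-node selection with failover.
# Hypothetical nodes and latencies; real Azure Front Door routing
# is far more sophisticated (anycast, health probes, etc.).

def pick_node(nodes):
    """Return the lowest-latency node that is healthy, or None if all are down."""
    healthy = [n for n in nodes if n["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda n: n["latency_ms"])

nodes = [
    {"name": "west-europe",  "latency_ms": 12, "healthy": False},  # region down
    {"name": "north-europe", "latency_ms": 25, "healthy": True},
    {"name": "east-us",      "latency_ms": 90, "healthy": True},
]

print(pick_node(nodes)["name"])  # north-europe: the closest *healthy* node
```

Even though west-europe is nearest, the request fails over to the next-closest healthy node, which is the "always-on" property the paragraph above describes.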

Figure 2: Load test for a static site with enterprise-grade edge enabled.

Faster page loads will not only improve your user's experience but will also have a positive impact on your search engine rankings and ultimately result in higher user conversion.

Secure your applications on a globally deployed network


As the number of users you serve increases, so do the chances of malicious attacks like DDoS. Enabling your applications on an enterprise-grade edge can prevent attackers from reaching your application and affecting your application's availability and performance. The full scale and capacity of our globally deployed network provide defense against common network layer attacks through always-on traffic monitoring and real-time mitigation.

With this feature enabled, you run your websites on top of Microsoft’s global application and content delivery network, which has carried some of the largest web workloads for many years, proving its resiliency and self-recovery capabilities. Global services such as Azure DevOps, LinkedIn, and Xbox Live run on top of it and deliver a blazing-fast, reliable application experience.

Our enterprise-grade edge is a zero-config cloud content delivery service that includes CDN and application acceleration. Once you enable this feature for your production applications, we will migrate your live traffic to our enhanced global distributed network with no downtime.

Figure 3: Enable enterprise-grade edge using the Azure portal.

With this new feature, Azure Static Web Apps streamlines everything you need to deliver the most secure and fastest experience for your users. We continue to be amazed by what you are building and can't wait to see how this will empower you and your users to achieve more!

Source: microsoft.com

Saturday, 8 January 2022

Improving the cloud for telcos: Updates of Microsoft’s acquisition of AT&T’s Network Cloud

Microsoft Cloud, Microsoft Exam Prep, Microsoft Certification, Microsoft Preparation, Microsoft Skills, Microsoft Jobs

Microsoft and AT&T reached a major milestone when we announced an industry-first collaboration to evolve Microsoft's hybrid cloud technology to support AT&T’s 5G core network workloads. Since then, we have had requests from many operators, partners, and customers to share more details. This blog is intended to do just that.

An eye toward future innovation: Harnessing the power of the cloud

Through Azure for Operators, Microsoft has forged close ties with AT&T personnel, product services groups, and partners. At the heart of the value Microsoft delivers in each of these relationships is the way in which we leverage the power of the cloud to improve the next generation of telco networks. Microsoft aims to harness trends toward Software-Defined Networking (SDN), Cloud-Native Network Functions (CNFs), and Virtualized Network Functions (VNFs), coupled with the service-based architecture of 5G, to begin digitally transforming the network.

This evolution involves introducing both hybrid infrastructure and software, building scalable elastic carrier-grade networks, and using the power of AI and machine learning to build self-optimizing networks that can heal, defend, and provision themselves. These efforts will enable operators to hyper-automate the business itself, bringing down costs and improving the overall service experience. Azure for Operators represents the set of investments Microsoft is making to bring the power of the cloud to the network.

Many of the benefits that Microsoft delivered in the enterprise space are directly applicable in the networking space. First, however, they need to be modified to account for the unique carrier-grade needs of operators. Our efforts are aimed at getting workloads on the network to function on a carrier-grade cloud, which is a hybrid cloud spanning both public and dedicated on-premises cloud infrastructure. Telecommunication services are highly distributed and will likely become more so over time. As a result, the value of creating a carrier-grade hybrid cloud model lies in its ability to meet customers where they are: at the edge of the cloud, the edge of the network, or the edge of the enterprise.

The progress toward the acquisition

In 2013, AT&T adopted an aggressive position on Software-Defined Networks (SDNs) and virtualization, with the ultimate goal of delivering 75 percent of their network using virtualized technology by 2020, a goal they achieved. Back then, there was no commercial cloud option available that included the features and capabilities necessary to enable a carrier-grade cloud, so AT&T created a standards-based implementation of cloud technology that was deployed in their on-premises data centers. This initial integrated cloud evolved into a Network Cloud, and today we’ve arrived at Network Cloud 2.7, representing seven years of experience developing an on-premises cloud for network workloads.

With Microsoft’s recent acquisition of this technology, development teams from AT&T’s Network Cloud organization have moved into Azure for Operators, directly integrating the intellectual property into a Microsoft offering and assuring a seamless transition.

There are three key reasons why this collaboration is unique. For starters, this is the first time that a tier-one operator has embraced commercial hybrid cloud technology to run mobility network workloads that support their existing consumer base. The second aspect that’s unique is that the effort is entirely focused on the mobility core network versus go-to-market collaborations at the edge. And third, the arrangement is multi-vendor. Microsoft hybrid cloud technology supports the AT&T mobile core network that spans more than 60 cloud-native network functions (CNFs) and virtual network functions (VNFs) from 15 different vendors.


What the Network Cloud acquisition means


The Network Cloud technology originally developed by AT&T can now be utilized by multiple carriers, maintaining security without losing differentiation, and with the added benefit that many ongoing costs, such as security patching, vendor updates, and regulatory changes, are covered as part of a standard commercial product.

These capabilities will be combined with Microsoft’s edge platform, our hybrid management platform, Azure Arc, and our ecosystem of partners including equipment providers, hardware vendors, and software vendors. By joining the Network Cloud with our platform and growing ecosystem, we have achieved a carrier-grade hybrid cloud solution that will be delivered as the Azure for Operators platform.

One key aspect of this collaboration is our respective roles—Microsoft develops the carrier-grade hybrid cloud technology that supports the AT&T mobility core network workloads. AT&T continues to select and manage the network applications (VNFs and CNFs) and their configurations to deliver mobility services to AT&T customers. As such, we're taking the AT&T Network Cloud technology, building it into Microsoft's standard hybrid cloud product, and then delivering a carrier-grade hybrid cloud solution back to the market and AT&T itself, where it can run at AT&T on-premises or on Azure public cloud. Microsoft hybrid cloud technology supports the AT&T mobility core network workloads used to deliver 5G connectivity that supports consumer, enterprise, and the FirstNet responder community. In terms of security, it’s important to note that Microsoft does not access AT&T customer data—AT&T continues to hold access to that data, and Microsoft cannot see it.

For AT&T, this collaboration puts them in a position to deliver new services faster and more flexibly across Azure public cloud and on-premises with common tooling and services, reducing time-to-market for a cloud-native approach. We believe the result will be better resiliency across the network, cost advantages when it comes to scaling existing services, and a more effective introduction of new services resulting in continuous improvements to the customer experience.

Source: microsoft.com

Thursday, 6 January 2022

Accelerate the in-vehicle digital experience with Azure Cognitive Services

Azure Cognitive Services, Azure Exam Prep, Azure Tutorial and Materials, Azure Certification, Azure Career, Azure Skills

Microsoft is helping to reshape the automotive industry in the way it serves its drivers with in-vehicle infotainment systems. As an example, Azure is partnering with XPeng to enable AI voice experiences for automotive brands and customers. The solution provides the industry with a fresh take on text-to-speech and expressive voice, global languages, speaker fidelity, and self-service customization. XPeng joins a growing trend of automakers rethinking investments in in-vehicle voice.

“This is a cutting-edge exploration of vehicle voice interaction in the auto industry,” XPeng automotive AI product senior expert Hao Chao said. “The experience delivers a whole new level of natural speech. With a deep understanding of urban mobility, we are finding many more scenarios to leverage AI technology for a high level of driver-machine intuition.”

XPeng tapped into Microsoft's neural text-to-speech technology for their in-car user experience. By using Microsoft's neural text-to-speech with emotional styles, XPeng can provide a more delightful listening experience for their customers and combat listening fatigue. Microsoft's neural text-to-speech provides fluency and naturalness comparable to a human voice. Coupled with multi-emotional voices, Microsoft text-to-speech acts as a refreshing replacement for the monotonous sound many car assistants have today.

"We are excited to reimagine how speech and voice can improve the lives of drivers,” Azure AI Speech Product Lead Binggong Ding said. “While from a technical point of view, we really want to make this a model that can serve all auto brands and their developers. How can we best optimize the use of synthetic speech to enable a high-fidelity voice experience without compromising sound quality? XPeng is building upon this challenge to provide a voice assistant that customers have been looking for.”

Microsoft’s long-term goal is to make advanced multi-emotional, global voice capabilities the new standard for global car brands and consumers. The technology adopted by XPeng added dozens of voice styles, unique emotional intensity control, and deduction abilities. It covers more than 90 certifications worldwide, including domestic policies, regulatory data center requirements, EU GDPR, and stricter data privacy requirements. Together with car manufacturers, Microsoft is creating new driving experiences with speech, based on the text-to-speech and speech-to-text capabilities within Azure Cognitive Services for speech.

Accelerated speech innovation

Voice is the new interface in ambient computing technology. The quality of text-to-speech and speech-to-text has improved in recent years due to research and technological leaps enabled by the development of neural networks. High-quality speech-to-text and text-to-speech fulfill the automaker's need to create the next-generation in-car speech experience. Microsoft speech-to-text offers robust recognition capabilities that are speaker-independent and capable of handling ambient noise while driving. Microsoft text-to-speech also features a more fluid, natural-sounding voice, which can be a differentiator for automakers and customers alike. Both speech-to-text and text-to-speech also increase hands-free control of the car infotainment system. Microsoft text-to-speech supports several speaking styles, including chat, newscast, and customer service. These advancements allow drivers to have a more delightful driving experience.
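Speaking styles are selected through SSML markup sent to the speech service. The sketch below only builds the SSML string locally; the voice and style names are examples (check the current Speech service documentation for the supported list), and actually synthesizing audio would additionally require the Azure Speech SDK and a service key, which are omitted here:

```python
# Build an SSML request that selects a neural voice and a speaking style.
# Voice/style names are illustrative examples; sending this to the Speech
# service (via the SDK or REST API) is not shown.

def build_ssml(text, voice="en-US-JennyNeural", style="chat"):
    """Wrap `text` in SSML that requests `voice` with the given express-as style."""
    return (
        "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        "xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'>"
        f"<voice name='{voice}'>"
        f"<mstts:express-as style='{style}'>{text}</mstts:express-as>"
        "</voice></speak>"
    )

ssml = build_ssml("Turn left at the next intersection.", style="customerservice")
print(ssml)
```

Swapping the `style` argument (for example to `newscast`) is all it takes to change the delivery of the same text, which is what lets an in-car assistant shift tone between navigation prompts and news readouts.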

Offering global languages

Microsoft helps automakers cover their global business: the speech service recently passed the milestone of 100 languages and now supports 119 languages and variants, with 278 voices out of the box. This is aligned with our company vision to empower every person and organization on the planet to achieve more. “One hundred languages is a good milestone for us to achieve our ambition for everyone to be able to communicate regardless of the language they speak,” said Xuedong Huang, Microsoft Technical Fellow and Azure AI Chief Technology Officer. With more languages and their variants covered, we’re excited to be powering natural and intuitive voice experiences for automakers.

Differentiation with customization

Microsoft empowers automakers to develop a highly realistic branded voice for more natural conversational interfaces using the custom neural voice capability. Based on neural text-to-speech technology and a multilingual, multi-speaker universal model, custom neural voice lets you create synthetic voices that are rich in speaking styles or adaptable across languages with as little as 30 minutes of audio. The realistic and natural-sounding voice of custom neural voice can represent brands and specific personas and allows users to interact with applications naturally in a conversational style.

Compliance and responsible AI

Microsoft is committed to investing in meeting regulatory standards around the globe to meet the automakers’ compliance requirements. The speech service, part of Azure Cognitive Services, is certified by SOC, FedRAMP, PCI DSS, HIPAA, HITECH, and ISO. Backed by Azure infrastructure, the speech service also offers enterprise-grade security, availability, compliance, and manageability.

Microsoft is committed to developing AI technology in a responsible way. We use different technical and policy features to safeguard against misuse of the technology. For example, we are designing and releasing Custom Neural Voice with the intention of protecting the rights of individuals and society, fostering transparent human-computer interaction, and counteracting the proliferation of harmful deepfakes and misleading content. This aligns with Microsoft’s commitment to responsible AI. That commitment includes Transparency Notes, which communicate the purpose, capabilities, and limitations of an AI system.

Source: microsoft.com

Tuesday, 4 January 2022

Azure DevOps: An Introduction

Azure DevOps, Azure Exam, Azure Exam Prep, Azure Exam Preparation, Azure Guides, Azure Career, Azure Skills

Azure DevOps is a Software-as-a-Service (SaaS) platform that provides several tools for better team collaboration. It also provides tools for automated build processes, version control, project management, testing, release management, package management, and much more. Azure DevOps was launched in October 2018 and has grown into a feature-rich platform since then.

To get started with Azure DevOps, you need to know about DevOps.

What is DevOps? And more importantly, Why DevOps?

The term DevOps originates from the fusion of two words, Development and Operations. DevOps can have different meanings to different people and organizations.

According to Amazon Web Services (AWS),

DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.

According to Microsoft,

DevOps is the union of people, process, and products to enable continuous delivery of value to our customers.

DevOps has several benefits:

◉ Improved Collaboration among teams

◉ Reduction in Lead time from commit to deployment (Lead time is the time it takes for a feature to make it to the customer)

◉ Rapid Deployment: Software is deployed more frequently

◉ Scaling is more efficient and involves less risk
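The "lead time" metric above is easy to make concrete; a minimal sketch with hypothetical timestamps:

```python
# Lead time: elapsed time from a commit landing to that change reaching
# production. Timestamps below are hypothetical examples.
from datetime import datetime, timedelta

def lead_time(commit_at: datetime, deployed_at: datetime) -> timedelta:
    """Return the elapsed time between commit and production deployment."""
    return deployed_at - commit_at

commit = datetime(2022, 1, 3, 9, 30)   # feature committed Monday morning
deploy = datetime(2022, 1, 4, 15, 0)   # deployed Tuesday afternoon

delta = lead_time(commit, deploy)
print(delta.total_seconds() / 3600)  # 29.5 (hours)
```

Tracking this number per release over time is one simple way to see whether DevOps practices are actually shortening the path from commit to customer.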

So, DevOps is not just a role, job title, methodology, or piece of software. DevOps is a field that takes skills from Development and Operations to create and run applications more effectively.

What is Azure DevOps? And again, Why Azure DevOps?

Azure DevOps is a platform that makes it easier and more efficient to implement a DevOps-based solution in your organization. It contains a range of services that cover the complete software development lifecycle. These are:

◉ Azure Boards is a tool that helps teams plan, track, visualize, and discuss the work that needs to be done. It offers four different processes to choose from:

    ◉ CMMI

    ◉ Scrum

    ◉ Agile

    ◉ Basic

◉ Azure Pipelines is a cloud service that can be used to automatically build, test, and deploy any code project. It is a cloud-agnostic CI/CD platform with support for containers or Kubernetes and works with any language or project type.

◉ Azure Repos provides unlimited cloud-hosted private and public Git repos.

◉ Azure Artifacts lets you create, host, and share packages. It also provides integrated package management with support for Maven, npm, Python, and NuGet package feeds from public or private sources.

◉ Azure Test Plans provides manual and exploratory testing solutions.

Azure DevOps is also compatible with various third-party tools.

Azure DevOps is flexible, platform-agnostic, and cloud-agnostic. This means that you don’t have to use all the services provided by Azure and Azure DevOps; you can choose the services that best fit your requirements. Azure DevOps is not platform-dependent: it doesn’t matter whether you are on Windows, Linux, or macOS, or whether you use .NET, C/C++, Python, or any other language for your development; Azure DevOps is compatible with almost all platforms and languages. The story does not end there, either: whether you deploy your software to Azure, AWS, or GCP, continuous delivery is supported for all these cloud services.

There are some more benefits of using Azure DevOps for your software development needs:

◉ Reliability: Azure DevOps is a reliable SaaS platform with regular maintenance and support.

◉ Scalability: Azure DevOps makes it way easier for organizations to scale as per their needs without any hassle.

◉ Better Collaboration: Services like Azure Boards improve the collaboration among teams within an organization to a great extent.

◉ Better Management through Work Items: Azure DevOps makes it way easier to coordinate the management of your systems with Work Items.

◉ Access to the latest features: Azure DevOps users get access to new features every 3 weeks.

◉ Hassle-free upgrade: Upgrading and patching are way easier for Azure DevOps organizations.

Azure DevOps is a feature-rich, flexible platform that can be used to evolve the DevOps practices within an organization.

Source: geeksforgeeks.org