Saturday, 30 April 2022

Unlock cloud savings on the fly with autoscale on Azure


Unused cloud resources can put an unnecessary drain on your computing budget. Unlike with legacy on-premises architectures, there is no need to over-provision compute resources for times of heavy usage.

Autoscaling is one of the value levers that can help unlock cost savings for your Azure workloads by automatically scaling up and down the resources in use to better align capacity to demand. This practice can greatly reduce wasted spend for those dynamic workloads with inherently “peaky” demand.

In some cases, workloads with occasionally high peak demand have extremely low average utilization, making them ill-suited for other cost optimization practices, such as rightsizing and reservations.

For periods when an app puts a heavier demand on cloud resources, autoscaling adds resources to handle the load and satisfy service-level agreements for performance and availability. And for those times when the load demand decreases (nights, weekends, holidays), autoscaling can remove idle resources to reduce costs. Autoscaling scales between the minimum and maximum number of instances you configure and will add or remove VMs automatically based on a set of rules.
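A minimal sketch of that rule evaluation, with hypothetical CPU thresholds (real rules are configured in Azure Monitor autoscale, not in your own code), might look like this:

```python
# Illustrative sketch of the rule evaluation autoscale performs. The
# thresholds and instance bounds here are hypothetical defaults.

def evaluate_autoscale(current_instances: int, avg_cpu_percent: float,
                       min_instances: int = 2, max_instances: int = 10,
                       scale_out_threshold: float = 75.0,
                       scale_in_threshold: float = 25.0) -> int:
    """Return the new instance count after applying scale-out/scale-in
    rules, clamped to the configured minimum and maximum."""
    desired = current_instances
    if avg_cpu_percent > scale_out_threshold:
        desired += 1   # demand is high: add an instance
    elif avg_cpu_percent < scale_in_threshold:
        desired -= 1   # demand is low: remove an idle instance
    return max(min_instances, min(max_instances, desired))

# High CPU adds a VM; low CPU removes one, but never below the minimum.
print(evaluate_autoscale(4, 90.0))  # 5
print(evaluate_autoscale(2, 10.0))  # 2 (already at the minimum)
```

The clamp to the minimum and maximum is the key point: autoscale never scales a workload to zero or beyond the ceiling you set, so costs and availability both stay within bounds you control.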


Autoscaling is near-real-time cost optimization. Think of it this way: rather than building an addition to your house with extra bedrooms that go unused most of the year, you have an agreement with a nearby hotel. Your guests can check in at any time, even at the last minute, and the hotel automatically charges you only for the days they visit.

Not only do you take advantage of cloud elasticity by paying for capacity only when you need it, you also reduce the need for an operator to continually monitor the performance of a system and make decisions about adding or removing resources.

What services can you autoscale?


Azure provides built-in autoscaling using Azure Monitor autoscale for most compute options, including:

◉ Azure Virtual Machine Scale Sets

◉ Service Fabric

◉ Azure App Service

◉ Azure Cloud Services (built-in autoscaling at the role level)

Azure Functions differs from the previous compute options because you don't need to configure any autoscale rules. The hosting plan you choose dictates how your function app is scaled:

◉ With a consumption plan, your function app will scale automatically, and you pay for compute resources only when your functions are running.

◉ With a premium plan, your app will automatically scale based on demand using pre-warmed workers that run applications with no delay after being idle.

◉ With a dedicated plan, you will run your functions within an App Service plan at regular App Service plan rates.

Azure Monitor autoscale provides a common set of autoscaling functionality for virtual machine scale sets, Azure App Service, and Azure Cloud Services. Scaling can be performed on a schedule or based on a runtime metric, such as CPU or memory usage.
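The two trigger types can be pictured together in a small sketch: a schedule-based profile sets a baseline capacity, and a metric-based rule adjusts on top of it. The business hours and thresholds below are hypothetical, not Azure Monitor defaults:

```python
from datetime import datetime

def target_capacity(now: datetime, avg_cpu: float) -> int:
    """Conceptual combination of a schedule profile and a metric rule."""
    # Schedule-based profile: hypothetical weekday business hours, 8:00-18:00.
    if now.weekday() < 5 and 8 <= now.hour < 18:
        baseline = 6   # weekday business hours
    else:
        baseline = 2   # nights, weekends, holidays
    # Metric-based rule layered on top of the schedule baseline.
    if avg_cpu > 75.0:
        baseline += 2
    return baseline

print(target_capacity(datetime(2022, 4, 26, 10, 0), avg_cpu=40.0))  # 6
print(target_capacity(datetime(2022, 4, 30, 22, 0), avg_cpu=80.0))  # 4
```

In Azure Monitor autoscale, the same idea is expressed as profiles (recurring schedules) containing rules (metric triggers); this sketch only illustrates how the two interact.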

Use the built-in autoscaling features of the platform if they meet your requirements. If not, carefully consider whether you really need more complex scaling features. Examples of additional requirements may include more granularity of control, different ways to detect trigger events for scaling, scaling across subscriptions, and scaling other types of resources.

Note that application design can impact how that app handles scale as a load increases. To review design considerations for scalable applications, including choosing the right data storage and VM size, and more, check out Design scalable Azure applications—Microsoft Azure Well-Architected Framework.

Also know that, in general, it is better to scale out than to scale up. Scaling up usually involves deprovisioning or downtime. So, choose smaller instances when a workload is highly variable and scale out to get the required level of performance.

You can set up autoscale using the Azure portal, PowerShell, the Azure CLI, or the Azure Monitor REST API.

Source: microsoft.com

Friday, 29 April 2022

Study Resources for Microsoft AZ-400 Exam Preparation


The AZ-400 Microsoft Azure DevOps Solutions certification exam measures and confirms an applicant's skill as a DevOps professional in using Microsoft Azure technologies to design and implement DevOps practices. It is one of the exams required to achieve the more comprehensive Microsoft Certified: Azure DevOps Engineer Expert certification.

Microsoft AZ-400 Certification Overview

The AZ-400 Microsoft Azure DevOps Solutions certification exam is designed for DevOps professionals who bring together people, processes, and tools to continuously deliver value that fulfills users' needs and business goals. These applicants streamline delivery by optimizing practices, improving communication and collaboration, and creating automation. They design and implement app code and infrastructure strategies that enable continuous integration, testing, and delivery, as well as continuous monitoring and feedback.

Exam takers are expected to be experts in Agile practices, must be acquainted with both Azure administration and Azure development, and should be expert in at least one of those areas. They must be capable of designing and implementing DevOps practices for version control, infrastructure as code (IaC), compliance, build, configuration management, release, and testing by using Microsoft Azure technologies.

AZ-400 Exam Objectives

  • Develop an instrumentation strategy (5-10%)
  • Develop a Site Reliability Engineering (SRE) strategy (5-10%)
  • Develop a security and compliance plan (10-15%)
  • Manage source control (10-15%)
  • Facilitate communication and collaboration (10-15%)
  • Define and implement continuous integration (20-25%)
  • Define and implement continuous delivery and release management strategy (10-15%)

AZ-400 Microsoft DevOps Solutions Exam Structure

  • Certification Name: Microsoft Certified: Azure DevOps Engineer Expert
  • Question Type: Multiple-choice, multiple-answer, sequence-type, and case study-based questions
  • Exam Cost: USD 165.00
  • Total Questions: 40 – 60 Questions
  • Exam Duration: 150 Minutes
  • Languages: English, Japanese, Chinese (Simplified), Korean, Spanish, German, and French

Is the Microsoft Azure AZ-400 Exam Difficult to Pass?

Achieving a Microsoft certification is every professional's dream: it can lead to a higher salary and close the gap to their dream job. But the most crucial question is: how difficult is the Microsoft Azure AZ-400 exam? One thing is sure, this exam is not impossible to crack. The pass ratio is low, but it is manageable with an appropriate preparation plan. If you have prior knowledge and a thorough understanding of the Azure field, the exam will not be difficult to pass. If, on the other hand, this is your first Microsoft Azure exam, concentrate first on the AZ-400 syllabus topics and align your preparation with the exam course and concepts.

However, as discussed, it is not easy to pass the certification exam; it demands a lot of preparation and practice. To smooth your preparation, read the next section!

Study Resources for Microsoft DevOps Engineer AZ-400 Exam Preparation

The study resources you choose determine how well you will perform in the exam. But with the many options available for Microsoft exam preparation, it can be challenging to select the right ones. Here is a list of resources that will boost your learning and help you take your preparation to the next level.

1. Make Most Out of Microsoft Documentation

Microsoft provides documentation that includes various learning pages and helps you understand the different scales of the various Azure services in a much more thorough manner. It ensures you stay up to date and learn about the many new Azure technologies, explained by industry professionals.

2. Take Up Instructor-Led Training Course

Microsoft offers online, instructor-led training courses: on-demand classroom lectures that you can conveniently schedule anywhere and anytime. These training courses help you understand the AZ-400 exam syllabus in depth. The course for the AZ-400 exam is:

Designing and Implementing Microsoft DevOps solutions

3. Obtain the Appropriate Books

Books have been a traditional and vital element of any exam preparation. They equip you with a profound understanding of the AZ-400 syllabus. Also, books provide real-life scenarios that prepare you for the exam's practical questions.

4. Gauge Your Preparation Level with AZ-400 Practice Test

Mistakes are unavoidable, but they can certainly be limited. When it comes to Microsoft exams, practice tests help reduce errors. Moreover, readying the brain is vital, and practice tests offer the simulation the brain needs to get used to the real exam. So, after studying the complete syllabus, take AZ-400 practice tests to assess your performance. They will help you find your strong and weak areas, and attempting numerous AZ-400 practice tests will build your confidence. Try to improve with each subsequent attempt.

Why Earn Microsoft AZ-400 Certification?

  • AZ-400 opens the door to career-changing opportunities for individuals who want to work with cloud operations and excel in the cloud computing field.
  • Earning this Microsoft certification will improve your knowledge of technical DevOps.
  • Applicants get a clear idea of how to implement techniques for continuous delivery and integration.
  • Applicants are qualified to formulate security plans for an organization's applications and services on the Azure cloud, helping them resolve reliability problems and run all business operations seamlessly.
  • A large number of enterprises are progressively moving to DevOps and are executing related practices. With AZ-400 certification, applicants can outshine the crowd and open various lucrative job opportunities.
  • Because competition is mounting among organizations and cloud implementations, many enterprises will prefer to hire certified engineers over non-certified professionals.
  • DevOps engineers holding the Microsoft AZ-400 certification have the skills to work with Agile practices and know how to deliver secure, fast outcomes. Thus, AZ-400 certified Microsoft DevOps engineers are among the highest-paid professionals in the IT field.

Summary

In the present IT field, the AZ-400 exam is sought-after and one of the highest-paid certifications. Organizations working in the cloud always prioritize hiring Microsoft Azure DevOps engineers who have passed Microsoft exam AZ-400. If you are determined to pursue the exam, you will have to put in a lot of effort and have the perseverance to prepare thoroughly. Following the tips and recommendations discussed above will help you pass the AZ-400 exam on the first try.

Thursday, 28 April 2022

Optimize your cloud investment with Azure Reservations


Continuous cost optimization can take place at all stages of an Azure workload’s lifecycle, but your Azure subscription provides a very effective benefit to further optimize your investment when you are ready to deploy that workload.

For cloud workloads with consistent resource usage, you can buy reserved instances at a significant discount and reduce your workload costs by up to 72 percent compared to pay-as-you-go prices. Azure Reservations can be obtained by committing to one-year or three-year plans for virtual machines, Azure Blob storage or Azure Data Lake Storage Gen2, SQL Database compute capacity, Azure Cosmos DB throughput, and other Azure resources.
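To see what that maximum discount means in practice, here is a back-of-the-envelope comparison. The hourly rate is hypothetical; the 72 percent figure is the maximum discount cited above:

```python
# Compare 3 years of pay-as-you-go vs. a 3-year reservation at the
# maximum cited discount. The $0.10/hour rate is a made-up example.

HOURS_PER_YEAR = 8760

def three_year_cost(payg_hourly_rate: float, reservation_discount: float):
    """Return (pay-as-you-go total, reserved total) over a 3-year term."""
    payg = payg_hourly_rate * HOURS_PER_YEAR * 3
    reserved = payg * (1 - reservation_discount)
    return round(payg, 2), round(reserved, 2)

payg, reserved = three_year_cost(payg_hourly_rate=0.10,
                                 reservation_discount=0.72)
print(f"Pay-as-you-go over 3 years: ${payg}")       # $2628.0
print(f"Reserved (72% discount):    ${reserved}")   # $735.84
```

The arithmetic only pays off when the VM actually runs most of those hours, which is why reservations suit workloads with consistent usage rather than the "peaky" workloads autoscaling targets.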

When you can predict and commit to needed capacity, it gives us visibility into your resource requirements in advance, allowing us to be more efficient in our operations. We can then pass the savings on to you. This benefit applies to both Windows and Linux virtual machines (VMs).

In addition, you now can combine the cost savings of reserved instances with the added Azure Hybrid Benefit when running on-premises and Azure workloads to save up to 80 percent over pay-as-you-go pricing.

How to get your reservation

A reservation discount only applies to resources associated with Enterprise Agreement, Microsoft Customer Agreement, Cloud Solution Provider (CSP), or subscriptions with pay-as-you-go rates. These are billing discounts (paid upfront or monthly) and do not affect the runtime state of your resources. And do not worry, you will not pay any extra fees when you choose to pay monthly.

To determine which reservation to purchase, analyze your usage data in the Azure portal, or use the reservation recommendations available in Azure Advisor (VMs only), the Cost Management Power BI app, or the Reservation Recommendations REST API.

Reservation purchase recommendations are calculated by analyzing your hourly usage data over the last seven, 30, and 60 days.
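As a simplified illustration (not Azure's actual recommendation algorithm), you can picture a recommendation as the capacity floor your workload sustains nearly all of the time over the lookback window:

```python
def recommend_reserved_quantity(hourly_instance_counts, percentile=0.10):
    """Recommend reserving roughly the capacity in use at least 90% of
    hours in the lookback window (a simplification of analyzing hourly
    usage over 7, 30, or 60 days). Hypothetical logic for illustration."""
    ordered = sorted(hourly_instance_counts)
    index = int(len(ordered) * percentile)
    return ordered[index]

# Usage hovers between 4 and 10 instances; the sustained floor is 4,
# so reserving 4 instances covers the base load, while autoscaling or
# pay-as-you-go covers the peaks above it.
usage = [4, 4, 5, 6, 10, 9, 4, 5, 4, 6]
print(recommend_reserved_quantity(usage))  # 4
```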

Simple and flexible

You can purchase Azure Reserved VM Instances in three easy steps—just specify your Azure region, virtual machine type, and term (one year or three years)—that's it.

Here is how it works: discounts are applied to resource usage matching the attributes you select when you buy the reservation, including the SKU, the region (where applicable), and the scope. The scope determines where the reservation savings apply: you can scope a reservation to a subscription or to a resource group. When you scope the reservation to a resource group, the discount applies only to resources in that group, not the entire subscription.
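The matching logic can be sketched as follows. The field names here are illustrative, not those of the Azure billing API:

```python
# Sketch of how a reservation discount matches resource usage by
# attributes (SKU, region) and scope. All field names are hypothetical.

def discount_applies(resource: dict, reservation: dict) -> bool:
    """True if this reservation's discount covers this resource's usage."""
    if resource["sku"] != reservation["sku"]:
        return False
    if resource["region"] != reservation["region"]:
        return False
    # Resource-group scope: only resources in that group get the discount.
    if reservation["scope"] == "resource_group":
        return resource["resource_group"] == reservation["resource_group"]
    # Subscription scope: any matching resource in the subscription qualifies.
    return resource["subscription"] == reservation["subscription"]

vm = {"sku": "D4s_v3", "region": "eastus", "resource_group": "prod-rg",
      "subscription": "sub-1"}
rg_scoped = {"sku": "D4s_v3", "region": "eastus", "scope": "resource_group",
             "resource_group": "prod-rg", "subscription": "sub-1"}
print(discount_applies(vm, rg_scoped))  # True
```

A VM with the same SKU and region but in a different resource group would not match a resource-group-scoped reservation, which is exactly the behavior described above.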

You can manage reservations for Azure resources including updating the scope to apply reservations to a different subscription, changing who can manage the reservation, splitting a reservation into smaller parts, or changing instance size. Enhanced data for reservation costs and usage is available for Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) usage in Azure Cost Management and Billing. Those same customers can view amortized cost data for reservations and use that data to chargeback the monetary value for a subscription, resource group, or resource.

Capacity on demand

The ability for you to access compute capacity with service-level agreements, and ahead of actual VM deployments, is important to ensure the availability of mission-critical applications running on Azure. On-demand capacity reservations, now in preview, enable you to reserve compute capacity for one or more virtual machine sizes in an Azure region or availability zone for any length of time. You can create and cancel an on-demand capacity reservation at any time; no commitment is required.

You can also exchange a reservation for another reservation of the same type, or, if you no longer need it, refund a reservation (up to USD 50,000 in a 12-month rolling window) or cancel a reserved instance at any time and return the remaining months to Microsoft.
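As a sketch of the rolling refund cap described above, the bookkeeping might look like this; the tracking logic is illustrative, not how Azure billing implements it:

```python
# Hypothetical bookkeeping for the USD 50,000 rolling 12-month refund cap.
from datetime import date, timedelta

REFUND_CAP_USD = 50_000

def refund_allowed(past_refunds, new_refund_usd, today):
    """past_refunds: list of (date, amount) tuples. Only refunds inside
    the trailing 365-day window count against the cap."""
    window_start = today - timedelta(days=365)
    used = sum(amount for d, amount in past_refunds if d >= window_start)
    return used + new_refund_usd <= REFUND_CAP_USD

# A $50,000 refund from 2020 has aged out of the window; only the
# $30,000 refund from June 2021 still counts as of April 2022.
history = [(date(2021, 6, 1), 30_000), (date(2020, 1, 1), 50_000)]
print(refund_allowed(history, 15_000, date(2022, 4, 28)))  # True
print(refund_allowed(history, 25_000, date(2022, 4, 28)))  # False
```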

Source: microsoft.com

Tuesday, 26 April 2022

Enhance your classroom experience with Azure Lab Services—April 2022 update


Azure Lab Services offers classroom labs for higher education, K-12 institutions, and commercial organizations that don't want to use on-premises hardware and would rather harness the power of the cloud to host labs for students or users. We are excited to announce major updates to Azure Lab Services, including enhanced lab creation; improved backend reliability and access performance; extended virtual network support; easier lab administration via new roles; improved cost tracking via the Azure Cost Management service; the availability of a PowerShell module and a .NET API SDK for advanced automation and customization; and integration with the Canvas learning management system. Learn more about the new update and how to use it.

Along with significant reliability enhancements to the backend and improvements to lab creation and access performance, this major update brings a whole slew of additional features for the three key personas that use this service: IT departments and administrators, educators, and students.

IT and administrators

For the IT and administrators, we have now introduced the concept of a lab plan instead of a lab account to provide more control over the creation, configuration, and management of the labs. For ease of administration of the lab, new roles have been created to provide granular control for different people managing labs for a large organization.

Creating a large number of labs with many virtual machines requires additional vCPUs, which you have to request from us. With this new update, vCPU capacity management for your subscription is improved, and you no longer share vCPU capacity with others using the service. We have also made it easier for you to track costs for your lab resources in Azure Cost Management. Finally, we have replaced virtual network peering with virtual network injection, which gives you more control over the network for lab virtual machines: in your own subscription, create a virtual network in the same region as the lab, delegate a subnet to Azure Lab Services, and you’re off and running.

For advanced automation, deployment, configuration, and management, we offer a PowerShell module and a .NET API SDK. The Azure Lab Services PowerShell module will be integrated with the Azure PowerShell module and released in early February. In alignment with global compliance and regulatory laws around data residency, we now save customer data in the regions where the labs are set up.

Educators

For all the educators and instructors using the service, we have added new functionality to improve their experience. Azure Lab Services can now be integrated within Canvas, a popular learning management system. Educators can use Canvas to create and configure labs for the students. Students can connect to the virtual machine from inside their course in Canvas. We have improved the auto-shutdown feature of the virtual machine. Auto-shutdown settings are now available for all operating systems. In addition, we have improved idle detection based on resource usage. For more flexibility, an instructor or IT Administrator can choose to skip the virtual machine template creation process if they already have an image ready to use or want to quickly deploy virtual machines for their lab.

Students

Student experiences have also improved. Students can now redeploy their virtual machine without losing data if they have issues accessing or using it. If the lab is set up to use Azure AD group sync, there is no longer a need to send an invitation email before students can access their virtual machine—one is assigned to each student automatically.

Source: microsoft.com

Sunday, 24 April 2022

Microsoft announces new collaboration with Red Button for attack simulation testing


As we highlighted in our latest attack trends report, Distributed Denial-of-Service (DDoS) attacks are one of the biggest security concerns today. Whether in the cloud or on-premises, DDoS attacks can be targeted at any endpoint that is publicly reachable through the internet. Planning and preparing for a DDoS attack is crucial to a well-vetted incident management response plan.

Today, Microsoft is excited to announce a new collaboration with Red Button, offering our customers an additional DDoS attack simulation testing provider to choose from. With Red Button’s DDoS Testing service suite, you will be able to work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment. Simulation testing allows you to assess your current state of readiness, identify gaps in your incident response procedures, and guide you in developing a proper DDoS response strategy.

Red Button DDoS Testing

Red Button’s DDoS Testing service suite includes three stages:

1. Planning session

Red Button experts meet with your team to understand your network architecture, assemble technical details, and define clear goals and testing schedules. This includes planning the DDoS test scope and targets, attack vectors, and attack rates. The joint planning effort is detailed in a test plan document.

2. Controlled DDoS attack

Based on the defined goals, the Red Button team launches a combination of multi-vector DDoS attacks. The test typically lasts between three to six hours. Attacks are securely executed using dedicated servers and are controlled and monitored using Red Button’s management console.

3. Summary and recommendations

The Red Button team provides you with a written DDoS Test Report outlining the effectiveness of DDoS mitigation. The report includes an executive summary of the test results, a complete log of the simulation, a list of vulnerabilities within your infrastructure, and recommendations on how to correct them.

Here is an example of a DDoS Test Report from Red Button:

[Figure: sample DDoS Test Report from Red Button]

In addition, Red Button offers two other service suites that can complement the DDoS Testing service suite:

◉ DDoS 360 is an “all included” annual service that includes the DDoS Testing, DDoS Hardening, DDoS team skills development, and DDoS Incident Response services. The program consists of multiple year-round activities carried out by Red Button’s top DDoS experts, which includes extensive pre-attack activities to strengthen your technological infrastructure and improve the skills of your teams as well as a dedicated incident response expert team in the event of an attack.

◉ DDoS Incident Response (IR) is a 30-day incident response service that consists of three phases. First, when under a DDoS attack or DDoS threat (for example, a DDoS ransom threat), Red Button DDoS experts are immediately assigned and work closely with your security and IT teams to analyze the attack and apply the appropriate mitigations. Second, once the attack has been fully mitigated, Red Button audits your network architecture and DDoS protection system configuration, including running a DDoS test, and provides detailed recommendations for hardening and optimization to prevent future attacks. Lastly, Red Button conducts DDoS training for your teams to increase your skills and readiness, and helps you build a DDoS playbook that provides detailed procedures and activities to prepare for any future attack.

Azure DDoS simulation testing policy


Red Button’s simulation environment is built within Azure. You can only simulate attacks against Azure-hosted public IP addresses that belong to an Azure subscription of your own, which will be validated by Azure Active Directory (Azure AD) before testing. Additionally, these target public IP addresses must be protected under Azure DDoS Protection. Red Button services are sold separately from Azure DDoS Protection and can be purchased directly through Red Button.

You may only simulate attacks using our approved testing partners:


Source: microsoft.com

Saturday, 23 April 2022

How Microsoft measures datacenter water and energy use to improve Azure Cloud sustainability

One of the biggest topics of discussion at COP26, the global climate conference held in November 2021, was how a lack of reliable and consistent measurement hampers progress on the path to Net Zero. I have been reflecting on this issue and, on this Earth Day, I would like to provide an update on how we are measuring energy and water use at our datacenters to improve sustainability across the Azure Cloud.

Today, we’re sharing an important update on how Microsoft, and our datacenters, are helping to solve our part of this measurement challenge.

While the environmental goals are similar, each industry has unique challenges in measuring its carbon emissions to build its sustainability strategy. It’s one of the key reasons we, together with ClimateWorks Foundation and 20 other leading organizations, launched the Carbon Call. It’s also why we developed Microsoft Cloud for Sustainability, an Azure-based platform that allows organizations to combine disparate data sources into one place and help provide insights into how to improve their sustainability approaches.

You’ve told us how important measuring our datacenters’ energy and water consumption is to taking sustainability into account in commercial decisions. Below you will see, for the first time, our datacenter PUE (Power Usage Effectiveness) and WUE (Water Usage Effectiveness) metrics. To track these, we set design goals—our theoretical estimates of the best efficiency we can achieve in operating our datacenters—and measure our actual efficiencies against them. These targets can vary between datacenter generations and with usage; for instance, newer datacenter generations, as well as datacenters operating at peak utilization, are more efficient. We track these statistics at a global level and by our operating geographies—Americas, Asia Pacific, and EMEA (Europe, Middle East, Africa).

Understanding Power Usage Effectiveness (PUE)

PUE is an industry metric that measures how efficiently a datacenter consumes energy, including the energy used to power, cool, and operate the servers, data networks, and lights. The closer the PUE number is to “1,” the more efficient the use of energy.

While local environment and infrastructure can affect how PUE is calculated, there are also slight variations across providers. Here’s the simplest way to think about PUE.
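Conceptually, PUE divides the total energy the facility consumes by the energy delivered to the IT equipment alone. A minimal sketch, using hypothetical figures:

```python
def pue(total_facility_energy_kwh: float,
        it_equipment_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by IT
    equipment energy. A value of 1.0 would mean every kWh reaching the
    facility goes to the IT load, with nothing spent on overhead."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

# A facility drawing 1,120 kWh to deliver 1,000 kWh to servers has a
# PUE of 1.12 -- the remaining 120 kWh is cooling, power distribution,
# lighting, and other overhead.
print(round(pue(1120, 1000), 2))  # 1.12
```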


We design and build our datacenters toward the optimum PUE figure, and we can predict that optimum figure with a high degree of accuracy. As we constantly innovate, we factor these changes into our datacenter designs to get as close to “1” as feasible. Our newest generation of datacenters has a design PUE of 1.12, and with each new generation we strive to become even more efficient. In the chart below, the blue bars show our estimated, or designed, PUE figures, while the grey bars indicate our actual PUE figures. As you can see, in Asia Pacific our actual PUE is higher; that’s due in part to higher ambient temperatures in the region, which necessitate additional cooling.

In almost every region, our actual operating PUE is more efficient than our designs.

[Chart: design PUE (blue) vs. actual PUE (grey) by region]

Understanding Water Usage Effectiveness (WUE)


Water Usage Effectiveness (WUE) is another key metric relating to the efficient and sustainable operations of our datacenters and is a crucial aspect as we work towards our commitment to be water positive by 2030.

WUE is calculated by dividing the number of liters of water used for humidification and cooling by the total annual amount of power (measured in kWh) needed to operate our datacenter IT equipment.
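That formula is straightforward to sketch; the figures below are hypothetical:

```python
def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water used for humidification
    and cooling per kWh of IT equipment energy, per the definition above.
    Lower values indicate less water used per unit of compute."""
    return water_liters / it_energy_kwh

# Hypothetical example: 490,000 liters of water against 1,000,000 kWh
# of annual IT energy gives a WUE of 0.49 L/kWh.
print(wue(490_000, 1_000_000))  # 0.49
```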


Like PUE, there are variables that can impact WUE—many of which relate to the datacenter’s location. Humid locations often have more atmospheric water, while arid locations have very little. Datacenters in colder parts of the world, like Sweden and Finland, operate in naturally cooler environments and so require less water for cooling. Our datacenter designs minimize water use. The chart below shows our estimated, or designed, WUE figure (in blue) and our actual WUE figure (in grey). Again, Asia Pacific is higher due to higher ambient temperatures and, as a result, the need in some places for water-cooled chillers.

[Chart: design WUE (blue) vs. actual WUE (grey) by region]

We continue to incorporate water-reduction technologies into our standards, such as at our Phoenix, Arizona datacenter, where we use direct outside air most of the year to cool servers. Otherwise, we cool through direct evaporation, which requires a fraction of the water of conventional water-based cooling systems such as water-cooled chillers.

Furthermore, by powering our datacenter with energy from the Sun Streams 2 Solar Project, owned by local partner Longroad Energy, we’re displacing the water needed in the traditional electricity-generation process and expect to save 356 million liters of water annually.

Scope 3 and supply chain


As we shared in March with our annual sustainability report, we made good progress on a number of our goals. Across the company’s operations, we saw an overall reduction in our Scope 1 and Scope 2 emissions of about 17 percent year over year, through our purchasing of renewable energy. At the same time, we also saw a rise in our Scope 3 emissions, which increased about 23 percent year over year.

We know that Scope 3 emissions (representing the total emissions across a company’s entire value chain) are the most difficult to control and reduce, because we can often only influence change. We know this is a long-term effort and this year we have increased our focus on operational discipline that is rooted in reliable data. We’ve also been working with partners across the industry, including Infrastructure Masons on carbon transparency within the datacenter supply chain, and will have exciting news to share at the Datacloud Global Congress on April 25 to 27.

Source: microsoft.com

Thursday, 21 April 2022

Azure Purview is now Microsoft Purview


In September of 2021, we announced the highly anticipated general availability of Azure Purview—a cloud-native data governance solution to enable organizations of all sizes to manage and govern their on-premises, multicloud, and software as a service (SaaS) data. Since Azure Purview was brought onto the market, thousands of organizations including London Heathrow Airport, Grundfos, and illimity have collectively discovered tens of billions of data assets as well as served up millions of searches every month to empower knowledge workers to find valuable enterprise data quickly and easily. 

Organizations that use Azure Purview have a more holistic understanding of their hybrid data estate, which is always kept up to date with automated data discovery and sensitive data classification. In addition to empowering knowledge workers, this understanding, along with insights from sensitivity, business context, and relationships between data assets is also being used by teams working under the Chief Data Officers (CDO), the Chief Information and Security Officers (CIO and CISO) and the Chief Risk and Compliance Officers (CRO and CCO) to govern, protect, and manage data more effectively.

Traditional data management solutions rely on multiple unconnected, duplicative business processes and a patchwork of software products augmented with custom code and point-to-point integrations. Dozens of products are sometimes used together to address fragments of the data governance and compliance landscape, forcing Chief Data, Security, Compliance, and Legal Officers to stitch together solutions that don’t work together, expose infrastructure gaps, and are costly and complex to manage. A survey of US-based decision-makers showed that to meet their compliance and data-protection needs, almost 80 percent had purchased multiple products, and a majority had purchased three or more. The result is increased operations costs, ineffective data governance, poor security outcomes, failed compliance audits, and damage to brand reputation.

Additionally, as the threat landscape continues to evolve, the types of risks organizations face inevitably expand well beyond traditional cybersecurity risks. This means that risk roles within the organization are blurring, requiring a collaborative and cohesive approach across data, compliance, and risk officers, as each drives an integral part of an effective data strategy. We believe the new way to optimize your data strategy is to deliver a unified view of data across hybrid, multicloud environments by bringing together the business users of data with the protectors of data.

In the past, we have shared how Azure Purview and Microsoft 365 Compliance are used together to ensure consistent, automated application of sensitivity labels to data assets across the data estate to simplify how organizations understand their sensitive data.

Today, we are excited to introduce Microsoft Purview—a comprehensive set of solutions from Microsoft to help you govern, protect, and manage your entire data estate. By bringing together the former Azure Purview and the former Microsoft 365 Compliance portfolio under one brand (and, over time, a more unified platform), Microsoft Purview can help you understand and govern the data across your estate, safeguard that data wherever it lives, and improve your risk and compliance posture in a much simpler way than traditional solutions on the market today.

Microsoft Purview

◉ Helps you gain visibility into assets across your entire data estate.

◉ Leverages that visibility to manage end-to-end data risks and regulatory compliance.

◉ Governs, protects, and manages data in a new, more comprehensive, and simpler way. 

Customers of the Azure Purview portal can now use the Microsoft Purview governance portal. For customers of Microsoft 365 E5 or Microsoft E5 Compliance, check out the Microsoft Purview compliance portal to see what’s new!

Source: microsoft.com

Tuesday, 19 April 2022

Enhance your data visualizations with Azure Managed Grafana—now in preview

Organizations are transforming their digital environments to increase agility and operate more efficiently. We see this transformation in how customers migrate to the cloud and adopt cloud-native technologies and practices in their own environments. As their digital estates become increasingly complex and critical to their business operations, it becomes even more important to effectively manage and monitor their applications and infrastructure.

Grafana is a popular open-source analytics visualization tool that allows users to bring together logs, traces, metrics, and other disparate data from across an organization, regardless of where they are stored. Last year, we announced our strategic partnership with Grafana Labs to develop a Microsoft Azure managed service that lets customers run Grafana natively within the Azure cloud platform. Today, we are announcing that Azure Managed Grafana is available in preview. With Azure Managed Grafana, the Grafana dashboards our customers are familiar with are now integrated seamlessly with the services and security of Azure.


Seamless connection across Azure data sources and beyond


The Grafana application lets users easily visualize all their telemetry data in a single user interface. With Grafana's extensible architecture, users can visualize and correlate multiple data sources across on-premises, Azure, and multicloud environments. Azure Managed Grafana particularly optimizes this experience for Azure-native data stores such as Azure Monitor and Azure Data Explorer, making it easy for customers to connect to any resource in their subscription and view all resulting telemetry in a familiar Grafana dashboard.

Customers can preserve the existing charts in the Azure portal that they use for monitoring. Through service-to-service integration, customers can bring any chart in the Azure portal over to their Azure Managed Grafana instance with a one-click “pin to” operation, automating the migration process.

Azure Managed Grafana also provides a rich set of built-in dashboards for various Azure Monitor features to help customers easily build new visualizations. For example, built-in dashboards are available for Azure Monitor Application Insights, container insights, virtual machine insights, and alerts.


Secured access and sharing of Grafana dashboards with Azure Active Directory


In Azure Managed Grafana, customers can customize user permissions with specific roles and assignments stored in Azure Active Directory. These definitions are mapped transparently to Grafana’s internal roles, which enforce the actual access control. This integration enables both simplicity and consistency by allowing customers to manage users in their teams and authorize their use of a Grafana instance centrally through Azure Active Directory.

On the backend, Azure Managed Grafana can be configured to access Azure Monitor through a managed identity that was set up as part of the Grafana instance creation. Using this option, customers do not need to deal with another credential separately—though that is still possible if preferred.
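Once a Grafana instance is running, its dashboards can also be managed programmatically through Grafana's standard HTTP API (for example, `GET /api/dashboards/uid/{uid}`). The sketch below shows how such a request could be built against a Managed Grafana endpoint; the instance URL and the bearer token are placeholders, and in practice the token would come from an Azure AD credential flow rather than a hard-coded string.

```python
# Hypothetical sketch: building an authenticated request for Grafana's
# HTTP API against an Azure Managed Grafana instance. The instance URL
# and token are illustrative placeholders; in practice, obtain the token
# from Azure AD (e.g., via a managed identity credential).
from urllib.request import Request

def dashboard_request(grafana_url: str, dashboard_uid: str, token: str) -> Request:
    """Build an authenticated request for GET /api/dashboards/uid/{uid}."""
    url = f"{grafana_url.rstrip('/')}/api/dashboards/uid/{dashboard_uid}"
    return Request(url, headers={"Authorization": f"Bearer {token}"})

req = dashboard_request("https://myteam.grafana.azure.com/", "abc123", "<aad-token>")
print(req.full_url)  # https://myteam.grafana.azure.com/api/dashboards/uid/abc123
```

The same pattern extends to other Grafana API endpoints, such as creating dashboards or managing data sources, with the Azure AD token standing in for a Grafana API key.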

Source: microsoft.com

Sunday, 17 April 2022

Accelerate silicon design innovation on Azure with Synopsys Cloud


Semiconductor and silicon technology are the basis of digital transformation happening everywhere, across industries and our daily lives, impacting the way we work, learn, and play. The continuous improvement in the performance and power of silicon has been key to enabling this innovation. Here at Microsoft, we’ve empowered our long-standing partners in the semiconductor industry to embrace Azure’s cloud infrastructure and scale out electronic design automation (EDA). With a new EDA-optimized cloud environment running on Azure, the launch of Synopsys Cloud marks a significant milestone for the industry by offering silicon design teams the ability to scale and accelerate their development cycles—transforming chip design the way that the cloud transformed computing.

Increasing flexibility and efficiency in silicon development on Azure

The rise in time-to-market pressure caused by the global chip shortage, together with increasing computational demands, has pushed chipmakers to seek more flexibility and efficiency in the silicon design process. Migrating chip design to Azure’s optimized infrastructure helps address part of this equation by enabling critical design and verification workloads on the cloud—resulting in faster time-to-results and better quality at a lower cost. With Synopsys Cloud built on Azure, chip designers will now also have access to a new pay-per-use model offering automated provisioning of infrastructure and EDA tools to address the growing demands of silicon design.

This “pay-as-you-go” model is a software as a service (SaaS)-based approach that will reduce barriers for companies of all sizes while enabling greater innovation and value for customers and EDA vendors alike. Using the power of Azure’s workload scaling and virtual machine (VM) selection capabilities, Synopsys Cloud customers will be able to optimize critical EDA workloads—from reducing processing time on verification tasks to saving runtime and enabling faster design convergence on library characterization.

Expanding access to chip design on the cloud

Microsoft has long been committed to helping companies of all sizes unlock more potential on the cloud. With its powerful chip design and verification tools running on Azure’s trusted and comprehensive cloud platform, Synopsys is Microsoft’s preferred partner for EDA on the cloud. Using Synopsys’ solution, customers ranging from startups to large design enterprises benefit from simplified access to custom infrastructure for all their chip design needs—helping them build silicon and tackle designs they previously could not.

Innovation for wide-ranging impact

From intelligent scaling of EDA resources to using AI and machine learning models to transform design and resource management, silicon manufacturing has already seen vast improvements with the introduction of the cloud. The shift towards cloud-centric silicon design has enabled newfound access to compute, storage, and tooling resources. Ultimately, improved time-to-results, quality-of-results, and cost-of-results are just the beginning of what cloud-enabled EDA enhancements can offer. As design on the cloud becomes increasingly widespread, I look forward to seeing the silicon industry continuing to innovate towards new levels of ingenuity—powered by the Microsoft Cloud.

Source: azure.microsoft.com

Saturday, 16 April 2022

Feathr: LinkedIn’s feature store is now available on Azure

Feature store motivation

With the advance of AI and machine learning, companies have started to use complex machine learning pipelines in applications such as recommendation systems, fraud detection, and more. These complex systems usually require hundreds to thousands of features to support time-sensitive business applications, and the feature pipelines are maintained by different team members across various business groups.

In these machine learning systems, we see several problems that consume a great deal of effort from machine learning engineers and data scientists—in particular, duplicated feature engineering, online-offline skew, and serving features with low latency.

Figure 1: Illustration on problems that feature store solves.

Duplicated feature engineering

◉ In an organization, thousands of features are buried in different scripts and in different formats; they are not captured, organized, or preserved, and thus cannot be reused and leveraged by teams other than those who generated them.

◉ Because feature engineering is so important for machine learning models and features cannot be shared, data scientists must duplicate their feature engineering efforts across teams.

Online-offline skew

◉ For features, offline training and online inference usually require different data serving pipelines—ensuring consistent features across different environments is expensive.

◉ Teams are deterred from using real-time data for inference due to the difficulty of serving the right data.

◉ Providing a convenient way to ensure point-in-time correctness of data is key to avoiding label leakage.
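To make the point-in-time requirement concrete, here is a minimal, standard-library-only sketch (an illustration of the concept, not Feathr's implementation): each training example may only use the latest feature value observed at or before its own timestamp, because anything later leaks future information into training.

```python
# Minimal illustration of point-in-time correctness: a training example
# may only see feature values observed at or before its own timestamp,
# which prevents label leakage. Not Feathr's actual implementation.
from bisect import bisect_right

def point_in_time_lookup(feature_history, label_ts):
    """feature_history: list of (timestamp, value) sorted by timestamp.
    Returns the latest value observed at or before label_ts, else None."""
    timestamps = [ts for ts, _ in feature_history]
    i = bisect_right(timestamps, label_ts)
    return feature_history[i - 1][1] if i else None

# A user's purchase-count feature, snapshotted on days 1, 2, and 4:
history = [(1, 3), (2, 5), (4, 9)]        # (day, feature value)
print(point_in_time_lookup(history, 3))   # 5 -- the day-4 value (9) is future data
print(point_in_time_lookup(history, 0))   # None -- no feature observed yet
```

A naive join that simply takes the most recent value overall would return 9 for a day-3 label, silently training the model on information it will never have at inference time.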

Serving features with low latency

◉ For real-time applications, looking up features from a database for real-time inference with high throughput and without compromising response latency can be challenging.

◉ Easily accessing features with very low latency is key in many machine learning scenarios, and optimizations need to be made, such as combining multiple REST API calls for features.

To solve these problems, the concept of a feature store was developed, so that:

◉ Features are centralized in an organization and can be reused

◉ Features can be served consistently between offline and online environments

◉ Features can be served in real-time with low latency
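These three goals can be sketched with a toy in-memory feature store (purely illustrative; a real system like Feathr backs this with Spark pipelines and a store such as Redis): a single registry centralizes feature definitions, and the same definition drives both batch materialization and low-latency online lookup.

```python
# Toy feature store sketch (illustrative only): one central registry of
# feature definitions, applied once during batch materialization and then
# served online, so offline and online paths see the same values.
class ToyFeatureStore:
    def __init__(self):
        self.registry = {}        # feature name -> transformation function
        self.online_store = {}    # (feature name, entity key) -> value

    def register(self, name, transform):
        """Centralize the feature definition so any team can reuse it."""
        self.registry[name] = transform

    def materialize(self, name, raw_rows):
        """Offline path: batch-compute the feature and push it online."""
        rows = {key: self.registry[name](raw) for key, raw in raw_rows.items()}
        for key, value in rows.items():
            self.online_store[(name, key)] = value
        return rows

    def get_online(self, name, key):
        """Online path: low-latency lookup of a precomputed value."""
        return self.online_store[(name, key)]

store = ToyFeatureStore()
store.register("total_spend", lambda purchases: sum(purchases))
store.materialize("total_spend", {"user_1": [10, 25], "user_2": [5]})
print(store.get_online("total_spend", "user_1"))  # 35
```

Because the transformation lives in one registry, a second team reusing `total_spend` cannot accidentally reimplement it differently, which is exactly the duplication and skew problem described above.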

Introducing Feathr, a battle-tested feature store


Developing a feature store from scratch takes time, and it takes much more time to make it stable, scalable, and user-friendly. Feathr is a feature store that has been used in production and battle-tested at LinkedIn for over 6 years, powering LinkedIn's machine learning feature platform with thousands of features in production.

At Microsoft, the LinkedIn team and the Azure team have worked closely to open source Feathr, make it extensible, and build native integration with Azure. It’s available on GitHub, and you can read more about Feathr on the LinkedIn Engineering Blog.

Some of the highlights for Feathr include:

◉ Scalable with built-in optimizations. For example, based on internal use cases, Feathr can process billions of rows and petabyte-scale data with built-in optimizations such as bloom filters and salted joins.

◉ Rich support for point-in-time joins and aggregations: Feathr has high-performance built-in operators designed for feature stores, including time-based aggregations, sliding window joins, and look-up features, all with point-in-time correctness.

◉ Highly customizable user-defined functions (UDFs) with native PySpark and Spark SQL support to lower the learning curve for data scientists.

◉ Pythonic APIs to access everything with a low learning curve, integrated with model building so data scientists can be productive from day one.

◉ Rich type system including support for embeddings for advanced machine learning/deep learning scenarios. One of the common use cases is to build embeddings for customer profiles, and those embeddings can be reused across an organization in all the machine learning applications.

◉ Native cloud integration with simplified and scalable architecture, which is illustrated in the next section.

◉ Feature sharing and reuse made easy: Feathr has a built-in feature registry so that features can be easily shared across different teams, boosting team productivity.

Feathr on Azure architecture


The high-level architecture diagram below illustrates how a user interacts with Feathr on Azure:

Figure 2: Feathr on Azure architecture.

1. A data or machine learning engineer creates features using their preferred tools (like pandas, Azure Machine Learning, Azure Databricks, and more). These features are ingested into offline stores, which can be either:

◉ Azure SQL Database (including serverless), Azure Synapse Dedicated SQL Pool (formerly SQL DW).

◉ Object storage, such as Azure Blob storage, Azure Data Lake Store, and more. The format can be Parquet, Avro, or Delta Lake.

2. The data or machine learning engineer can persist the feature definitions into a central registry, which is built with Azure Purview.

3. The data or machine learning engineer can join all the feature datasets in a point-in-time correct way with the Feathr Python SDK, using Spark engines such as Azure Synapse or Databricks.

4. The data or machine learning engineer can materialize features into an online store such as Azure Cache for Redis with Active-Active, enabling a multi-primary, multi-write architecture that ensures eventual consistency between clusters.

5. Data scientists or machine learning engineers consume offline features with their favorite machine learning libraries (for example, scikit-learn, PyTorch, or TensorFlow), train a model in a machine learning platform such as Azure Machine Learning, and then deploy the model in their preferred environment with services such as Azure Machine Learning endpoints.

6. The backend system makes a request to the deployed model, which makes a request to the Azure Cache for Redis to get the online features with Feathr Python SDK.
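Step 6 above can be sketched as follows. A dict-backed fake stands in for Azure Cache for Redis so the sketch runs locally, and the `"table:key"` hash layout shown is an illustrative assumption, not Feathr's exact storage schema.

```python
# Hedged sketch of step 6: the deployed model looks up online features by
# entity key. FakeRedis mimics the Redis hash commands used; the
# "table:entity_key" -> {feature: value} layout is an illustrative
# assumption, not Feathr's exact storage schema.
class FakeRedis:
    def __init__(self):
        self._hashes = {}

    def hset(self, key, mapping):
        self._hashes.setdefault(key, {}).update(mapping)

    def hmget(self, key, fields):
        h = self._hashes.get(key, {})
        return [h.get(f) for f in fields]

def get_online_features(store, table, entity_key, features):
    """Fetch the requested feature values for one entity in a single call."""
    return dict(zip(features, store.hmget(f"{table}:{entity_key}", features)))

redis = FakeRedis()
redis.hset("user_features:user_42", {"avg_fare_90d": 17.5, "trip_count_7d": 4})
print(get_online_features(redis, "user_features", "user_42",
                          ["avg_fare_90d", "trip_count_7d"]))
# {'avg_fare_90d': 17.5, 'trip_count_7d': 4}
```

Batching all requested features into one round trip, as `hmget` does here, is the kind of call-combining optimization the low-latency serving section above describes.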

A sample notebook covering the entire flow above is available in the Feathr repository for reference.

Feathr has native integration with Azure and other cloud services. The table below shows these integrations:

Feathr component: Cloud integrations

◉ Offline store – Object Store: Azure Blob Storage, Azure ADLS Gen2, AWS S3
◉ Offline store – SQL: Azure SQL DB, Azure Synapse Dedicated SQL Pools (formerly SQL DW), Azure SQL in VM, Snowflake
◉ Online store: Azure Cache for Redis
◉ Feature Registry: Azure Purview
◉ Compute Engine: Azure Synapse Spark Pools, Databricks
◉ Machine Learning Platform: Azure Machine Learning, Jupyter Notebook
◉ File Format: Parquet, ORC, Avro, Delta Lake
Table 1: Feathr on Azure Integration with Azure Services.

Installation and getting started


Feathr has a Pythonic interface to access all Feathr components, including feature definition and cloud interactions, and is open source. The Feathr Python client can be easily installed with pip:

pip install -U feathr

Source: microsoft.com

Thursday, 14 April 2022

Diversifying the telecommunications supply chain with Open RAN

Over the past few years, there has been an increasingly steady drumbeat around the need to diversify and open the telecommunications supply chain. This has been driven both by security concerns and by the need to improve operators' negotiating power by introducing new entrants into the market. A key part of this supply chain that can be diversified is the radio access network (RAN), where operators have typically spent most of their network infrastructure investment.

To address the need for diversification, groups such as the O-RAN Alliance have formed to open up RAN capabilities. In addition, select operator communities across Europe, the Middle East, Asia, and Africa have begun experimenting in this space. Governments have also been weighing in, designating telecommunications networks as a national priority and a critical part of infrastructure that needs to be secured and nurtured to drive innovation. An example of this was the UK Government’s 5G diversification strategy—a plan to grow the telecommunications supply chain while simultaneously making it more resilient to future trends and threats.

Microsoft has successfully transformed into an edge and cloud company, so we understand the magnitude of such an evolution. At Microsoft, our guiding principle is to support, develop, and foster a partner-rich ecosystem. We believe the role we play best as a cloud provider is to offer a secure, scalable, well-managed, carrier-grade platform that serves as the enabler for third parties to build upon.

Future Radio Access Network Challenge (FRANC)

As it turns out, the UK government's Department for Digital, Culture, Media and Sport (DCMS) was thinking along the same lines. The Future Radio Access Network Challenge (FRANC) was designed as a follow-on to its diversification strategy. It identified the need to accelerate Open RAN innovation to meet the target of carrying 35 percent of all network traffic over Open RAN by 2030, as well as to spark UK-based innovation in this space.

This initiative aligns well with our ambitions—to grow and diversify the supply chain as well as support a healthy and vibrant Open RAN ecosystem. We reached out to Intel and Capgemini, industry leaders in Open RAN, and the University of Edinburgh, a leading academic institution, to join us in demonstrating how beautifully our ideas could fit together to achieve our mutual objectives.

DCMS has endorsed this approach, with the Microsoft-led consortium being one of the award recipients of their challenge. At the Mobile World Congress (MWC) 2022, we pulled back the curtain a bit more to explain what we will be doing jointly as a group, and how our combined efforts will help accelerate the Open RAN ecosystem.

Technology showcased at the Mobile World Congress 2022

At MWC 2022, in close collaboration with our partners, we showed how disaggregated software and hardware are the future of telecommunications networks. This new software-driven, programmable network architecture leads to faster rollouts with lower total cost of ownership. Cloud technologies—AI and machine learning analytics, edge computing, large-scale management, self-diagnostics, network programmability, network verification, and global connectivity—can be leveraged to improve the security and operational efficiency of the virtualized RAN. This infrastructure also supports the creation of new revenue streams through the enablement of a developer ecosystem.

Additionally, we announced the next wave of Azure for Operators solutions and services, which includes Azure Operator Distributed Services (AODS). AODS combines the enhanced version of AT&T’s Network Cloud software we acquired with the best of Azure, including our industry-leading security, monitoring, analytics, AI, machine learning, and so much more. Capable of handling network-intensive workloads and mission-critical applications, AODS is a carrier-grade platform that provides flexibility and scalability to support deployments at the edge of the cloud, the edge of the network, or the enterprise edge. We’ve been focused on ensuring that this edge infrastructure (both near and far edge) is capable of supporting RAN workloads.

We demonstrated a system that used our AODS solution, which provides a hybrid cloud platform for telecommunications network functions. The architectural components included commercial off-the-shelf hardware with Intel’s silicon and Capgemini’s Open RAN network functions. Specifically, Intel® Xeon® Scalable processors, PTP-enabled network interface cards (NICs), and the Intel vRAN Accelerator ACC100 Adapter, all leveraged by the FlexRAN™ layer 1 software. Capgemini provided the vCU and vDU Open RAN network functions, and Microsoft provided the cloud-managed platform through AODS.

Figure 1: Hardware setup of the live demonstration of carrier-grade cloud-managed Open RAN platform at MWC 2022. 

The setup included four commercial off-the-shelf servers connected to a top-of-rack (ToR) and management switch. A radio unit (RU), capable of 4x4 multiple input, multiple output (MIMO) over a 100 MHz channel, was connected to the ToR using the 7.2x front-haul interface. A grandmaster clock was also connected to the ToR switch, providing PTP synchronization to the RU and the servers. A 5G smartphone was used to connect to the network. We showed how, with AODS, we can connect all of these together, deploy a virtualized RAN from the cloud with a few mouse clicks, and manage it remotely. Such a seamless deployment process reduces the integration effort required of vendors, allowing them to focus on innovation instead.

Looking ahead


Along with our partners, Microsoft is bringing to life carrier-grade edge-cloud solutions that empower operators globally to deploy Open RAN network functions easily and securely. Our tools and services can manage RAN deployments at scale. With Azure machine learning and AI, core components of our technologies, operators can perform analytics that optimize performance, improve management, and proactively detect and solve problems.

Security principles designed for the cloud are being adopted to make the platform resilient, to prevent, detect, and respond to threats in the network and across the firmware and telecommunications supply chains. Edge and network monitoring and programmability via open APIs will enable a new generation of 5G applications while simultaneously improving operational efficiency. Operators can increase revenues and reduce infrastructure costs while building future-proof solutions.

Source: microsoft.com