Friday 30 July 2021

Advancing your financial services strategy with Azure sustainability

Many CEOs and senior business leaders have used the COVID-19 and economic crisis as an opportunity to focus on redesigning their business. Like others, they’ve felt compelled to re-examine their business and operational models, driven by the internal necessity for digital transformation, as well as external consumer and regulatory pressures to advance sustainability efforts. Across most, if not all, industries we’ve also seen the COVID-19 pandemic accelerate the pace of sustainability efforts, with executive spend expected to increase over the next few years in this direction (see figure 1).

The pandemic has also elevated the priority of social issues among executives, with consumers and employees alike placing greater weight on corporate transparency in shareholder disclosures and on the environmental impact of global operations. Within the financial services industry in particular, sustainability and climate risk can play a significant role in company valuation, as investors and other stakeholders assess the physical and transitional dimensions of climate risk in a firm’s financial assets and investments.

Given the growing importance of sustainability, this article will address how sustainability strategies can be utilized as a catalyst for business growth, particularly in the financial services industry, and how Microsoft Azure is uniquely positioned to help our customers achieve these goals.

Figure 1: Per the Gartner CEO and Senior Executive study, more than 90 percent of executives reported an increase in sustainability spend.

Building blocks and benefits of a sustainability strategy


The first step towards building an effective sustainability strategy is to define it for your business (see figure 2). Establish a shared understanding of what sustainability looks like across your organization, and how each department contributes to the overarching goal.

The next stages involve analyzing real-time data to inform your strategy; data options can include historical energy consumption, real-time IoT signals on space occupancy, carbon emissions, and waste creation. Combining traditionally massive and disparate sources of data for insights requires having a modern data platform, one that can use the power of high-performance computing and advanced analytics to quickly process and derive insights from massive data sets.

The Microsoft Azure cloud platform can help here as it provides a broad spectrum of infrastructure, application, data management, and analytics tools to build a modern data estate. Microsoft’s sustainability solutions include technologies like Azure Machine Learning to assess risk and uncover climate trends, and Azure IoT Hub, which provides real-time connected device insights that lead to better-informed decision making. Being able to reason over data allows firms to both identify and monitor opportunities that introduce more sustainable practices, and discover additional revenue drivers as they scale.
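As a concrete illustration of the kind of processing involved, the sketch below rolls raw telemetry readings up into per-site totals. The field names and figures are hypothetical, not a real Azure IoT Hub schema:

```python
from collections import defaultdict

# Hypothetical telemetry records, e.g. as they might arrive from a
# service such as Azure IoT Hub (field names are illustrative only).
readings = [
    {"site": "london", "metric": "energy_kwh", "value": 120.0},
    {"site": "london", "metric": "co2_kg", "value": 54.3},
    {"site": "sydney", "metric": "co2_kg", "value": 31.1},
    {"site": "sydney", "metric": "energy_kwh", "value": 80.0},
]

def aggregate(readings):
    """Roll disparate telemetry up into per-(site, metric) totals."""
    totals = defaultdict(float)
    for r in readings:
        totals[(r["site"], r["metric"])] += r["value"]
    return dict(totals)

totals = aggregate(readings)
print(totals[("london", "co2_kg")])  # 54.3
```

In practice the interesting work happens downstream (trend analysis, risk models), but every pipeline of this kind starts with a consolidation step like this one.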

Lucas Joppa, Microsoft’s Chief Environmental Officer, shared Microsoft’s sustainability report outlining the steps we’re taking toward sustainability in our own businesses: working toward being carbon negative by 2030 and removing our historical emissions by 2050. The report describes how Microsoft has worked across suppliers, customers, partners, and governments to progress toward that goal. Another major benefit is compliance consistency. As regulatory standards evolve, we’re intentional in helping our customers meet and prepare for them. Our technology portfolio meets a comprehensive set of compliance certifications that align with more than 50 national, regional, and industry-specific standards, making Microsoft an optimal partner in sustainability.

Figure 2: Digital solutions are among the top options companies use to address sustainability.

In addition to establishing compliance and consumer credibility, it’s also important to note the additional benefits that enterprises gain with a sustainable strategy: new partnerships, innovative products, and better business outcomes.

New partnerships

Executing a sustainability strategy requires collaboration with an ecosystem of partners, which can itself strengthen your business. Institutional and individual investors are prioritizing Environmental, Social, and Governance (ESG) visibility, examining disclosures to understand how ESG progress is being made, where companies stand, and where they are headed. A credible sustainability plan moves partnership conversations past table stakes and into real value.

Microsoft, as a partner, can help monitor non-traditional data points like social media, open data sets, and local news sources to help firms quickly react to changing market conditions or potential risks to financial assets. This can help inform discovery and future innovation.

An example of business transformation through partnership is Flowe, a startup founded by Banca Mediolanum, one of the largest Italian banks. Flowe set out to help banking customers add sustainability, and purpose, to their lives, and won its desired millennial audience through a differentiated, competitive value proposition. Working with Microsoft Consulting Services, Flowe built a banking app, powered by Azure AI, that went beyond transactions and integrated social matters like sustainability, nutrition, and exercise, engaging a community of banking customers who in turn shared their own sustainability experiences. Building on Azure (carbon neutral since 2012) also enabled Flowe to become carbon neutral-certified quickly, providing transparency into the carbon footprint of the company’s resource usage.

Innovative products

Consumers and retail investors are now purchasing more with their conscience than ever before, generating an opportunity to create new evidence-based ESG products aligned to your company’s value proposition. Producing ESG solutions can help differentiate firms and drive competitive advantage as ESG initiatives are beginning to drive a new group of financial products like green and social bonds.

Because ESG is centered around research and data, insights may be uncovered through internal reporting and ESG ratings that could help inform new product development. Post-production, data insights can assist firms in reacting to market conditions and client requests with agility.

Capital markets firms are uniquely positioned to influence and measure progress in this space with new products, while other financial services providers have incorporated ESG into responsible investment solutions for asset management clients. To help, Azure services empower firms to build evidence-based ESG financial products centered on data reporting, quality insights, and trust.

Better business outcomes

Executives at transformational enterprises rank resource efficiency, innovation, and reduced costs as top benefits achieved in their organization’s sustainability program. Some of this comes by turning complex datasets across multiple sources into enhanced risk management models, reporting, and insights, which help develop new capabilities. 

Microsoft Azure solutions like Azure Cognitive Services and AI can help firms understand the full scope of materiality and find the needle in a haystack that could impact alpha generation and business risk.

The data and compute solutions from Microsoft and Microsoft partners help assess climate risk by layering non-traditional risk factors on top of traditional ones and uncovering materiality with potential fiscal impact. With this capability, firms can run models faster and more often, enhancing what-if analyses and their ability to manage emerging risks.

For a comprehensive solution, the upcoming Microsoft Cloud for Sustainability offering will deliver a set of integrated and automated sustainability data insights and tools that can help organizations record, report, and reduce their emissions and accelerate their sustainability goals.

Ultimately, better business outcomes are possible with a sustainability plan; see Microsoft CVP Bill Borden’s message on Microsoft Cloud for Financial Services and Microsoft’s commitment to helping financial services institutions achieve sustainable growth.

Begin your journey today


Gartner refers to sustainability as a “mission-critical priority,” and with good reason. Given the growing risk that climate events pose to financial assets and global supply chains, and the weight of consumer social sentiment, failing to act on sustainability increases the risk of both business disruption and customer churn. Rather than overlooking or minimizing this effort, consider how adopting a sustainability strategy can accelerate your business growth and generate new opportunities. With investment, the gains can include improved stakeholder engagement, enterprise innovation, and a competitive advantage.

Source: microsoft.com

Thursday 29 July 2021

Boost your client performance with Azure Files SMB Multichannel

Lower your deployment cost, while improving client performance with Server Message Block (SMB) Multichannel on premium tier.

Today, we are announcing the preview of Azure Files SMB Multichannel on the premium tier. SMB 3.0 introduced the SMB Multichannel technology in Windows Server 2012 and the Windows 8 client. This feature allows SMB 3.x clients to establish multiple network connections to SMB 3.x servers for greater performance, over multiple network adapters or over a single network adapter with Receive Side Scaling (RSS) enabled. With this preview release, Azure Files SMB clients can now take advantage of SMB Multichannel technology with premium file shares in the cloud.

Benefits

SMB Multichannel establishes multiple connections over the optimal network paths, increasing performance through parallel processing. The gains come from bandwidth aggregation over multiple NICs, or, with NIC support for Receive Side Scaling (RSS), from distributing IOs across multiple CPUs with dynamic load balancing.


Benefits of Azure Files SMB Multichannel include:


◉ Higher throughput: Suits applications with large files and large IOs, such as media and entertainment for content creation and transcoding, genomics, and financial services risk analysis.

◉ Increased IOPS: This is particularly useful for small IO scenarios such as database applications.

◉ Network fault tolerance: Multiple connections allow the client to tolerate the loss of a network connection without disruption.

◉ Automatic configuration: Dynamic discovery and creation of multiple network paths.

◉ Cost optimization: Achieve higher scale from a single virtual machine (VM) client by utilizing full VM limits. To reach Azure Files premium bandwidth and IOPS scale, applications now require fewer VM clients to achieve the required scale.
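The cost-optimization point above is simple arithmetic: the number of VM clients you need is the target scale divided by what one client can drive, so raising per-client throughput directly shrinks the fleet. The figures below are purely illustrative, not measured Azure Files numbers:

```python
import math

def clients_needed(target_iops, iops_per_client):
    # Number of VM clients required to reach a target IOPS level.
    return math.ceil(target_iops / iops_per_client)

# Illustrative numbers only: if multichannel triples per-client IOPS,
# the client count needed for the same target drops accordingly.
print(clients_needed(100_000, 20_000))  # 5 clients without multichannel
print(clients_needed(100_000, 60_000))  # 2 clients with multichannel
```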

Below is a sample test result demonstrating performance improvements of up to three times with the SMB Multichannel feature enabled.


The above sample test results are based on internal testing performed by generating random IO load with a single virtual machine client (Standard_D32s_v3 with a single RSS-enabled NIC) connected to a premium file share. The Diskspd tool was used to generate load against ten files at a queue depth of 64, simulating multi-threaded workload patterns against multiple files in a share. The IOPS results are based on 4 KiB read and write IO sizes, while the throughput test uses 512 KiB read and write IO sizes to optimize for performance. A larger percentage gain was observed in write throughput than in read throughput.
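The relationship between the two test configurations is just arithmetic: throughput equals IOPS times IO size, which is why small IOs are used to measure IOPS and large IOs to measure bandwidth. A quick sketch with made-up numbers:

```python
def throughput_mib_per_s(iops, io_size_kib):
    # Throughput follows directly from IOPS times IO size.
    return iops * io_size_kib / 1024

# Illustrative only: small 4 KiB IOs stress the IO path (IOPS-bound),
# while large 512 KiB IOs saturate bandwidth (throughput-bound).
print(throughput_mib_per_s(20_000, 4))   # 78.125 MiB/s
print(throughput_mib_per_s(2_000, 512))  # 1000.0 MiB/s
```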

Pricing and availability


SMB Multichannel for Azure Files premium storage accounts comes at no additional cost. In addition, as part of our continued commitment to providing the most cost-effective file storage for your workloads, we recently announced a price reduction of more than 33 percent on the Azure Files premium tier.

Currently, SMB Multichannel preview on premium shares is available in limited regions for Windows SMB 3.x clients. We are quickly expanding the coverage to all Azure regions with premium tier. Stay up to date on region availability for SMB Multichannel by visiting the Azure Files documentation.

Getting started


Learn more about the feature's capabilities and SMB Multichannel performance in the Azure Files documentation. To get started, register your subscription for the SMB Multichannel feature preview. Once registration is complete, you can enable or disable SMB Multichannel on premium storage accounts (FileStorage) in one of the supported regions with the click of a button.


Source: microsoft.com

Tuesday 27 July 2021

Innovate from cloud to edge on your terms with Azure


The challenges of the past year revealed that serving and making a difference for each other, our communities, and the world around us is more critical than ever. In order to persevere and drive business success, organizations must be future-ready, build on their terms, operate hybrid seamlessly, and do all of this with an uncompromising foundation of trust. Microsoft Azure is committed to helping every organization accomplish just that, as outlined by our announcements at this week’s Microsoft Inspire.

Expand opportunities with new Azure infrastructure capabilities

Over 95 percent of Fortune 500 companies are transforming their businesses using Azure, relying on an enterprise-grade infrastructure and deep integration with the rest of the Microsoft Cloud. Azure is designed to power all workloads—providing the best Windows 10 (and now Windows 11) virtual experience for Virtual Desktop Infrastructure (VDI), offering the lowest total cost of ownership for Windows Server and SQL workloads, growing Linux workloads, and open source databases (over half of all Azure VMs are running Linux), and supporting specialized workloads such as SAP, VMware, high-performance computing (HPC), and more.

For Windows Server and SQL Server workloads, customers can take advantage of Azure Hybrid Benefit and—announced today—three years of free Extended Security Updates for Windows Server 2012/R2 and SQL Server 2012 running on Azure.

For all these workloads, cloud migration can be a huge project. We are committed to simplifying the journey, ensuring customers receive tailored support. We’ve expanded our Azure Migration and Modernization Program offering new scenarios that support the modernization of any app, regardless of framework or supporting database, into a broad array of managed services including Azure App Service, Azure Kubernetes Service, and—starting today—Azure Spring Cloud and Azure Cosmos DB. 

Bring innovation anywhere with Azure’s hybrid and multicloud approach

Taking a hybrid and multicloud approach resonates with how our customers view their businesses. Azure Arc is key to our approach to helping customers operate hybrid seamlessly. Enabled by Azure Arc, customers can also bring Azure services to any existing infrastructure including other clouds to take advantage of Azure’s benefits anywhere. This includes running managed databases like Azure SQL and Azure PostgreSQL on the infrastructure of your choice, as well as extending machine learning to the edge in order to run models close to where the data lives.

Azure Arc also integrates with Azure Lighthouse so partners can manage multiple customers’ resources across the edge, datacenters, and multicloud environments. With the general availability of Azure SQL enabled by Azure Arc on July 30, 2021, managed service providers using Azure Lighthouse will be able to manage cloud data services (alongside Windows, Linux, and Kubernetes) on any on-premises infrastructure or in any cloud environment a customer needs without having to change existing tools or processes.

Azure Stack HCI helps customers modernize datacenters with a hybrid solution of hyperconverged infrastructure tightly integrated with Azure. New for Azure Stack HCI, it is now possible to simplify guest licensing activation with Automatic Virtual Machine Activation (AVMA), and with a secured-core server, customers benefit from defense-in-depth protection across multiple layers of hardware, firmware, and operating system. We are also happy to announce a new partner program to validate ISV software and SI services for the Azure Stack family, with promotional activities for partner solutions. To learn more, contact the Azure Stack Partner Program. In addition, new advanced specializations for Azure Stack HCI and Azure Arc give eligible partners access to advanced training and certification.

Innovate with Azure data

Our commitment to developers is to make Azure the best cloud for developing intelligent cloud-native applications that harness the power of data and AI.

We recently announced several updates to Azure Cosmos DB that enhance its performance even further and help empower developers and ISVs to innovate with Azure Cosmos DB. Azure Cosmos DB serverless charges only for resources consumed and is great for applications with spiky traffic patterns. Azure Cosmos DB integrated cache reduces costs by up to 96 percent and boosts performance by up to 300 percent for read-heavy workloads. Always Encrypted for Azure Cosmos DB enables client-side encryption of sensitive data prior to being stored in a database. Azure Cosmos DB role-based access control (RBAC) enables admins to create Azure Active Directory roles with specific database access permissions.

We’ve also enhanced the developer experience on Azure Cosmos DB by introducing a significant expansion to Azure Cosmos DB’s free tier. Application developers will now have more flexibility to innovate at no cost with 1,000 RU/s provisioned throughput and 25 GB storage free monthly.

With these enhancements to Azure Cosmos DB, it’s easier and more cost-effective than ever to build new cloud-native applications on Azure. And when developers pair the globally available serverless Azure Cosmos DB with the highly elastic serverless Azure Functions, they get a unique combination that solves complex orchestration problems and enables mission-critical, event-driven modern applications.

Go limitless with Azure Analytics

Capturing the operational data in a cloud-native application with Azure Cosmos DB is just the beginning. Customers gain near-real-time insights on that data with a single click using Azure Synapse Link for Azure Cosmos DB. This low-cost, cloud-native HTAP implementation enables organizations to bring in-the-moment analytics to their business, with no ETL required and no performance impact on their operational system.

In order to meet our commitment to make analytics in Azure the most performant and secure experience it can be, we have continued to innovate with Azure Synapse Analytics. We recently announced several new capabilities and offers to help accelerate adoption, including Azure Synapse Pathway, which significantly accelerates data warehouse migrations by automating SQL code conversion to Azure Synapse. New Azure Synapse Business Value Assessments help partners drive value conversations with customers considering modernizing their analytics solution. The Azure Synapse migration offer boosts cost savings for customers migrating on-premises data warehouses to Azure Synapse, on top of the existing reserved capacity discounts. And the Azure Synapse limited-time free quantities offer enables customers with qualifying subscription types to take advantage of serverless capabilities and enjoy up to 10 TB of free queries per month, as well as the optimized Apache Spark capabilities for big data workloads with up to 120 free vCore-hours per month.

Discover and govern better than ever with comprehensive data governance

With the exponential growth in the volume, type, source, and interconnectedness of data, the need has never been greater to discover and govern that data across the organization. Azure Purview is a unified data management and governance solution that helps organizations achieve a complete understanding of their data. It helps discover, classify, protect, and govern data wherever it resides: on-premises, across clouds (including support for Amazon Web Services S3), in SaaS applications, and in Microsoft Power BI. Users can easily create a holistic, up-to-date map of their data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. With Azure Purview, data consumers are empowered to maximize the value and trustworthiness of their organization’s data across the data estate.

Source: microsoft.com

Saturday 24 July 2021

Advancing global network reliability through intelligent software - Part 2

Read Advancing global network reliability through intelligent software part 1.

In part one of this networking post, we presented the key design principles of our global network, explored how we emulate changes, our zero touch operations and change automation, and capacity planning. In part two, we start with traffic management. For a large-scale network like ours, it would not be efficient to use traditional hardware managed traffic routing. Instead, we have developed several software-based solutions to intelligently manage traffic engineering in our global network.

SDN-based Internet Traffic Engineering (ITE)

The edge is the most dynamic part of the global network, because the edge is how users connect to Microsoft’s services. We have strategically deployed edge PoPs close to users to reduce latency and extend the reach of Microsoft cloud services.


For example, if a user in Sydney, Australia, accesses Azure resources hosted in Chicago, USA, their traffic enters the Microsoft network at an edge PoP in Sydney and then travels on our network to the service hosted in Chicago. The return traffic from Azure in Chicago flows back to Sydney on the Microsoft network. By accepting and delivering traffic at the point closest to the user, we can better control performance.


Each edge PoP is connected to tens or hundreds of peering networks. Routes between our network and providers’ networks are exchanged using the Border Gateway Protocol (BGP). BGP best path selection has no inherent concept of congestion or performance, nor is BGP capacity aware. So, we developed an SDN-based Internet Traffic Engineering (ITE) system that steers traffic at the edge. The entry and exit points are dynamically altered based on the traffic load of the edge, internet partners’ capacity constraints, reductions or additions in capacity, demand spikes (sometimes caused by distributed denial-of-service attacks), and the latency performance of our internet partners. The ITE controller constantly monitors these signals and alters the routes we advertise to our internet partners and/or the routes advertised inside the Microsoft network, to select the best peer edge.
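The kind of decision the ITE controller makes can be sketched as choosing an egress that plain BGP could not: one that accounts for both latency and remaining capacity. This is a simplified illustration, not the actual ITE algorithm:

```python
def pick_egress(peers, demand_gbps):
    """Pick the lowest-latency peer that still has headroom for the
    demand; BGP best-path selection alone would ignore both signals."""
    candidates = [p for p in peers
                  if p["capacity_gbps"] - p["load_gbps"] >= demand_gbps]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p["latency_ms"])["name"]

# Hypothetical peers: peerA has lower latency but is nearly full.
peers = [
    {"name": "peerA", "latency_ms": 5,  "capacity_gbps": 100, "load_gbps": 98},
    {"name": "peerB", "latency_ms": 12, "capacity_gbps": 100, "load_gbps": 40},
]
print(pick_egress(peers, 10))  # peerB: lowest-latency peer with headroom
```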

Optimizing last mile resilience with Azure Peering Service


In addition to optimizing routes within our global network, the Azure Peering Service extends the optimized connectivity to the last mile in the networks of Internet Service Providers (ISPs). Azure Peering Service is a collaboration platform with providers, to enable reliable high-performing connectivity from the users to the Microsoft network. The partnership ensures local and geo redundancy, and proximity to the end users. Each peering location is provisioned with redundant and diverse peering links. Also, providers interconnect at multiple Microsoft PoP locations so that if one of the edge nodes has degraded performance, the traffic routes to and from Microsoft via alternative sites. Internet performance telemetries from Map of Internet (MOI) drive traffic steering for optimized last mile performance.

Route Anomaly Detection and Remediation (RADAR)


The internet runs on BGP. A network, or autonomous system, is bound to trust, accept, and propagate the routes advertised by its peers without questioning their provenance. That is the strength of BGP: it allows the internet to update quickly and heal failures. But it is also its weakness: the path to prefixes owned by a network can be changed, by accident or malicious intent, to redirect, intercept, or blackhole traffic. Such incidents happen to every major provider, and some make front-page news. We developed a global Route Anomaly Detection and Remediation (RADAR) system to protect our global network.

RADAR detects and mitigates hijacks of Microsoft routes on the internet. A BGP route leak is the propagation of routing announcements beyond their intended scope; RADAR detects route leaks in both Azure and the internet. It can distinguish stable from unstable versions of a route and validate new announcements. Using RADAR and the ITE controller, we built real-time protection for Microsoft prefixes. The Peering Service platform extends route monitoring and protection against hijacks, leaks, and any other BGP misconfiguration (intended or not) into the last mile, up to the customer location.
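One building block of hijack detection is origin validation: checking that the autonomous system announcing a prefix is the one expected to own it. A minimal sketch, with illustrative prefix-to-AS data rather than RADAR's real feeds:

```python
# Expected origin autonomous systems per prefix (illustrative data;
# AS 8075 is used here simply as an example of a Microsoft AS).
EXPECTED_ORIGIN = {
    "20.0.0.0/8": {8075},
    "40.64.0.0/10": {8075},
}

def classify(announcement):
    """Flag a BGP announcement whose origin AS does not match the
    registered owner of the prefix (a possible hijack)."""
    expected = EXPECTED_ORIGIN.get(announcement["prefix"])
    if expected is None:
        return "unknown-prefix"
    return "ok" if announcement["origin_as"] in expected else "possible-hijack"

print(classify({"prefix": "20.0.0.0/8", "origin_as": 8075}))   # ok
print(classify({"prefix": "20.0.0.0/8", "origin_as": 64512}))  # possible-hijack
```

A production system like RADAR also reasons about route stability and propagation scope, which this origin check alone cannot capture.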

Software-driven Wide Area Network (SWAN)


The backbone of the global network is analogous to a highway system connecting major cities. The SWAN controller is effectively the navigation system that assigns the routes for each vehicle, such that every vehicle reaches its destination as soon as possible and without causing congestion on the highways. The system consists of topology discovery, demand prediction, path computation, optimization, and route programming.
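The path computation step can be illustrated as a least-latency search that skips links without spare capacity for the demand, a drastically simplified stand-in for SWAN's optimizer:

```python
import heapq

def route(graph, src, dst, demand):
    """Least-latency path using only links with spare capacity for the
    demand (simplified sketch of capacity-aware path computation).
    graph: {node: [(neighbor, latency, spare_capacity), ...]}"""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, lat, spare in graph.get(node, []):
            if spare >= demand and nxt not in seen:
                heapq.heappush(heap, (cost + lat, nxt, path + [nxt]))
    return None

# Hypothetical topology: the dal path is shorter on paper for some
# demands but lacks capacity for a 10-unit flow.
g = {
    "sea": [("chi", 40, 50), ("dal", 45, 5)],
    "chi": [("nyc", 20, 50)],
    "dal": [("nyc", 35, 5)],
}
print(route(g, "sea", "nyc", 10))  # (60, ['sea', 'chi', 'nyc'])
```

The real controller solves a global optimization across all demands at once, rather than routing one flow at a time.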

Azure Exam Prep, Azure Tutorial and Materials, Azure Preparation, Azure Learning, Azure Guides, Azure Career

Over the last 12 months, the speed at which the controller programs the network improved by an order of magnitude, and the route-finding capability improved twofold. Link failures are like lane closures, so the controller must recompute routes to relieve congestion. The controller uses shared risk link groups (SRLGs) to compute backup routes that avoid the failure risks of the primary routes. The backup routes activate immediately upon failure, and the controller gets to work reoptimizing traffic placement. Links that go up and down in rapid succession are held out of service until they stabilize.

One measure of reliability is the percentage of successfully transmitted bytes to requested bytes, measured over an hour and averaged for the day. Ours is 99.999 percent or better for customer workloads. All communication between Microsoft services travels over our dedicated global network. The ThousandEyes Cloud Performance Benchmark reports that over 99 percent of Azure inter-region latencies are faster than the performance baseline, and over 60 percent of region pairs are at least 10 percent faster. This is a result of the capacity augmentations and software systems described in this post.
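The reliability measure described above can be computed directly: take the ratio of delivered to requested bytes for each hour, then average the hourly ratios over the day. A small sketch with illustrative numbers:

```python
def daily_reliability(hourly_samples):
    """hourly_samples: list of (delivered_bytes, requested_bytes), one
    tuple per hour. Compute each hour's success ratio, then average
    the ratios over the day, as described in the text."""
    ratios = [delivered / requested for delivered, requested in hourly_samples]
    return sum(ratios) / len(ratios)

# Illustrative day: 23 hours with a tiny amount of loss, 1 perfect hour.
samples = [(999_990, 1_000_000)] * 23 + [(1_000_000, 1_000_000)]
print(f"{daily_reliability(samples):.6f}")  # 0.999990
```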

Bandwidth Broker—software-driven Network Quality of Service (QoS)


If the global network is a highway system, Bandwidth Broker is the system that controls the metering lights at the onramps of highways. For every customer vehicle, there is more than one Microsoft vehicle traversing the highway. Some of the Microsoft vehicles are discretionary and can be deferred to avoid congestion for customer vehicles. Customer vehicles always have a free pass to enter the highways. The metering lights are green in normal operation but when there is a failure or a demand spike, Bandwidth Broker turns on the metering lights in a controlled manner. Microsoft internal workloads are divided into traffic tiers, each with a different priority. Higher priority workloads are admitted in preference to lower priority workloads.

Brokering occurs at the sending host. Hosts periodically request bandwidth on behalf of applications running on them. The requests are aggregated by the controller, bandwidth is reserved, and grants are disseminated to each host. Bandwidth Broker and SWAN coordinate to adjust traffic volume to match routes, and traffic routes to match volume.
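The admission-control idea can be sketched as sorting the aggregated requests by priority tier and granting bandwidth until the link budget is exhausted. This is a simplification of the request/grant cycle described above, with hypothetical tiers and numbers:

```python
def grant(requests, link_capacity):
    """Admit bandwidth requests by priority tier (lower tier number =
    higher priority). Customer traffic is assumed pre-admitted and
    already subtracted from link_capacity; only discretionary internal
    tiers compete here. Sketch only, not the production algorithm."""
    grants, remaining = {}, link_capacity
    for host, tier, ask in sorted(requests, key=lambda r: r[1]):
        granted = min(ask, remaining)
        grants[host] = granted
        remaining -= granted
    return grants

# (host, tier, requested Gbps); illustrative values.
reqs = [("hostA", 2, 40), ("hostB", 1, 70), ("hostC", 2, 30)]
print(grant(reqs, 100))  # {'hostB': 70, 'hostA': 30, 'hostC': 0}
```

When capacity shrinks after a failure, re-running the same grant cycle with a smaller budget is what "turning on the metering lights" amounts to: lower tiers are shed first.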

It is possible to experience multiple fiber cuts or failures that suddenly reduce network capacity, and geo-replication operations run to increase resilience can cause a huge surge in network traffic. Bandwidth Broker allows us to preserve the customer experience during these conditions by shedding discretionary internal workloads when congestion is imminent.

Continuous monitoring


A robust monitoring solution is the foundation to achieve higher network reliability. It lowers both the time to detect and time to repair. The monitoring pipelines constantly analyze several telemetry streams including traffic statistics, health signals, logs, and device configurations. The pipelines automatically collect more data when anomalies are detected or diagnose and remediate common failures. These automated interventions are also guarded by safety check systems.

Major investments in monitoring have been:

➤ Polling and ingestion of metrics data at sub-minute speeds. A few samples are needed to filter transients and a few more to generate a strong signal. This leads to faster detection times.
➤ An enhanced diagnostics system that is triggered by packet loss or latency alerts, instructs agents at different vantage points to collect additional information to help triangulate and pinpoint the issue to a specific link or device.
➤ Enhanced diagnostics trigger auto-mitigation and remediation actions for the most common incidents, with the help of Clockwerk and Real Time Operation Checker (ROC). This translates to faster time to repair and has the ripple effect of keeping engineers focused on more complex incidents.
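The transient-filtering idea from the first point, requiring several consecutive bad sub-minute samples before alerting, can be sketched as follows; the threshold and sample count are illustrative, not production values:

```python
from collections import deque

class LossDetector:
    """Fire an alert only after several consecutive bad sub-minute
    samples, filtering one-off transients (illustrative thresholds)."""
    def __init__(self, threshold_pct=0.1, needed=3):
        self.threshold = threshold_pct
        self.needed = needed
        self.recent = deque(maxlen=needed)  # sliding window of bad/good flags

    def observe(self, loss_pct):
        self.recent.append(loss_pct > self.threshold)
        # Alert only once the window is full and every sample is bad.
        return len(self.recent) == self.needed and all(self.recent)

d = LossDetector()
print([d.observe(x) for x in [0.0, 0.5, 0.04, 0.5, 0.6, 0.7]])
# [False, False, False, False, False, True]
```

Faster polling shrinks the wall-clock time needed to fill such a window, which is the "faster detection times" benefit described above.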

Other pipelines continuously monitor network graphs for node isolation, and periodically assess risks with “what-if” intent using ROC as described above. We have multiple canary agents deployed throughout the network checking reachability, latency, and packet loss across our regions. This includes agents within Azure, as well as outside of our network, to enable outside-in monitoring. We also periodically analyze Map of Internet (MOI) telemetries to measure end-to-end performance from customers to Azure. Finally, we have robust monitoring in place to protect the network from security attacks such as BGP route hijacks and distributed denial of service (DDoS).

Source: Microsoft.com

Thursday 22 July 2021

Next-generation firewall capabilities with Azure Firewall Premium

We are announcing the general availability release of Microsoft Azure Firewall Premium.

Key features in this release include:

1. TLS inspection: Azure Firewall Premium terminates outbound and east-west transport layer security (TLS) connections. Inbound TLS inspection is supported in conjunction with Azure Application Gateway, allowing end-to-end encryption. Azure Firewall performs the required value-added security functions and re-encrypts the traffic before sending it to the original destination.

2. IDPS: Azure Firewall Premium provides signature-based intrusion detection and prevention system (IDPS) to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic or known malicious instruction sequences used by malware.

3. Web categories: Allows administrators to filter outbound user access to the internet based on categories (for example, social networking, search engines, gambling, and so on), reducing the time spent on managing individual fully qualified domain names (FQDNs) and URLs. This capability is also available for Azure Firewall Standard based on FQDNs only.

4. URL filtering: Allows administrators to filter outbound access to specific URLs, not just FQDNs. This capability works for both plain text and encrypted traffic if TLS inspection is enabled.
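
The practical difference between FQDN filtering (Standard) and full URL filtering (Premium) can be shown with a toy matching sketch. The rule-matching logic below is hypothetical and far simpler than the real service; it only illustrates what each rule type can and cannot distinguish.

```python
from urllib.parse import urlparse

def fqdn_rule_matches(rule_fqdn, url):
    """Standard-SKU-style match: only the host name is considered."""
    return urlparse(url).hostname == rule_fqdn

def url_rule_matches(rule_prefix, url):
    """Premium-style match: the full path can be filtered, not just the host."""
    return url.startswith(rule_prefix)

url = "https://contoso.com/downloads/tool.exe"
fqdn_rule_matches("contoso.com", url)                    # matches the whole host
url_rule_matches("https://contoso.com/downloads/", url)  # matches only this path
url_rule_matches("https://contoso.com/blog/", url)       # other paths stay unmatched
```

An FQDN rule must allow or deny everything on `contoso.com`; a URL rule can permit the blog while blocking the downloads path, which is why TLS inspection (to see the path inside encrypted traffic) pairs with this feature.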

Azure Firewall Premium benefits

Azure Firewall Premium provides advanced threat protection that meets the needs of highly sensitive and regulated environments, such as the payment and healthcare industries. Organizations can leverage Premium stock-keeping unit (SKU) features like IDPS and TLS inspection to prevent malware and viruses from spreading laterally across networks. To meet the increased performance demands of IDPS and TLS inspection, Azure Firewall Premium utilizes a more powerful Virtual Machine SKU. Like the Standard SKU, the Premium SKU can seamlessly scale up to 30 Gbps and integrates with availability zones to support the service level agreement (SLA) of 99.99 percent. The Premium SKU complies with Payment Card Industry Data Security Standard (PCI DSS) environment needs.

To simplify migration for Standard SKU customers, we use a common configuration approach based on Azure Firewall Policy. This approach allows existing API integrations to be reused with minimal changes and lets customers continue managing Azure Firewall through Firewall Manager. Customers using firewall rules (Classic) will need to take the additional step of migrating to Azure Firewall Policy first. Azure Firewall Policy offers several advantages, such as sharing common configuration across multiple firewalls, grouping rules using rule collection groups, and managing rules over time using policy analytics (Private Preview).

The Azure Firewall Premium SKU is priced to provide the best value for a state-of-the-art cloud-native firewall service. The Premium SKU, with its advanced threat protection capabilities, offers compelling reasons to migrate on-premises high-security perimeter networks to the cloud. This approach avoids the latency incurred by backhauling internet traffic to on-premises perimeter networks.

Figure 1: Azure Firewall Premium capabilities.

Migration from Azure Firewall Standard to Premium


As part of this general availability release, we are offering two new capabilities to allow smooth migration:

1. Convert the existing Azure Firewall rules (Classic) to Azure Firewall Policy.

Figure 2: Migrate classic rules to Azure Firewall Policy.

2. Create a new Azure Firewall Premium and associate it with an existing policy.

Figure 3: Create a new Azure Firewall Premium and associate an Azure Firewall Policy.

After exporting the Azure Firewall configuration and decommissioning your existing Azure Firewall Standard, you can deploy a new Azure Firewall Premium, associate the standard firewall configuration with it, and maintain its public IP.

Azure Firewall Premium pricing


Like the Standard SKU, Azure Firewall Premium pricing includes both deployment and data processing charges.

The deployment charge is 40 percent higher than Azure Firewall Standard and the data processing charge remains the same as Azure Firewall Standard.
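
The pricing relationship described above can be made concrete with a small estimator. All rates below are hypothetical placeholders purely for illustration; consult the Azure pricing page for real figures.

```python
def premium_monthly_cost(std_deploy_rate_per_hr, data_rate_per_gb, hours, gb_processed):
    """Premium deployment charge = 1.4 x the Standard rate;
    the data processing charge is unchanged from Standard."""
    deployment = std_deploy_rate_per_hr * 1.4 * hours
    processing = data_rate_per_gb * gb_processed
    return deployment + processing

# Hypothetical Standard rates: $1.00/hour deployment, $0.02/GB processed,
# for a 730-hour month with 1,000 GB of traffic.
cost = premium_monthly_cost(std_deploy_rate_per_hr=1.0, data_rate_per_gb=0.02,
                            hours=730, gb_processed=1000)
# 1.0 * 1.4 * 730 + 0.02 * 1000 = 1042.0
```

The point of the calculation is that the 40 percent uplift applies only to the deployment component, so the effective premium shrinks as data processing becomes a larger share of the bill.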

Source: microsoft.com

Tuesday 20 July 2021

How cloud computing can improve 5G wireless networks

A great deal has been written about the technologies fueling 5G, especially how those technologies will improve the experience that users have regarding connectivity. Similarly, much has been said about how ongoing developments in technology will usher in a new generation of network-aware applications. In this article, we discuss one key aspect of 5G technology and how it will impact the development of wireless network capacity.

Read More: DP-900: Microsoft Azure Data Fundamentals

This is one of the more important but often neglected aspects of wireless communication evolution. It represents yet another important reason why the convergence of cloud computing and wireless communications makes so much sense. To state it briefly, many of the complex problems associated with 5G wireless networks can be solved using software, which sets aside the need for costly, time-intensive, and often slow-to-evolve hardware that’s been used in the past.

Cloud and telecommunication: A perfect fit for next-generation networks

It is well understood that some of the most sophisticated technologies that make up 5G can be implemented in software running on off-the-shelf servers. This is exciting because we can slowly but surely walk away from specialized hardware, which has been used in all four previous generations of telecommunication networks. Transitioning to software will help reduce the overall capital and operational expenses for telecommunication providers. Equally important, this shift from hardware to software will make such networks future-proof because it will empower the telecommunications industry to become nimble and aggressive when rolling out desirable features at a regular cadence rather than having to wait a decade or so for the next generation of standards to emerge. Innovation will thrive as we create a world where going from one generation to another is a software upgrade, just as the cloud industry has been doing for over a decade.

We will say more about this in future blogs, but today, we want to discuss wireless capacity—or technically, spectrum efficiency. Hopefully, we will convince you that computing power can be used to increase cellular network capacity, and that advances in software-based machine learning and data analytics techniques can be used to improve the efficiency of 5G and future networks. When added to the other elements of the ecosystem, the marriage between cloud computing and telecommunications networks is a perfect fit.

5G core technologies: Massive multi-user MIMO

Multiple-input and multiple-output (MIMO) is a method for multiplying the capacity of a radio link by using multiple transmitting and receiving antennas to exploit multipath propagation. MIMO is an essential element of wireless communication standards in Wi-Fi, 3G, and 4G. 5G, however, takes it to the next level with massive multi-user (MU) MIMO, scaling the number of antennas massively and supporting many users simultaneously. This technology is key to 5G’s promise of 1,000 times the capacity gain over 4G.

The science behind massive MU-MIMO lies in the complex mathematics involved with manipulating signals sent to and received from every antenna so that communication channels with each user can be preserved and can survive the environmental distortion. This has been the subject of many technical books and academic studies, but you can find a simplified version in the illustration below.

[Figure: simplified illustration of massive MU-MIMO precoding]

Massive MU-MIMO involves many matrix multiplications and transpositions, all of which require significant computation. The amount of computation is a direct function of the number of users being serviced by the cell tower and the number of antennas the cell tower has. Furthermore, this computation takes place every few milliseconds for thousands of subcarriers. The implication is that significant processing power and energy are needed. As network operators increase the number of antennas, the computational requirement goes up considerably, along with other associated problems.
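
A back-of-envelope estimate makes this load concrete. The sketch below counts only the complex multiplies needed to apply an already-computed precoding matrix (a matrix-vector product per subcarrier); computing the matrix itself costs far more. The antenna, user, and subcarrier counts are illustrative assumptions, not figures from any specific deployment.

```python
def precoding_multiplies_per_second(antennas, users, subcarriers, interval_ms):
    """Rough count of complex multiplies for applying an antennas-x-users
    precoding matrix to a users-length symbol vector, once per subcarrier,
    repeated every `interval_ms` milliseconds."""
    per_subcarrier = antennas * users            # one matrix-vector product
    per_interval = per_subcarrier * subcarriers  # repeated across all subcarriers
    return per_interval * (1000 / interval_ms)   # intervals per second

# 64 antennas, 16 users, 1,200 subcarriers, applied every millisecond:
ops = precoding_multiplies_per_second(64, 16, 1200, 1)
# about 1.2 billion complex multiplies per second, before any matrix inversion
```

Even this simplified accounting lands in the billions of operations per second per cell, which is why the text argues that capacity is ultimately a function of deployable compute.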

User patterns also affect the amount of computation needed. The precoding method described in the above figure works best if the users are stationary or moving slowly. Otherwise, the precoding matrix must be recomputed frequently, needing even more computations. An alternative method, known as “conjugate beamforming,” may work better in this case, but the number of antennas must far exceed the number of users and the wireless capacity is generally reduced.

So, the overall capacity that the network delivers is a direct function of how much computation power the operator is willing to purchase and deploy at each of its thousands of cell towers. Edge computing, which makes it easy to scale up computing, is perfect for this. Even if some operators don’t need lots of capacity immediately, this is still vital to understand if the network is to be built in such a way that it can be easily scaled up as demand for network capacity grows.

Microsoft has invested heavily in computation technology that can deliver massive MU-MIMO for 5G networks. As early as 2012, Microsoft Research invested in a practical solution to implement MU-MIMO, using distributed pipelines with a rack of commodity servers (an edge data center) to meet timing specs and to scale to hundreds of antennas (the technique was state-of-the-art and a report was published at SIGCOMM 2013).

Deep learning for wireless capacity


5G is moving towards an open architecture, with many ways to optimize a network. While this approach increases complexity, deep learning techniques can be used to take on these complexities, which are typically beyond human abilities to solve. In the above case about precoding for massive MIMO, we can apply deep learning techniques to select an algorithm that would reduce energy consumption while minimizing reduction in capacity. Through predictive analytics and modern software that adapts to dynamic network loads, 5G networks can become smarter.

Microsoft has invested heavily in machine learning and AI and supported the work of world-leading experts in this area. And we are working on augmenting telecommunication networks by designing deep learning algorithms that include domain knowledge. In addition to the example above, we are actively investigating how deep learning techniques may be used to control transmission power to reduce interference, and thus increase capacity.

Continuous machine learning (powered by flexible edge computing to model the dynamic radio frequency environment and user mobility patterns), along with managing the signal processing pipeline, creates a tremendous value proposition for the telecommunications industry. This massive step forward empowers the rapid incorporation of research breakthroughs into the system—not only for the purpose of increasing wireless capacity, but also to improve the total operational efficiency of 5G networks.


Azure: Where edge computing, the cloud, and telecommunication operators come together


For more than 10 years, Microsoft has invested heavily in edge computing and is continuing to do so. In particular, Azure is working to provide computation close to the cell towers where it will benefit network operators the most, as they look to cost-effectively scale their network. Additionally, through its Azure for Operators initiative, Microsoft is continually working to enable new first and third-party solutions that further enhance and simplify edge computing, from network connectivity to on-demand compute, to complete orchestration.

Given Microsoft’s ability to scale computation up as much and as often as operators demand, the power of technology at the edge—including massive MU-MIMO—is the answer that telecommunication operators have been looking for. Azure is here to support telecommunication operators in meeting their goals for increasing capacity as the network grows and evolves. While telecommunication providers increase the number of antennas and cell towers, Microsoft’s ability to spin up servers at scale and to manage them from anywhere in the world makes Azure the perfect fit for 5G and beyond for telecommunication networks.

Source: microsoft.com

Saturday 17 July 2021

Why Microsoft Azure Fundamentals AZ-900 Certification is Best for Azure Beginners?

Microsoft Azure constantly expands its set of components to offer exceptional quality solutions and help conquer business challenges. For this reason, many individuals decide to learn more about this cloud computing platform to advance their careers. But what are the crucial advantages that make Azure so sought-after, and how do you acquire expertise in using it? Let’s explore this below through an overview of the Microsoft AZ-900 exam.

Why Do Many Azure Professionals Choose the Microsoft AZ-900 Exam?

The prime reason for such popularity is that this certification exam covers all the fundamentals of Azure. By passing the Microsoft AZ-900 exam, you demonstrate entry-level skills in dealing with these cloud-based solutions and services and receive the prestigious Microsoft Certified: Azure Fundamentals certification. This certification confirms your proficiency as a professional, providing you a competitive advantage in the field. Also, although it is not a prerequisite for higher-level Azure certifications, it significantly improves your prospects of obtaining them.

Who Should Earn the Microsoft Certified - Azure Fundamentals Certification By Passing AZ-900 Exam?

This certification is for both technical specialists and people with a non-technical background. If you are involved in cloud computing and want to enhance your knowledge from a certified Microsoft source, this certification is most suitable for you. It helps you understand cloud concepts, security, and privacy, and clarifies the core services of Azure. This certification is meant for those who want to validate their fundamental knowledge of cloud services and how Azure delivers them.

AZ-900 Exam Objective Domains

Here’s a high-level list of the skills and objectives included in the AZ-900 Microsoft Azure Fundamentals certification exam. The percentage for each objective area indicates the approximate share of exam questions devoted to that objective.

  • Describe cloud concepts (20-25%)
  • Describe core Azure services (15-20%)
  • Describe core solutions and management tools on Azure (10-15%)
  • Describe general security and network security features (10-15%)
  • Describe identity, governance, privacy, and compliance features (20-25%)
  • Describe Azure cost management and Service Level Agreements (10-15%)
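
As a simple illustration of how these weights can guide preparation, the planner below splits a total study budget proportionally, using the midpoint of each stated range. The allocation is purely a hypothetical planning aid, not official guidance.

```python
# Objective domains and their stated question-weight ranges (percent).
domains = {
    "Cloud concepts": (20, 25),
    "Core Azure services": (15, 20),
    "Core solutions and management tools": (10, 15),
    "General and network security": (10, 15),
    "Identity, governance, privacy, compliance": (20, 25),
    "Cost management and SLAs": (10, 15),
}

def study_plan(total_hours):
    """Allocate study hours in proportion to each domain's midpoint weight."""
    midpoints = {name: (lo + hi) / 2 for name, (lo, hi) in domains.items()}
    total_weight = sum(midpoints.values())
    return {name: round(total_hours * w / total_weight, 1)
            for name, w in midpoints.items()}

plan = study_plan(40)
# "Cloud concepts" gets 9.0 of 40 hours (22.5% midpoint weight)
```

The heaviest domains (cloud concepts and identity/governance) end up with nearly twice the time of the lightest ones, which matches the advice below about weighting your schedule by domain significance.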

Preparing for the Microsoft Azure Fundamentals AZ-900 Exam

The Azure certifications are certainly among the preeminent certifications available in the market, and AZ-900 is the first step in that direction. If you want to excel in this exam on the first attempt, follow the tips below:

Create an Action Plan

If you want to accomplish a mission, you must chart out an action plan. Making a study schedule is just as essential as following through with it. Create a study schedule that reflects the weight of each exam domain.

Register for Online Training Course

Studying thoroughly not only qualifies you for the exam but also gives you a solid foundation for applying your expertise in the future. Choosing the right training course is vital to your preparation. Microsoft provides both free online and paid instructor-led training courses for the AZ-900 exam.

Refer to the official source

Always use a resource that aligns with the exam standards to ensure you fully understand the exam structure and the topics you must cover. You should also go through the official exam webpage for complete details.

Use Online Study Resources and Videos to Help You

You can take advantage of the various study materials and videos available on the internet. Just before taking the exam, be sure to go through sample questions or practice tests. This will help you become familiar with the format of the questions during your preparation.

Also Read: Start Your Journey With Azure and Prepare for the Microsoft AZ-900 Exam

The Importance of Time Management

Managing your time efficiently is an art. Every commitment in your life, whether work or social, demands that you manage your time competently. Schedule your preparation as early as possible to avoid running out of time.

Recognize Your Strengths and Weaknesses

Understanding your strengths and weaknesses is essential, but working on them is even more vital. To identify your strengths, weaknesses, opportunities, and threats, use the SWOT analysis method. The best way to ensure consistent progress is to focus on your weak areas and set aside a fair amount of time for them each day.

Take Up AZ-900 Practice Test

Practice tests help you better understand the topics and improve your familiarity with the exam structure. They bolster your strengths while also helping you recognize your mistakes. You will direct your efforts in the right direction if you are aware of your strengths and weaknesses. Evaluate yourself and your progress regularly with practice tests. AZ-900 practice tests also give you practical exam experience.

Conclusion

Microsoft Azure provides a set of cloud-based services to support organizations in their business challenges. You should establish a foothold when entering this career path by passing the exam and earning the corresponding certification. You can employ many available resources in your preparation, but among the best is the practice test. So, make the most of this opportunity and get certified!

Friday 16 July 2021

HoloLens 2 Industrial Edition brings mixed reality to regulated environments


It’s been an energizing first year in market for HoloLens 2. We’ve loved seeing the ways HoloLens 2 has empowered customers across industries to increase productivity and innovate with purpose.

Lockheed Martin is using HoloLens 2 to build the Orion spacecraft and reduce touch labor—what used to require an eight-hour shift can now be completed in just 45 minutes.

Imperial College Healthcare NHS Trust is using HoloLens 2 to limit healthcare employees’ exposure to patients with COVID-19, reducing the time staff are exposed to the disease by as much as 83 percent.

Mercedes-Benz USA is using HoloLens 2 to improve service technician efficiency and reduce time to problem resolution, realizing a 35 to 40 percent decrease in service technician travel.

As part of listening, and always learning, we heard strong demand for HoloLens 2 to operate in environments that have rigorous requirements or are regulated by industry certifications. In the semiconductor and pharmaceutical industries, partners require a device that meets particle emissions standards to enter and operate in their highly controlled clean room environments. In the energy sector, partners need a device that can function safely in potentially hazardous environments.

Enhance worker productivity and accelerate innovation

We are announcing the HoloLens 2 Industrial Edition, an untethered mixed reality device designed, built, and tested to support regulated industrial environments.


HoloLens 2 Industrial Edition meets the following industrial standards and certifications:

◉ Clean room compatible: ISO 14644-1 Class 5-8

◉ Intrinsic safety: UL Class I, Division 2

In addition to the industrial standards ratings, HoloLens 2 Industrial Edition is supported by a two-year warranty and a Rapid Replacement Program, which minimizes downtime with advance exchange and expedited shipping.

With HoloLens 2 Industrial Edition, customers can now use HoloLens 2 in restricted environments without modifying their space, protocols, or workflows. Supported by a robust ecosystem of first- and third-party mixed reality applications backed by the reliability, security, and scale of Azure, HoloLens 2 Industrial Edition will empower enterprises operating in regulated environments to:

◉ Provide workers with access to remote experts any time they need it.

◉ Complete tasks faster and more accurately than ever before with interactive 3D guides.

◉ Train and onboard new employees faster with a “learning by doing” approach.

HoloLens 2 Industrial Edition is open for pre-order in select markets where HoloLens 2 is available, and shipments will begin in Spring 2021.

Source: microsoft.com

Thursday 15 July 2021

Manage RDP and SSH connectivity at scale with Azure Bastion

Securely manage remote connectivity at scale

Security is at the forefront of user and administrator connectivity to cloud services. As enterprises continue to move mission-critical applications to the cloud, the need for secure, scalable, and reliable remote public connectivity and jumpbox services increases. With this shift, enterprises are moving away from exposing public IP addresses on virtual machines (VMs) and cloud infrastructure running their applications, and instead, are relying on managed jumpbox services to facilitate user connectivity. Fundamentally, remote connectivity and managed jumpbox services need to provide four core capabilities:

1. Secure public access that minimizes the exposure of public IP addresses and entry points to the target applications.

2. Single deployments that manage connectivity across local or connected Virtual (private) Networks.

3. Infrastructure scaling capabilities that manage the number of concurrent remote connections.

4. Metrics, monitoring, and alerting on the infrastructure facilitating remote connectivity.

Secure public access

Traditional (on-premises) jumpbox solutions are often deployed to a semi-trusted management network. This network is separate from the local network containing application services and private endpoints. The public IP address of the jumpbox solution is defined in the management network and connectivity to applications and private endpoints in the target local network are enabled via a virtual private network (VPN) solution. Users then connect to the private IP address of the jumpbox solution in the management network and establish remote connectivity to the destination application via the VPN connection. Alternatively, some enterprises deploy applications to the management network and expose public IP addresses on the VMs hosting the target applications, and users establish Remote Desktop Protocol (RDP) and Secure Shell (SSH) connections directly to the application. However, this approach broadens the potential attack surface by scaling out public IP addresses to any VM requiring remote user connectivity. Ultimately, the need for trusted and secure access is critical for enterprise workloads.

More Info: 98-361: Microsoft Software Development Fundamentals

Azure Bastion is a fully managed jumpbox-as-a-service that provides secure RDP and SSH connectivity to VMs deployed in any local or peered Azure Virtual Networks. Remote connectivity is established directly from the Azure Portal, over a transport layer security (TLS) connection, to the public IP address of Azure Bastion. From there, Azure Bastion establishes RDP and SSH sessions to the private IP address of the target VMs in the local or peered Virtual Network. As customers deploy additional VMs to their Virtual Networks, Azure Bastion facilitates remote connectivity to both the existing and newly configured VMs, using a single public IP address. Additionally, customers can configure Network Security Groups (NSGs) to restrict inbound public access to the public IP address of Azure Bastion, creating a more secure access perimeter.


Single deployment that manages connectivity across local or connected Virtual Networks


Modern enterprises often leverage a hub-and-spoke topology when building application services. This type of architecture centralizes management Network Virtual Appliances (NVAs) and jumpbox services in a hub network and applications are deployed to connected spoke networks. Application traffic then traverses the hub network before reaching the target spoke application.

With Azure Bastion and Virtual Network peering, customers can continue to facilitate remote connectivity from a hub-and-spoke architecture within Azure. Specifically, customers can deploy Azure Bastion to a hub Virtual Network and configure application VMs in the spoke networks. Once the customer configures Virtual Network peering between the hub and spoke networks, Azure Bastion can manage RDP and SSH connectivity to VMs within the local hub Virtual Network and across the peer to VMs in the application spoke Virtual Networks.
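
The reachability model described above can be sketched in a few lines: a single Bastion in the hub serves VMs in the hub itself plus every spoke peered with it. The network and VM names below are hypothetical.

```python
def bastion_reachable_vms(hub, peered_spokes, vms_by_network):
    """A single Bastion deployed to the hub reaches VMs in the hub Virtual
    Network and in every spoke Virtual Network peered with the hub; no
    per-spoke Bastion deployment is needed."""
    reachable_networks = [hub] + list(peered_spokes)
    return [vm for net in reachable_networks
            for vm in vms_by_network.get(net, [])]

vms = {
    "hub": ["mgmt-vm"],
    "spoke-app": ["web-1", "web-2"],
    "spoke-db": ["sql-1"],
    "isolated": ["dev-1"],   # not peered with the hub
}
reachable = bastion_reachable_vms("hub", ["spoke-app", "spoke-db"], vms)
# ["mgmt-vm", "web-1", "web-2", "sql-1"]; "dev-1" stays unreachable
```

Adding a new spoke to the reachable set is just a peering change, which is the operational win over deploying a jumpbox per network.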


Infrastructure scaling capabilities


One of the primary reasons enterprises are shifting mission-critical workloads to the cloud is to take advantage of platform-as-a-service (PaaS) infrastructure scaling capabilities. Specifically, with the click of a button, customers can scale-up and scale-out infrastructure to meet any increase in demand or traffic to their applications. Furthermore, as customers deploy additional applications to spoke networks, the volume of traffic traversing the hub network increases. As a result, the infrastructure facilitating NVAs and jumpbox services deployed to the hub network needs to be able to scale to serve the additional workload(s).

Azure Bastion now supports manual host scaling. When customers deploy a Standard Azure Bastion, they can configure between 2 and 50 scale units. Additionally, customers can manage the number of instances in the Azure Bastion configuration blade after the resource is created. RDP and SSH are usage-based protocols. Depending on the number of concurrent sessions and the workload of each session, customers may need to scale out additional instances to serve application connectivity. Specifically, as customers deploy additional applications to the spoke networks or peer additional spoke networks to the hub network, they may need to scale out host instances to maintain Azure Bastion connectivity. Ultimately, support for both Virtual Network peering and host scaling enables Azure Bastion to manage remote connectivity globally.
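
Capacity planning for host scaling reduces to a ceiling division clamped to the supported range. In the sketch below, the per-unit session capacity is a hypothetical planning figure, not a documented limit; check the Azure Bastion documentation for actual concurrent-session guidance.

```python
import math

def scale_units_needed(concurrent_sessions, sessions_per_unit=20,
                       min_units=2, max_units=50):
    """Estimate manual scale units for a Standard Azure Bastion, clamped to
    the supported 2-50 range. `sessions_per_unit` is an assumed planning
    figure for illustration only."""
    needed = math.ceil(concurrent_sessions / sessions_per_unit)
    return max(min_units, min(max_units, needed))

scale_units_needed(35)    # 2  (already covered by the minimum deployment)
scale_units_needed(900)   # 45
scale_units_needed(5000)  # 50 (clamped to the maximum)
```

Because RDP and SSH load varies per session, a real estimate would also weight heavy sessions more than idle ones; this sketch only captures the clamped proportional scaling.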

Metrics monitoring and alerting


Another key benefit of cloud services is near-to-real-time metrics, monitoring, and alerting on the performance, availability, and traffic of infrastructure-as-a-service (IaaS) and PaaS resources. Enterprises often monitor and enable custom alerting on metrics within these three categories to proactively detect any performance issues—and more importantly, to scale-out infrastructure services as application demand increases, prior to any potential outages.

With Azure Bastion and Azure Monitor, customers can enable alerting across availability, performance, and traffic metrics. With these capabilities, customers can monitor central processing unit (CPU) utilization, memory utilization, and session count, split by host instance, to gauge when to scale out host instances.


Deploy, manage, and monitor infrastructure with the click of a button


With these changes to Azure Bastion, customers can now reliably manage secure remote connectivity to applications at scale. As enterprises continue to shift production workloads to the cloud, it is imperative that cloud providers invest in PaaS offerings that expose the underlying platform benefits to customers. Ultimately, enterprises should be able to deploy, manage, and monitor infrastructure with the click of a button—reallocating the effort previously spent on infrastructure management to application development.

Source: microsoft.com

Tuesday 13 July 2021

Advancing application reliability with the Azure Well-Architected Framework

If you want to start a good discussion or argument about reliability at work, ask a colleague this question.

"When is architecture more important for the reliability of a service, product, or application? Before it is deployed to production, or afterward?"

Well, “surely”—you say—“if we don’t build the service with reliability in mind, it may not have the right components included to increase stability. It may not have redundancy to improve fault tolerance. Perhaps we will have left out robust retry logic, circuit breakers, or other known patterns for reliable systems.”

But maybe your colleague counters, “Well, I can’t deny that it is important to attempt to try and build things right from the beginning. But one thing I’ve learned about reliability is it is almost never achieved on the first go around. Even if you have done a phenomenal job at the whiteboard, designing with failure in mind, there are still going to be outages. And while nobody likes outages, if we handle them and a subsequent post-incident review correctly, we can learn a great deal that helps us make a service more reliable in the long term. On top of this, wouldn’t you agree that observability is an iterative process that involves changing what we measure and monitor as we learn more about the system while it is running? All these things would fall under the mantle John Reese and Niall Murphy called 'the wisdom of Production'. And all of these things surely need us to bring to bear all the architecture skills we have to do this right.”

If you are having a really good discussion, this goes back and forth across the table at least a few times. One side notes that “bolting on reliability after the fact” works about as well as “bolting on security after the fact” (that is to say, not well at all). The other side might bring up the lessons we’ve learned from chaos engineering showing us that experiments on a dev or staging environment can be very useful, but they don’t always yield some of the unique results we get from testing in production.

“But what about the value of continuous integration and continuous delivery (CI/CD) to reliability—trying to catch reliability issues before they get to production?”, gets asked. Then in response, “CI/CD is tremendously useful, but it didn’t catch our last issue because tests for large distributed systems are notoriously hard to get right.” And so on, and so on.

By now you’ve probably come to the same conclusion the people in this argument are bound to reach. Architecture is important in both the pre-production and post-production lifecycle stages. But that conclusion still leaves us in a peculiar spot because we don’t normally think about architecture or the role of an architect after something has been built. We don’t expect the architect who helped us build our house to show up at the doorstep a year later to say “OK, let’s do some more architecting.”

With the applications we build (or purchase) to run, things are different. There we have an expectation that the software will be changed at a much more rapid pace. It will be refactored, it will be enhanced, it will be upgraded. At each of these points, we must apply everything we know from the realm of architecture if we expect the result to be reliable. So let me tell you about one way to settle the debate we’ve been discussing, and also show you a tool that can help with your reliability even as we are squaring that circle.

The Azure Well-Architected Framework

The Well-Architected Framework (WAF) is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architecture excellence: Cost Optimization, Operational Excellence, Performance Efficiency, Reliability, and Security. Incorporating these pillars helps produce high-quality, stable, and efficient cloud architecture.

But there’s that word “architecture” again, basically sitting right in the middle of the name and taunting us with an image of an architect who only participates at the beginning of the lifecycle.

Here’s the key to unlocking this conundrum: For reliability (and the other four pillars) the goal is to work towards and remain in a “well-architected state.”

That’s a state that strives to embody and make use of the best practices and all the accumulated knowledge from architecture meticulously embedded in the Well-Architected Framework. This guidance is meant to be useful to you at all stages of a cloud solution. It is useful to you in the beginning when you are designing your workloads. It is useful to you when you begin your periodic review of the workload as part of the refactoring, scaling, enhancing, or upgrading process. And finally, it can help when the cycle starts anew for the next major version of your workload.

How to get there

Anyone who has worked in the reliability space, even for a short while, knows that while a large body of guidance like the Well-Architected Framework is great, the tricky part is applying that knowledge to your specific workloads and efforts in flight. Just navigating a large document set like the Well-Architected Framework and determining where to start can be a challenge. I’d like to introduce you to a tool that I believe can bridge the gap between your ground truth and the guidance we offer. It can serve as our compass to this material.


The Well-Architected Review is a self-guided assessment tool that will walk you through the Well-Architected Framework reliability pillar and the other four Well-Architected Framework pillars. This is a great process to do by yourself, with your friendly neighborhood Cloud Solutions Architect, or with supporting partners. It will ask you a set of questions about your reliability efforts—then, based on your responses, it offers suggestions on areas to focus on, with direct links to our WAF documentation on those areas.

Here’s an example set of results:

[Screenshot: example Well-Architected Review results]

Let me offer a few tips that might not be obvious at first look for getting the most out of the Well-Architected Review:

1. Pay attention to the questions: You might think the results of the review are the biggest reward, but I’m here to tell you that the most valuable thing you may be able to take away from the review is the questions themselves. Reliability can be a tricky area to tackle because there are so many possible places to start, and just knowing which questions to ask can be difficult. The Well-Architected Review can give you those questions.

2. Return to the review again and again: If you sign into the review platform with your Microsoft credentials, you can save the results. This means that in six months, or whenever you feel ready to conduct another review, you will be able to compare your new review to your previous information. This can be tremendously helpful for judging your progress across each pillar.

3. Share the results with your team: One thing many people don’t know about the Well-Architected Review is if you have signed in (see tip number two above), it will allow you to export your results as a Microsoft PowerPoint presentation. Take this draft, customize it, and you now have a ready-made presentation to take to your next team meeting so everyone can get behind your reliability efforts.

The Well-Architected Framework in action

If you would like to see some examples of the Well-Architected Framework in action, including some excellent sessions about reliability, I encourage you to check out the videos in the Well-Architected series of our Azure Enablement show. There’s some good online course material about the subject in Microsoft Learn, and guiding principles in our documentation. If you want to dive deeper into the architecture side of Well-Architected, I recommend checking out the Azure Architecture Center.

Source: microsoft.com