Tuesday 27 April 2021

Upgrade your infrastructure with the latest Dv5/Ev5 Azure VMs in preview


On April 6, Microsoft participated in the launch of Intel's 3rd Gen Intel Xeon Scalable processors, where Microsoft announced the upcoming preview of Azure Confidential Computing and general-purpose Microsoft Azure Virtual Machines. Today we are delivering on that promise and announcing the preview of Dv5-series and Ev5-series Azure Virtual Machines for general-purpose and memory-intensive workloads. You can request access to the preview by filling out Intel's 3rd Gen Xeon Scalable Processors in Azure form.

The promise associated with cloud economics goes well beyond any initial cost reductions that customers may experience when transitioning their IT environments to the cloud. Many organizations implicitly expect, among other benefits, continuous cloud infrastructure efficiency improvements. Specifically, they need a highly secure and "evergreen" infrastructure-as-a-service (IaaS) platform that is able to deliver increasing levels of performance and value. They also need fast access to the latest innovation before rolling out at scale within their own data centers. This is also why Microsoft continues to collaborate with technology vendors like Intel and embed the latest Intel hardware innovation within the fabric of Azure IaaS.

Azure Virtual Machines Dv5 and Ev5-series run on the latest 3rd Gen Intel Xeon Platinum 8370C (Ice Lake) processor in a hyper-threaded configuration, providing better value for most general-purpose enterprise-class workloads. The new processor features a higher all-core Turbo clock speed of 3.5GHz, Intel Turbo Boost Technology 2.0, Intel Advanced Vector Extensions 512 (Intel AVX-512), and Intel Deep Learning Boost. These new virtual machines (VMs) deliver up to 15 percent increased performance for many workloads and better price-to-performance than the previous Dv4 and Ev4-series VMs. 

“Intel has been collaborating closely with the Microsoft Azure team to bring innovations to market since the beginning of our cloud journey,” said Rebecca Weekly, Vice President of Data Platforms Group, General Manager Hyperscale Strategy and Execution, Intel. “Once again, it is exciting to see how Microsoft is embracing the latest 3rd Gen Intel Xeon Scalable processors’ capabilities to deliver significant improvements in performance and costs to customers running a broad range of workloads on Azure Virtual Machines."

General-purpose workloads

The new Dv5-series VMs offer a balance of memory to vCPU performance with increased scalability, up to 96 vCPUs, and 384 GiB of RAM. These VMs work well for many general computing workloads, e-commerce systems, web front ends, desktop virtualization solutions, customer relationship management applications, entry-level and mid-range databases, application servers, and more.

Series        | vCPU   | Memory (GiB) | Local Disk (GiB) | Max Data Disks | Max NICs
Dv5-series    | 2 - 96 | 8 - 384      | n/a              | 4 - 32         | 2 - 8
Dsv5-series   | 2 - 96 | 8 - 384      | n/a              | 4 - 32         | 2 - 8
Ddv5-series   | 2 - 96 | 8 - 384      | 75 - 2,400       | 4 - 32         | 2 - 8
Ddsv5-series  | 2 - 96 | 8 - 384      | 75 - 2,400       | 4 - 32         | 2 - 8

Memory-intensive workloads


The new Ev5-series VMs are ideal for memory-intensive enterprise applications, large relational database servers, data warehousing workloads, business intelligence applications, in-memory analytics workloads, and other business-critical applications, including systems that process financial transactions of many kinds. The new Ev5-series VMs offer high memory-to-vCPU ratios, with increased scalability up to 96 vCPUs and 672 GiB of RAM.

Series        | vCPU   | Memory (GiB) | Local Disk (GiB) | Max Data Disks | Max NICs
Ev5-series    | 2 - 96 | 16 - 672     | n/a              | 4 - 32         | 2 - 8
Esv5-series   | 2 - 96 | 16 - 672     | n/a              | 4 - 32         | 2 - 8
Edv5-series   | 2 - 96 | 16 - 672     | 75 - 2,400       | 4 - 32         | 2 - 8
Edsv5-series  | 2 - 96 | 16 - 672     | 75 - 2,400       | 4 - 32         | 2 - 8
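As a quick arithmetic check, the maximum figures in the two tables imply the following memory-to-vCPU ratios (illustrative calculation only):

```python
# max (vCPU, GiB) figures from the tables above
dv5_vcpu, dv5_mem = 96, 384
ev5_vcpu, ev5_mem = 96, 672

print(dv5_mem / dv5_vcpu)  # 4.0 GiB per vCPU (general purpose)
print(ev5_mem / ev5_vcpu)  # 7.0 GiB per vCPU (memory optimized)
```

The Ev5 series thus carries 75 percent more memory per vCPU than the Dv5 series at the same core counts.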

The Dv5 and Ev5-series Azure Virtual Machines also extend the product categories we introduced with the Dv4 and Ev4-series VMs by offering options with and without local disk storage. VM sizes with a local disk include a small "d" in the name; sizes without a local disk omit it. Many workloads do not benefit from a local disk, so the diskless options are optimized specifically for this case. Whether or not you choose a VM with a local disk, you can attach remote persistent disk storage to any VM. Remote disk storage (such as the VM boot disk) is always billed separately from the VM itself.
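The "d" convention above lends itself to a quick programmatic check. The sketch below is illustrative only; it handles the common Standard_<family><vCPUs><features>_v<N> pattern, not every published size name:

```python
import re

def has_local_disk(size_name: str) -> bool:
    """A small "d" among the feature letters after the vCPU count
    denotes a local temp disk, per the naming convention above."""
    m = re.match(r"Standard_([A-Z]+)(\d+)([a-z]*)_v\d+$", size_name)
    if not m:
        raise ValueError(f"unrecognized size name: {size_name}")
    return "d" in m.group(3)

print(has_local_disk("Standard_D8d_v5"))  # local disk present
print(has_local_disk("Standard_D8s_v5"))  # remote storage only
```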

Learn more about Dv5 and Ev5-series Azure Virtual Machines preview


The Dv5 and Dsv5, Ddv5 and Ddsv5, and Ev5 and Edv5 Azure Virtual Machines are all available in preview today. The Esv5 and Edsv5 series will be available in May 2021. The preview is initially available in our East US 2 Azure region and will be expanded to other regions over time.

During the preview period, the eight new series are available only with pay-as-you-go pricing. The price points are significantly discounted compared to future general availability prices. Spot Virtual Machines and Reserved Virtual Machine Instance pricing will not be available during the preview. Prices will vary by region.

If you are currently using Dv4 and Ev4-series Azure Virtual Machines, the Dv5 and Ev5-series VMs offer better price-to-performance and increased scalability up to 96 vCPUs. They are just one way that we’re continuing to bring the promise of the cloud to customers everywhere.

Source: microsoft.com

Sunday 25 April 2021

Microsoft brings Azure supercomputing to UK Met Office


The UK Meteorological Office (Met Office) is working with Microsoft to deliver advanced supercomputing capabilities for weather and climate research, ensuring the continuation of the Met Office’s international leadership in this area.

Supercomputing is fundamental to the Met Office’s weather and climate science and services. Microsoft Azure’s supercomputing-as-a-service will enable the Met Office to leverage the best blend of dedicated and public cloud services to provide more accurate predictions to help citizens and businesses plan daily activities, better prepare for extreme weather, and address the challenges associated with climate change.

Microsoft Azure will integrate HPE Cray EX supercomputers from Hewlett Packard Enterprise (HPE), plus a Microsoft high-performance active data archive system, and other Azure cloud technologies, along with an end-to-end managed service to deliver this market-leading supercomputing-as-a-service. The partnership will also include innovation services to support the Met Office in exploiting future technologies, such as AI, plus commercialization opportunities.

Work on the project starts immediately, with the supercomputing capability becoming operational starting July 2022. The supercomputer is built in four quadrants to optimize operational resilience for this mission-critical supercomputing capability. Each quadrant will consist of an HPE Cray EX supercomputer integrated into Azure, initially using 3rd generation AMD EPYC processors, which will later be augmented with next-generation AMD EPYC processors. The first generation of the supercomputer solution will have a combined total of over 1.5 million processor cores and over 60 petaflops, otherwise known as 60 quadrillion (60,000,000,000,000,000) calculations per second, of aggregate peak computing capacity. Microsoft will deliver further upgrades in computing capability over the ten-year agreement.
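Those headline numbers can be sanity-checked with simple arithmetic; the per-core figure below is an implied average across the aggregate peak, not a vendor specification:

```python
peak_flops = 60e15            # 60 petaflops aggregate peak
cores = 1.5e6                 # over 1.5 million processor cores
per_core_gflops = peak_flops / cores / 1e9
print(f"~{per_core_gflops:.0f} peak GFLOPS per core")  # ~40
```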

The active data archive system will support nearly 4 exabytes of data with high-performance data storage, query, and retrieval capabilities. The Met Office will also use Azure high-performance computing (HPC) cloud solutions such as HB-series InfiniBand clusters powered with AMD EPYC processors.

Looking to the future

As part of this agreement, the supercomputing capability will be based in the South of the UK and will be powered through the use of 100 percent renewable energy and market-leading energy efficiency, saving 7,415 tonnes of CO2 in the first year of operational service alone. This collaboration with the Met Office builds on Microsoft’s commitment to a more sustainable future by reducing our environmental footprint, accelerating research, and helping our customers create sustainable solutions.

This partnership showcases Microsoft Azure’s unique ability to deliver mission-critical supercomputing integrated with cloud computing at the largest scales. Microsoft remains the only public cloud provider giving customers access to the full spectrum of HPC capabilities spanning dedicated supercomputers, InfiniBand HPC clusters with sector-leading real-world application performance and scalability, and a plethora of Virtual Machine types with low latency networking. Across the globe, Azure is consistently demonstrating our ability to put genuine HPC capabilities into the hands of the broadest range of customers and users, enabling them to unlock their individual innovations.

Source: microsoft.com

Thursday 22 April 2021

Why wait? SaaS-delivered data management for the pace of change


It's incredible to think how much can change in a year. The past year has seen unprecedented digital transformation, with 90 percent of companies reporting cloud usage higher than planned over the past year, according to the Flexera 2021 State of the Cloud Report. Into this new reality, Microsoft and longtime partner Commvault announced an agreement focused on Azure and Metallic—a software-as-a-service (SaaS) venture established to meet the growing need for agile, cloud-based data management. International Data Corporation (IDC) has predicted data-protection-as-a-service (DPaaS) will surpass traditional data replication and protection services as early as this year, which means that in a changing world, companies need mature cloud-based solutions now. There’s no better time to look back and ahead to how Commvault and Metallic, together with Azure, are leading in the evolution of the data management industry—with an enterprise-grade SaaS solution. 

On May 11, you can also learn more about the benefits of SaaS-delivered data management and the innovation Metallic and Azure are driving, by registering for the joint webinar, Future-Proofing your Cloud Data Management.

Data management-as-a-service (DMaaS) needs breadth and flexibility

Many companies have turned recent challenges into opportunities to modernize and accelerate digital transformation. According to the IDG Cloud Computing Study 2020, 59 percent of companies reported their IT environments would be either mostly cloud (43 percent) or all cloud (16 percent) 18 months from now—up from just 38 percent.

Cloud solutions can help companies stay one step ahead of the pace of change, but those solutions need to accommodate the demands of hybrid environments. As companies migrate to Azure, modernize apps, navigate hybrid IT, and adopt containers, SaaS-delivered data management solutions can accompany them along the way—but they need to deliver breadth and flexibility. To meet these needs, Metallic—built on Azure—has sped the delivery of new backup-as-a-service (BaaS) innovations.

Over the past six months alone, these innovations have included:

◉ A portfolio of services that has more than doubled, with new offerings to protect workloads from Kubernetes to SAP HANA, on-premises, or in Azure, as well as a managed Azure cloud storage service for Commvault customers.

◉ Tripled country availability—with Metallic now available in 24 countries across the globe.

◉ New customer-requested features—including eDiscovery, fast and flexible restore of Microsoft Teams data for Office 365 backup, and unique edge management capabilities.

◉ Distinctions including a Best of VMWare award, the Gold Stevie in Cloud Storage and Backup, recognition as a DCIG Top 5 Office 365 Backup solution, and being ranked by customers as number one in SaaS Backup on IT Central Station.

◉ New programs to help customers navigate digital transformation—from special offers for educational institutions adopting Office 365, to free Kubernetes data protection for life.

As the world changes, one thing stays the same—the need to simplify IT and continue to do more with less. SaaS-delivered solutions are helping companies move from CAPEX to OPEX, eliminating the need to maintain infrastructure and updates, and simplifying management overhead. Fast-to-market innovation is the hallmark of any successful SaaS solution, but customer outcomes must be the guiding force behind the innovations. Metallic customer use cases run the gamut from endpoint protection to data center modernization, to Office 365 adoption, with just a few examples:

◉ Among the largest banks in the United States, Arvest Bank began its move to the cloud with Office 365 to support a remote workforce, with Microsoft Teams an important driver for productivity. They chose Metallic for its Teams data protection with flexible restore, security, and a competitive total cost of ownership (TCO) that included Azure storage.

◉ Engineering company Parsons needed to protect sensitive laptop data and turned to Metallic for its simple management. Deployment took only 10 to 15 minutes, and they enjoyed the “set it and forget it” design.

◉ The Nevada Department of Transportation saw the opportunity to modernize its data management strategy with cloud-based protection for VMs. With Metallic built on Azure, they developed a SaaS strategy that would simplify management while maintaining their strict security requirements. Jeff Day, Systems Technical Supervisor, State of Nevada Department of Transportation, noted “We are able to have all of our backups in the cloud now, and were even able to leverage our existing Azure storage. With Metallic, we no longer have to manage and maintain physical hardware which makes us financially and even technically more efficient.”

Following the most recent announcements from Metallic, Enterprise Strategy Group Sr. Validation analyst Vinny Choinski said it well, “The rise in both remote work and potential data risk is driving strong demand for cloud-based data protection solutions—with the continued expansion of Metallic’s SaaS Plus portfolio, Commvault has created an impressive backup-as-a-service ecosystem—a comprehensive range of offerings supporting enterprise-wide workloads to the backup target of choice, whether it’s on-premises, in the cloud, or to the HyperScale X appliance.”

Source: microsoft.com

Tuesday 20 April 2021

Expanding our DR scenarios to new zonal capabilities with Azure Site Recovery


Azure provides high availability, disaster recovery, and backup solutions that can enable your applications to meet business availability requirements and recovery objectives. We continue to build upon our portfolio of resilient services by bringing zonal capabilities for improved business continuity and disaster recovery with Azure Site Recovery. Azure Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to a secondary location and access apps from there. After the primary location is running again, you can fail back to it. Azure Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages.

With the zonal disaster recovery capability, we are making it possible to replicate and orchestrate the failover of applications in Azure across Azure Availability Zones within a given region. The Zone to Zone disaster recovery option in Azure Site Recovery is on its way to being available in all regions with Azure Availability Zones (AZs). Availability Zones, fault-isolated locations within an Azure region, provide redundant power, cooling, and networking, allowing customers to run mission-critical applications with higher availability and fault tolerance to datacenter failures. Azure Availability Zones will be available in every country Microsoft Azure publicly operates in by the end of 2021.

While Availability Zones are traditionally used by customers for high availability, they can also be leveraged for Disaster Recovery under specific scenarios. The capability adds disaster recovery options for scenarios that may require maintaining data residency and local compliance, reducing the complexity of configuring a DR strategy in a secondary region, and improving the recovery point objective (RPO). 

Adhering to local compliance and data residency

To support customers' unique compliance and data residency needs, Azure offers regions within geographies that provide a distinct boundary for data residency and compliance. Zone to Zone recovery can be leveraged by customers that prefer to keep applications within a particular legal jurisdiction, since applications and data do not cross national boundaries. Azure provides a wide portfolio of more than 90 compliance offerings to support streamlined compliance and protect data with the most comprehensive compliance coverage.

Reducing the complexity of DR in a secondary region, latency, and RPO

Many Azure regions are designed to provide traditional disaster recovery, with a distinct region that has a large degree of separation from the other location. For some customers, enabling this type of DR requires recreating complicated networking infrastructure, which increases cost and operational complexity. Zone to Zone disaster recovery reduces this complexity because it leverages redundant networking concepts across Availability Zones, making configuration much simpler.

Since Availability Zones are designed to support synchronous replication while optimizing physical separation to offer protection and isolation from localized failures, data traverses shorter distances than with region-to-region DR, so customers may see lower latency and consequently a lower RPO by leveraging zonal DR.
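The distance effect can be sized with back-of-the-envelope math. Light travels through optical fiber at roughly 200 km per millisecond; the distances below are illustrative assumptions, not Azure's actual inter-zone or inter-region figures:

```python
FIBER_KM_PER_MS = 200.0  # approximate speed of light in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Idealized round-trip propagation delay, ignoring switching overhead."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(round_trip_ms(40))   # zones within a region:  0.4 ms
print(round_trip_ms(500))  # distant paired regions: 5.0 ms
```

Since synchronous replication waits on the round trip for every write, an order-of-magnitude shorter distance translates directly into lower write latency and a tighter achievable RPO.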

“When Availability Zones for Azure were being made available, we were in the middle of moving our production workloads to cloud from on-premises datacenters. At that time, we made a bold decision to start distributing all of our IaaS instances across different zones to maximize benefits of redundancy and minimize impact of single datacenter failures on business-facing applications. We looked at recently announced Zone to Zone replication as an enhancement of our DR capability and implemented it in production. The call was quite easy, given that we had prior positive experience with ASR and we’d simply change the datacenter to which we replicate. A couple of drills for different applications were conducted in non-production environments with great success—we were able to restore the protected VMs in about 10 minutes. By failing over to the same region/VNET, we didn’t need to create dedicated test networks which greatly reduces administrative overhead. I really liked the fact that Azure took care of almost the entire failover process end-to-end with provisioning new VM, attaching the disks, and even swapping the IP so that other applications could quickly reconnect to the “new” instance." - Patryk Wolski, Senior Manager – Infrastructure Lead at Accenture.

Architecting for zonal resilience and disaster recovery

Beyond providing the right capabilities in the right regions, we are also committed to providing guidance and proven practices to help our customers and partners take advantage of these capabilities. We recently launched the Azure Well-Architected Framework—a set of guiding tenets that can be used to improve the quality of a workload. Reliability is one of the five pillars of architectural excellence alongside Cost Optimization, Operational Excellence, Performance Efficiency, and Security. If you already have a workload running in Azure and would like to assess your alignment to best practices in one or more of these areas, try the Azure Well-Architected Review.

Source: microsoft.com

Saturday 17 April 2021

The blueprint to securely solve the elusive zero-touch provisioning of IoT devices at scale

The 2020 IoT Signals Report reveals that 95 percent of IoT projects fail at proof of concept (PoC), a significant portion due to the inability to scale, despite many claims touting zero-touch provisioning. Imagine the following winning alternative instead: an IoT solution builder receives a batch of devices from its original equipment manufacturer (OEM), and all they do is power them on to get the following:

◉ Automatic and secure onboarding to a production certificates provider.

◉ Receipt of device operational certificate credentials.

◉ Automatic provisioning to cloud application services.

◉ Automation of credentials renewal and lifecycle management.

Read More: AZ-220: Microsoft Azure IoT Developer

What's more, this seamless process is the same for every device, whether the first in PoC or the millionth in production, and the best part is that setup requires only three simple, one-time, non-engineering actions by the solution builder. This is exactly what we've accomplished with partners, and we now present the approach as a blueprint.


Figure 1: Seamlessly and securely deploy at scale from a one-time setup in three simple steps—a solution blueprint to zero-touch provisioning

For this ease, all the solution builder does for setup is create an account with the managed credential provider, deliver device customization instructions to the OEM, and register an attestation certificate with Azure Device Provisioning Service (DPS). They perform each of these actions only once to enable a zero-touch provisioning experience that holds for both PoC and production deployments at scale. What the solution builder may not, and need not, know is that the preceding complex integrations comprise an interplay of multiple certificate credentials in a trust delegation that accommodates the multi-custodial nature of the device manufacturing value chain, security hardening to resist tampering, and priming for automated renewal and lifecycle management of operational credentials. True scalable zero-touch provisioning can result only after these complex integrations occur; otherwise, the burden falls on the internet of things (IoT) solution builder, hence the observed high failure rate.

But why is zero-touch provisioning so elusive?


Simply put, zero-touch provisioning has been elusive because of a narrow understanding of the provisioning needs of IoT devices. This is not a criticism but rather an observation that might be indicative of evolution and maturity in IoT practices. A true solution will never emerge without a proper understanding of the problem space.

A holistic view of IoT provisioning must recognize that IoT projects exist in phases and must consider those phases when designing a zero-touch provisioning experience. For illustrative simplicity, let's break a project down into three phases, namely evaluation, deployment, and operational, though one could get even more granular.

The evaluation phase

The evaluation phase kickstarts every project and entails the creation of a PoC. It is characterized by the solution builder having full control of the development environment and working with pre-existing devices in unitary quantities. By virtue of that full control, provisioning entails embedding a credential into the device. The builder can take comfort in the security because only they know the credential and only they have physical possession of the device.

The deployment phase

Next comes the deployment phase, which entails device manufacturing at production scale. This phase expands the development environment into an ecosystem of device manufacturing and supply chain partners. It also expands device quantities by several orders of magnitude. A clear characteristic of the deployment phase is a shift from full ownership by the solution builder to shared ownership with partners. Security demands strong measures to protect confidential information within the solution by preventing its accidental sharing, which in turn allows for increased trust in partner interactions. To uphold security and trust, provisioning must entail multiple credentials to compartmentalize knowledge among partners, a trust delegation scheme as the device changes custody, and security hardening to help prevent tampering.

The operational phase

The operational phase returns control to the IoT solution builder and entails solution operation and lifecycle management of credentials and devices. The role of provisioning in this phase is the setup that decouples the device from the manufacturing value chain so the builder can focus on operation (hence how the solution builder regains control), provisions operational credentials, and enables lifecycle management actions such as renewal, revocation, and retirement.

Provisioning of IoT devices is therefore a complex undertaking in security and in building trust within an open ecosystem. Consequently, getting provisioning right demands a comprehensive understanding of the multi-faceted nature of the problem and an acknowledgment that a complete solution requires several fields of expertise. Sadly, most claims of zero-touch provisioning address only the needs of the evaluation phase and ignore the needs of the deployment and operational phases that are requisite for production at scale. It is no wonder zero-touch provisioning remains elusive.

Call in the experts


Complex problems are best solved by domain experts. Solving zero-touch provisioning requires expertise from many domains, chief among them operating public key infrastructures (PKI), hardening security, and customizing devices within a standard device manufacturing and acquisition process.

Expertly operating a PKI is a fundamental requirement of zero-touch provisioning. A PKI service suitable for onboarding and operating IoT devices at scale needs, among many attributes, to be highly available, provide global coverage, enable certificate audits, and deliver lifecycle management actions such as renewal and revocation. Above all, the PKI service should assist in achieving data sovereignty goals. An expertly operated PKI is important for many reasons. First, the underlying asymmetric-key cryptography provides the foundation for a zero trust model of partner collaboration within a device's value chain. The fact that each partner holds a private key that they never share provides the basis for exclusive trust. Secondly, PKI enables IoT to benefit from decades of enterprise IT experience in the issuance and lifecycle management of certificate-based device credentials. Certificate-based credentials are valued over alternative forms of credentials because they too build on asymmetric-key cryptography to enforce a zero trust model of computing in IoT. Operating a PKI builds on these two concepts and carries heavy responsibilities that only dedicated experts can shoulder. Certificate Authorities (CAs) possess the requisite expertise from long practice in IT.
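The chain-of-trust idea at the heart of a PKI can be sketched structurally. The toy model below checks only issuer-to-subject linkage up to a trusted, self-signed root; a real X.509 validator additionally verifies signatures, validity periods, and revocation, and the CA names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str  # subject of the certificate that vouches for this one

def chain_is_trusted(chain: list, trusted_roots: set) -> bool:
    """Walk leaf -> root, requiring each cert to be issued by the next
    one up and the chain to end at a self-signed, pre-trusted root."""
    if not chain:
        return False
    for child, parent in zip(chain, chain[1:]):
        if child.issuer != parent.subject:
            return False
    root = chain[-1]
    return root.subject == root.issuer and root.subject in trusted_roots

# hypothetical names, for illustration only
device = Cert("device-001", "Issuing CA")
issuing = Cert("Issuing CA", "Root CA")
root = Cert("Root CA", "Root CA")
print(chain_is_trusted([device, issuing, root], {"Root CA"}))  # True
```

The same linkage is what lets each custodian in the manufacturing value chain vouch for the next without ever sharing a private key.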

Security hardening complements a well-planned and structured PKI in resisting tampering. A solution is not secured without countermeasures to subversion, which is precisely what tamper resistance provides. Tamper resistance derives from a very special class of integrated circuits whose primary goal is to operate normally or fail predictably under all adversity, be it physical, environmental, or networked. The result is mitigation against subversion, hijacking, infiltration, and exfiltration. These tamper-resistant integrated circuits are commonly known as hardware security modules, or simply HSMs. The well-honed art of producing and prescribing the proper application of HSMs demands expertise held by only a subset of semiconductor manufacturers.

Device personalization through customization is the final element in achieving secured zero-touch provisioning and demands the domain expertise of the OEM. The OEM must work in concert with the PKI and HSM providers to assure certain goals. First, that trust initiates properly and transits the various custodians in the device manufacturing value chain. Second, that the device is customized to the solution builder's specifications and seamlessly connects to the right cloud solutions. Third, that the device automatically onboards and transitions into an operational state, complete with proper credential provisioning and lifecycle management. Fourth, that the device is hardened against impersonation. Finally, that the device procurement process remains simple. Delivering secured devices with simplicity is a difficult balance that commands expertise and experience.

Finally, it takes the right IoT product base, with features purposefully designed in, to leverage expertise from the various domains and exclusively through the use of standards where available. The IoT Identity Service security subsystem for Azure IoT Edge accomplishes this goal.

The blueprint


For this blueprint we allied with the Certificate Authority (CA) and PKI services provider GlobalSign, the semiconductor manufacturer and HSM maker Infineon Technologies, and the OEM and edge device integrator Eurotech. The technical integration builds on the modular IoT Identity Service security subsystem of Azure IoT Edge, where the domain experts leveraged features such as the built-in client for the IETF RFC 7030 Enrollment over Secure Transport (EST) standard for certificate requests, the ISO/IEC 11889 Trusted Platform Module (TPM) and PKCS#11 interface standards for HSM integration, and the modularity of the security subsystem to accommodate the diversity of existing device manufacturing flows, which is a very important consideration. The goal is not to disrupt decades-old manufacturing supply chains but to build on their respective experiences. This allied integration spares the IoT solution builder from delving into the requisite domain expertise and assures a solution that is secured by default. The result is a device highly customized for the IoT solution builder, who need not do more on receipt than turn it on.
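For a concrete feel of one standard named above, RFC 7030 (EST) defines enrollment as an HTTPS POST of a base64-encoded PKCS#10 CSR to a well-known path. The sketch below assembles only the request shape; it contacts no server, is not the IoT Identity Service implementation, and the CSR bytes are a placeholder:

```python
import base64

EST_BASE = "/.well-known/est"  # well-known URI prefix defined by RFC 7030

def simpleenroll_request(csr_der: bytes):
    """Return the (path, headers, body) of an EST enrollment call."""
    headers = {
        "Content-Type": "application/pkcs10",
        "Content-Transfer-Encoding": "base64",
    }
    return f"{EST_BASE}/simpleenroll", headers, base64.b64encode(csr_der)

path, headers, body = simpleenroll_request(b"placeholder-csr-bytes")
print(path)  # /.well-known/est/simpleenroll
```

A successful response carries the issued certificate as application/pkcs7-mime, which is how a device exchanges its factory identity for an operational credential.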


Figure 2: Integrated trust from TPM to cloud for security and integrity from supply chain to services

The blueprint is thus about domain experts allying to solve the problem for the IoT solution builder, and in doing so it assures proper application of the various technologies for a comprehensive solution to zero-touch provisioning at scale. In this integration, trust truly initiates from the source of the value chain, which is the Infineon Technologies TPM. For example, GlobalSign can authoritatively verify that a target TPM was in fact manufactured by Infineon Technologies, thanks to prior cross-signing of TPM manufacturing certificates performed as part of pre-verification before issuing operational certificates.


Figure 3: The IoT device identity lifecycle involves multiple credentials

This alliance of partners has composed a joint whitepaper that outlines the security and engineering principles that underlie this solution in the spirit of presenting a blueprint for replication.

Why standardization is important 


Zero-touch provisioning is a difficult problem that truly calls for standardization. The difficulty stems from several causes, but an obvious one is how to build a standard solution on a very diverse base of manufacturing flows without forcing expensive restructuring and retooling. Someday a standard will emerge. Until then, why not build on existing standards (such as TPM, X.509, PKCS#11, and EST), manufacturing flows, and value chains to create microcosms of technology alignment and pragmatically solve a clear and present problem? This is the essence of the blueprint, which, in addition to providing a pragmatic solution for the moment, is a call to the rest of the industry to unite in standardization.

Bringing IoT solutions to production


Many solutions that claim zero-touch provisioning in IoT fail in proof of concept (PoC) because they do not solve the challenges that underlie IoT provisioning at scale. The right solution is a comprehensive undertaking that must draw on expertise from several domains to overcome complex challenges and deliver secured, seamless zero-touch provisioning at scale. Complex problems of this nature are often solved by uniting forces in standardization. However, many consortia have been at this problem for several years without tangible results, presumably because of the high risk of forcing highly diverse device manufacturing flows into untenable, expensive restructuring for compliance. This blog offers a comprehensive solution to zero-touch provisioning by an alliance of experts, presented here as a blueprint that builds on existing experience and manufacturing flows to raise the success rate of IoT solutions going into production.

To all the domain experts in the IoT value chain, this is a call to recognize the shared responsibility required for secured IoT solution deployments. We all win when the solution builder is successful, so let us team up in alliances to bring about truly secured, comprehensive zero-touch provisioning in production at scale, or simply join us in Azure. It is the blueprint for success.

To all IoT solution builders, ask your OEM partners to deliver devices with the pre-integrations described in this blueprint to help simplify the experience of securely scaling your solution from PoC to production.

Source: microsoft.com

Thursday 15 April 2021

Empowering operators on their cloud migration journey

For operators, the journey to the cloud can involve many challenges, some more complex than others. It is important to note that when it comes to operators' path to cloud migration, there is no such thing as a one-size-fits-all solution. Operators have unique needs for security, observability, resiliency, and performance.


As a result, Microsoft offers a level of support for operators that is far greater than simply taking existing cloud services and repackaging them. We understand that it is critically important to know exactly what operator workloads require—including what it takes to meet the demands imposed by operators as a result of their commitment to delivering fault-tolerant services to customers.

In this blog, we delve into one such example of Microsoft’s commitment to operators and to developing a hands-on product strategy born from years of research. 

Bringing a cellular packet core to the cloud

Several years ago, Microsoft started a research project aimed at determining the feasibility of implementing a cellular core network, the Evolved Packet Core (EPC), on a hyperscale public cloud. What came from this work was a research prototype of a distributed network architecture for the EPC on the public cloud. It ran as a cloud service, providing high network availability while balancing the unpredictability of public clouds. Keeping the original EPC design, this prototype cloud EPC provided the same basic functions as a cellular core network and was compatible with standard cellular equipment (such as phones and base stations).

To better understand the needs of operators planning to migrate to the cloud, we deployed the cloud EPC on Azure and tested it using a combination of real mobile phones and synthetic workloads. It demonstrated higher availability when compared to existing telco solutions at the time, as well as a level of performance that was comparable to many commercial cellular networks.

Figure 1: Research prototype for high-level network deployment

In the end, this work offered the possibility of a distributed network architecture leveraging the public cloud—evidence of Microsoft’s ability to potentially relieve operators of the burden of managing their own infrastructure.

Going a step further


For Microsoft, the aforementioned project was just the start of building greater awareness and expertise—both of which are now being used to the benefit of operators on their cloud migration journeys. So, to further understand how the earlier prototype cloud EPC might perform in the real world while carrying actual mobile traffic, learnings from this early project were incorporated into a real-world cellular network trial that lasted two years.

The real-world experiment was created in conjunction with the city of Cambridge in the United Kingdom. Consisting of five cellular towers installed at various points across the city, the experiment was designed to benefit underserved communities that lacked traditional broadband access. Microsoft deployed the cloud EPC in a public Azure region in Dublin, Ireland, with a failover deployment in Amsterdam, the Netherlands. The trial with this small network ran successfully for the entire period without a single outage, and Microsoft gained a wealth of technological and operational data that we leverage for the benefit of operators today.

Figure 2: Data from the Cambridge trial

Four key lessons learned


1. It’s very doable

One of the most important outcomes of the Cambridge trial is the demonstration that a telecom-grade virtual network function (VNF) can indeed run on the hyperscale public cloud. Even though the EPC ran in the cloud outside the country, Microsoft could still provide a live LTE network with solid network performance. Traffic reached between 20 and 40 GB each day, and the maximum link throughput was more than 20 Mbps (with 2×2 MIMO on 5 MHz); during this time, most users received download speeds of at least 4 Mbps.
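As a back-of-envelope sanity check on those figures (assuming decimal gigabytes), the daily totals imply an average load far below the 20 Mbps link peak, which is consistent with a small community network:

```python
def avg_mbps(gb_per_day):
    # GB/day -> average megabits per second (1 GB = 8000 megabits, decimal units)
    return gb_per_day * 8000 / 86400

# 20-40 GB/day averages out to roughly 1.9-3.7 Mbps of sustained load,
# comfortably under the >20 Mbps peak link throughput observed.
print(round(avg_mbps(20), 2), round(avg_mbps(40), 2))  # 1.85 3.7
```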

2. Setup is much faster

Operators want to know what tools and services are available to make sense of the complexities of moving to the cloud. Based on our experiment, hosting the EPC in the cloud makes a cellular rollout far easier. Procuring and commissioning traditional EPC equipment alone could have taken months, not to mention the capital expenditure required. Instead, it took less than five minutes to spin up the EPC in a new Azure region.

3. It’s highly reliable

Addressing the concern that hyperscale clouds do not meet the high-availability standards that operators require, this study proved that VNFs running on Azure can be highly reliable. Measuring the reliability of various Azure components over a three-month period, Azure met four-nines (99.99 percent) availability, which was sufficient for the trial. It is worth noting that other Azure services (such as Azure ExpressRoute, deployments across Azure Availability Sets and Azure Availability Zones, and reliable data stores) could be incorporated into the deployment to improve network uptime further.

4. It’s easy to maintain

Another important finding from the experiment was the ease of network management. Microsoft wrote a network management interface on Azure to conduct daily operations, and by leveraging Azure's data analytics tools we were able to monitor network health and generate alerts through the Azure portal without additional custom code, enabling a single team member to manage the entire network.

What does this mean for operators today?


Microsoft continues to refine its strategy and portfolio of services based on research such as the Cambridge trial, practical experience with hundreds of customers running networks using advanced technology, and deep relationships with operators globally. Some of the design features taken from these learnings include:

The use of microservices-based architectures to reduce footprint and improve performance

For example, our IP Multimedia Core Network Subsystem (IMS) was the first commercial cloud-native network function built specifically to run inside containers within hybrid cloud architectures. Recognizing that a small compute footprint is required to meet the financial demands of operator infrastructures, care was taken to ensure that the microservices methodologies employed were granular enough to realize the benefits of resiliency, flexibility, and scalability, but not so granular that they impact data persistence and performance.

Additional complexities had to be considered for network functions supporting real-time user traffic, requiring new data plane acceleration and packet processing pipeline innovations that can provide near-silicon-like throughput without using excessive CPU cycles or custom hardware. These technologies were first employed within our Session Border Controller and later within the 5G core, built specifically for multi-access edge compute environments where resources are incredibly constrained.

Similarly, solutions such as Unity Cloud Orchestration have harnessed cloud-native technologies to simplify orchestration by using a single tool, Kubernetes, to manage containerized network functions. Unity Cloud reduces the time and complexity of capacity and high-availability network planning, as these functions are dynamically managed, and it simplifies feature delivery and the software and patch upgrade process using microservices.

The use of an automated management layer to reduce operational costs

The service automation capabilities enabled by ServiceIQ and Unity Cloud Operations, exposed as application programming interfaces (APIs) to a business intelligence layer, allow operators to create networks that are more secure, flexible, efficient, scalable, and resilient. These networks will be:

◉ Self-healing: Using the power of big data and AI, the network may build predictive failure models, which are then combined with automated processes that are capable of altering network configurations to avoid failure conditions.

◉ Self-defending: Behavioral analytics models can be built to identify network element behavior that is abnormal and could indicate a compromised component. An automated process could then sandbox the suspect network element for further analysis and remediation or even roll back to the last known good configuration.

◉ Self-optimizing: AI recognizes patterns that lead to more efficient use of compute resources, radio resources, and power settings, and adjusts network configuration accordingly.

◉ Self-configuring: As new network elements are added, they are automatically recognized, provisioned, and configured in the network.
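To make the self-defending idea above concrete, here is a minimal, hypothetical sketch of a behavioral-analytics check: network elements whose reported metric deviates sharply from the fleet baseline are flagged for sandboxing. Real systems would use richer models and many signals; element names and thresholds here are illustrative only.

```python
from statistics import mean, stdev

def find_anomalies(metrics, k=1.5):
    """Flag elements whose metric deviates more than k standard
    deviations from the fleet mean -- a toy stand-in for the
    behavioral-analytics models described above."""
    values = list(metrics.values())
    mu, sigma = mean(values), stdev(values)
    return {name for name, v in metrics.items() if abs(v - mu) > k * sigma}

# Requests per second reported by each element; one is misbehaving.
rps = {"sbc-1": 101, "sbc-2": 99, "sbc-3": 103, "sbc-4": 97, "sbc-5": 950}
print(find_anomalies(rps))  # -> {'sbc-5'}: candidate to sandbox and analyze
```

An automated process would then quarantine the flagged element for remediation or roll it back to the last known good configuration, as described above.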

Whether an operator chooses a Microsoft first-party VNF or cloud-native network function (CNF), or works with the third-party VNF or CNF partners certified on the Azure platform, Azure provides the underlying orchestration, management, and exposure capabilities needed to deliver the resiliency, performance management, and automated execution that core network workloads require.

Our work


Microsoft's commitment to supporting the migration of operators to the cloud cannot be overstated. The research cited here represents only a couple of the many projects we continue to explore, which will continue to inform our roadmap moving forward. In the end, this body of knowledge guides how we support operators to make migration to the cloud as smooth as possible. Microsoft knows that when it comes to reliability, speed, and consistency in hyperscale public cloud deployment, there is no one better suited to support operators.

Source: microsoft.com

Tuesday 13 April 2021

Microsoft powers transformation at NVIDIA GTC—GPU technology conference


Customers around the world rely on Microsoft Azure to drive innovations related to our environment, public health, energy sustainability, weather modeling, economic growth, and more. Finding solutions to these important challenges requires huge amounts of focused computing power. Customers are increasingly finding the best way to access such high-performance computing (HPC) through the agility, scale, security, and leading-edge performance of Azure’s purpose-built HPC and AI cloud services.

Azure’s market-leading vision for HPC and AI is based on a core of genuine and recognized HPC expertise, using proven HPC technology and design principles, enhanced with the best features of the cloud. The result is a capability that delivers performance, scale, and value, unlike any other cloud. This means applications are scaling 12 times higher than other public clouds. It means higher application performance per node. It means powering AI workloads for one customer with a supercomputer fit to be among the top five in the world. It also means delivering massive compute power into the hands of medical researchers over a weekend to prove out life-saving innovations in the fight against COVID-19.

This year during NVIDIA GTC 21, we're spotlighting some of the most transformational applications powered by NVIDIA accelerated computing that highlight our commitment to edge, on-premises, and cloud computing.

AI and supercomputing scale

The AI and machine learning space continues to be one of the most inspiring areas of technical evolution since the internet. The trend toward using massive AI models to power a large number of tasks is changing how AI is built. At Microsoft Build 2020, we shared our vision for AI at Scale utilizing state-of-the-art AI supercomputing in Azure and a new class of large-scale AI models enabling next-generation AI. The advantage of large-scale models is that they only need to be trained once with massive amounts of data using AI supercomputing, enabling them to then be “fine-tuned” for different tasks and domains with much smaller datasets and resources.

Training models at this scale requires large clusters of hundreds of machines with specialized AI accelerators interconnected by high-bandwidth networks inside and across the machines. We have been building such clusters in Azure to enable new natural language generation and understanding capabilities across Microsoft products.

The work that we have done on large-scale compute clusters, leading network design, and the software stack, including Azure Machine Learning, ONNX Runtime, and other Azure AI services, to manage it is directly aligned with our AI at Scale strategy.

Machine learning at the edge

Microsoft provides various solutions in the intelligent edge portfolio to empower customers to make sure that machine learning not only happens in the cloud but also at the edge. The solutions include Azure Stack Hub, Azure Stack Edge, and IoT Edge.

Whether you are capturing sensor data and inferencing at the edge or performing end-to-end processing with model training in Azure and leveraging the trained models at the edge for enhanced inferencing operations—Microsoft can support your needs however and wherever you need to.

Visualization and GPU workstations

Azure enables a wide range of visualization workloads, which are critical for desktop virtualization as well as professional graphics such as computer-aided design, content creation, and interactive rendering. Visualization workloads on Azure are powered by NVIDIA’s world-class graphics processing units (GPUs) and RTX technology, the world’s preeminent visual computing platform.

With access to graphics workstations on Azure cloud, artists, designers, and technical professionals can work remotely, from anywhere, and from any connected device. 

Latest announcements

◉ We are proud to announce a new high-memory variant coming to our GPU supercomputing portfolio, featuring the latest NVIDIA A100 80GB SXM GPUs built around the same NVIDIA InfiniBand HDR and PCIe Gen4-based building block we have today, with a few adjustments to make sure that customer workloads can take full advantage of these new chips. Like the A100 40GB GPU instances, these will be available to customers on demand, at massive scale, without any specific commitment.

◉ NVIDIA and Microsoft Azure are raising the bar for XR streaming. Announced today, the NVIDIA CloudXR platform will be available on Azure instances NCv3 and NCasT4_v3.

Source: microsoft.com

Sunday 11 April 2021

Scale cloud adoption with modular designs for enterprise-scale landing zones

In this blog post, we will walk you through Tailwind Traders' experience to run its workloads on Azure using Cloud Adoption Framework and Azure landing zones guidance.

What are enterprise-scale landing zones?

We followed Tailwind Traders and their cloud adoption journey, where we learned about the Cloud Adoption Framework and Azure landing zones. After researching the different landing zone options—start small and expand, enterprise-scale, and partner landing zones—Tailwind Traders has decided to start with the enterprise-scale architecture.

Tailwind Traders requires an initial implementation of landing zones with fully integrated governance, security, and operations from the start. The enterprise-scale architecture provides prescriptive guidance coupled with Azure best practices, developed largely by Microsoft architects and the broader Cloud Solutions Unit technical community. It follows design principles across the critical design areas for an organization's Azure environment, like enterprise enrollment, identity, network topology, resource organization, governance, operations, business continuity and disaster recovery (BCDR), as well as deployment options.

The enterprise-scale architecture provides a strategic design path and a target state for the Tailwind Traders Azure environment, which will evolve with new features and changes to the Azure platform. It uses Azure-native platform services and capabilities and aligns with Azure platform roadmaps to ensure that new capabilities are available within the environment.

Figure 1: Enterprise-Scale with hub and spoke architecture

Tailwind Traders wants to set up their Azure environment with a strong foundation for their Azure landing zones, supporting both the existing applications they are migrating to the cloud and new greenfield workloads. The modular design of the enterprise-scale architecture allows Tailwind Traders to start with foundational landing zones that support their applications, whether those applications are being migrated from on-premises infrastructure or are newly developed.

Modular design


The modular design not only makes it simple to deploy existing and new applications, it also allows Tailwind Traders to start with a lighter deployment implementation and scale depending on their business needs. Currently, enterprise-scale offers three different reference implementations, which all can be scaled without refactoring when requirements change over time.

◉ Enterprise-scale foundation: This reference architecture allows organizations to start with foundational landing zones. It allows organizations such as Tailwind Traders to start as needed and scale later depending on their business requirements. This reference implementation is great for organizations that want to start with landing zones in Azure and don't need hybrid connectivity to their on-premises environment at the beginning. The modular design of enterprise-scale allows customers to add hybrid connectivity at a later stage when business requirements change, without refactoring the Azure environment design.

◉ Enterprise-scale Virtual WAN: This adds a Virtual WAN network module to the enterprise-scale foundation. This reference implementation is great for organizations that want to start with the enterprise-scale foundation implementation but require hybrid connectivity to their on-premises datacenters, branch offices, factories, retail stores, or other edge locations.

◉ Enterprise-scale hub and spoke: This reference implementation includes the enterprise-scale foundation, hybrid connectivity as well as a network architecture based on the traditional hub and spoke network topology.

Enterprise-scale implementation


These reference implementations not only include the enterprise-scale architecture but also come with automation and Azure Resource Manager templates to easily deploy and manage the enterprise-scale implementation. This allows Tailwind Traders to leverage the Azure portal or infrastructure as code (IaC) to set up and configure their environment. With this approach, it's also possible for Tailwind Traders to start with the portal and transition to infrastructure as code as soon as their organization is ready. With Azure providing a unified, consistent control plane across all Azure resources and policy-driven controls, Tailwind Traders can use a set of Azure Policies and controls to govern their entire cloud environment.

Enterprise-scale is a great option for organizations like Tailwind Traders to start their cloud journey, deploy landing zones that incorporate the critical design areas, and scale in the future.

We will be exploring Tailwind Traders and their cloud adoption journey using the enterprise-scale architecture in future blog posts. However, if you want to learn more about the Cloud Adoption Framework and how it can help accelerate your cloud adoption journey, the Azure Enablement Show is a great place to start.

Source: microsoft.com

Saturday 10 April 2021

Deepen the value chain for geospatial earth imagery on cloud using Azure Orbital

Azure Orbital, a managed ground station service, enables customers to communicate with and control their satellites, process data, and scale operations directly in Microsoft Azure. Since our Azure Orbital announcement, we have been rapidly building and deploying multiple satellite ground stations in our datacenters, enabling a variety of scenarios, including Earth observation, remote sensing, and global communications.

Azure Orbital ground station update

With Azure Orbital, we are building satellite ground stations in our datacenters, allowing customers to ingest data from their satellites directly into Azure for processing and storage at scale. Below is a sneak peek of our first satellite ground station, in our Quincy, Washington datacenter:


We continue to build ground stations in other locations and integrate with partner ground stations to further our customers’ ability to communicate with their satellites for data ingestion.

In collaboration with several customers and partners, we have created multiple scenarios for signal processing, data processing, and geospatial data analytics. Today, we are pleased to welcome another partner into our Orbital ecosystem: Thales Alenia Space (TAS). TAS's Deeper Vision and Microsoft's Azure Orbital will bring powerful data processing, inferencing, and analytics capabilities to our customers.

Thales Alenia Space partnership


Together with Thales Alenia Space, we are bringing near real-time geospatial data processing capabilities to our customers. The power of processing data on the cloud with Azure Orbital coupled with the application of artificial intelligence (AI) and machine learning technology helps our customers analyze environmental changes captured with satellite imagery.

Drawing on over 40 years of experience and a unique combination of skills, expertise, and cultures, Thales Alenia Space, a Joint Venture between Thales (67 percent) and Leonardo (33 percent), delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental management, exploration, science, and orbital infrastructures. Governments, institutions, space agencies, and telecom operators count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources, and explore our Solar System and beyond. 

“Thales Alenia Space and Microsoft are innovating together by combining their expertise in space and cloud technologies. Customers can now combine all the functionality of Thales Alenia Space’s DeeperVision solution for processing dataflows and generating timely information with the cloud capabilities of Azure Orbital. This information is enriched by high-speed, high-volume artificial intelligence and machine learning to create an unprecedented impact on and beyond the planet!” - Clarence Duflocq, Vice President Strategy & Innovation, Thales Alenia Space

Use case: Geospatial data value-chain


Customers can use Azure Orbital ground stations to downlink geospatial data from their satellites. The raw satellite data can then be processed at scale in the cloud for analysis using various Azure services to achieve goals like change detection, site monitoring, situational awareness, and entity recognition.


Deeper Vision performs automated content extraction from images and enriches image data. The user can request images with specified content, which triggers Deeper Vision's automation and allows the user to focus on tasks where human expertise is critical. This ability becomes crucial for scenarios like change detection and site monitoring: when new imagery arrives from a specific area, Deeper Vision can compare it with previously acquired imagery to highlight the places that changed.


In the example above, we have two images taken by the European Space Agency satellite Sentinel-2, one from 2016 and one from 2018. We can see that volumes of earth have been removed from the desert and a neighborhood has been built. The image on the far right highlights the detected change.
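At its simplest, change detection of this kind is a per-pixel comparison of two co-registered images. The toy sketch below captures only that core idea on nested-list "images"; real pipelines such as Deeper Vision add co-registration, radiometric normalization, and machine-learning-based filtering, and the arrays and threshold here are illustrative.

```python
def change_mask(before, after, threshold=10):
    """Mark pixels whose grayscale value changed by more than threshold."""
    return [
        [1 if abs(b - a) > threshold else 0 for a, b in zip(row_before, row_after)]
        for row_before, row_after in zip(before, after)
    ]

# 2016 scene vs 2018 scene: a "built-up" patch appears in one corner.
img_2016 = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]
img_2018 = [[50, 50, 50], [50, 200, 200], [50, 200, 200]]
print(change_mask(img_2016, img_2018))
# [[0, 0, 0], [0, 1, 1], [0, 1, 1]]  <- highlighted change, as in the figure
```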

Demo: Deeper Vision and Azure Orbital


In three easy steps, geospatial data, downlinked using Azure Orbital, can be processed on the cloud in near real-time for business transformational insights:


Step 1: The earth imagery is downlinked from a customer satellite. In September 2020, we demonstrated how customers can downlink data using Azure Orbital with a live downlink from AQUA, a public satellite owned and operated by NASA, storing the downlinked data in an Azure Storage account.

Step 2: The satellite operator can process raw data from the Azure Storage Blob using their data processing pipeline for satellite imagery, which will convert them into final assets. If necessary, these assets can then be stored back in Azure Storage.

Step 3: With Deeper Vision, customers can perform inferencing on these geospatial images, a capability from our partner TAS that provides services like change detection and entity recognition.
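The three steps above can be sketched as a minimal in-memory pipeline. In production, step 1 lands raw bytes in Azure Blob Storage via Azure Orbital, step 2 runs the operator's own processing chain, and step 3 calls the Deeper Vision service; every function name, path, and threshold below is illustrative only.

```python
def downlink(raw_frames):
    """Step 1: raw frames arrive in storage (a dict stands in for blobs)."""
    return {"raw/pass-001": raw_frames}

def process(storage):
    """Step 2: convert raw frames into final assets (toy normalization)."""
    return {path.replace("raw/", "assets/"): [f / 255 for f in frames]
            for path, frames in storage.items()}

def infer(assets):
    """Step 3: flag assets whose content crosses a toy 'change' score."""
    return {path: ("change-detected" if max(frames) > 0.5 else "no-change")
            for path, frames in assets.items()}

storage = downlink([12, 240, 88])
assets = process(storage)
print(infer(assets))  # {'assets/pass-001': 'change-detected'}
```

The design point is the hand-off between stages: each step consumes the previous step's stored output, so the processing and inference stages can scale independently in the cloud.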

Source: microsoft.com

Thursday 8 April 2021

Leverage enterprise-scale reference implementations for your cloud adoption

This blog will discuss the IT team at Tailwind Traders and how they leveraged enterprise-scale reference implementations for the cloud environment they are building.

Enterprise-scale landing zone architecture provides a strategic design path and target technical state for your Azure environment, including enterprise enrollment, identity, network topology, resource organization, governance, operations, business continuity and disaster recovery (BCDR), as well as deployment options. These landing zones follow design principles across the critical design areas for an organization's Azure environment and align with Azure platform roadmaps to ensure that new capabilities can be integrated.


By using the enterprise-scale architecture, Tailwind Traders takes advantage of prescriptive guidance coupled with best practices for the Azure control plane.

Cloud Adoption Framework enterprise-scale landing zone architecture

The enterprise-scale landing zone architecture offers a modular design, which not only makes it simple to deploy existing and new applications but also allows Tailwind Traders to start with a lighter deployment implementation and scale depending on their business needs.

This architecture considers several design areas:

◉ Enterprise Agreement (EA) enrollment and Azure Active Directory tenants

◉ Identity and access management

◉ Management group and subscription organization

◉ Network topology and connectivity

◉ Management and monitoring

◉ Business continuity and disaster recovery

◉ Security, governance, and compliance

◉ Platform automation and DevOps

To make implementing the enterprise-scale landing zone architecture straightforward, enterprise-scale offers reference implementations. These can be deployed using the Azure portal or infrastructure as code (IaC) to set up and configure the environment, with automation via Azure Resource Manager templates or Terraform to easily deploy and manage the enterprise-scale implementation.

Currently, enterprise-scale offers three different reference implementations, which all can be scaled without refactoring when requirements change over time.

Enterprise-scale foundation

The enterprise-scale foundation reference architecture allows organizations to start with Azure landing zones. It allows organizations such as Tailwind Traders to start as needed and scale later depending on their business requirements. This reference implementation is great for organizations that want to start with landing zones in Azure and don't need hybrid connectivity to their on-premises infrastructure at the beginning. However, the modular design of enterprise-scale allows the customer to add hybrid connectivity at a later stage when business requirements change, without refactoring the Azure environment design.

Figure 1: Enterprise-scale foundation architecture

This architecture includes and deploys:

◉ A scalable management group hierarchy aligned to core platform capabilities, allowing you to operationalize at scale using centrally managed Azure role-based access control (RBAC) and Azure Policy where platform and workloads have clear separation.

◉ Azure Policies to enable autonomy for the platform and the landing zones.

◉ An Azure subscription dedicated to management, which enables core platform capabilities at scale using Azure Policy, such as Log Analytics, an Automation account, Azure Security Center, and Azure Sentinel.

◉ A landing zone subscription for Azure native, internet-facing applications and resources, and specific workload Azure Policies.
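A key property of the management group hierarchy described above is that Azure Policy assignments made at a management group are inherited by every child group and subscription beneath it. The toy model below illustrates that inheritance; the group names and policy names are illustrative, not the actual reference implementation's.

```python
# Parent -> children, loosely mirroring a root with platform and
# landing-zone management groups, each containing subscriptions.
HIERARCHY = {
    "root": ["platform", "landing-zones"],
    "platform": ["management-sub"],
    "landing-zones": ["online-sub"],
}
# Hypothetical policy assignments at two scopes.
ASSIGNMENTS = {
    "root": ["require-tags"],
    "landing-zones": ["deny-public-ip"],
}

def effective_policies(scope, parent_map=None):
    """Collect policies assigned at the scope and all of its ancestors."""
    if parent_map is None:
        parent_map = {child: parent
                      for parent, children in HIERARCHY.items()
                      for child in children}
    policies = list(ASSIGNMENTS.get(scope, []))
    parent = parent_map.get(scope)
    if parent is not None:
        policies += effective_policies(parent, parent_map)
    return policies

print(effective_policies("online-sub"))      # ['deny-public-ip', 'require-tags']
print(effective_policies("management-sub"))  # ['require-tags']
```

This is why a clear platform/workload separation in the hierarchy matters: a policy assigned once at the right scope governs every landing zone beneath it.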

Enterprise-scale hub and spoke


The enterprise-scale hub and spoke reference architecture includes the enterprise-scale foundation and adds hybrid connectivity with Azure ExpressRoute or virtual private network (VPN), as well as a network architecture based on the traditional hub and spoke network topology. This allows Tailwind Traders to leverage the foundational landing zone and add connectivity to on-premises datacenters and branch offices by using a traditional hub and spoke network architecture.

Figure 2: Enterprise-scale with hub and spoke architecture

This architecture includes the enterprise-scale foundation, and in addition, deploys:

◉ An Azure subscription dedicated for connectivity, which deploys core Azure networking resources such as a hub virtual network, Azure Firewall (optional), Azure ExpressRoute gateway (optional), VPN gateway (optional), and Azure private domain name system (DNS) zones for Private Link.

◉ An Azure subscription dedicated to identity, in case your organization requires Azure Active Directory domain controllers in a dedicated subscription (optional).

◉ Landing zone management group for corp-connected applications that require connectivity to on-premises, to other landing zones, or to the internet through shared services provided in the hub virtual network.

◉ Landing zone management group for online applications that will be internet-facing, where a virtual network is optional and hybrid connectivity is not required.

◉ Landing zone subscriptions for Azure native and internet-facing online applications and resources.

◉ Landing zone subscriptions for corp-connected applications and resources, including a virtual network that will be connected to the hub through VNet peering.

◉ Azure Policies for online and corp-connected landing zones.
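The VNet peering between a corp-connected landing zone and the hub virtual network could be sketched in Terraform as follows. Resource names, the region, and the address spaces are hypothetical placeholders, and peering must be declared in both directions:

```hcl
resource "azurerm_resource_group" "connectivity" {
  name     = "rg-connectivity"
  location = "westeurope"
}

# Hub virtual network in the dedicated connectivity subscription.
resource "azurerm_virtual_network" "hub" {
  name                = "vnet-hub"
  resource_group_name = azurerm_resource_group.connectivity.name
  location            = azurerm_resource_group.connectivity.location
  address_space       = ["10.0.0.0/16"]
}

# Spoke virtual network for a corp-connected landing zone.
resource "azurerm_virtual_network" "spoke" {
  name                = "vnet-spoke-corp"
  resource_group_name = azurerm_resource_group.connectivity.name
  location            = azurerm_resource_group.connectivity.location
  address_space       = ["10.1.0.0/16"]
}

# Peering is directional: one resource per direction so traffic can
# flow both from hub to spoke and from spoke to hub.
resource "azurerm_virtual_network_peering" "hub_to_spoke" {
  name                      = "hub-to-spoke"
  resource_group_name       = azurerm_resource_group.connectivity.name
  virtual_network_name      = azurerm_virtual_network.hub.name
  remote_virtual_network_id = azurerm_virtual_network.spoke.id
}

resource "azurerm_virtual_network_peering" "spoke_to_hub" {
  name                      = "spoke-to-hub"
  resource_group_name       = azurerm_resource_group.connectivity.name
  virtual_network_name      = azurerm_virtual_network.spoke.name
  remote_virtual_network_id = azurerm_virtual_network.hub.id
}
```

In the real reference implementation the spoke lives in its own landing zone subscription; a single resource group is used here only to keep the fragment self-contained.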

Enterprise-scale virtual WAN


The Enterprise-scale virtual wide-area network (WAN) reference implementation includes the foundation as well as Azure Virtual WAN, Azure ExpressRoute, and VPN. This allows Tailwind Traders and other organizations to add hybrid connectivity to their on-premises datacenter, branch offices, factories, retail stores, or other edge locations and take advantage of a global transit network.

Figure 3: Enterprise-scale Virtual WAN architecture

This architecture includes the enterprise-scale foundation, and in addition, deploys:

◉ An Azure subscription dedicated to connectivity, which deploys core networking resources such as Azure Virtual WAN, Azure Firewall and policies, and more.

◉ An Azure subscription dedicated to identity, where customers can deploy the Azure Active Directory domain controllers required for their environment.

◉ Landing zone management group for corp-connected applications that require hybrid connectivity. This is where you will create your subscriptions that will host your corp-connected workloads.

◉ Landing zone management group for online applications that will be internet-facing and don't require hybrid connectivity. This is where you will create your subscriptions that will host your online workloads.
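The global transit network at the core of this option could be sketched in Terraform roughly as below. Names, the region, and the hub address prefix are hypothetical, and the VPN gateway is the optional piece used to connect branch offices:

```hcl
resource "azurerm_resource_group" "vwan" {
  name     = "rg-vwan-connectivity"
  location = "westeurope"
}

# The Virtual WAN is the global transit backbone.
resource "azurerm_virtual_wan" "global" {
  name                = "vwan-global"
  resource_group_name = azurerm_resource_group.vwan.name
  location            = azurerm_resource_group.vwan.location
}

# A regional virtual hub inside the Virtual WAN; landing zone VNets,
# branches, and edge locations attach here.
resource "azurerm_virtual_hub" "westeurope" {
  name                = "vhub-westeurope"
  resource_group_name = azurerm_resource_group.vwan.name
  location            = azurerm_resource_group.vwan.location
  virtual_wan_id      = azurerm_virtual_wan.global.id
  address_prefix      = "10.100.0.0/23"
}

# Optional site-to-site VPN gateway in the hub, used by branch
# offices, factories, or retail stores to reach the transit network.
resource "azurerm_vpn_gateway" "branches" {
  name                = "vpngw-branches"
  resource_group_name = azurerm_resource_group.vwan.name
  location            = azurerm_resource_group.vwan.location
  virtual_hub_id      = azurerm_virtual_hub.westeurope.id
}
```

Additional regional hubs can be added to the same Virtual WAN, which is what makes the topology a global transit network rather than a single hub and spoke.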

Source: microsoft.com