Tuesday, 29 March 2022

Manage port forwarding for backend pool with Azure Load Balancer

Starting today, Azure Load Balancer enables customers to manage port forwarding for Virtual Machine Scale Sets or a group of virtual machines in a backend pool with a single setup and minimal management overhead.

What is port forwarding?

Port forwarding lets you connect to specific virtual machines by using the Load Balancer frontend IP address and port number. Load Balancer, upon receiving the traffic on a certain frontend port, will forward traffic to the target virtual machine on a specific backend port.

By employing an inbound NAT rule, you can access virtual machines in a virtual network through the IP address and port of the Load Balancer.

Inbound NAT rule or load balancing rule?

In a nutshell, an inbound NAT rule is used for traffic forwarding: it maps each virtual machine to a frontend port of the Load Balancer for access. All traffic arriving at the specified frontend port of the Load Balancer is forwarded directly to a single targeted virtual machine, so traffic is not load balanced in this case. Inbound NAT rules are useful when you want to forward traffic directly to a specific virtual machine and port combination. Load balancing rules, in contrast, are used to distribute traffic coming to a specific frontend of the Load Balancer across all virtual machines in the backend pool.

Figure 1: Inbound NAT rule for single machine.

Figure 2: Load Balancing rule.

You can have both load balancing rules and inbound NAT rules on the same Load Balancer. For example, a load balancing rule can be set up for the specific backend pool from frontend port 80 to backend port 80, so that incoming traffic can be distributed across the virtual machines in the backend pool. At the same time, you can also set up an inbound NAT rule for each virtual machine on backend port 443 so that you can use Remote Desktop Protocol (RDP) to access each virtual machine for management purposes.

Figure 3: Use both inbound NAT rule and load balancing rule on a backend pool.

What are the customer challenges that we aim to solve?


Inbound NAT rules for individual target machines

When you have a large backend pool with frequent scaling up and down, this approach will create unnecessary overhead. An inbound NAT rule needs to be deleted before the target virtual machine can be taken out of the backend pool. Similarly, whenever scaling up occurs, a new inbound NAT rule for the new virtual machine needs to be created to complete the port forwarding set-up.

Inbound NAT pool

An inbound NAT pool is an existing option for backend pools composed of Virtual Machine Scale Sets. However, updating an inbound NAT pool introduces additional steps behind the scenes, which can result in longer deployment times. Additionally, this approach is not viable when your backend pool is composed of a group of virtual machines.

Universal port forwarding solution for backend pools of all types


Inbound NAT rule for Virtual Machine Scale Sets, virtual machines, or IP addresses in a backend pool

Today, we are announcing the addition of backend pool port management to inbound NAT rules. It is the preferred solution for setting up port forwarding for backend pools of all types, including Virtual Machine Scale Sets, virtual machines, and IP addresses.

Based on the frontend port range start and the maximum number of machines in the backend pool defined for an inbound NAT rule, Load Balancer pre-allocates a range of frontend ports beginning at the frontend port range start. The range end is calculated by adding the maximum number of machines in the backend pool to the frontend port range start, which ensures that there are enough frontend ports in the event of a scale-up.
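The sketch below illustrates this calculation with plain arithmetic; it is not an Azure API, just the rule described above applied to the example values used later in this post (range start 500, up to 1,000 machines).

```python
def preallocated_frontend_ports(range_start: int, max_machines: int) -> range:
    """Frontend ports Load Balancer reserves for an inbound NAT rule.

    The range begins at the configured frontend port range start and ends at
    range start + maximum number of machines in the backend pool, so there is
    a port available for every instance the pool could scale out to.
    """
    return range(range_start, range_start + max_machines)


ports = preallocated_frontend_ports(500, 1000)
print(ports.start, ports.stop - 1)  # 500 1499: 1,000 pre-allocated frontend ports
```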

Figure 4: Add inbound NAT rule.

In this example, Load Balancer will pre-allocate 1,000 frontend ports starting from port 500, so the backend pool can scale up to 1,000 machines without requiring extra management from the customer.

Figure 5: Inbound NAT rule for backend pool.

Retrieve port mapping for individual machines

Once the inbound NAT rule is created, Load Balancer assigns a frontend port to each virtual machine in the backend pool. Port mappings of existing machines will not change in the event of scaling up or down. In this example, to access centralus-vm1 on port 443, you can use https://40.89.253.167:502.
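A minimal sketch of how a client could build that connection URL once the per-machine mapping has been retrieved. The mapping value below is taken from the example above and is hard-coded only for illustration; the actual assignment is managed by Load Balancer and should be looked up from its port mapping view rather than derived from the VM name.

```python
# Hypothetical values from the example above.
frontend_ip = "40.89.253.167"
port_mapping = {"centralus-vm1": 502}  # frontend port assigned by Load Balancer

vm = "centralus-vm1"
url = f"https://{frontend_ip}:{port_mapping[vm]}"
print(url)  # https://40.89.253.167:502
```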

Figure 6: Port mapping for individual machines.

Source: microsoft.com

Thursday, 24 March 2022

Cloud migration for medical imaging data using Azure Health Data Services and IMS


This blog is part of a series in collaboration with our partners and customers leveraging the newly announced Azure Health Data Services. Azure Health Data Services, a platform as a service (PaaS) offering designed to support Protected Health Information (PHI) in the cloud, is a new way of working with unified data—providing care teams with a platform to support both transactional and analytical workloads from the same data store and enabling cloud computing to transform how we develop and deliver AI across the healthcare ecosystem.

The first implementation of digital imaging techniques in clinical use started in the 1970s. Since then, the medical imaging industry has grown exponentially: over the last two and a half decades, there has been significant development in image acquisition solutions, which has boosted image quality and adoption in different clinical applications. Healthcare is projected to deliver the greatest industry-specific CAGR of 36 percent out to 2025 (global healthcare data is forecast to reach 2.3 zettabytes* in the coming year alone), and medical imaging data represents approximately 80 to 90 percent of that growth.

While the amount of data generated by the medical imaging industry has continued to grow, the solutions for storing and handling this data have remained archaic and on-premises, constrained by products with insufficient computing power, limited storage, and quickly outdated hardware. In addition, the lack of interoperability between these on-premises systems and other clinical data solutions, combined with increasing workloads within imaging departments, has made it a struggle to achieve predictive diagnosis and improved outcomes for patients. Bringing health data into the cloud has been met with challenges ranging from concerns about the security and privacy of the data to a lack of understanding of the opportunities it opens.

For the most part, interoperability in the health industry has also been limited and focused on clinical data. However, other types of health data such as imaging, IoT, and unstructured data also play a critical role in getting a full view of the patient, thereby contributing to better patient diagnosis and care.

This is why Microsoft has released Azure Health Data Services, which aims to support combining clinical, imaging, and MedTech data in the cloud using global interoperability standards like Fast Healthcare Interoperability Resources (FHIR®) and Digital Imaging and Communications in Medicine (DICOM). The DICOM service within Azure Health Data Services allows standards-based communication with any DICOMweb™ enabled system, such as medical imaging systems, vendor-neutral archives (VNAs), picture archiving and communication systems (PACS), and so on. The goal is to fully leverage the power of cloud infrastructure for medical images, creating a service that is fast, highly reliable, scalable, and designed for security.

Within the DICOM service, the QIDO, WADO, and STOW protocols support the query, retrieval, and storage of DICOM objects, while custom tags allow for user-defined, searchable tags. You can also use DICOMcast as a single source to query for cross-domain scenarios. DICOMcast injects DICOM metadata into the FHIR service, or FHIR server, allowing a single source of truth for both clinical data and imaging metadata.
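To make those roles concrete, here is a minimal, hypothetical sketch of a QIDO-RS study search against a DICOMweb endpoint using Python's requests library. The service URL, token, and patient ID are placeholders, and the exact base URL of a given DICOM service deployment may differ.

```python
import requests

DICOM_SERVICE_URL = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1"  # placeholder
TOKEN = "<azure-ad-access-token>"  # placeholder

# QIDO-RS: search for studies belonging to a patient, returned as DICOM JSON.
response = requests.get(
    f"{DICOM_SERVICE_URL}/studies",
    params={"PatientID": "12345"},
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/dicom+json",
    },
)
response.raise_for_status()

for study in response.json():
    # 0020000D is the standard DICOM tag for Study Instance UID.
    print(study["0020000D"]["Value"][0])
```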

Once imaging data is persisted in the cloud, there is also a need for seamless integration of workloads into the cloud, with minimal disruption and without extra investment in devices and software. In particular, customers currently relying on DICOM DIMSE need a path to smoothly adopt cloud-based imaging storage and solutions powered by our DICOM service.

To address this challenge, IMS collaborated with Microsoft, leveraging Microsoft cloud technologies to build a powerful tool that migrates medical imaging data from legacy workstations to the cloud using Azure Health Data Services. IMS selected Microsoft Azure because it has the most comprehensive offering and an active roadmap to support the transition of healthcare to the cloud.

Using CloudSync as a synchronization tool

It was apparent from the beginning that creating a simple protocol converter or gateway to push images from on-premises to the cloud was not an optimal solution, since the data would flow in only one direction (from a healthcare organization to the cloud for storage, archival, or advanced analytics). With that approach, the institution would miss most of the benefits, such as calling the image set back into the existing on-premises viewer after performing annotations, running cloud-enabled AI models, or performing advanced analytics. Moreover, having access to a patient's prior imaging studies during the current visit plays a vital role in validating abnormal conditions over time for better clinical outcomes.

To bridge this gap, IMS designed and developed CloudSync, a software-only DICOM device that actively synchronizes the on-premises archive (or multiple archives) with an Azure DICOMweb endpoint. CloudSync allows the data to flow both ways and also allows business logic to be implemented for the proactive staging of a patient's historical imaging data for immediate access, thereby reducing the latency experienced by the user.

This synchronization allows integration of organizations' existing on-premises solutions with Azure Health Data Services and machine learning environments, so that they can store, archive, and slice and dice their data for superior cohort management. With the ability to conveniently connect to Microsoft Power BI and Azure Synapse Analytics through Azure Health Data Services, institutions can curate their datasets, develop and deploy models, monitor their performance, perform advanced analytics with Azure Machine Learning pipelines, and push results back into their clinical workflow.

Key features of CloudSync include:

◉ Synchronize medical DICOM images from on-premises archives to the cloud using Azure Health Data Services: Enable collaboration among multiple on-prem devices by connecting all of them in one point for ease of access by everyone.

◉ Eliminate network latency while fetching medical imaging data: Proactively push prior medical images of the patient from the cloud to the on-prem devices based on the patient’s schedule and have them ready during the patient’s visit.

◉ Migrate imaging data from legacy workstations to the cloud: Enable seamless and effortless integration of on-premises imaging workstations with the cloud.


CloudVue: A one-stop-shop for medical image viewing


To fully leverage the power of Azure, IMS also provides a zero-footprint diagnostic viewer called CloudVue. CloudVue allows users to safely review the data stored in the Azure DICOMweb archive on any device, making it possible to access imaging data from anywhere. On top of the standard security mechanisms, CloudVue also encrypts the data during transmission.

In addition to providing the standard viewing features and tools of a web viewer, CloudVue also provides:

◉ The ability for organizations to grant granular, secure access to specific medical imaging data, such as authorizing users to access specific studies in the archive rather than the entire repository. This allows the organization to safely grant access to referring physicians and even patients.

◉ The ability to deliver and improve AI workloads on Azure: CloudVue can store and handle data originating from AI predictions while also tracking user behavior, so it can determine whether a prediction is correct. In this way, CloudVue implements a feedback loop to monitor and improve AI over time.

◉ Annotation capabilities for images used in AI modeling.

CloudVue is the perfect companion for CloudSync in allowing users to take advantage of storing data in Azure using Azure Health Data Services.


Within the current healthcare market space, one of the biggest challenges facing radiologists, clinicians, and care teams when making a diagnosis is easy access to a patient's complete history. While a radiologist might have access to the medical images, not being able to query and find a patient's medical history, medications, and other lab work in the same place makes predictive diagnosis difficult and time-consuming. With Azure Health Data Services, all of this data can now be accessed together, and using DICOMcast, a new feature, clinical information can sit alongside metadata from medical images, making both quickly searchable. This technology, used alongside IMS's CloudSync and CloudVue, can change how radiologists interact with medical images and give them the ability to use the data they have for diagnosis and research.

The beginning of next-gen medical imaging viewing


IMS has created a solution for the medical imaging industry that allows health organizations to take full advantage of Azure Health Data Services at their own pace while:

◉ Avoiding any disruptions to the current workflow.
◉ Maintaining the current investments in devices and software.

Do more with your data with Microsoft Cloud for Healthcare


With Azure Health Data Services, health organizations are empowered to transform their patient experience, discover new insights with the power of machine learning and AI, and manage PHI data with confidence. Enable your data for the future of healthcare innovation with Microsoft Cloud for Healthcare.

Source: microsoft.com

Tuesday, 22 March 2022

Azure HBv3 VMs for HPC now generally available with AMD EPYC CPUs with AMD 3D V-Cache

Azure HBv3 virtual machines (VMs) are now upgraded to and generally available with 3rd Gen AMD EPYC™ processors with AMD 3D V-Cache™ technology, formerly codenamed “Milan-X”, in the Azure East US, South Central US, and West Europe regions. In addition, we are announcing that HBv3 VMs will also soon come to the Central India, UK South, China North 3, Southeast Asia, and West US 3 Azure regions. Customers can view the estimated time of arrival for these new regions at Azure Availability by region.

To access these enhanced CPUs, customers need only deploy new HBv3 VMs, as all VM deployments from today onward will occur on machines featuring the new processors. Existing HBv3 VMs deployed prior to today’s launch will continue to see 3rd Gen AMD EPYC processors, formerly codenamed “Milan”, until they are de-allocated and a customer creates new VMs in their place.
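For Linux-based HBv3 VMs, a quick way to see which processor a given instance landed on is to read the CPU model name from the guest OS. The minimal sketch below is illustrative only; it assumes a Linux VM and does not assert which model string corresponds to which processor generation.

```python
# Print the CPU model name reported by a Linux VM (for example, an HBv3 instance).
with open("/proc/cpuinfo") as cpuinfo:
    for line in cpuinfo:
        if line.startswith("model name"):
            print(line.split(":", 1)[1].strip())
            break
```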

Significant performance upgrade for all HBv3 customers

As previously detailed, EPYC processors with AMD 3D V-Cache can significantly improve the performance, scaling efficiency, and cost-effectiveness of a variety of memory-performance-bound workloads such as computational fluid dynamics (CFD), explicit finite element analysis, computational geoscience, weather simulation, and silicon design register-transfer level (RTL) workflows.

Compared to the performance HBv3-series delivered prior to the upgrade to the new processors, customers will experience up to:

◉ 80 percent higher performance for CFD.

◉ 60 percent higher performance for EDA RTL.

◉ 50 percent higher performance for explicit FEA.

◉ 19 percent higher performance for weather simulation.

HBv3-series VMs retain their existing pricing and do not require changes to customer workloads. No other changes are being made to the HBv3-series VM sizes customers already know and rely on for their critical research and business workloads.

The highest performance, most cost-effective cloud HPC

Based on testing of a broad array of customer HPC workloads against the best publicly demonstrated performance from other major cloud providers, Azure HBv3-series VMs with 3rd Gen AMD EPYC processors with AMD 3D V-Cache and InfiniBand from NVIDIA Networking deliver 2.23-3.88 times higher performance.

Figure 1: Relative at-scale workload performance in CFD, molecular dynamics, and weather simulation.

Continuous improvement for Azure HPC customers


Microsoft and AMD share a vision for a new era of high-performance computing in the cloud, one defined by continuous improvements to the critical research and business workloads that matter most to our customers. Azure has teamed with AMD to make this vision a reality by raising the bar on the performance, scalability, and value we deliver with every release of Azure HB-series virtual machines.

Figure 2: Azure HB-Series virtual machine generational performance improvement.

“Rescale is excited to see the dedication by Microsoft to continually raise the bar, the new Azure HBv3 VMs featuring AMD EPYC™ CPUs with AMD 3D V-Cache™ technology specifically targets memory bandwidth bottlenecks impacting the most widely used commercial CFD codes on the Rescale platform. Preliminary testing has demonstrated a 25 percent performance boost across three of the most common CFD applications and a positive impact on virtually all software running on the upgraded instances,” said Chris Langel, HPC Engineering Manager at Rescale and Mulyanto Poort, VP of HPC Engineering at Rescale. “We are seeing a strong customer demand for “Milan-X” and are excited to offer the updated Azure HBv3 VMs to our customers,” said Ethan Rasa, Senior Director of Strategic Alliances at Rescale.

“Ansys Fluent is the industry-leading computational fluid dynamics tool and our customers are always looking for ways to run larger problems more quickly, or with more granularity.  The super-linear scaling we are seeing with the AMD Milan-X chip on the Azure HBv3 virtual machines will be received with a lot of excitement by our user base across many industries.”—Jeremy McCaslin, Product Manager, Fluids, Ansys

"Customers who require high-fidelity production simulations in demanding industries rely on Siemens Simcenter STAR-CCM+ software,” said Patrick Niven, Senior Director of Fluid and Thermal Product Management, Siemens Digital Industries Software. “Customers usually need those results quickly, so Siemens and Microsoft collaborate to ensure Azure HB-series instances deliver true HPC-class performance. The new Azure HBv3 instances featuring 3rd Gen AMD EPYC™ CPUs with AMD 3D V-Cache™ technology can accelerate simulations by up to 50 percent, so Microsoft can offer Simcenter STAR-CCM+ users cutting-edge performance on an accessible platform.”

Source: microsoft.com

Sunday, 20 March 2022

Introducing dynamic lineage extraction from Azure SQL Databases in Azure Purview

Data citizens, including both technical and business users, rely on data lineage for root cause analysis, impact analysis, data quality tracing, and other data governance applications. In the current data landscape, where data moves fluidly across locations (on-premises to and across clouds) and across data platforms and applications, it is increasingly important to map the lineage of data. That's why we're introducing dynamic lineage extraction, currently in preview.

Conventional systems map lineage by parsing data transformation scripts, an approach also called static code analysis. This works well in simple scenarios. For example, when a SQL script produces a target table Customer_Sales by joining two tables called Customer and Sales, static code analysis can map the data lineage. However, in many real use cases, the data processing workloads are quite complicated. The scripts could be wrapped in a stored procedure that is parameterized and uses dynamic SQL. There could be a decision tree with an if-then-else statement executing different scripts at runtime. Or, simply, data transactions could have failed to commit at runtime.
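To illustrate why static parsing falls short, the hypothetical snippet below composes the T-SQL that a parameterized stored procedure might hand to sp_executesql. The target table name exists only at runtime, so a static scan of the procedure body cannot tell that Customer and Sales feed Customer_Sales.

```python
def build_aggregation_sql(target_table: str) -> str:
    """Compose the dynamic SQL a stored procedure might execute via sp_executesql.

    The target table is a runtime parameter, so it never appears literally in the
    procedure definition that a static lineage parser would read.
    """
    return (
        f"INSERT INTO {target_table} (CustomerId, TotalSales) "
        "SELECT c.CustomerId, SUM(s.Amount) "
        "FROM Customer AS c JOIN Sales AS s ON s.CustomerId = c.CustomerId "
        "GROUP BY c.CustomerId"
    )


# Only at runtime does the real target become known.
print(build_aggregation_sql("Customer_Sales"))
```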

In all these examples, dynamic analysis is required to track lineage effectively. Even more importantly, static lineage analysis does not associate data and processes with runtime metadata, which limits customer applications significantly. For instance, dynamic lineage that encodes who ran a stored procedure, when it was run, and from which application and server, enables customers to govern privacy, comply with regulations, reduce time to insight, and better understand their overall data and processes.

Dynamic data lineage—Azure SQL Databases

Today, we are announcing the preview release of dynamic lineage extraction from Azure SQL Databases in Azure Purview. Azure SQL Database is one of the most widely used relational database systems in enterprises. Stored procedures are commonly used to perform data transformations and aggregations on SQL tables for downstream applications. With this release, the Azure Purview Data Map can be further enriched with dynamic lineage metadata such as run status, impacted number of rows, the client from which the stored procedure is run, user info, and other operational details from actual runs of SQL stored procedures in Azure SQL Databases.

Limited lineage metadata from static code analysis*

The actual implementation involves the Azure Purview Data Map tapping into the instrumentation framework of the SQL engine and extracting runtime logs to aggregate dynamic lineage. The runtime logs also provide the actual queries executed in the SQL engine for data manipulation, which Azure Purview uses to map data lineage and gather additional, detailed provenance information. Azure Purview scanners run several times a day to keep the dynamic lineage and provenance from Azure SQL Databases fresh.


To learn more about Azure Purview dynamic data lineage from Azure SQL Databases, check out the video.


Source: microsoft.com

Thursday, 17 March 2022

Secure your APIs with Private Link support for Azure API Management

Azure API Management is a fully managed service that enables customers to publish, secure, transform, maintain, and monitor APIs. With a few clicks in the Azure portal, you can create an API facade that acts as a “front door” through which external and internal applications can access data or business logic implemented by your custom-built backend services, running on Azure, for example on Azure App Service or Azure Kubernetes Service, or hosted outside of Azure, in a private datacenter or on-premises. Azure API Management handles all the tasks involved in mediating API calls, including request authentication and authorization, rate limit and quota enforcement, request and response transformation, logging and tracing, and API version management.

Azure API Management helps you:

◉ Unlock legacy assets: APIs are used to abstract and modernize legacy backends and make them accessible from new cloud services and modern applications. APIs allow innovation without the risk, cost, and delays of migration.

◉ Create API-centric app integration: APIs are easily consumable, standards-based, and self-describing mechanisms for exposing and accessing data, applications, and processes. They simplify and reduce the cost of app integration.

◉ Enable multi-channel user experiences: APIs are frequently used to enable user experiences such as web, mobile, wearable, or Internet of Things (IoT) applications. Reuse APIs to accelerate development and return on investment (ROI).

◉ Enable business-to-business (B2B) integration: APIs exposed to partners and customers lower the barrier to integrating business processes and exchanging data between business entities. APIs eliminate the overhead inherent in point-to-point integration. Especially with self-service discovery and onboarding enabled, APIs are the primary tool for scaling B2B integration.

We are happy to announce the preview of Azure Private Link support for the Azure API Management service. If you are not familiar with Azure API Management, when you deploy this service you get three main components: the developer portal, the gateway, and the management plane. With Azure Private Link, we can create a private endpoint for the gateway component, which is exposed through a private IP address within your virtual network. This allows inbound traffic arriving at that private IP to reach the Azure API Management gateway.

Azure Private Link

With Azure Private Link, communications between your virtual network and the Azure API Management gateway travel over the Microsoft backbone network privately and securely, eliminating the need to expose the service to public internet.

Key benefits of Azure Private Link

Through this functionality we will provide the same consistent experience found in other PaaS services with private endpoints:

◉ Private access from Azure Virtual Network resources, peered networks, and on-premises networks.

◉ Built-in data exfiltration protection for Azure resources.

◉ Predictable private IP addresses for PaaS resources.

◉ Consistent and unified experience across PaaS services.

Private endpoints and public endpoints

Figure 1: Architecture diagram depicting the secure and private connectivity to Azure API Management Gateway—when using Azure Private Link.

Azure Private Link exposes private endpoints through private IP addresses. In the case above, the contoso.azure-api.net gateway has a private IP of 10.0.0.6, which is reachable only by resources in contoso-apim-eastus-vnet. This allows the resources in this virtual network to communicate with the gateway securely and privately, and access to the gateway can be restricted to resources within the virtual network.
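One simple way to confirm that the private endpoint is in effect is to resolve the gateway hostname from a machine inside the virtual network. The sketch below uses the hostname and IP from the example above; it assumes the virtual network's DNS is configured for the private endpoint.

```python
import socket

# Run from a VM inside contoso-apim-eastus-vnet: the gateway hostname should
# resolve to the private endpoint IP rather than a public address.
resolved_ip = socket.gethostbyname("contoso.azure-api.net")
print(resolved_ip)  # expected: 10.0.0.6 when private DNS resolution is in place
```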

At the same time, the public endpoint for the contoso.azure-api.net gateway may remain available, for example for the development team. In this release, Azure Private Link also supports disabling the public endpoint, limiting access to only the private endpoints configured under Private Link.

How to decide which networking model to use with Azure API Management?


Azure API Management also supports virtual network injection, allowing all components to be deployed inside a virtual network. With the addition of private endpoints, we have the following options for integrating inside a custom Azure Virtual Network:

Virtual network—external
◉ Supported tiers: Developer and Premium.
◉ Supported components: Developer portal, gateway, management plane, and Git repository.
◉ Supported traffic: Inbound and outbound traffic can be allowed to the internet, peered virtual networks, ExpressRoute, and VPN S2S connections.

Virtual network—internal
◉ Supported tiers: Developer and Premium.
◉ Supported components: Developer portal, gateway, management plane, and Git repository.
◉ Supported traffic: Inbound and outbound traffic can be allowed to peered virtual networks, ExpressRoute, and VPN S2S connections.

Private endpoint connection (preview)
◉ Supported tiers: Developer, Basic, Standard, and Premium.
◉ Supported components: Gateway only (managed gateway supported, self-hosted gateway not supported).
◉ Supported traffic: Only inbound traffic is supported; it can be allowed from the internet, peered virtual networks, ExpressRoute, and VPN S2S connections.

At this moment, these three options are mutually exclusive: you cannot choose a virtual network integration option (external or internal) in combination with private endpoint connections. Also note that only our managed gateways support private endpoint connections; the self-hosted gateway does not support private endpoints in Azure.

Preview limitations


During the preview period, we will only support inbound traffic coming to the gateway, instances using the STV2 compute platform, and all pricing tiers except Consumption; in addition, Azure Private Link is limited to instances that are not using virtual network injection (internal or external). The feature will move to general availability as we assess feedback.

With the preview of Azure Private Link for Azure API Management, you are now empowered to bring your Azure API Management instances to a virtual network using the same consistent experience of other Azure PaaS services. You can create and manage private endpoints for the gateway of your Azure API Management instance. We will be sharing more updates and content in the future, so stay tuned for new updates towards the general availability of this feature.

Source: microsoft.com

Tuesday, 15 March 2022

Scaling cloud solutions to new heights with Microsoft’s partner ecosystem


Companies building cloud solutions, such as independent software vendors (ISVs), SaaS providers, and app builders, have never been more important to the world than they are today.

With the continued acceleration of digital transformation, every organization, small or large, in every industry across the globe, will require cloud infrastructure and services to power their business. As customers’ needs for cloud solutions exponentially increase, so do the opportunities for ISVs to connect with partners and customers across the Microsoft Cloud and the commercial marketplace. To help our ecosystem harness these opportunities, we are announcing:

◉ Private offers with margin sharing to motivate 90,000-plus cloud partners: Now generally available, the private offer capability in the commercial marketplace lets ISVs create and share margin with partners in the Cloud Solution Provider program, creating new sales channels instantly.

◉ Increased agility with private offers for customers: With enhancements to private offers in the commercial marketplace, ISVs can now create a unique private offer per customer in less than 15 minutes. This helps ISVs unlock enterprise customers for seven-digit deals and sell directly to customers with a cloud consumption commitment (if the ISV solution is eligible for Azure IP co-sell).

For Microsoft, the commercial marketplace is the connector between ISVs and customers; it's an engine dedicated to accelerating growth. By selling through the commercial marketplace, ISVs get instant global reach: 1 billion people who use Microsoft technology, the 95 percent of Fortune 500 companies that use Microsoft Azure, and 270 million monthly active users on Microsoft Teams.

Shifts in business-to-business (B2B) buying

Before COVID-19, customers in both B2C and B2B environments already expressed a preference for digital commerce experiences; COVID-19 only accelerated digital adoption, and digital-first selling is here to stay.

Harvard Business Review recently surveyed 1,000 B2B buyers: 43 percent of those surveyed would prefer a purely digital experience for all sales. When the data was cut by generation, 29 percent of Baby Boomers preferred digital experiences in B2B buying, and 54 percent of millennials had the same sentiment. Ten years from now, the channels we use for B2B buying today will be obsolete or at least forever transformed. Commercial marketplaces deliver on digital-first. Through B2B marketplaces, customers get a trusted buying experience that simplifies purchase and deployment while helping them optimize costs with pre-committed cloud spend.

Private offers to scale and motivate 90K-plus cloud partners

Margin sharing from ISVs to partners in the Cloud Solution Provider program (CSPs) became generally available on February 14, 2022. With margin sharing, ISVs can directly incentivize CSPs to sell their solutions, delivering on the promise of partner-to-partner marketing.

Collaborating with CSPs, ISVs can lower customer acquisition costs and scale their business to new customers globally. We are seeing pairings of ISV and CSP partners having tremendous success. Just two months into partnering with Pax8 (the CSP), LawToolBox (the ISV) has seen a 105 percent increase in licenses transacted through the marketplace.

Another partner pairing, Sherweb (the CSP) and Nimble (the ISV), were able to work together and scale without adding any overhead. 

“The outcome of becoming a P2P co-seller with Microsoft has enabled Nimble to scale our simple CRM for Microsoft 365 to over 22 countries around the world without hiring one person. That's amazing.”

Jon Ferrara, CEO Nimble

ISVs can offer margin to 400 eligible partners at once to open new sales channels, mobilizing a global ecosystem of partners. This also helps ISVs lower acquisition costs and simplify the sales process while increasing customer retention. And finally, when CSPs sell an ISV solution, they can bundle it with Microsoft Cloud solutions and their own value-add services to drive scale and recurring revenue.


Guidance on how to create a private offer and extend a margin to partners in the Cloud Solution Provider program.

Increased agility with private offers—accelerating seven-digit sales

To meet the needs of customers with agility, ISVs often use private offers. Private offers are the key to enterprise deal-making in the marketplace, delivering flexibility such as negotiated pricing, private terms and conditions, and specialized configurations. Microsoft has recently made substantial improvements to this functionality: ISVs can now create unique private offers per customer in less than 15 minutes.

Additional improvements include:

◉ Create an unlimited number of private offers.

◉ Ability to time-bound the private offer.

◉ Offer custom terms and conditions.

◉ Bundle multiple products in the same private offer.

One of the main motivators for customers to buy through B2B marketplaces is to decrement pre-committed cloud spend. Microsoft counts 100 percent of Azure Marketplace sales of Azure IP co-sell eligible solutions towards a customer's Microsoft Azure Consumption Commitment (MACC). These deals are often in the millions and are commonly transacted via private offers, because the large deal sizes often need customized terms and conditions, special pricing considerations, and so on.

The recent improvements in private offers help ISVs connect with MACC-eligible customers. According to tackle.io’s annual State of Cloud Marketplaces report, 82 percent of ISVs listed unlocking pre-committed cloud spend as their number one reason to sell through commercial marketplaces, and 43 percent of customers listed spending pre-committed cloud spend as their number one reason to buy through commercial marketplaces. Microsoft has a rich set of enterprise customers that require private offers, and we are seeing the acceleration. Year-over-year we have seen a 300 percent increase in customers buying Azure IP co-sell solutions through the commercial marketplace and we expect those numbers to continue to grow.

For agility and speed, ISVs can leverage APIs to create private offers and can view all private offers in a centralized dashboard, with the flexibility to copy, withdraw, and upgrade offers as appropriate. As customers accept private offers, or when private offers are set to expire, the ISV is notified in Partner Center. Customers see all the private offers associated with their account, and when they purchase, they simply accept the offer with a click. There is no need to redeploy virtual machines: the solution deploys right from the Azure portal and is configured to work in the customer's tenant.

Embracing the marketplace as a sales channel

With the proliferation of cloud solutions, commercial marketplaces simplify selling and offer customers convenience and a trusted environment to buy and deploy solutions to run their business. ISVs can accelerate their growth by embracing a third-party marketplace as a major sales channel. The improvements to private offers give ISVs the agility they need whether selling to customers with cloud consumption commitments or scaling through our 90,000-plus partners in the CSP program.

As part of the most trusted and comprehensive cloud, the commercial marketplace is how we are helping deliver tech intensity at scale, connecting over 30,000 solutions from partners to the 1 billion customers who use Microsoft products. Activate this channel by becoming a Microsoft partner and by publishing a transactable offer to the commercial marketplace.

Source: microsoft.com

Sunday, 13 March 2022

Technical leaders agree: AI is now a necessity to compete

AI is enabling new experiences everywhere. When people watch a captioned video on their phone, search for information online, or receive customer assistance from a virtual agent, AI is at the heart of those experiences. As users increasingly expect the conveniences that AI can unlock, these capabilities are seen less as incremental improvements and more as core to any app experience. A recent Forrester study shows that 84 percent of technical leaders feel they need to implement AI into apps to maintain a competitive advantage. Over 70 percent agree that the technology has graduated out of its experimental phase and now provides meaningful business value.

To make AI a core component of their business, organizations need faster, responsible ways to implement AI into their systems, ideally using their teams’ existing skills. In fact, 81 percent of technical leaders surveyed in the Forrester study say they would use more AI if it were easier to develop and deploy.

So, how can leaders accelerate the execution of their AI ambitions? Here are three important considerations for any organization to streamline AI deployments into their apps:

1. Take advantage of cloud AI services

There are cloud AI services that provide prebuilt AI models for key use cases, like translation and speech-to-text transcription. This makes it possible to implement these capabilities into apps without requiring data science teams to build models from scratch. Two-thirds of technical leaders say the breadth of use cases supported by cloud AI services is a key benefit. Using the APIs and SDKs provided, developers can add and customize these services to meet their organization’s unique needs. And prebuilt AI models benefit from regular updates for greater accuracy and regulatory compliance.

Azure has two categories of these services:

◉ Azure Applied AI Services, which are scenario-specific to accelerate time to value.

◉ Cognitive Services, which make high-quality AI models available through APIs for a more customized approach.


2. Empower your developers


Your developers can use APIs and SDKs within your cloud AI services to build intelligent capabilities into apps within their current development process. Developers of any skill level can get started quickly using the programming languages they already know. And should developers need added support, cloud vendors readily offer learning resources for quicker onboarding and troubleshooting.
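As a concrete example of the point above, here is a minimal sketch that calls the Translator REST API (one of the Cognitive Services) with Python's requests library. The key, region, and target language are placeholders to replace with your own resource's values, and error handling is kept to a minimum.

```python
import requests

TRANSLATOR_KEY = "<your-translator-key>"        # placeholder
TRANSLATOR_REGION = "<your-resource-region>"    # placeholder

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": "de"},
    headers={
        "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
        "Ocp-Apim-Subscription-Region": TRANSLATOR_REGION,
        "Content-Type": "application/json",
    },
    json=[{"Text": "Hello, world"}],
)
response.raise_for_status()

# The service returns one result per input item, each with a list of translations.
print(response.json()[0]["translations"][0]["text"])
```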

3. Prioritize your most relevant use cases first


With AI, time to value is a matter of selecting use cases that will provide the most utility in the shortest time. Identify the needs within your organization to determine where AI capabilities can deliver the greatest impact.

For example, customers like Ecolab harness knowledge mining with Azure Cognitive Search to help their agents retrieve key information instantly, instead of spending over 30 minutes sifting through thousands of documents each time. KPMG applies speech transcription and language understanding with Azure Cognitive Services to reduce the amount of time to identify compliance risks in contact center calls from 14 weeks to two hours. And Volkswagen uses machine translation with Azure Translator to rapidly localize content including user manuals and management documents into 40 different languages.

These are just a few of the practical ways organizations have found efficiency and utility in out-of-the-box AI services that didn’t demand an unreasonable investment of time, effort, or customization to deploy.

Create business value with AI starting today


Implementing AI is simpler and more accessible than ever. Organizations of every size are deploying AI solutions that increase efficiencies, drive down overhead, or delight employees and customers in ways that are establishing them as brands of choice. It’s a great time to join them.

Source: microsoft.com

Thursday, 10 March 2022

The anatomy of a datacenter—how Microsoft's datacenter hardware powers the Microsoft Cloud


Leading hardware engineering at a company known for its vast portfolio of software applications and systems is not as strange as it sounds, as the Microsoft Cloud depends on hardware as the foundation of trust, reliability, capacity, and performance, to make it possible for Microsoft and our customers to achieve more. The underlying infrastructure that powers our 60 plus datacenter regions across 140 countries consists of hardware and systems that sit within the physical buildings of datacenters—enabling millions of customers to execute critical and advanced workloads, such as AI and quantum computing, as well as unleashing future innovations.

Datacenter hardware development is imperative to the evolution of the Microsoft Cloud

As the Microsoft Cloud offers services and products to meet the world’s ever-growing computing demands, it is critical that we continuously design and advance hardware systems and infrastructure to deliver greater performance, higher efficiency, and more resiliency to customers—all with security and sustainability in mind. Today, our hardware engineering efforts and investments focus heavily on roadmap and lifecycle planning, sourcing and provisioning of servers, and innovating to deliver next-generation infrastructure for datacenters. In our new Hardware Innovation blog series, I’ll be sharing some of the hardware development and investments that are driving the most impact for the Microsoft Cloud and making Azure the trusted cloud that delivers innovative, reliable, and sustainable hybrid cloud solutions. But first, let’s look “under the hood” of a Microsoft datacenter:

From server to cloud: the end-to-end cloud hardware lifecycle

Our hardware planning starts with what customers want: capacity, differentiated services, cost savings, and ultimately the ability to solve harder problems with the help of the Microsoft Cloud. We integrate key considerations, such as customer feedback, operational analysis, technology vetting, and evaluation of disruptive innovations, into our strategy and roadmap planning and into the improvement of existing hardware in our datacenters for compute, network architecture, and storage, while future-proofing innovative workloads for scale. Our engineers then design, build, test, and integrate software and firmware into hardware fleets that meet a stringent set of quality, security, and compliance requirements before deploying them into Microsoft's datacenters across the globe.

Sourcing and provisioning cloud hardware, sustainably and securely

With Microsoft’s scale, the ways in which we provision, deploy, and decommission hardware parts have the potential to drive massive planetary impact. While we work with suppliers to reimagine a more resilient and efficient supply chain using technologies such as blockchain and digital twins, we also aim to have sustainability built into every step of the way. An example of our sustainability leadership is the execution of Microsoft Circular Centers, where servers and hardware that are being decommissioned are repurposed—efforts that are expected to increase the reuse of servers and components by up to 90 percent by 2025. I will be sharing more on our Circular Centers progress this year. We also have in place the Azure Security and Resiliency Architecture (ASRA) as an approach to drive security and resiliency consistently and comprehensively across the Microsoft Cloud infrastructure supply chain.

Innovating to deliver next-generation datacenter infrastructure

We are investigating and developing technology that would allow datacenters to be more agile, efficient, and sustainable to operate while meeting the computing demands of the future. We showcased development in datacenter energy efficiency, such as our two-phase liquid immersion cooling, allowing more densely packed servers to fit in smaller spaces, and addressing processor overclocking for higher computing efficiency with a lower carbon footprint. We also continue to invest in and develop workload-optimized infrastructure—from servers, racks, systems, to datacenter designs—for more custom general-purpose offerings as well as specialized compute such as AI, high-performance computing, quantum, and beyond.

Building the most advanced and innovative hardware for the intelligent cloud and the intelligent edge

The journey of building Microsoft Cloud’s hardware infrastructure is an exciting and humbling one as we see continual advancement in technology to meet the needs of the moment. I have been in the hardware industry for more than thirty years—yet, I’m more excited each day as I work alongside leaders and experts on our team, with our partners across the industry, and with the open source community. Like many of the cloud services that sit on top of it, Microsoft’s hardware engine runs on consistency in quality, reliability, and scalability. Stay tuned as we continue to share more deep dives and updates of our cloud hardware development, progress, and results—and work to drive forward technology advancement, enable new capabilities, and push the limits of what we can achieve in the intelligent cloud and the intelligent edge.

Source: azure.microsoft.com

Tuesday, 8 March 2022

Microsoft DDoS protection response guide

Receiving Distributed Denial of Service (DDoS) attack threats?

DDoS threats have seen a significant rise in frequency lately, and Microsoft stopped numerous large-scale DDoS attacks last year. This guide provides an overview of what Microsoft provides at the platform level, information on recent mitigations, and best practices.

Microsoft DDoS platform

⇒ Microsoft provides robust protection against layer three (L3) and layer four (L4) DDoS attacks, which include TCP SYN floods, new connection floods, and UDP/ICMP/TCP floods.

⇒ Microsoft DDoS Protection utilizes Azure’s global deployment scale, is distributed in nature, and offers 60Tbps of global attack mitigation capacity.

⇒ All Microsoft services (including Microsoft 365, Azure, and Xbox) are protected by platform-level DDoS protection. Microsoft's cloud services are intentionally built to support high loads, which helps to protect against application-level DDoS attacks.

⇒ All Azure public endpoint VIPs (virtual IP addresses) are guarded at platform-safe thresholds. The protection extends to traffic flows inbound from the internet, outbound to the internet, and from region to region.

⇒ Microsoft uses standard detection and mitigation techniques such as SYN cookies, rate limiting, and connection limits to protect against DDoS attacks (a generic rate-limiting sketch follows this list). To support automated protections, a cross-workload DDoS incident response team identifies the roles and responsibilities across teams, the criteria for escalations, and the protocols for incident handling across affected teams.

⇒ Microsoft also takes a proactive approach to DDoS defense. Botnets are a common source of command and control for conducting DDoS attacks to amplify attacks and maintain anonymity. The Microsoft Digital Crimes Unit (DCU) focuses on identifying, investigating, and disrupting malware distribution and communications infrastructure to reduce the scale and impact of botnets.
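As a generic illustration of the rate-limiting technique mentioned above (this is not Microsoft's implementation, just a textbook token-bucket limiter), the sketch below admits requests only while tokens remain in the bucket.

```python
import time


class TokenBucket:
    """Simple token-bucket rate limiter (illustrative only)."""

    def __init__(self, rate_per_second: float, burst: int):
        self.rate = rate_per_second      # tokens added per second
        self.capacity = burst            # maximum bucket size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


bucket = TokenBucket(rate_per_second=100, burst=200)
print(bucket.allow())  # True while the bucket still holds tokens
```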

Recent incidents

At Microsoft, despite the evolving challenges in the cyber landscape, the Azure DDoS Protection team has successfully mitigated some of the largest DDoS attacks ever reported, both in Azure and in history.

⇒ In October 2021, Microsoft reported on a 2.4 terabit per second (Tbps) DDoS attack in Azure that we successfully mitigated. Since then, we have mitigated three larger attacks.

⇒ In November 2021, Microsoft mitigated a DDoS attack with a throughput of 3.47 Tbps and a packet rate of 340 million packets per second (pps), targeting an Azure customer in Asia. As of February 2022, this is believed to be the largest attack ever reported in history. It was a distributed attack originating from approximately 10,000 sources and from multiple countries across the globe, including the United States, China, South Korea, Russia, Thailand, India, Vietnam, Iran, Indonesia, and Taiwan.

Protect your applications in Azure against DDoS attacks in three steps:

Customers can protect their Azure workloads by onboarding to Azure DDoS Protection Standard. For web workloads, we recommend using a web application firewall in conjunction with DDoS Protection Standard for comprehensive L3-L7 protection.

1. Evaluate risks for your Azure applications. This is the time to understand the scope of your risk from a DDoS attack if you haven’t done so already.

     a. If there are virtual networks with applications exposed over the public internet, we strongly recommend enabling DDoS Protection on those virtual networks. Resources in a virtual network that require protection against DDoS attacks include Azure Application Gateway and Azure Web Application Firewall (WAF), Azure Load Balancer, virtual machines, Azure Bastion, Kubernetes, and Azure Firewall. Review “DDoS Protection reference architectures” for more details on reference architectures to protect resources in virtual networks against DDoS attacks.


2. Validate your assumptions. Planning and preparation are crucial to understanding how a system will perform during a DDoS attack. You should be proactive to defend against DDoS attacks and not wait for an attack to happen and then act.

     a. It is essential that you understand the normal behavior of an application and be prepared to act if it is not behaving as expected during a DDoS attack. Have monitors configured for your business-critical applications that mimic client behavior, and have them notify you when relevant anomalies are detected (a minimal probe sketch follows this list).

     b. Azure Application Insights is an extensible application performance management (APM) service for web developers on multiple platforms. Use Application Insights to monitor your live web application. It automatically detects performance anomalies. It includes analytics tools to help you diagnose issues and to understand what users do with your app. It's designed to help you continuously improve performance and usability.

     c. Finally, test your assumptions about how your services will respond to an attack by generating traffic against your applications to simulate a DDoS attack. Don't wait for an actual attack to happen! We have partnered with Ixia, a Keysight company, to provide a self-service traffic generator (BreakingPoint Cloud) that allows Azure DDoS Protection customers to simulate DDoS test traffic against their Azure public endpoints.

3. Configure alerts and attack analytics. Azure DDoS Protection identifies and mitigates DDoS attacks without any user intervention.

     a. To get notified when there's an active mitigation for a protected public IP, we recommend configuring an alert on the “Under DDoS attack or not” metric. DDoS attack mitigation alerts are automatically sent to Microsoft Defender for Cloud.

     b. You should also configure attack analytics to understand the scale of the attack, traffic being dropped, and other details.
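Referenced from step 2a above: a minimal, hypothetical availability probe that mimics a client request and flags anomalies. The endpoint and thresholds are placeholders; in practice you would wire this into your monitoring and alerting system (for example, Application Insights availability tests) rather than printing to the console.

```python
import time

import requests

ENDPOINT = "https://www.example.com/health"  # placeholder for a business-critical endpoint
LATENCY_THRESHOLD_SECONDS = 2.0

while True:
    start = time.monotonic()
    try:
        response = requests.get(ENDPOINT, timeout=10)
        latency = time.monotonic() - start
        if response.status_code != 200 or latency > LATENCY_THRESHOLD_SECONDS:
            print(f"Anomaly: status={response.status_code}, latency={latency:.2f}s")
    except requests.RequestException as exc:
        print(f"Anomaly: request failed ({exc})")
    time.sleep(60)  # probe once a minute
```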


Best practices to be followed


◉ Provision enough service capacity and enable auto-scaling to absorb the initial burst of a DDoS attack.
◉ Reduce attack surfaces; reevaluate the public endpoints and decide whether they need to be publicly accessible.
◉ If applicable, configure network security groups to further lock down attack surfaces.
◉ If IIS (Internet Information Services) is used, leverage IIS Dynamic IP Address Restrictions to control traffic from malicious IPs.
◉ Set up monitoring and alerting if you have not done so already.

Some of the counters to monitor:
   ◉ TCP connection established
   ◉ Web current connections
   ◉ Web connection attempts
◉ Optionally, use third-party security offerings, such as web application firewalls or inline virtual appliances, from the Azure Marketplace for additional L7 protection that is not covered via Azure DDoS Protection and Azure WAF (Azure Web Application Firewall).

When to contact Microsoft support


◉ During a DDoS attack if you find that the performance of the protected resource is severely degraded, or the resource is not available. Review step two above on configuring monitors to detect resource availability and performance issues.

◉ You think your resource is under a DDoS attack, but the DDoS Protection service is not mitigating the attack effectively.

◉ You're planning a viral event that will significantly increase your network traffic.

For attacks that have a critical business impact, create a severity-A support ticket to engage the DDoS Rapid Response team.

Source: microsoft.com

Saturday, 5 March 2022

Microsoft Azure expands to new region in the China market


As of March 1, 2022, a new Azure region in North China is open for unrestricted access by customers, which adds a fifth Azure region to the China market and doubles the capacity of Microsoft's intelligent cloud portfolio in China.

Announced in 2012, and officially launched in March 2014 with two initial regions, Microsoft Azure operated by 21Vianet was the first international public cloud service that became generally available in the China market. Following Azure, Microsoft Office 365, Microsoft Dynamics 365, and Microsoft Power Platform operated by 21Vianet successively launched in China in 2014, 2019, and 2020 respectively.

“We see fast-growing needs for global public cloud services in the China market, from multinational companies coming to China, Chinese companies seeking a global presence, and Chinese companies digitally transforming their businesses and processes in the cloud. That's the strong momentum driving us to keep expanding and upgrading our cloud services for almost eight years here in China,” said Dr. Hou Yang, Microsoft Corporate Vice President, Chairman and CEO of Microsoft Greater China Region (GCR). “Microsoft's intelligent cloud, with the most comprehensive approach to security in the world, has been empowering hundreds of thousands of developers, partners, and customers from both China and the rest of the world to achieve more through technical innovation and business transformation. The new Azure region will further reinforce Microsoft's capabilities to enable and empower innovation and growth, and to discover opportunities across the business, the ecosystem, and a sustainable future.”

Cloud for innovation

According to an IDC report, China has become the world’s fastest-growing public cloud market with a year-on-year growth rate of 49.7 percent, and the China market’s global share will increase to more than 10.5 percent by 2024. The fast development of China’s digital economy demands advanced technologies and services like Microsoft Azure, to support its emerging digital innovation and industrial digital transformation.

With the availability of the new Azure region, Microsoft will better empower customers and partners, both in China and around the globe, to harness the opportunities of China’s digital development, with capabilities spanning hybrid and multi-cloud deployment, IoT, edge computing, data intelligence, and more.

Along with the launch of the new Azure region, a set of new cloud capabilities will become available in China in 2022, including:

◉ Azure Availability Zones, which give customers an industry-leading 99.99 percent SLA when VMs are running in two or more availability zones and provide the most comprehensive resiliency strategy for protecting against large-scale events, with failover to separate regions.

◉ Azure Digital Twins, an IoT capability that enables customers to create “digital twins” of physical objects in the cloud.

◉ Azure Arc, which helps customers manage data and applications across hybrid and multi-cloud environments.

◉ The Flexible Server deployment option for Azure Database for MySQL, which provides maximum control over an organization’s databases, high-availability options to help ensure zero data loss, built-in capabilities for cost optimization, and increased productivity enabled by the Azure ecosystem.

◉ Azure Purview, a unified data governance solution that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data.

"As Microsoft’s global partner, we’ve been working closely to develop and deploy digital solutions for industries,” said Jin Jia, Managing Director and Lead of Technology in Accenture Greater China. “With the new Azure region and services coming to China, we will further enable end-to-end transformations, by delivering a broad range of Azure services across infrastructure, platform, data, IoT, and cognitive computing."

Cloud for sustainability

Cloud computing provides massive efficiency gains that reduce the collective carbon footprint required to support the world’s computing needs. As the Microsoft cloud scales its computing power, we’re also pursuing breakthrough technologies to incorporate sustainability into datacenter design and operations. The Microsoft cloud’s latest key initiatives include:

◉ Reducing water use in datacenters: a new approach to datacenter temperature management will further reduce the amount of water used in our evaporatively cooled datacenters.

◉ Researching liquid immersion cooling as a path toward waterless cooling options: Microsoft became the first cloud provider to run two-phase liquid immersion cooling in a production environment. The efficiencies in both power and cooling that liquid cooling delivers unlock new potential for datacenter rack design.

◉ Designing datacenters to support local ecosystems: Microsoft benchmarked ecosystem performance in terms of water, air, carbon, climate, biodiversity, and more across twelve datacenter regions, with the goal of renewing and revitalizing the surrounding areas and creating a pathway to regenerative value for local communities and the environment.

◉ Cutting the carbon footprint of datacenter design and construction: embodied carbon accounts for the emissions associated with materials and construction processes throughout the whole lifecycle of a building or piece of infrastructure. We use a tool called the Embodied Carbon in Construction Calculator (EC3) to identify building materials and reduce the embodied carbon of concrete and steel by 30–60 percent.

These initiatives will be adopted in the new Azure region in China step by step.

In addition to these investments in cloud infrastructure, Microsoft Cloud for Sustainability is now available in preview globally, allowing organizations to record, report, and reduce their carbon emissions more effectively on a path to net zero. It provides a common foundation to ensure carbon emissions are measured in an accurate, consistent, and reliable manner globally. Now, both multi-national companies and Chinese companies can use this offering on global Azure to measure their sustainability efforts.

At 2021 China International Import Expo (CIIE), SGS China announced the first global and China dual-standard carbon management platform, S-Carbon, based on Microsoft Azure. Dr. Sandy Hao, Managing Director of SGS China, said, “SGS’s expertise in carbon emission, combined with Microsoft’s world-leading cloud platform, will enable more Chinese companies to accelerate the implementation of their sustainable development strategies.”

“We are pleased to be a Net Zero Technology Partner to Microsoft, jointly enabling customers with a full suite of end-to-end carbon neutrality solutions to support their green transition and ambitions,” said Michael Ding, Global Executive Director of Microsoft’s global net-zero partner Envision Group.

“We are partnering with Microsoft to conceive, build and manage smart and sustainable buildings in China and worldwide,” said Michael Zhu, Vice President and General Manager, Building Solutions, China, Johnson Controls. “Our OpenBlue digital platform, closely connected with Microsoft’s cloud platform and workplace technologies, represents an unbeatable opportunity to help our customers make modern spaces safer, more agile, and more sustainable.”

“DELTA’s Energy Management System (EMS) is built on the Microsoft Azure platform, complete with IoT, machine learning, and data insights from Power BI,” said Kevin Tan, General Manager of Delta GreenTech (China) Co., Ltd. “By cooperating with Microsoft, we empower customers to effectively monitor, manage, and reduce energy consumption to achieve a green factory.”

Cloud with trust and compliance

The Microsoft cloud comprises over 200 physical datacenters across more than 34 markets. It serves over one billion customers and over 20 million companies worldwide—95 percent of Fortune 500 businesses run on Microsoft cloud services. With over 90 compliance certifications globally, Microsoft’s cloud platform meets a broad range of industry and regulatory standards in China, Europe, the US, and many other global markets. For China, Microsoft Azure operated by 21Vianet has also obtained a number of certifications for local compliance.

Microsoft Azure was the first international public cloud service compliantly launched in the China market. In accordance with Chinese regulatory requirements, the Azure regions operated by 21Vianet in China are instances physically separated from Microsoft’s global cloud, but they are built on the same cloud technology as the regions operated by Microsoft globally.

The consistent architecture across China and global markets makes it easy, efficient, and secure for multinational companies to roll out their IT systems and business applications to China or vice versa.

Source: microsoft.com