Thursday, 30 April 2020

Azure + Red Hat: Expanding hybrid management and data services for easier innovation anywhere

For the past few years, Microsoft and Red Hat have co-developed hybrid solutions enabling customers to innovate both on-premises and in the cloud. In May 2019, we announced the general availability of Azure Red Hat OpenShift, allowing enterprises to run critical container-based production workloads via an OpenShift managed service on Azure, jointly operated by Microsoft and Red Hat.

Microsoft and Red Hat are now working together to further extend Azure services to hybrid environments across on-premises and multi-cloud with upcoming support of Azure Arc for OpenShift and Red Hat Enterprise Linux (RHEL), so our customers will be able to more effectively develop, deploy, and manage cloud-native applications anywhere. With Azure Arc, customers will have a more consistent management and operational experience across their Microsoft hybrid cloud including Red Hat OpenShift and RHEL.

What’s new for Red Hat Customers with Azure Arc


As part of the Azure Arc preview, we’re expanding Azure Arc’s Linux and Kubernetes management capabilities to add support specifically for Red Hat customers, enabling you to:

Organize, secure, and govern your Red Hat ecosystem across environments

Many of our customers have workloads sprawling across clouds, datacenters, and edge locations. Azure Arc enables customers to centrally manage, secure, and control RHEL servers and OpenShift clusters from Azure at scale. Wherever the workloads are running, customers can view inventory and search from the Azure Portal. They can apply policies and manage compliance for connected servers and clusters through Azure Policy, for either one or many clusters at a time. Customers can enhance their security posture through built-in Azure security policies and RBAC for the managed infrastructure, which work the same way wherever they run. As Azure Arc progresses towards general availability, more policies will be enabled, such as reporting on expiring certificates, password complexity, managing SSH keys, and enforcing disk encryption.

In addition, SQL Server 2019 for RHEL 8 is now quicker to deploy via new images available in the Azure Marketplace, and we’re expanding Azure Arc to manage SQL Server on RHEL, providing integrated database and server governance via unified Azure policies.

Finally, Azure Arc makes it easy to use Azure Management services such as Azure Monitor and Azure Security Center when dealing with workloads and infrastructure running outside of Azure.

Manage OpenShift clusters and applications at scale

Manage container-based applications running in Azure Red Hat OpenShift service on Azure, as well as OpenShift clusters running on IaaS, virtual machines (VMs), or on-premises bare metal. Applications defined in Github repositories can be automatically deployed via Azure Policy and Azure Arc to any repo-linked OpenShift cluster, and policies can be used to keep them up to date. New application versions can be distributed globally to all Azure Arc-managed OpenShift clusters using Github pull requests, with full DevOps CI/CD pipeline integrations for logging and quality testing. Additionally, if an application is modified in an unauthorized way, the change is reverted, so your OpenShift environment remains stable and compliant.

Run Azure Data Services on OpenShift and anywhere else

Azure Arc enables you to run Azure data services on OpenShift on-premises, at the edge, and in multi-cloud environments, whether a self-deployed cluster or a managed container service like Azure Red Hat OpenShift. With Azure Arc support for Azure SQL Managed Instance on OpenShift, you’ll know your container-based data infrastructure is always current and up to date; Microsoft SQL Big Data Cluster (BDC) support for OpenShift provides a new container-based deployment pattern for big data storage and analytics, allowing you to elastically scale your data with your dynamic OpenShift based application anywhere it runs.


Managing multiple configurations for an on-premises OpenShift deployment from Azure Arc.


Azure SQL Managed Instances within Azure Arc.

If you’d like to learn more about how Azure is working with Red Hat to make innovation easier for customers in hybrid cloud environments, join us for a fireside chat between Scott Guthrie, EVP of Cloud and AI at Microsoft, and Paul Cormier, president and CEO of Red Hat, including a demo of Azure Arc for Red Hat today at the Red Hat Summit 2020 Virtual Experience.

Private hybrid clusters and OpenShift 4 added to Azure Red Hat OpenShift


Rounding out our hybrid offerings for Red Hat customers, today we’re announcing the general availability of Azure Red Hat OpenShift on OpenShift 4.

This release brings key innovations from Red Hat OpenShift 4 to Azure Red Hat OpenShift. Additionally, we're enabling features to support hybrid and enterprise customer scenarios, such as:

◉ Private API and ingress endpoints: Customers can now choose between public and private cluster management (API) and ingress endpoints. With private endpoints and Azure Express Route support we’re enabling private hybrid clusters, allowing our mutual customers to extend their on-premises solutions to Azure.

◉ Industry compliance certifications: To help customers meet their compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is now PCI DSS, HITRUST, and FedRAMP certified. Azure maintains the largest compliance portfolio in the industry, both in terms of the total number of offerings and the number of customer-facing services in assessment scope.

◉ Multi-Availability Zones clusters: To ensure the highest resiliency, cluster components are now deployed across 3 Azure Availability Zones in supported Azure regions to maintain high availability for the most demanding mission-critical applications and data. Azure Red Hat OpenShift has a Service Level Agreement (SLA) of 99.9 percent.

◉ Cluster-admin support: We’ve enabled the cluster-admin role on Azure Red Hat OpenShift clusters, enabling full cluster customization capabilities, such as running privileged containers and installing Custom Resource Definitions (CRDs).

Tuesday, 28 April 2020

DCsv2-series VM now generally available from Azure confidential computing


Security and privacy are critically important when storing and processing sensitive information in the cloud, from payment transactions, to financial records, personal health data, and more. With the general availability of DCsv2-series VMs, we are ushering in the start of a new level of data protection in Azure.

With more workloads moving to the cloud and more customers putting their trust in Microsoft, the Azure confidential computing team continues to innovate to provide offerings that keep and build upon that trust. Starting with our world-class security researchers, and working closely with industry partners, we are developing new ways to protect data while it’s in use with Azure confidential computing. DCsv2-series VMs can protect the confidentiality and integrity of your data even while it’s processed.

What is confidential computing?


There are ways to encrypt your data at rest and while in transit, but confidential computing protects the confidentiality and integrity of your data while it is in use. Azure is the first public cloud to offer virtualization infrastructure for confidential computing that uses hardware-based trusted execution environments (TEEs). Even cloud administrators and datacenter operators with physical access to the servers cannot access TEE-protected data.

By combining the scalability of the cloud and ability to encrypt data while in use, new scenarios are possible now in Azure, like confidential multi-party computation where different organizations combine their datasets for compute-intensive analysis without being able to access each other’s data. Examples include banks combining transaction data to detect fraud and money laundering, and hospitals combining patient records for analysis to improve disease diagnosis and prescription allocation.

Data protection powered by Intel hardware


Our DCsv2 confidential computing virtual machines run on servers that implement Intel Software Guard Extensions (Intel SGX). Because Intel SGX hardware protects your data and keeps it encrypted while the CPU is processing it, even the operating system and hypervisor cannot access it, nor can anyone with physical access to the server.

Microsoft and Intel are committed to providing best-in-class cloud data protection through our deep ongoing partnership:

“Customers are demanding the capability to reduce the attack surface and help protect sensitive data in the cloud by encrypting data in use. Our collaboration with Microsoft brings enterprise-ready confidential computing solutions to market and enables customers to take greater advantage of the benefits of cloud and multi-party compute paradigms using Intel® SGX technology.” —Anil Rao, VP Data Center Security and Systems Architecture, Intel

Partners in the Azure Marketplace


Microsoft works directly with platform partners to provide seamless solutions, development, and deployment experiences running on top of our Azure confidential computing infrastructure. Software offerings can be discovered through our Azure Marketplace including:

◉ Fortanix— Offers a cloud-native data security solution including key management, HSM, tokenization, and secrets management built on Azure confidential computing.

◉ Anjuna— Delivers secure Azure instances using end-to-end CPU hardware-level encryption without changing your application or operations.

◉ Anqlave— A valued partner in Singapore that offers enterprise-ready confidential computing solutions.

“Anqlave’s proprietary, institutional-grade modern key management and data encryption solution addresses the most critical security issues we face today. With Anqlave Data Vault (ADV) secret management allows users to securely create, store, transport and use its secrets. Leveraging Azure confidential computing, allows us to make this technology more accessible to our enterprise customers and easily support their scale. Providing a secure enclave that is portable in the cloud is one the key reasons why our enterprises will prefer to host their ADV on Azure confidential computing regardless of their other cloud infrastructure.” —Assaf Cohen, CEO, Anqlave

How customers are succeeding with Azure confidential computing


Customers are already using Azure confidential computing for production workloads. One customer is Signal:

“Signal develops open source technology for end-to-end encrypted communications, like messaging and calling. To meet the security and privacy expectations of millions of people every day, we utilize Azure confidential computing to provide scalable, secure environments for our services. Signal puts users first, and Azure helps us stay at the forefront of data protection with confidential computing.” —Jim O'Leary, VP of Engineering, Signal

While many applications and services can take advantage of data protection with confidential computing, we have seen particular benefits in regulated industries, such as financial services, government, and healthcare. Companies can now take advantage of the cloud for processing sensitive customer data with reduced risk and higher confidence that their data can be protected, including while it is being processed.

For example, MobileCoin, a new international cryptocurrency, trusts Azure confidential computing to support digital currency transfers. Its network code is now available in open source, and a TestNet is available to try out:

“MobileCoin partners with Azure because Microsoft has decided to invest in trustworthy systems. Confidential computing rides the edge between what we can imagine and what we can protect. The praxis we've experienced with Azure allows us to commit to systems that are integral, high trust, and performant.” —Joshua Goldbard, CEO, MobileCoin

Confidential computing has proven useful for enterprise-grade blockchain, enabling fast and secure transaction verification across a decentralized network. Fireblocks is yet another customer taking advantage of Azure confidential computing infrastructure:

“At Fireblocks, our mission is to secure blockchain-based assets and transactions for the financial industry. Once we realized the traditional tech stack was not suitable for this challenge, we turned to Azure confidential computing and Intel SGX to implement our patent-pending technology. Our customers trust Fireblocks to securely store and move their digital assets—over $6.5 billion of them each month—and Azure provides a backbone for us to deliver on that promise.” —Michael Shaulov, CEO and co-founder, Fireblocks

Industry leadership bringing confidential computing to the forefront


Microsoft is not alone in bringing confidential computing to the forefront of the cloud computing industry. In September 2019, we were a founding member of the Confidential Computing Consortium (CCC), which now consists of dozens of companies working to develop and open source technologies and best practices for protecting data while it’s in use. These companies include hardware, cloud, platform, and software providers.

Microsoft is also committed to the developer experience to ensure platform partners and application developers can build solutions that take advantage of confidential computing. We donated our Open Enclave SDK to the consortium, an open source SDK for developing platforms and applications on top of confidential computing infrastructure.

Source: microsoft.com

Sunday, 26 April 2020

Optimize cost and performance with Query Acceleration for Azure Data Lake Storage

The explosion of data-driven decision making is motivating businesses to have a data strategy to provide better customer experiences, improve operational efficiencies, and make real-time decisions based on data. As businesses become data driven, we see more customers build data lakes on Azure. We also hear that cost optimization and performance are two of the most important qualities of a data lake architecture on Azure. Normally, these two qualities are traded off against each other: if you want more performance, you will need to pay more; if you want to save money, expect your performance to go down.

We’re announcing the preview of Query Acceleration for Azure Data Lake Storage—a new capability of Azure Data Lake Storage, which improves both performance and cost. The feature is now available for customers to start realizing these benefits and improving their data lake deployment on Azure.

How Query Acceleration for Azure Data Lake improves performance and cost


Big data analytics frameworks, such as Spark, Hive, and large-scale data processing applications, work by reading all of the data using a horizontally-scalable distributed computing platform with techniques such as MapReduce. However, a given query or transformation generally does not require all of the data to achieve its goal. Therefore, applications typically incur the costs of reading, transferring over the network, parsing into memory and finally filtering out the majority of the data that is not required. Given the scale of such data lake deployments, these costs become a major factor that impacts the design and how ambitious you can be. Improving cost and performance at the same time enhances how much valuable insight you can extract from your data.

Query Acceleration for Azure Data Lake Storage allows applications and frameworks to push-down predicates and column projections, so they may be applied at the time data is first read, meaning that all downstream data handling is saved from the cost of filtering and processing unrequired data.

The following diagram illustrates how a typical application uses Query Acceleration to process data:


1. The client application requests file data by specifying predicates and column projections.

2. Query Acceleration parses the specified query and distributes work to parse and filter data.

3. Processors read the data from disk, parse it using the appropriate format, and then filter it by applying the specified predicates and column projections.

4. Query Acceleration combines the response shards and streams them back to the client application.

5. The client application receives and parses the streamed response. The application doesn't need to filter any additional data and can apply the desired calculation or transformation directly.
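The same flow can be driven from application code. Below is a minimal, hypothetical sketch in TypeScript that uses the query (quick query) surface of the @azure/storage-blob SDK to push a filter and projection down to storage; the container and file names are placeholders, the query dialect details are illustrative, and the exact client type and options may differ depending on your SDK version and whether you use the Data Lake client instead.

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Placeholders: substitute your own connection string, container, and blob names.
const connectionString = process.env.AZURE_STORAGE_CONNECTION_STRING!;

// Helper to drain a Node.js readable stream into a string.
async function streamToString(stream: NodeJS.ReadableStream): Promise<string> {
  const chunks: Buffer[] = [];
  for await (const chunk of stream) {
    chunks.push(Buffer.from(chunk));
  }
  return Buffer.concat(chunks).toString("utf-8");
}

async function main(): Promise<void> {
  const service = BlobServiceClient.fromConnectionString(connectionString);
  const container = service.getContainerClient("telemetry");       // placeholder container
  const blob = container.getBlockBlobClient("2020/04/events.csv"); // placeholder CSV file

  // Push the predicate and column projection down to the storage service, so only
  // matching rows and selected columns come back over the network (assumes the CSV
  // has a header row and the input configuration is set up accordingly).
  const response = await blob.query(
    "SELECT Region, Latency FROM BlobStorage WHERE Latency > 500"
  );

  // The filtered result streams back; downstream code no longer has to read,
  // parse, and discard the rows and columns it doesn't need.
  const filteredCsv = await streamToString(response.readableStreamBody!);
  console.log(filteredCsv);
}

main().catch(console.error);
```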

Azure offers powerful analytic services


Query Acceleration for Azure Data Lake Storage is yet another example of how we’re committed to making Azure the best place for organizations to unlock transformational insights from all data. Customers can benefit from tight integration with other Azure Services for building powerful cloud scale end-to-end analytics solutions. These solutions support modern data warehousing, advanced analytics, and real-time analytics easily and more economically.

We’re also committed to remaining an open platform where the best-in-breed open source solutions benefit equally from the innovations occurring at all points within the platform. With Azure Data Lake Storage underpinning an entire ecosystem of powerful analytics services, customers can extract transformational insights from all data assets.

Saturday, 25 April 2020

Azure Migrate now available in Azure Government

Microsoft’s service for datacenter migration, Azure Migrate, is now available in Azure Government—unlocking the whole range of functionality for government customers. Previously, only Azure Migrate V1, which performed limited-scale assessment for VMware workloads, was available to US Azure Government customers. Azure Migrate V2 for Azure Government, now available, includes a one-stop shop for discovery, assessment, and migration of large-scale datacenters.

Why migrate to Azure Government


We know how important security is for government customers. Fortunately, Azure Government, Microsoft’s government cloud offering, provides industry-leading security with more compliance certifications than any other cloud provider. By using a government cloud solution, your organization can meet compliance standards that aren’t attainable on-premises.

Azure Government has six government-exclusive datacenter regions across the US, with an Impact Level 5 Provisional Authorization. This means Azure Government can host workloads for the most sensitive organizations, like the US Department of Defense. Azure Government also offers hybrid flexibility, which allows you to customize your digital transformation by keeping select data and functionality on-premises.

Leading-edge innovations in Azure ensure your government organization is modernized and effective, with advanced data analytics, artificial intelligence (AI), IoT, and high-performance computing. Transform how your organization learns from and interacts with citizens. Analyze smart devices in real time to improve weather sensors and optimize emergency services. Take preemptive action against evolving security threats with predictive models.

Azure Migrate supports your migration to Azure Government


Azure Migrate provides a central hub of Microsoft and ISV migration tools. The hub helps identify the right tools for your migration scenario and features end-to-end progress tracking to help with large-scale datacenter migrations and cloud transformation projects.

Azure Migrate provides comprehensive coverage for a variety of migration scenarios, now all available for government customers, including:

◉ Windows and Linux servers— Large-scale discovery, assessment, and migration for VMware, Hyper-V, and bare metal servers. Features include agentless discovery, application inventory mapping, dependency mapping, and cost analysis. You can also migrate VMware VMs (now generally available) to Azure with zero data loss and minimal downtime using an agentless migration, in addition to the agent-based migration capability.

◉ SQL and other databases— Assessment and migration of a variety of on-premises databases to Azure SQL Database and Azure SQL Database Managed Instance.

◉ Web-apps— Assessment and migration of .NET and PHP web apps to Azure App Service.

◉ Virtual Desktop Infrastructure (VDI) migration— Migration of virtual desktop infrastructure to Windows Virtual Desktop in Azure.

◉ Data migration— Migration of block data to Azure using Data Box.


Azure Migrate Hub.


Agentless Discovery.


Dependency Mapping.

Geographic and regional availability for Azure Migrate


Azure Migrate is currently available in Asia Pacific, Australia, Canada, Europe, India, Japan, United Kingdom, and United States for public cloud. Now, Azure Migrate capabilities will be extended to US Gov Arizona and US Gov Virginia for government customers.

Note that the individual SKUs supported in the assessment and migration tools will depend on availability in these regions. See a comparison of Gov SKUs with respect to public cloud SKUs.

Thursday, 23 April 2020

Azure GPUs with Riskfuel’s technology offer 20 million times faster valuation of derivatives

Exchange-traded financial products—like stocks, treasuries, and currencies—have had the benefit of a tremendous wave of technological innovation in the past 20 years, resulting in more efficient markets, lower transaction costs, and greater transparency to investors.

However, large parts of the capital markets have been left behind. Valuation of instruments composing the massive $500 trillion market in over-the-counter (OTC) derivatives—such as interest rate swaps, credit default swaps, and structured products—lack the same degree of immediate clarity that is enjoyed by their more straightforward siblings.

In times of increased volatility, traders and their managers need to know the impacts of market conditions on a given instrument as the day unfolds to be able to take appropriate action. Reports reflecting the conditions at the previous close of business are only valuable in calm markets and even then, firms with access to fast valuation and risk sensitivity calculations have a substantial edge in the marketplace.

Unlike exchange-traded instruments, where values can be observed each time the instrument trades, values for OTC derivatives need to be computed using complex financial models. The conventional means of accomplishing this is through traditional Monte Carlo (a simple but computationally expensive probabilistic sweep through a range of scenarios and resultant outcomes) or finite-difference analysis.

Banks spend tens of millions of dollars annually to calculate the values of their OTC derivatives portfolios in large, nightly batches. These embarrassingly parallel workloads have evolved directly from the mainframe days to run on on-premises clusters of conventional, CPU-bound workers—delivering a set of results good for a given day.

Using conventional algorithms, real-time pricing and risk management are out of reach. But as the influence of machine learning extends into production workloads, a compelling pattern is emerging across scenarios and industries reliant on traditional simulation. Once computed, the output of traditional simulation can be used to train DNN models that can then be evaluated in near real-time with the introduction of GPU acceleration.

We recently collaborated with Riskfuel, a startup developing fast derivatives models based on artificial intelligence (AI), to measure the performance gained by running a Riskfuel-accelerated model on the now generally available Azure ND40rs_v2 (NDv2-Series) Virtual Machine instance powered by NVIDIA GPUs against traditional CPU-driven methods.

Riskfuel is pioneering the use of deep neural networks to learn the complex pricing functions used to value OTC derivatives. The financial instrument chosen for our study was the foreign exchange barrier option.

The first stage of this trial consisted of generating a large pool of samples to be used for training data. In this instance, we used conventional CPU-based workers to generate 100,000,000 training samples by repeatedly running the traditional model with inputs covering the entire domain to be approximated by the Riskfuel model. The traditional model took an average of 2250 milliseconds (ms) to generate each valuation. With the traditional model, the valuation time is dependent on the maturity of the trade.

The histogram in Figure 1 shows the distribution of valuation times for a traditional model:


Figure 1: Distribution of valuation times for traditional models.

Once the Riskfuel model is trained, valuing individual trades is much faster with a mean under 3 ms, and is no longer dependent on maturity of the trade:


Figure 2: Riskfuel model demonstrating valuation times with a mean under 3 ms.

These results are for individual valuations and don’t use the massive parallelism that the Azure ND40rs_v2 Virtual Machine can deliver when saturated in a batch inferencing scenario. When called upon to value portfolios of trades, like those found in a typical trading book, the benefits are even greater. In our study, the combination of a Riskfuel-accelerated version of the foreign exchange barrier option model and an Azure ND40rs_v2 Virtual Machine showed a 20M+ times performance improvement over the traditional model.

Figure 3 shows the throughput, as measured in valuations per second, of the traditional model running on a non-accelerated Azure Virtual Machine versus the Riskfuel model running on an Azure ND40rs_v2 Virtual Machine (in blue):


Figure 3: Model comparison of traditional model running versus the Riskfuel model.

For portfolios with 32,768 trades, the throughput on an Azure ND40rs_v2 Virtual Machine is 915,000,000 valuations per second, whereas the traditional model running on CPU-based VMs has a throughput of just 32 valuations per second. This is a demonstrated improvement of more than 28,000,000x.
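As a back-of-the-envelope check using only the figures reported in this post, the batch-throughput ratio works out to

$$\frac{915{,}000{,}000\ \text{valuations/s}}{32\ \text{valuations/s}} \approx 2.86 \times 10^{7} \approx 28{,}600{,}000\times,$$

whereas the single-valuation latency ratio alone (2250 ms versus a mean under 3 ms) accounts for only about 750x; the remainder of the gain comes from the batch parallelism of the GPU-based instance.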

It is critical to point out here that the speedup resulting from the Riskfuel model does not sacrifice accuracy. In addition to being extremely fast, the Riskfuel model effectively matches the results generated by the traditional model, as shown in Figure 4:


Figure 4: Accuracy of Riskfuel model.

These results clearly demonstrate the potential of supplanting traditional on-premises high-performance computing (HPC) simulation workloads with a hybrid approach: using traditional methods in the cloud as a methodology to produce datasets used to train DNNs that can then evaluate the same set of functions in near real-time.

The Azure ND40rs_v2 Virtual Machine is a new addition to the NVIDIA GPU-based family of Azure Virtual Machines. These instances are designed to meet the needs of the most demanding GPU-accelerated AI, machine learning, simulation, and HPC workloads. We chose the Azure ND40rs_v2 Virtual Machine to take full advantage of the massive floating-point performance it offers, achieving the highest batch-oriented performance for the inference steps as well as the greatest possible throughput for model training.

The Azure ND40rs_v2 Virtual Machine is powered by eight NVIDIA V100 Tensor Core GPUs, each with 32 GB of GPU memory, and with NVLink high-speed interconnects. When combined, these GPUs deliver one petaFLOPS of FP16 compute.

Riskfuel’s Founder and CEO, Ryan Ferguson, predicts the combination of Riskfuel accelerated valuation models and NVIDIA GPU-powered VM instances on Azure will transform the OTC market:

“The current market volatility demonstrates the need for real-time valuation and risk management for OTC derivatives. The era of the nightly batch is ending. And it’s not just the blazing fast inferencing of the Azure ND40rs_v2 Virtual Machine that we value so much, but also the model training tasks as well. On this fast GPU instance, we have reduced our training time from 48 hours to under four! The reduced time to train the model coupled with on-demand availability maximizes the productivity of our AI engineering team.”

Scotiabank recently implemented Riskfuel models into their leading-edge derivatives platform already live on the Azure GPU platform with NVIDIA GPU-powered Azure Virtual Machine instances. Karin Bergeron, Managing Director and Head of XVA Trading at Scotiabank, sees the benefits of Scotia’s new platform:

“By migrating to the cloud, we are able to spin up extra VMs if something requires some additional scenario analysis. Previously we didn’t have access to this sort of compute on demand. And obviously the performance improvements are very welcome. This access to compute on demand helps my team deliver better pricing to our customers.”

Tuesday, 21 April 2020

Azure Maps updates offer new features and expanded availability

Updates to Azure Maps services include several new and recently added features, as well as the general availability of Azure Maps on the Microsoft Azure Government cloud. Here is a rundown of what's new and recently added:

Azure Maps is now generally available on Azure Government cloud


The general availability of Azure Maps for Azure Government cloud allows you to easily include geospatial and location intelligence capabilities in solutions deployed on Azure Government cloud with the quality, performance, and reliability required for enterprise grade applications. Microsoft Azure Government delivers a cloud platform built upon the foundational principles of security, privacy and control, compliance, and transparency. Public sector entities receive a physically isolated instance of Microsoft Azure that employs world-class security and compliance services critical to the US government for all systems and applications built on its architecture.

Azure Maps Batch services are generally available


Azure Maps Batch capabilities available through the Search and Route services are now generally available. Batch services allow customers to send batches of queries using just a single API request.

Batch capabilities are supported by the following APIs:

◉ Post Search Address Batch
◉ Post Search Address Reverse Batch
◉ Post Search Fuzzy Batch
◉ Post Route Directions Batch

What’s new for the Azure Maps Batch services?


Users now have the option to submit a synchronous (sync) request, which is designed for lightweight batch requests. When Azure Maps receives a sync request, it responds as soon as the batch items are calculated, instead of returning a 202 along with a redirect URL; however, the results cannot be retrieved later. For large batches, we recommend continuing to use the asynchronous API, which is appropriate for processing big volumes of relatively complex route requests.

For the Search APIs, the asynchronous API allows developers to batch up to 10,000 queries and the sync API up to 100 queries. For the Route APIs, the asynchronous API allows developers to batch up to 700 queries and the sync API up to 100 queries.
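As an illustration, a small synchronous geocoding batch could be sent with the TypeScript sketch below, using fetch against the Search Address Batch sync endpoint. The endpoint path and body shape follow the documented batch format as we understand it; the subscription key and addresses are placeholders.

```typescript
// Hypothetical sketch: send a small synchronous geocoding batch to Azure Maps.
const subscriptionKey = process.env.AZURE_MAPS_KEY!; // placeholder

async function geocodeBatchSync(addresses: string[]): Promise<unknown> {
  const url =
    "https://atlas.microsoft.com/search/address/batch/sync/json" +
    `?api-version=1.0&subscription-key=${subscriptionKey}`;

  // Each batch item carries an individual Search Address query string.
  const body = {
    batchItems: addresses.map((a) => ({
      query: `?query=${encodeURIComponent(a)}&limit=1`,
    })),
  };

  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });

  // A sync batch responds with the calculated items directly (no 202 + redirect URL).
  return response.json();
}

geocodeBatchSync(["400 Broad St, Seattle, WA", "One Microsoft Way, Redmond, WA"])
  .then((result) => console.log(JSON.stringify(result, null, 2)))
  .catch(console.error);
```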

Azure Maps Matrix Routing service is generally available


The Matrix Routing API is now generally available. The service allows calculation of a matrix of route summaries for a set of routes defined by origin and destination locations. For every given origin, the service calculates the travel time and distance of routing from that origin to every given destination.

For example, let's say a food delivery company has 20 drivers and needs to find the closest driver to pick up a delivery from a restaurant. To solve this use case, they can call the Matrix Routing API.

What’s new in the Azure Maps Matrix Routing service?

The team worked to improve Matrix Routing performance and added support for submitting synchronous requests, as with the batch services described above. The maximum size of a matrix for an asynchronous request is 700 and for a synchronous request it's 100 (the number of origins multiplied by the number of destinations).

For asynchronous API calls, we introduced a new waitForResults parameter. If this parameter is set to true, the user gets a 200 response if the request finishes in under 120 seconds. Otherwise, the user gets a 202 response right away, and the async API returns a URL in the Location header of the response that can be used to check the progress of the request.
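For the delivery-driver example above, a small synchronous matrix request could look like the following hedged TypeScript sketch; the request shape mirrors the documented Route Matrix API as we understand it, and the coordinates and key are placeholders.

```typescript
// Hypothetical sketch: compute travel times from several drivers (origins) to one
// restaurant (destination) using a synchronous Matrix Routing call.
const subscriptionKey = process.env.AZURE_MAPS_KEY!; // placeholder

async function findClosestDriver(): Promise<void> {
  const url =
    "https://atlas.microsoft.com/route/matrix/sync/json" +
    `?api-version=1.0&subscription-key=${subscriptionKey}`;

  // Origins and destinations are GeoJSON MultiPoint geometries ([longitude, latitude]).
  const body = {
    origins: {
      type: "MultiPoint",
      coordinates: [[-122.33, 47.60], [-122.35, 47.61], [-122.30, 47.58]], // drivers
    },
    destinations: {
      type: "MultiPoint",
      coordinates: [[-122.336, 47.606]], // restaurant
    },
  };

  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });

  // Each matrix cell carries a route summary (travel time and distance); the application
  // scans the results and picks the driver with the lowest travel time.
  const matrix = await response.json();
  console.log(JSON.stringify(matrix, null, 2));
}

findClosestDriver().catch(console.error);
```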

Updates for Render services


Introducing Get Map tile v2 API in preview

Like the Azure Maps Get Map Tiles API v1, our new Get Map Tile version 2 API, in preview, allows users to request map tiles in vector or raster format, typically to be integrated into a map control or SDK. The service allows you to request various map tiles, such as Azure Maps road tiles or real-time weather radar tiles. By default, Azure Maps uses vector map tiles for its SDKs.

The new version offers users a more consistent way to request data. It introduces the concept of a tileset: a collection of raster or vector data that is broken up into a uniform grid of square tiles at preset zoom levels. Every tileset has a tilesetId that is used to request it, for example, microsoft.base.

Also, Get Map Tile v2 now supports the option to request imagery data that was previously available only through the Get Map Imagery Tile API. In addition, Azure Maps Weather Service radar and infrared map tiles are available only through version 2.
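A request against the new version could look like the sketch below; the endpoint and parameter names reflect the v2 Render API as we understand it, and the subscription key and tile coordinates are placeholders.

```typescript
// Hypothetical sketch: fetch a single tile from the Get Map Tile v2 API.
const subscriptionKey = process.env.AZURE_MAPS_KEY!; // placeholder

async function getTile(tilesetId: string, zoom: number, x: number, y: number): Promise<ArrayBuffer> {
  const url =
    "https://atlas.microsoft.com/map/tile" +
    `?api-version=2.0&tilesetId=${tilesetId}&zoom=${zoom}&x=${x}&y=${y}` +
    `&subscription-key=${subscriptionKey}`;

  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Tile request failed: ${response.status}`);
  }
  // Road tilesets return vector (PBF) data; imagery and weather tilesets return raster images.
  return response.arrayBuffer();
}

// Example: the base tileset mentioned above, at an arbitrary zoom/x/y.
getTile("microsoft.base", 10, 163, 357)
  .then((buffer) => console.log(`Received ${buffer.byteLength} bytes`))
  .catch(console.error);
```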

Dark grey map style available through Get Map Tile and Get Map Image APIs

In addition to serving the Azure Maps dark grey map style through our SDKs, customers can now also access it through the Get Map Tile APIs (version 1 and version 2) and the Get Map Image API, in vector and raster formats. This empowers customers to create rich map visualizations, such as embedding a map image into a web page.


Azure Maps dark grey map style.

Route service: Avoid border crossings, pass in custom areas to avoid

The Azure Maps team has continued to make improvements to the Routing APIs. We have added a new parameter value, avoid=borderCrossings, to support routing scenarios where vehicles are required to avoid country/region border crossings and keep the route within one country.

To offer more advanced vehicle routing capabilities, customers can now include areas to avoid in their POST Route Directions API request. For example, a customer might want to avoid sending their vehicles to a specific area because they are not allowed to operate there without permission from the local authority. As a solution, users can now pass polygons in GeoJSON format in the route request POST body as a list of areas to avoid.
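The sketch below shows how both options could be combined in a single POST Route Directions call. Note that the avoidAreas body property and its GeoJSON shape are our assumption of the request format, and the coordinates and key are placeholders.

```typescript
// Hypothetical sketch: request a route that avoids border crossings and a custom area.
const subscriptionKey = process.env.AZURE_MAPS_KEY!; // placeholder

async function routeWithAvoidance(): Promise<void> {
  // Route between two points (latitude,longitude pairs), avoiding border crossings.
  const query = "47.606,-122.336:48.428,-123.365";
  const url =
    "https://atlas.microsoft.com/route/directions/json" +
    `?api-version=1.0&query=${encodeURIComponent(query)}` +
    `&avoid=borderCrossings&subscription-key=${subscriptionKey}`;

  // Assumed body property: a GeoJSON MultiPolygon listing areas the route must not enter.
  const body = {
    avoidAreas: {
      type: "MultiPolygon",
      coordinates: [
        [[[-122.40, 47.65], [-122.35, 47.65], [-122.35, 47.70], [-122.40, 47.70], [-122.40, 47.65]]],
      ],
    },
  };

  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });

  console.log(JSON.stringify(await response.json(), null, 2));
}

routeWithAvoidance().catch(console.error);
```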

Cartographic and styling updates


Display building models

Through the Azure Maps map control, users now have the option to render 2.5D building models on the map. By default, all buildings are rendered as just their footprints. By setting showBuildingModels to true, buildings are rendered with their 2.5D models.


Display building models.
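As a minimal Web SDK sketch (assuming the showBuildingModels style option described above; the subscription key and HTML container id are placeholders):

```typescript
import * as atlas from "azure-maps-control";

// Hypothetical sketch: initialize the map with 2.5D building models enabled.
const map = new atlas.Map("mapDiv", {
  center: [-122.33, 47.6],
  zoom: 16,
  style: "road",
  // Render buildings with their 2.5D models rather than flat footprints.
  showBuildingModels: true,
  authOptions: {
    authType: atlas.AuthenticationType.subscriptionKey,
    subscriptionKey: "<your-azure-maps-key>", // placeholder
  },
});

// The option can also be toggled later through the map's style settings.
map.events.add("ready", () => {
  map.setStyle({ showBuildingModels: true });
});
```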

Islands, borders, and country/region polygons

To improve the user experience and give more detailed views, we reduced boundary data simplification to offer a better visual experience at higher zoom levels. Users can now see more detailed polygon boundary data.


Left: Before the reduction in boundary data simplification. Right: After the reduction.

National Park labeling and data rendering

Based on feedback from our users, we simplified labels for scattered polygons by reducing the number of labels. Also, National Park and National Forest labels are now displayed starting at zoom level 6.


National Park and National Forest labels displayed on zoom level 6.

Source: microsoft.com

Sunday, 19 April 2020

Microsoft partners with the industry to unlock new 5G scenarios with Azure Edge Zones

Cloud, edge computing, and IoT are making strides to transform whole industries and create opportunities that weren't possible just a few years ago. With the rise of 5G mobile connectivity, there are even more possibilities to deliver immersive, real-time experiences that have demanding ultra-low latency and connectivity requirements. 5G opens new frontiers with enhanced mobile broadband up to 10x faster, reliable low-latency communication, and very high device density of up to 1 million devices per square kilometer.

Today we’re announcing transformative advances to combine the power of Azure, 5G, carriers, and technology partners around the world to enable new scenarios for developers, customers, and partners, with the preview of Azure Edge Zones.

New 5G customer scenarios with Azure Edge Zones


Azure Edge Zones and Azure Private Edge Zones deliver consistent Azure services, app platform, and management to the edge with 5G unlocking new scenarios by enabling:

◉ Development of distributed applications across cloud, on-premises, and edge using the same Azure Portal, APIs, development, and security tools.

◉ Local data processing for latency critical industrial IoT and media services workloads.

◉ Acceleration of IoT, artificial intelligence (AI), and real-time analytics by optimizing, building, and innovating for robotics, automation, and mixed reality.

◉ New frontiers for developers working with high-density graphics and real-time operations in industries such as gaming.

◉ An evolving platform built with customers, carriers, and industry partners to allow seamless integration and operation of a wide selection of Virtual Network Functions, including 5G software and SD-WAN and firewalls from technology partners such as Affirmed, Mavenir, Nuage Networks from Nokia, Metaswitch, Palo Alto Networks, and VeloCloud By VMware.


Building on our previous work with AT&T, we’re announcing the preview of Azure Edge Zones with carriers, connecting Azure services directly to 5G networks in the carrier’s datacenter. This will enable developers to build optimized and scalable applications using Azure and directly connected to 5G networks, taking advantage of consistent Azure APIs and tooling available in the public cloud. We were the first public cloud to announce 5G integration with AT&T in Dallas in 2019, and now we're announcing a close collaboration with AT&T on a new Edge Zone targeted to become available in Los Angeles in late spring. Customers and partners interested in Edge Zones with AT&T can register for our early adopter program.

“This is a uniquely challenging time across the globe as we rethink how to help organizations serve their customers and stakeholders,” said Anne Chow, chief executive officer, AT&T Business. “Fast and intelligent mobile networks will be increasingly central to all of our lives. Combining our network knowledge and experience with Microsoft’s cloud expertise will give businesses a critical head start.”

These new zones will boost application performance, providing an optimal user experience when running ultra-low-latency-sensitive mobile applications and SIM-enabled architectures, including:

◉ Online gaming: Every press of the button, every click is important for a gamer. Responsiveness is critical, especially in multi-player scenarios. Game developers can now develop cloud-based applications optimized for mobile, directly accessing the 5G network at different carrier sites. They can achieve millisecond latency and scale to as many users as they want.

◉ Remote meetings and events: As the prevalence of digital-forward experiences continues to rise in response to global health challenges, we can help bring together thousands of people to enjoy a real-time shared experience. Enabling scenarios like social engagement, mobile digital experiences, live interaction, and payment and processing requires ultra-low latency to provide an immersive, responsive experience.

◉ Smart Infrastructure: With the rise of IoT, organizations are looking to create efficiency, savings, and immersive experiences across residential and commercial buildings, or even citywide. With 5G and cloud computing, organizations can reliably connect millions of endpoints, analyze data, and deliver immersive experiences.

With Azure Edge Zones we’re expanding our collaboration with several of our carrier partners to bring the Azure Edge Zones family to our mutual customers later this year.


In addition to partnering with carriers, we'll also deliver standalone Azure Edge Zones in select cities over the next 12 months, bringing Azure closer to customers and developers in highly dense areas.

Azure Private Edge Zones


We’re also announcing the preview of Azure Private Edge Zones, a private 5G/LTE network combined with Azure Stack Edge on-premises, delivering an ultra-low latency, secure, and high-bandwidth solution that lets organizations enable scenarios such as the one built with Attabotics, which accelerates e-commerce delivery times by using 3D robotic goods-to-person storage, retrieval, and real-time order fulfillment solutions. This solution leverages Azure Edge Zones and IoT technologies such as Azure IoT Central and Azure Sphere.

“In collaboration with Microsoft, Rogers is delivering new and innovative solutions with our Private LTE capabilities combined with Azure Edge Zones,” said Dean Prevost, President, Rogers for Business. “Working with Attabotics, we’re enabling Canadian businesses to transform the traditional supply model with a retail e-fulfillment solution that showcases the exciting possibilities of today and opens the door to our 5G future.”

Partnering with the broad industry of carriers, systems integrators, and technology partners, we're launching a platform to support orchestration and management of customers' private cellular networks to enable scenarios such as:

◉ Smart Factory/IoT: Offshore operations or security-isolated facilities can now take advantage of the power of edge computing: connecting everything from silicon to sensors, applying security and AI at the edge, and deploying digital twins or mixed reality, all over a secure and private connection.

◉ Logistics and operations: Retail customers have high expectations today in online and retail shopping, creating a need for appealing advertising before a potential customer looks away from a product online or in an aisle at the store. Wide selection, tailored offers, convenience, and availability are musts for success. The combination of cloud and distributed edge computing, efficiently working together, is a game changer for the industry.

◉ Medicine: From remote surgeries to complicated diagnostics that rely on cross-institutional collaboration, efficient compute and storage at the edge, with AI and minimal latency, enables these and multiple other scenarios that will save lives. Private mobile connections will work as smart grids for hospitals, patient data, and diagnostics that will never have to be exposed to the internet to take advantage of Azure technologies.

A consistent Edge Zone solution


Together, Azure, Azure Edge Zones, and Azure Private Edge Zones unlock a whole new range of distributed applications with a common and consistent architecture companies can use. For example, enterprises running a headquarters’ infrastructure on Azure may leverage Azure Edge Zones for latency-sensitive interactive customer experiences, and Azure Private Edge Zones for their remote locations. Enterprise solution providers can take advantage of the consistent developer, management, and security experience, allowing developers to continue using GitHub, Azure DevOps, and Kubernetes Services to create applications in Azure and simply move the application to either Azure Edge Zones or Private Edge Zones depending on the customer's requirements.

“By combining Vodafone 5G and mobile private networks with Azure Private Edge Zones, our customers will be able to run cloud applications on mobile devices with single-digit millisecond responsiveness. This is essential for autonomous vehicles and virtual reality services, for example, as these applications need to react in real-time to deliver business impact. It will allow organizations to innovate and transform their operations, such as the way their employees work with virtual reality services, high speed and precise robotics, and accurate computer vision for defect detection. Together, we expect Vodafone and Microsoft to provide our customers with the capabilities they need to create high performing, innovative and safe work environments.” - Vinod Kumar, CEO of Vodafone Business


New possibilities for the telecommunication industry with Azure


For the last few decades, carriers and operators have pioneered how we connect with each other, laying the foundation for telephony and cellular. With cloud and 5G, there are new possibilities by combining cloud services, including compute and AI, with mobile high bandwidth and ultra-low latency connections. Microsoft is partnering with carriers and operators to bring 5G to life in immersive applications built by organizations and developers.

Carriers, operators, and networking providers can build 5G-optimized services and applications for their partners and customers with Azure Edge Zones, taking advantage of Azure compute, storage, networking, and AI capabilities. For organizations that want an on-premises, private mobile solution, partners and carriers can deploy, manage, and build offers with Azure Private Edge Zones. Customers need help understanding the complexities of the cellular spectrum, access points, and overall management. Carrier partners can help such enterprises manage these scenarios including manufacturing, robotics, and retail.

Source: azure.microsoft.com

Saturday, 18 April 2020

Using Azure Monitor source map support to debug JavaScript errors

Azure Monitor’s new source map support expands a growing list of tools that empower developers to observe, diagnose, and debug their JavaScript applications.

Difficult to debug


As organizations rapidly adopt modern JavaScript frontend frameworks such as React, Angular, and Vue, they are left with an observability challenge. Developers frequently minify/uglify/bundle their JavaScript application upon deployment to make their pages more performant and lightweight, which obfuscates the telemetry collected from uncaught errors and makes those errors difficult to discern.

Source maps help solve this challenge. However, it’s difficult to associate the captured stack trace with the correct source map. Add in the need to support multiple versions of a page, A/B testing, and safe-deploy flighting, and it’s nearly impossible to quickly troubleshoot and fix production errors.

Unminify with one-click


Azure Monitor’s new source map integration enables users to link an Azure Monitor Application Insights Resource to an Azure Blob Services Container and unminify their call stacks from the Azure Portal with a single click. Configure continuous integration and continuous delivery (CI/CD) pipelines to automatically upload your source maps to Blob storage for a seamless end-to-end experience.
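Unminified stacks are only useful if client-side telemetry is flowing in the first place. A minimal setup with the Application Insights JavaScript SDK (the web monitoring referenced in step 1 of the Get started section below) might look like this; the instrumentation key is a placeholder.

```typescript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

// Hypothetical sketch: enable client-side monitoring so uncaught JavaScript errors
// (and their minified stack traces) are sent to Application Insights.
const appInsights = new ApplicationInsights({
  config: {
    instrumentationKey: "<your-instrumentation-key>", // placeholder
    enableAutoRouteTracking: true, // track route changes in single-page apps
  },
});

appInsights.loadAppInsights();
appInsights.trackPageView(); // log the initial page view
```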


Microsoft Cloud App Security’s story


The Microsoft Cloud App Security (MCAS) Team at Microsoft manages a highly scalable service with a React JavaScript frontend and uses Azure Monitor Application Insights for client-side observability.

Over the last five years, they’ve grown in their agility to deploying multiple versions per day. Each deployment results in hundreds of source map files, which are automatically uploaded to Azure Blob container folders according to version and type and stored for 30 days.

Daniel Goltz, Senior Software Engineering Manager, on the MCAS Team explains, “The Source Map Integration is a game-changer for our team. Before it was very hard and sometimes impossible to debug and resolve JavaScript based on the unminified stack trace of exceptions. Now with the integration enabled, we are able to track errors to the exact line that faulted and fix the bug within minutes.”

Get started


Configure source map support once, and all users of the Application Insights Resource benefit. Here are three steps to get started:

1. Enable web monitoring using our JavaScript SDK.
2. Configure a Source Map storage account.

a. Via the end-to-end transaction details blade.
b. Via the Properties blade.

3. Configure CI/CD pipeline.

Note: Add an Azure File Copy task to your Azure DevOps build pipeline to upload source map files to Blob storage each time a new version of your application deploys, ensuring the relevant source map files are always available.
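If your build runs outside Azure Pipelines, or you prefer a script over the Azure File Copy task, a hedged alternative is to push the source maps to the configured Blob container yourself with the @azure/storage-blob SDK; the container name, build directory, and environment variables below are placeholders.

```typescript
import { promises as fs } from "fs";
import * as path from "path";
import { BlobServiceClient } from "@azure/storage-blob";

// Hypothetical sketch: upload all *.js.map files produced by a build to the Blob
// container that the Application Insights resource is linked to.
async function uploadSourceMaps(buildDir: string, version: string): Promise<void> {
  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING! // placeholder
  );
  const container = service.getContainerClient("sourcemaps"); // placeholder container

  const files = await fs.readdir(buildDir);
  for (const file of files.filter((f) => f.endsWith(".js.map"))) {
    const blobName = `${version}/${file}`; // group maps per release version
    await container.getBlockBlobClient(blobName).uploadFile(path.join(buildDir, file));
    console.log(`Uploaded ${blobName}`);
  }
}

uploadSourceMaps("./dist", process.env.BUILD_BUILDNUMBER ?? "local").catch(console.error);
```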


Manually drag source map


If source map storage is not yet configured or if your source map file is missing from the configured Azure Blob storage container, it’s still possible to manually drag and drop a source map file onto the call stack in the Azure Portal.


Thursday, 16 April 2020

Keeping your cloud deployments secure during challenging times

As the world comes together to combat COVID-19, and remote work becomes a critical capability for many companies, customers have asked us how to best maintain the security posture of their cloud assets while enabling more remote workers to access them.

Misconfiguration of cloud security controls has been at the root of several recent data breaches, so it’s extremely important to continue monitoring your security posture as usage of cloud assets increases.

To help you prioritize the actions that you need to take, we are listing three common scenarios for remote workers and how to leverage Azure Security Center security controls to prioritize relevant recommendations for these scenarios:

1. As more users need to access resources remotely, you need to ensure that Multi-Factor Authentication (MFA) is enabled to enhance their identity protection.

◉ Azure Security Center has a security control called Enable MFA. Ideally, you should remediate all recommendations that are part of this security control, as shown below:


2. Some users might need remote access via RDP or SSH to servers that are in your Azure infrastructure.

◉ Instead of allowing full 24x7 access, ensure that you are using Just-In-Time (JIT) VM access to those servers. Make sure to review the Secure management ports control in Azure Security Center and remediate the recommendations that are relevant for this scenario.


3. Some of the workloads (servers, containers, databases) that will be accessed remotely by users might be missing critical security updates.

◉ Review the Remediate vulnerabilities control in Azure Security Center to prioritize the updates that must be installed. Make sure to review the results of all recommendations from the built-in vulnerability assessment and remediate those items.


Security posture management is an ongoing process. Review your secure score to understand your progress towards a fully compliant environment.

Tuesday, 14 April 2020

Updates to Azure Maps Web SDK includes powerful new features

Today, we are announcing updates to the Azure Maps Web SDK, which adds support for common spatial file formats, introduces a new data driven template framework for popups, includes several OGC services, and much more.

Spatial IO module


With as few as three lines of code, this module makes it easy to integrate spatial data with the Azure Maps Web SDK. The robust features in this module allow developers to:

◉ Read and write common spatial data files to unlock great spatial data that already exists without having to manually convert between file types. Supported file formats include: KML, KMZ, GPX, GeoRSS, GML, GeoJSON, and CSV files containing columns with spatial information.

◉ Use new tools for reading and writing Well-Known Text (WKT). Well-Known Text is a standard way to represent spatial geometries as a string and is supported by most GIS systems.

◉ Connect to Open Geospatial Consortium (OGC) services and integrate with Azure Maps web SDK.
     ◉ Overlay Web Map Services (WMS) and Web Map Tile Services (WMTS) as layers on the map.
     ◉ Query data in a Web Feature Service (WFS).

◉ Overlay complex data sets that contain style information and have them render automatically using minimal code. For example, if your data aligns with the GitHub GeoJSON styling schema, many of these style properties will automatically be used to customize how each shape is rendered.

◉ Leverage high-speed XML and delimited file reader and writer classes.

Try out these features in the sample gallery.


WMS overlay of world geological survey.
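As an example of the module in action, reading a spatial file and adding it to a map takes only a few lines. The sketch below assumes the atlas.io.read helper exposed by the spatial IO module; the file URL, subscription key, and container id are placeholders, and a single symbol layer is used for brevity.

```typescript
import * as atlas from "azure-maps-control";
import "azure-maps-spatial-io"; // assumed to register the atlas.io namespace

// Hypothetical sketch: load a KML/GPX/GeoRSS/GeoJSON/CSV file and render it on the map.
const map = new atlas.Map("mapDiv", {
  authOptions: {
    authType: atlas.AuthenticationType.subscriptionKey,
    subscriptionKey: "<your-azure-maps-key>", // placeholder
  },
});

map.events.add("ready", async () => {
  const datasource = new atlas.source.DataSource();
  map.sources.add(datasource);
  map.layers.add(new atlas.layer.SymbolLayer(datasource));

  // atlas.io.read accepts a URL, string, or Blob and parses the spatial data.
  const result = await atlas.io.read("/data/sample.kml"); // placeholder URL
  if (result) {
    datasource.add(result);
  }
});
```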

Popup templates


Popup templates make it easy to create data driven layouts for popups. Templates allow you to define how data should be rendered in a popup. In the simplest case, passing a JSON object of data into a popup template will generate a key-value table of the properties in the object. A string with placeholders for properties can be used as a template. Additionally, details about individual properties can be specified to alter how they are rendered. For example, URLs can be displayed as a string, an image, a link to a web page, or a mailto link.


A popup template displaying data using a template with multiple layouts.
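In code, the simplest use could look like the following sketch, which assumes the PopupTemplate.applyTemplate helper provided by the Web SDK; the property values, key, and container id are illustrative placeholders.

```typescript
import * as atlas from "azure-maps-control";

// Hypothetical sketch: render a feature's properties in a popup using a popup template.
const map = new atlas.Map("mapDiv", {
  authOptions: {
    authType: atlas.AuthenticationType.subscriptionKey,
    subscriptionKey: "<your-azure-maps-key>", // placeholder
  },
});

map.events.add("ready", () => {
  const properties = {
    name: "Contoso Coffee",
    website: "https://example.com",        // placeholder URL
    photo: "https://example.com/shop.png", // placeholder image URL
  };

  // In the simplest case, passing just the properties generates a key/value table.
  // An optional template argument can change how individual properties render
  // (as a link, an image, and so on).
  const content = atlas.PopupTemplate.applyTemplate(properties);

  const popup = new atlas.Popup({
    content,
    position: [-122.33, 47.6], // [longitude, latitude]
  });
  popup.open(map);
});
```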

Additional Web SDK enhancements

◉ Popup auto-anchor — The popup now automatically repositions itself to try to stay within the map view. Previously, the popup always opened centered above the position it was anchored to. Now, if the anchored position is near a corner or edge, the popup will adjust the direction it opens so that it stays within the map view. For example, if the anchored position is in the top right corner of the map, the popup will open down and to the left of the position.

◉ Drawing tools events and editing — The drawing tools module now exposes events and supports editing of shapes. This is great for triggering post-draw scenarios, such as searching within the area the user just drew. Additionally, whole shapes can now be dragged, which is useful in several scenarios, such as copying and pasting a shape and then dragging it to a new location.

◉ Style picker layout options — The style picker now has two layout options: the standard flyout of icons, or a list view of all the styles.


Style picker icon layout.

Code sample gallery


The Azure Maps code sample gallery has grown to well over 200 samples. Nearly every sample was created in response to a technical query from a developer using Azure Maps.

An Azure Maps Government Cloud sample gallery has also been created and contains all the same samples as the commercial cloud sample gallery, ported over to the government cloud.

Here are a few of the more recently added samples:

The Route along GeoJSON network sample loads a GeoJSON file of line data that represents a network of paths and calculates the shortest path between two points. Drag the pins around on the map to calculate a new path. The network can be any GeoJSON file containing a feature collection of LineStrings, such as a transit network, maritime trade routes, or a transmission line network.


Map showing shortest path between points along shipping routes.

The Census block group analysis sample uses census block group data to estimate the population within an area drawn by the user. It takes into consideration not only the population of each census block group, but also how much each one overlaps the drawn area.


Map showing aggregated population data for a drawn area.

The Get current weather at a location sample retrieves the current weather for anywhere the user clicks on the map and displays the details in a nicely formatted popup, complete with a weather icon.


Map showing weather information for Paris.
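
Under the hood, a request like the one this sample makes could look like the following sketch, which calls the Azure Maps current conditions endpoint when the map is clicked. The subscription key is a placeholder and the response fields used are assumptions about the service's JSON shape.

declare const map: any; // an existing Azure Maps Web SDK map instance

map.events.add("click", async (e: any) => {
  // Azure Maps positions are [longitude, latitude]; the weather service expects "lat,lon".
  const [lon, lat] = e.position;
  const url =
    "https://atlas.microsoft.com/weather/currentConditions/json" +
    `?api-version=1.0&query=${lat},${lon}&subscription-key=<Your Azure Maps Key>`;

  const response = await fetch(url);
  const weather = await response.json();

  // Assumed response shape: the first entry in "results" describes current conditions.
  const current = weather.results && weather.results[0];
  if (current) {
    console.log(`${current.phrase}, ${current.temperature.value}°${current.temperature.unit}`);
  }
});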

Source: microsoft.com

Saturday, 11 April 2020

Introducing incremental enrichment in Azure Cognitive Search

Incremental enrichment is a new feature of Azure Cognitive Search that brings a declarative approach to indexing your data. When incremental enrichment is turned on, document enrichment is performed at the least cost, even as your skills continue to evolve. Indexers in Azure Cognitive Search add documents to your search index from a data source, track updates to the documents in that data source, and update the index with the new or updated documents.

Incremental enrichment is a new feature that extends change tracking from document changes in the data source to all aspects of the enrichment pipeline. With incremental enrichment, the indexer will drive your documents to eventual consistency with your data source, the current version of your skillset, and the indexer definition.

Indexers have a few key characteristics:

◉ Data source specific.

◉ State aware.

◉ Can be configured to drive eventual consistency between your data source and index.

In the past, editing your skillset by adding, deleting, or updating skills left you with a suboptimal choice: either rerun all the skills on the entire corpus, essentially a reset of your indexer, or tolerate version drift, where documents in your index are enriched with different versions of your skillset.

With the latest update to the preview release of the API, indexer state management is being expanded from only the data source and indexer field mappings to also include the skillset, output field mappings, the knowledge store, and projections.

Incremental enrichment vastly improves the efficiency of your enrichment pipeline. It eliminates the choice between accepting the potentially large cost of re-enriching the entire corpus of documents when a skill is added or updated, and tolerating version drift, where documents created or updated with different versions of the skillset differ in the shape and/or quality of their enrichments.

Indexers now track and respond to changes across your enrichment pipeline by determining which skills have changed and selectively executing only the updated skills and any downstream or dependent skills when invoked. By configuring incremental enrichment, you can ensure that all documents in your index are always processed with the most current version of your enrichment pipeline, while performing the least amount of work required. Incremental enrichment also gives you granular controls for scenarios where you want full control over how a change is handled.


Indexer cache


Incremental indexing is made possible by the addition of an indexer cache to the enrichment pipeline. The indexer caches the results from each skill for every document. When a data source needs to be re-indexed due to a skillset update (a new or updated skill), each of the previously enriched documents is read from the cache, and only the affected skills, those that changed and those downstream of the changes, are re-run. The updated results are written to the cache, and the document is updated in the index and, optionally, the knowledge store. Physically, the cache is a storage account. All indexers within a search service may share the same storage account for the indexer cache. Each indexer is assigned a unique, immutable cache id.
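
As a hedged sketch, configuring the cache on an indexer through the preview REST API could look something like the following; the service URL, api-version, and exact property names and casing are assumptions based on the properties described in the next section.

// Placeholders throughout; api-version and property casing are assumptions.
const serviceUrl = "https://<your-search-service>.search.windows.net";
const apiKey = "<admin-api-key>";

async function enableIndexerCache(indexerName: string): Promise<void> {
  const indexerDefinition = {
    name: indexerName,
    dataSourceName: "<your-datasource>",
    targetIndexName: "<your-index>",
    skillsetName: "<your-skillset>",
    cache: {
      // Storage account used to cache intermediate enrichment results.
      storageAccountConnectionString: "<storage-connection-string>",
      // Keep reprocessing enabled so cached documents are re-enriched when skills change.
      enableReprocessing: true
      // cacheId is generated by the service and cannot be set.
    }
  };

  await fetch(`${serviceUrl}/indexers/${indexerName}?api-version=2019-05-06-Preview`, {
    method: "PUT",
    headers: { "Content-Type": "application/json", "api-key": apiKey },
    body: JSON.stringify(indexerDefinition)
  });
}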

Granular controls over indexing


Incremental enrichment provides a host of granular controls, from ensuring the indexer performs the highest-priority task first to overriding change detection.

◉ Change detection override: Incremental enrichment gives you granular control over all aspects of the enrichment pipeline. This allows you to deal with situations where a change might have unintended consequences. For example, editing a skillset and updating the URL for a custom skill will result in the indexer invalidating the cached results for that skill. If you are only moving the endpoint to a different virtual machine (VM) or redeploying your skill with a new access key, you really don’t want any existing documents reprocessed.

To ensure that the indexer only performs enrichments you explicitly require, updates to the skillset can optionally set the disableCacheReprocessingChangeDetection query string parameter to true. When set, this parameter ensures that only the updates to the skillset are committed and the change is not evaluated for effects on the existing corpus.

◉ Cache invalidation: The converse scenario is one where you deploy a new version of a custom skill. Nothing within the enrichment pipeline changes, but you need a specific skill invalidated and all affected documents re-processed to reflect the benefits of the updated model. In these instances, you can call the reset skills operation on the skillset. The reset skills API accepts a POST request with the list of skill outputs in the cache that should be invalidated.
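
A hedged sketch of such a cache-invalidation call is below; the /resetskills route and request body shape are assumptions based on the description above, so check the preview API reference for the exact contract.

// Placeholders throughout; the route and body shape are assumptions.
const serviceUrl = "https://<your-search-service>.search.windows.net";
const apiKey = "<admin-api-key>";

async function invalidateSkillCache(skillsetName: string, skillNames: string[]): Promise<void> {
  await fetch(
    `${serviceUrl}/skillsets/${skillsetName}/resetskills?api-version=2019-05-06-Preview`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", "api-key": apiKey },
      // The skills whose cached outputs should be invalidated and re-run.
      body: JSON.stringify({ skillNames })
    }
  );
}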

Updates to existing APIs


Introducing incremental enrichment will result in an update to some existing APIs.

Indexers

Indexers will now expose a new property:

Cache

◉ StorageAccountConnectionString: The connection string to the storage account that will be used to cache the intermediate results.

◉ CacheId: The cacheId is the identifier of the container within the annotationCache storage account that is used as the cache for this indexer. This cache is unique to the indexer, and if the indexer is deleted and recreated with the same name, the cacheId will be regenerated. The cacheId cannot be set; it is always generated by the service.

◉ EnableReprocessing: Set to true by default. When set to false, documents will continue to be written to the cache, but no existing documents will be reprocessed based on the cache data.

Indexers will also support a new querystring parameter:

ignoreResetRequirement: set to true to allow the commit to go through without triggering a reset condition.

Skillsets

Skillsets will not support any new operations, but will support a new querystring parameter:

disableCacheReprocessingChangeDetection: set to true when you do not want updates to existing documents based on the current action.

Datasources

Datasources will not support any new operations, but will support a new querystring parameter:

ignoreResetRequirement: set to true to allow the commit to go through without triggering a reset condition.
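
Putting the new querystring parameters together, a hedged sketch of using them against the preview REST API might look like this (service URL, api-version, and request bodies are placeholders):

// Placeholders throughout; the parameters behave as described above.
const serviceUrl = "https://<your-search-service>.search.windows.net";
const apiKey = "<admin-api-key>";
const headers = { "Content-Type": "application/json", "api-key": apiKey };

// Commit a skillset edit (for example, a new custom-skill URL) without having the
// change evaluated against the existing corpus.
async function updateSkillsetWithoutReprocessing(name: string, skillset: object): Promise<void> {
  await fetch(
    `${serviceUrl}/skillsets/${name}?api-version=2019-05-06-Preview&disableCacheReprocessingChangeDetection=true`,
    { method: "PUT", headers, body: JSON.stringify(skillset) }
  );
}

// Commit a datasource change without triggering a reset condition.
async function updateDataSourceWithoutReset(name: string, dataSource: object): Promise<void> {
  await fetch(
    `${serviceUrl}/datasources/${name}?api-version=2019-05-06-Preview&ignoreResetRequirement=true`,
    { method: "PUT", headers, body: JSON.stringify(dataSource) }
  );
}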

Best practices


The recommended approach to using incremental enrichment is to configure the cache property on a new indexer, or to reset an existing indexer and then set the cache property. Use ignoreResetRequirement sparingly, as it could lead to unintended inconsistencies in your data that will not be detected easily.

Takeaways


Incremental enrichment is a powerful feature that allows you to declaratively ensure that the data from your data source is always consistent with the data in your search index or knowledge store. As your skills, skillsets, or enrichments evolve, the enrichment pipeline ensures that the least possible work is performed to drive your documents to eventual consistency.