Saturday 29 August 2020

Azure Firewall forced tunneling and SQL FQDN filtering now generally available

Two new key features in Azure Firewall—forced tunneling and SQL FQDN filtering—are now generally available. Additionally, we increased the limit for multiple public IP addresses from 100 to 250 for both Destination Network Address Translation (DNAT) and Source Network Address Translation (SNAT).

Azure Firewall is a cloud native Firewall as a Service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

Forced tunneling support now generally available


Forced tunneling lets you redirect all internet-bound traffic from Azure Firewall to your on-premises firewall, or chain it to a nearby network virtual appliance (NVA), for additional inspection. You enable forced tunneling when you create a new firewall; as of today, it is not possible to migrate an existing firewall deployment to forced tunneling mode.

To support forced tunneling, service management traffic is separated from customer traffic. An additional dedicated subnet named AzureFirewallManagementSubnet is required with its own associated public IP address. The only route allowed on this subnet is a default route to the internet, and Border Gateway Protocol (BGP) route propagation must be disabled.
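For illustration, a route table that satisfies these requirements might look like the following ARM template sketch; the resource name and API version are assumptions, and the table would then be associated with the AzureFirewallManagementSubnet:

{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2020-05-01",
  "name": "fw-management-rt",
  "location": "[resourceGroup().location]",
  "properties": {
    "disableBgpRoutePropagation": true,
    "routes": [
      {
        "name": "default-to-internet",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "Internet"
        }
      }
    ]
  }
}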

With this configuration, the AzureFirewallSubnet can now include routes to any on-premises firewall or NVA to process traffic before it's passed to the internet. You can also publish these routes via BGP to AzureFirewallSubnet if BGP route propagation is enabled on this subnet.


Figure 1. Azure Firewall in forced tunneling mode.

Avoiding SNAT with forced tunneling


Azure Firewall provides automatic SNAT for all outbound traffic to public IP addresses. Azure Firewall doesn’t SNAT when the destination IP address is a private IP address range per IANA RFC 1918. This logic works perfectly when you egress directly to the internet. However, with forced tunneling enabled, internet-bound traffic ends up SNATed to one of the firewall private IP addresses in AzureFirewallSubnet, hiding the source from your on-premises firewall. You can configure Azure Firewall to not SNAT regardless of the destination IP address by adding “0.0.0.0/0” as your private IP address range. Note that with this configuration, Azure Firewall can never egress directly to the internet. 
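As a rough sketch of that configuration in an ARM template, the private IP range is set through the firewall's additionalProperties. The firewall name and API version here are placeholders, the remaining firewall properties are omitted, and the property name should be verified against the current Azure Firewall SNAT documentation:

{
  "type": "Microsoft.Network/azureFirewalls",
  "apiVersion": "2020-05-01",
  "name": "contoso-firewall",
  "location": "[resourceGroup().location]",
  "properties": {
    // ipConfigurations and managementIpConfiguration omitted for brevity
    "additionalProperties": {
      // Treat every destination as private, so the firewall never SNATs
      "Network.SNAT.PrivateRanges": "0.0.0.0/0"
    }
  }
}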


Figure 2. Azure Firewall doesn’t SNAT private IP prefixes configuration.

Routing to public PaaS and Office 365


While Azure Firewall forced tunneling allows you to direct all internet-bound traffic to your on-premises firewall or a nearby NVA, this is not always desirable. For example, it is likely preferable to egress to public Platform as a Service (PaaS) or Office 365 directly. It is possible to achieve this by adding User Defined Routes (UDR) to the AzureFirewallSubnet with next hop type “Internet” for specific destinations. As this definition is more specific than the default route, it will take precedence. 

As an alternative approach for egressing directly to public PaaS, you can enable Virtual Network (VNet) service endpoints on the AzureFirewallSubnet. These endpoints extend your virtual network private address space and identity to the Azure PaaS services over a direct connection. When enabled, specific routes to the corresponding PaaS services are automatically created. Service endpoints allow you to secure your critical Azure service resources to your VNet only. Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network.

It is important to note that with this configuration, you will not be able to add “0.0.0.0/0” as your private IP prefix as shown previously, but you can still add custom ranges that will not be SNATed.

Finally, it is also possible to use Azure Private Endpoint to connect privately and securely to public PaaS services powered by Azure Private Link. However, these connections will bypass your default route to Azure Firewall as described in this documentation. If you require all traffic to go via your firewall, you can mitigate this by adding a UDR on all client subnets with the Private Endpoint IP address and a /32 suffix as the destination and Azure Firewall as the next hop. Note that for this configuration to work, and for the return traffic from your private endpoint to go via your firewall as well, you will have to always SNAT, by using 255.255.255.255/32 as your private IP address range.
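A hedged sketch of such a UDR follows; the IP addresses are placeholders for the private endpoint IP and the Azure Firewall private IP, and the route table would be associated with the client subnets:

{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2020-05-01",
  "name": "client-subnet-rt",
  "location": "[resourceGroup().location]",
  "properties": {
    "routes": [
      {
        "name": "to-storage-private-endpoint",
        "properties": {
          "addressPrefix": "10.1.2.4/32",     // placeholder: private endpoint IP
          "nextHopType": "VirtualAppliance",
          "nextHopIpAddress": "10.0.1.4"      // placeholder: Azure Firewall private IP
        }
      }
    ]
  }
}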


Figure 3. A UDR to a Storage Private Endpoint pointing to the firewall as a next hop.

SQL FQDN filtering now generally available


You can now configure SQL FQDNs in Azure Firewall application rules. This allows you to limit access from your VNet to only the specified SQL Server instances. You can filter traffic from VNets to an Azure SQL Database, Azure SQL Data Warehouse, Azure SQL Managed Instance, or SQL IaaS instances deployed in your VNets.

SQL FQDN filtering is currently supported in proxy-mode only (port 1433). If you use non-default ports for SQL Infrastructure as a Service (IaaS) traffic, you can configure those ports in the firewall application rules.
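As a rough illustration, an application rule of this kind might be defined as follows within an Azure Firewall resource; the server FQDN, source range, and collection name are placeholders:

"applicationRuleCollections": [
  {
    "name": "allow-sql",
    "properties": {
      "priority": 200,
      "action": { "type": "Allow" },
      "rules": [
        {
          "name": "allow-contoso-sql-db",
          "sourceAddresses": [ "10.0.0.0/24" ],
          "protocols": [ { "protocolType": "Mssql", "port": 1433 } ],
          "targetFqdns": [ "contoso-sql-srv.database.windows.net" ]
        }
      ]
    }
  }
]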

If you use SQL in the default redirect mode, you can still filter access using the SQL service tag as part of network rules. Adding redirect mode support to application rules is on our roadmap.


Figure 4. SQL FQDN filtering in Azure Firewall application rules.

Multiple public IP addresses limit increase


You can now use up to 250 public IP addresses with your Azure Firewall for both DNAT and SNAT.

◉ DNAT— You can translate multiple standard port instances to your backend servers. For example, if you have two public IP addresses, you can translate TCP port 3389 (RDP) for both IP addresses (see the sketch after this list).

◉ SNAT— Additional ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. Currently, Azure Firewall randomly selects the source public IP address to use for a connection. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a public IP address prefix to simplify this configuration.
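For the DNAT scenario mentioned in the list above, a NAT rule collection on the firewall might look roughly like this; the public IPs, translated addresses, and rule names are placeholders:

"natRuleCollections": [
  {
    "name": "rdp-dnat",
    "properties": {
      "priority": 100,
      "action": { "type": "Dnat" },
      "rules": [
        {
          "name": "rdp-via-first-public-ip",
          "protocols": [ "TCP" ],
          "sourceAddresses": [ "*" ],
          "destinationAddresses": [ "203.0.113.10" ],   // placeholder public IP #1
          "destinationPorts": [ "3389" ],
          "translatedAddress": "10.0.1.10",
          "translatedPort": "3389"
        },
        {
          "name": "rdp-via-second-public-ip",
          "protocols": [ "TCP" ],
          "sourceAddresses": [ "*" ],
          "destinationAddresses": [ "203.0.113.11" ],   // placeholder public IP #2
          "destinationPorts": [ "3389" ],
          "translatedAddress": "10.0.1.11",
          "translatedPort": "3389"
        }
      ]
    }
  }
]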

Thursday 27 August 2020

Prepare and certify your devices for IoT Plug and Play

Developing solutions with Azure IoT has never been faster, easier, or more secure. However, the tight coupling and integration between IoT device software and its counterpart software in the cloud can make it challenging to add different devices without spending hours writing device code.

IoT Plug and Play can solve this by enabling a seamless device-to-cloud integration experience. IoT Plug and Play from Microsoft is an open approach using the Digital Twin Definition Language (DTDL), based on JavaScript Object Notation for Linked Data (JSON-LD), that allows IoT devices to declare their capabilities to cloud solutions. It enables hardware partners to build devices that can easily integrate with cloud solutions based on Azure IoT Central, as well as third-party solutions built on top of Azure IoT Hub or Azure Digital Twins.
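To give a sense of what such a model looks like, here is a minimal DTDL v2 interface; the identifier and capability names are illustrative and not taken from any particular certified device:

{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Example Thermostat",
  "contents": [
    {
      "@type": "Telemetry",
      "name": "temperature",
      "schema": "double"
    },
    {
      "@type": "Property",
      "name": "targetTemperature",
      "schema": "double",
      "writable": true
    }
  ]
}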

As such, we are pleased to announce that the IoT Plug and Play device certification program is now available for companies to certify their devices and drive awareness of devices tailored for IoT solutions, while also reducing time to market. In this blog post, we will explore the common ecosystem challenges and business motivations for using IoT Plug and Play, as well as why companies are choosing to pursue IoT Plug and Play certification and the requirements and process involved.

Addressing ecosystem challenges and business needs with IoT Plug and Play



Across our ecosystem of partners and customers, we continue to see opportunities to simplify IoT. Companies are using IoT devices to help them find valuable insights ranging from how customers are using their products, to how they can optimize operations and reduce energy consumption. Yet there are also challenges to enabling these scenarios across energy, agriculture, retail, healthcare, and other industries as integrating IoT devices into cloud solutions can often be a time-consuming process.

Windows solved a similar industry problem with Plug and Play, which, at its core, was a capability model that devices could declare and present to Windows when they were connected. This capability model made it possible for thousands of different devices to connect to Windows and be used without any software having to be installed manually.

IoT Plug and Play—which was announced during Microsoft Build in May 2019—similarly addresses this ecosystem need through an open modeling language and an open approach. IoT Plug and Play is currently available in preview and offers numerous advantages for device builders, solution builders, and customers alike when it comes to reducing solution development time, cost, and complexity. By democratizing device integration, IoT Plug and Play helps remove entry barriers and opens new IoT device use cases. Since IoT Plug and Play-enabled solutions can understand the device model and start using devices without customization, the same interaction model can be used in any industry. For instance, cameras used on the factory floor for inspection can also be used in retail scenarios.

The IoT Plug and Play certification process validates that devices meet core capabilities and are enabled for secure device provisioning. The use of IoT Plug and Play certified devices is recommended in all IoT solutions, even those that do not currently leverage all the capabilities, as migration of IoT Plug and Play-enabled devices is a simple process.

IoT Plug and Play saves partners time and money


IoT Plug and Play-capable devices can become a major business differentiator for device and solution builders. Microsoft partner, myDevices, is already leveraging IoT Plug and Play in their commercial IoT solutions. According to Adrian Sanchez del Campo, Vice President of Engineering, “The main value in IoT Plug and Play is the ease of developing a device that will be used in a connected fashion. It's the easiest way to connect any hardware to the cloud, and it allows for any company to easily define telemetry and properties of a device without writing any embedded code.”

Sanchez del Campo also says it saves time and money. For devices that monitor or serve as a gateway at the edge, IoT Plug and Play enables myDevices to cut their development cycle by half or more, accelerating proofs of concept while also reducing development costs.

Olivier Pauzet, Vice President Product, IoT Solutions, from Sierra Wireless agrees that IoT Plug and Play is a definite time and money saver. “IoT Plug and Play comes on top of the existing partnership and joint value brought by Sierra Wireless’s Octave all-in-one-edge-to-cloud solution and Azure IoT services,” says Pauzet. “For customers using Digital Twins or IoT Central, being able to leverage IoT Plug and Play on both Octave and Azure will expand capabilities while making solution development even faster and easier.”

In addition to faster time to market, IoT Plug and Play also provides benefits for simplifying solution development. “As a full edge-to-cloud solution provider, Sierra Wireless sees benefits in making customer devices reported through Octave cloud connectors compatible with IoT Plug and Play applications,” says Pauzet. “Making it even simpler for customers and system integrators to build reliable, secure, and flexible end-to-end solutions is a key benefit for the whole ecosystem.”

Benefits of IoT Plug and Play device certification from Microsoft


Achieving IoT Plug and Play certification offers multiple advantages, but at its core, the benefits revolve around device builders having confidence that their tailored devices will be more discoverable, be more readily promoted to a broader audience, and have a reduced time to market.

Once a device is IoT Plug and Play-certified, it can easily be used in any IoT Plug and Play-enabled solution, which increases the market opportunity for device builders. IoT Plug and Play-certified devices are also surfaced to a worldwide audience, helping solution builders discover devices with the capabilities they need at a previously unreachable scale.

It also provides device builders with the opportunity to easily partner with other providers who have adopted the same open approach to create true end-to-end solutions. Plus, devices can be deployed in various solutions without a direct relationship between the device builder and solution builder, increasing your addressable market.

Device builders gain additional audience exposure and potential co-sell opportunities by getting IoT Plug and Play-certified devices featured and promoted in the Certified for Azure IoT device catalog. The catalog provides expanded opportunities to reach solution developers and device buyers, who can search for compatible devices.

Finally, IoT Plug and Play-certified devices appeal to solution builders because they shorten time to value by simplifying and reducing the solution development cycle. IoT Plug and Play also adds extensibility to IoT Plug and Play-enabled solutions by enabling the seamless addition of more devices.


Achieving IoT Plug and Play certification


To achieve IoT Plug and Play certification from Microsoft, devices must meet the following requirements:

1. Define device models that comply with the Digital Twin Definition Language (DTDL) version 2.
2. Support the Device Provisioning Service (DPS).
3. Pass a physical device review.

The certification process comprises three phases: develop, certify, and publish. Develop phase activities include modeling and developing the code, storing the device models, and then iterating and testing the code. The outcome is finalized device code that is ready to go through the IoT Plug and Play certification phase.

Certify phase activities require Microsoft Partner Network membership and onboarding to the Azure Certified Device submission portal. To kick off the certification process, developers must submit their IoT Plug and Play device model to the portal, along with relevant marketing details. Once complete, developers can connect and test in the certification portal, which takes the device through an automated set of validation tests.

Upon IoT Plug and Play certification, the device becomes eligible for publication to the Certified for Azure IoT device catalog. Publish phase activities include submitting the test results, device metadata, and Get Started Guide, along with the desired publish date, to Microsoft. Microsoft will work with the device builder to coordinate additional physical device review after the device is published.

Source: microsoft.com

Wednesday 26 August 2020

Become Globally Recognized with the 70-466 MCSE Certification

What Is Microsoft MCSE Certification?

Microsoft certifications are laid out as a pyramid, with each certification level leading learners to higher, more specialized certifications. At the base of the pyramid, IT professionals build foundational experience with the MTA (Microsoft Technology Associate). The next level consists of the MCSA (Microsoft Certified Solutions Associate) and MCSD (Microsoft Certified Solutions Developer), which cover many of Microsoft's most popular certification tracks, including Windows Server 2012, Office 365, and Azure. Ultimately, the MCSE (Microsoft Certified Solutions Expert) represents the highest level of certification offered by Microsoft and allows learners to develop specializations that build on MCSA-level content.

About the MCSE: Data Management and Analytics Certification

To be eligible for a Microsoft MCSE certification, learners must first earn an Associate-level (MCSA) certification. After earning the prerequisite certification, learners must pass one additional exam to get an MCSE certification. For expert-level certifications under Microsoft's new certification structure, learners can choose from a pool of exams to customize a certification to their needs.

SQL Server is one of the most widely used database management systems. Businesses seek out proven database professionals to build and optimize SQL Server solutions that keep pace with growing business intelligence needs.

The MCSE: Data Management and Analytics certification is designed for IT professionals who already have experience managing and building data solutions and want to take their career to the next level. Data specialists who benefit from this certification include:

  • Database Analysts
  • Database Designers
  • Database Developers
  • BI Analysts

Every organization has different priorities and demands when it comes to data management and business intelligence. Therefore, Microsoft lets you pick from a variety of exams to get your MCSE: Data Management and Analytics certification.
Microsoft 70-466 Certification Paves the Way to a Top Career
Adding this certification to your résumé furthers your status as a Microsoft Certified Professional (MCP). MCSE certifications never expire and remain valid for as long as the technology exists.

Reasons Why to Take Microsoft 70-466 Exam

As we know, the MCP and MCSE certification landscape is continually evolving, and this evolution brings us to the Microsoft 70-466 exam, Implementing Data Models and Reports with Microsoft SQL Server. There are several reasons to take it. Passing the exam brings numerous benefits: your credibility in the market is boosted, and the Implementing Data Models and Reports with Microsoft SQL Server credential is added to your résumé.

The certification also helps you attract the attention of large companies looking for 70-466 certified candidates who can solve their big problems quickly. Another significant advantage of passing the Microsoft 70-466 exam is that preparing for it also helps you grow your own skills.

Start for the Microsoft 70-466 Exam Preparation

Because the Implementing Data Models and Reports with Microsoft SQL Server exam covers so much ground, someone new to the Microsoft SQL Server 70-466 exam can easily get confused about how to prepare for it. Where should you start with the Microsoft 70-466 exam questions? If you are among the professionals who are unsure how to approach the Microsoft SQL Server 70-466 exam, you are in the right place at the right time.

"Practice makes a man refines" this quote is implemented correctly in the 70-466 preparation. The, even more, you will practice for the Microsoft 70-466 exam questions, the, even more, there will undoubtedly be the possibilities of getting Implementing Data Models, and Reports with Microsoft SQL Server certified.

Obtain 70-466 Practice Test

You may be wondering where to practice Microsoft SQL Server 70-466 exam questions. For this, you will need to find a high-quality Implementing Data Models and Reports with Microsoft SQL Server practice test. Finding a valid 70-466 practice test is not an easy task, and you will have to search carefully for the best Microsoft 70-466 questions.

There is a real possibility that you will end up finding nothing suitable for your Microsoft SQL Server test preparation. Look for 70-466 practice exams curated by Microsoft-certified experts; well-designed Microsoft 70-466 practice questions are one of the most dependable resources for preparing for the Microsoft SQL Server 70-466 exam.

Concluding Words

The Implementing Data Models and Reports with Microsoft SQL Server certification is an attractive entry point for learners looking to develop their careers and skills in building and managing SQL Server databases. This certification will likely open up new professional opportunities.

Take the first step toward your MCSE certification today!
Watch. Learn. Conquer the MCSE!

Tuesday 25 August 2020

How Azure Synapse Analytics can help you respond, adapt, and save

Business disruptions, tactical pivots, and remote work have all emphasized the critical role that analytics plays in all organizations. Uncharted situations demand proven performance insights so that businesses can quickly determine what is and is not working. In recent months, the urgency for business-guiding insights has only been heightened, leading to a need for real-time analytics solutions. Equally important is the need to discover and share these insights in the most cost-effective manner.

Not only has COVID-19 been a challenge to world health, but it has also created new economic challenges for businesses worldwide. These challenges have resulted in an increased need for tools that quickly deliver insights to business leaders—empowering informed decisions. This is where Microsoft Azure Synapse Analytics can help.

New circumstances demand new solutions


Azure Synapse Analytics is a new type of analytics platform that enables you to accelerate your time-to-insight with a unified experience and—just as important—save on costs while doing so. It is up to 14 times faster and costs 94 percent less than comparable offerings from other cloud providers. Let's dive into how Azure Synapse can help you respond, adapt, and save.

Respond to disruption and adapt to a new normal


History shows that proven analytics technologies, such as Azure Synapse, have a strong track record of enabling more dynamic and exploratory responses that can guide businesses through difficult times. Traditional data warehouses and reports can’t scale to provide the intelligence and insight that business executives demand in today’s world.

To make good strategic decisions, businesses need to quickly and effectively find new insights in their data. This can only come through more advanced tools and an improved understanding of how to get the most from them.


Each recent global economic crisis can be correlated with a follow-up increase in data analytics projects as companies worldwide lean on data analytics to boost their recovery.

To enable teams to collaborate and innovate, they need tools and services that help them discover, explore, and quickly and efficiently find new insights.


Azure Synapse has an intelligent architecture that makes it industry-leading in unifying big data workloads with traditional data warehousing while at the same time encouraging collaboration and reducing costs.

Using Azure Synapse, businesses can empower their teams to collaborate, adapt, and create new strategies that are driven by data. Azure Synapse not only makes it easy to start and scale in the cloud, but it has key security, governance, and monitoring tools that are critical for successful data analytics solutions.

Save on costs with Azure Synapse


The current economic challenges have certainly made us all—individuals and businesses—more conscious of our spending. Businesses are looking for new ways to improve productivity and efficiency on limited budgets. Cloud analytics in general, and Azure Synapse in particular, are a great fit for this requirement because they help businesses start small and scale as needed.

Azure Synapse offers a cost-effective service due to its intelligent architecture, which separates storage, compute power, and resources—but makes them seamlessly available when needed. This means that you do not have to keep paying for cloud services if you experience unexpected events that cause business disruptions and tactical pivots. Services can simply be paused to release resources and save costs. You can also scale compute separately from storage, which brings even more cost savings.


Azure Synapse has been found to offer a significantly better price-to-performance ratio when compared with similar services from other cloud providers. This chart from an independent study shows the price-performance comparison (lower is better).

In a recent study, GigaOm, an independent emerging technology research firm, found that Azure Synapse has the best price-to-performance ratio on the market. The study surveyed many services from all of the major cloud providers and took both performance and cost into account. Besides being powerful and cost-effective, Azure Synapse offers industry-leading features when it comes to governance, monitoring, and collaboration that address key challenges for data analytics projects. These features provide businesses with the right tools to control not only costs but also the entire analytics lifecycle, including security, performance, and accuracy.

Saturday 22 August 2020

Simplifying declarative deployments in Azure

Azure provides customers with a simple and intuitive way to declaratively provision and manage infrastructure through Azure Resource Manager (ARM) templates. You can describe your entire Azure environment using the template language, and then use your favorite CI/CD or scripting tool to stand up this environment in minutes. The ARM template language takes the form of JSON and is a direct representation of the resource schema, which means you can create any Azure resource using an ARM template from day one and configure any setting on the resources. Using ARM templates, you can describe the resources needed to make up the environment in a declarative, parameterized fashion. Because ARM templates are declarative, you need only specify what you want, and Azure Resource Manager will figure out the rest.
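As a small, hedged example of the idea, the following template declares a single storage account; you state the desired resource and Resource Manager handles the ordering and provisioning details:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}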

Over the last couple of months, we have renewed our focus on ARM template deployments, addressing some of the key challenges shared by our customers. Today, we're sharing some of the investments we've made to address these challenges.

Simplified authoring experience with Visual Studio Code


Our newest users have shared that their first time authoring and editing an ARM template from scratch can be intimidating. We have simplified the getting started experience by enabling you to create the resources you need in the Azure Portal and exporting an ARM template that you can reuse. We also have a template Quickstart gallery of over 800 sample templates to provision resources. But now we have taken things a step further for you.

With the new Azure Resource Manager (ARM) Tools in Visual Studio Code, we've added support for snippets (pre-created resource definitions), IntelliSense, colorization, ARM template outline, and comments. With comments support in ARM templates, you can deploy any template with comments using CLI, PowerShell, and Azure portal, and it will just work.

What-if: Pre-deployment impact analysis



Our customers often need to assess the impact of deployment to an environment before submitting any changes to the deployed resources. With new what-if features in Azure, customers can do pre-deployment assessments to determine what resources will be created, updated, or deleted, including any resource property changes. The what-if command does a real-time check of the current state of the environment and eliminates the need to manage any state.

Deployment scripts: completing the ‘last mile’ scenarios


There are often scenarios where customers need to run custom script code in an ARM template deployment to complete their environment setup. Scripts that previously required a step outside of a template deployment can now be executed inside the template deployment using the deploymentScripts resource. The new deploymentScripts resource will execute any PowerShell or bash script as part of your template deployment. This script can be included as part of your ARM template or referenced from an external source. Deployment scripts now give you the ability to complete your end-to-end environment setup in a single ARM template.
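A minimal sketch of the resource is shown below, assuming an inline PowerShell script; the name, API version, and Az PowerShell version are placeholders, and depending on your setup a user-assigned managed identity may also need to be attached:

{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2020-10-01",
  "name": "finish-environment-setup",
  "location": "[resourceGroup().location]",
  "kind": "AzurePowerShell",
  "properties": {
    "azPowerShellVersion": "3.0",
    // Inline script; an external scriptUri can be referenced instead
    "scriptContent": "Write-Output 'environment setup complete'",
    // How long to keep the script's supporting resources after execution
    "retentionInterval": "PT1H"
  }
}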

Management group and subscription provisioning at scale


As an organization expands its use of Azure, there are often conversations about the need to create a management group (MG) hierarchy (grouping construct) and Azure Subscriptions to ensure separation of environments, applications, billing, or security. Customers need a consistent and declarative way to provision management group and subscriptions to save time and resources. With the new tenant and MG deployment APIs, we now support the provisioning of MGs and subscriptions using ARM templates. This enables you to automate the setup of your entire estate and the associated infrastructure resources in a single ARM template.
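For illustration, a tenant-scope template that declares a child management group might look roughly like this; the schema URL, API version, and group names are assumptions to verify against current documentation:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Management/managementGroups",
      "apiVersion": "2020-05-01",
      "name": "contoso-platform-mg",
      "properties": {
        "displayName": "Contoso Platform",
        "details": {
          "parent": {
            "id": "/providers/Microsoft.Management/managementGroups/contoso-root-mg"
          }
        }
      }
    }
  ]
}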


Continued focus on quality and reliability


Quality and reliability are at the forefront of everything we do at Microsoft. This is an area where we have continued our focus, starting with improving the quality of our schemas and providing schema coverage for all resources. The benefits of this are seen in the improved authoring experience and template export capabilities. We are diligently working to improve our error messages and enhance the quality of our pre-flight validation to catch issues before you deploy. We have also invested heavily in improving our documentation by publishing all API versions to the template reference and adding template snippets to resource documentation.

To help with testing your ARM template code, we open sourced the ARM Template Toolkit, which we use internally at Microsoft to ensure our ARM templates follow best practices. Lastly, we recognize that speed matters, and we have made significant improvements that reduce our deployment times for large-scale deployments by roughly 75 percent.

The future of Infrastructure as Code with Azure Resource Manager templates


We have just begun our journey on enhancing ARM template deployments and the teams are consciously working hard to address current gaps and innovating for the future. You can hear about some of our future investments which we shared at the recent Microsoft Build 2020 conference.

Source: microsoft.com

Thursday 20 August 2020

Share big data at scale with Azure Data Share in-place sharing for Azure Data Explorer

Microsoft Azure Data Share is an open, easy, and secure way to share data at scale by enabling organizations to share data in-place or as a data snapshot. Microsoft Azure Data Explorer is a fast and highly scalable data analytics service for telemetry, time-series, and log data.

Fueled by digital transformation, modern organizations increasingly want to enable fluid data sharing to drive business decisions. Seamlessly sharing data for inter-departmental and inter-organizational collaboration can unlock tremendous competitive advantage. Maintaining control and visibility, however, remains an elusive goal. Even today, data is shared using the File Transfer Protocol (FTP), application programming interfaces (APIs), USB devices, and email attachments. These methods are simply not secure, cannot be governed, and are inefficient at best.

Azure Data Share in-place Sharing for Azure Data Explorer, now generally available, enables you to share big data easily and securely between internal departments and with external partners, vendors, or customers for near real-time collaboration.

Once data providers share data, recipients (data consumers) always have the latest data without needing any additional intervention. Additionally, data providers maintain control over the sharing and can revoke access at will. By being able to centrally manage all shared relationships, data providers gain full control of what data is shared and with whom. Operating within a fully managed environment that can scale on-demand, data providers can focus on the logic while Data Share manages the infrastructure.

How in-place data sharing works


Data providers can initiate sharing by specifying the Azure Data Explorer cluster or database they want to share, who to share it with, and the terms of use. Next, the Data Share service sends an email invitation to the data consumer, who can accept the sharing invitation.


After the sharing relationship is established, Data Share creates a symbolic link between the provider's and the consumer's Azure Data Explorer clusters. This enables the data consumer to read and query the data in near real-time. Access to the data uses compute resources from the consumer's Azure Data Explorer cluster.


With Azure Data Explorer, data is cached, indexed, and distributed on the compute nodes within the cluster and persisted on Azure storage. Since compute and storage are decoupled, multiple consuming clusters can be attached to the same source storage with different sets of caching policies without impacting the performance and security of the source cluster.

The in-place sharing capability is a game changer for organizations looking for near real-time big data collaboration between internal departments or with external partners and customers.

Tuesday 18 August 2020

Advancing the outage experience—automation, communication, and transparency


In the cloud industry, we have a commitment to bring our customers the latest technology at scale, keep customers and our platform secure, and ensure that our customer experience is always optimal. For this to happen, Azure is subject to a significant amount of change—and in rare circumstances, it is this change that can bring about unintended impact for our customers. As previously mentioned in this series of blog posts, we take change very seriously and ensure that we have a systematic and phased approach to implementing changes as carefully as possible.

We continue to identify the inherent (and sometimes subtle) imperfections in the complex ways that our architectural designs, operational processes, hardware issues, software flaws, and human factors can align to cause service incidents—also known as outages. The reality of our industry is that impact caused by change is an intrinsic problem. When we think about outage communications we tend not to think of our competition as being other cloud providers, but rather the on-premises environment. On-premises change windows are controlled by administrators. They choose the best time to invoke any change, manage and monitor the risks, and roll it back if failures are observed.

Similarly, when an outage occurs in an on-premises environment, customers and users feel that they are more ‘in the know.’ Leadership is promptly made fully aware of the outage, they get access to support for troubleshooting, and expect that their team or partner company would be in a position to provide a full Post Incident Report (PIR)—previously called Root Cause Analysis (RCA)—once the issue is understood. Although our data analysis supports the hypothesis that time to mitigate an incident is faster in the cloud than on-premises, cloud outages can feel more stressful for customers when it comes to understanding the issue and what they can do about it.

Introducing our communications principles


During cloud outages, some customers have historically reported feeling as though they’re not promptly informed, or that they miss necessary updates and therefore lack a full understanding of what happened and what is being done to prevent future issues occurring. Based on these perceptions, we now operate by five pillars that guide our communications strategy—all of which have influenced our Azure Service Health experience in the Azure portal and include:

1. Speed
2. Granularity
3. Discoverability
4. Parity
5. Transparency

Speed


We must notify impacted customers as quickly as possible. This is our key objective around outage communications. Our goal is to notify all impacted Azure subscriptions within 15 minutes of an outage. We know that we can’t achieve this with human beings alone. By the time an engineer is engaged to investigate a monitoring alert to confirm impact (let alone engaging the right engineers to mitigate it, in what can be a complicated array of interconnectivities including third-party dependencies) too much time has passed. Any delay in communications leaves customers asking, “Is it me or is it Azure?” Customers can then spend needless time troubleshooting their own environments. Conversely, if we decide to err on the side of caution and communicate every time we suspect any potential customer impact, our customers could receive too many false positives. More importantly, if they are having an issue with their own environment, they could easily attribute these unrelated issues to a false alarm being sent by the platform. It is critical that we make investments that enable our communications to be both fast and accurate.

Last month, we outlined our continued investment in advancing Azure service quality with artificial intelligence: AIOps. This includes working towards improving automatic detection, engagement, and mitigation of cloud outages. Elements of this broader AIOps program are already being used in production to notify customers of outages that may be impacting their resources. These automatic notifications represented more than half of our outage communications in the last quarter. For many Azure services, automatic notifications are being sent in less than 10 minutes to impacted customers via Service Health—to be accessed in the Azure portal, or to trigger Service Health alerts that have been configured, more on this below.

With our investment in this area already improving the customer experience, we will continue to expand the scenarios in which we can notify customers in less than 15 minutes from the impact start time, all without the need for humans to confirm customer impact. We are also in the early stages of expanding our use of AI-based operations to identify related impacted services automatically and, upon mitigation, send resolution communications (for supported scenarios) as quickly as possible.

Granularity


We understand that when an outage causes impact, customers need to understand exactly which of their resources are impacted. One of the key building blocks for getting the health of specific resources is the Resource Health signal. The Resource Health signal checks whether a resource, such as a virtual machine (VM), SQL database, or storage account, is in a healthy state. Customers can also create Resource Health alerts, which leverage Azure Monitor, to let the right people know if a particular resource is having issues, regardless of whether it is a platform-wide issue or not. This is important to note: a Resource Health alert can be triggered when a resource becomes unhealthy (for example, if the VM is rebooted from within the guest), which is not necessarily related to a platform event, like an outage. Customers can see the associated Resource Health checks, arranged by resource type.

We are building on this technology to augment and correlate each customer resource(s) that has moved into an unhealthy state with platform outages, all within Service Health. We are also investigating how we can include the impacted resources in our communication payloads, so that customers won’t necessarily need to sign in to Service Health to understand the impacted resources—of course, everyone should be able to consume this programmatically.

All of this will allow customers with large numbers of resources to know more precisely which of their services are impacted due to an outage, without having to conduct an investigation on their side. More importantly, customers can build alerts and trigger responses to these resource health alerts using native integrations to Logic Apps and Azure Functions.

Discoverability


Although we support both ‘push’ and ‘pull’ approaches for outage communications, we encourage customers to configure relevant alerts, so the right information is automatically pushed out to the right people and systems. Our customers and partners should not have to go searching to see if the resources they care about are impacted by an outage—they should be able to consume the notifications we send (in the medium of their choice) and react to them as appropriate. Despite this, we constantly find that customers visit the Azure Status page to determine the health of services on Azure.

Before the introduction of the authenticated in-portal Service Health experience, the Status page was the only way to discover known platform issues. These days, this public Status page is only used to communicate widespread outages (for example, those impacting multiple regions and/or multiple services), so customers looking for potential issues impacting them don't see the full story here. Since we roll out platform changes as safely as possible, the vast majority of issues like outages only impact a very small 'blast radius' of customer subscriptions. For these incidents, which make up more than 95 percent of our incidents, we communicate directly to impacted customers in-portal via Service Health.

We also recently integrated the ‘Emerging Issues’ feature into Service Health. This means that if we have an incident on the public Status page, and we have yet to identify and communicate to impacted customers, users can see this same information in-portal through Service Health, thereby receiving all relevant information without having to visit the Status page. We are encouraging all Azure users to make Service Health their ‘one stop shop’ for information related to service incidents, so they can see issues impacting them, understand which of their subscriptions and resources are impacted, and avoid the risk of making a false correlation, such as when an incident is posted on the Status page, but is not impacting them.

Most importantly, since we’re talking about the discoverability principle, from within Service Health customers can create Service Health alerts, which are push notifications leveraging the integration with Azure Monitor. This way, customers and partners can configure relevant notifications based on who needs to receive them and how they would best be notified—including by email, SMS, LogicApp, and/or through a webhook that can be integrated into service management tools like ServiceNow, PagerDuty, or Ops Genie.

To get started with simple alerts, consider routing all notifications to email a single distribution list. To take it to the next level, consider configuring different service health alerts for different use cases—maybe all production issues notify ServiceNow, maybe dev and test or pre-production issues might just email the relevant developer team, maybe any issue with a certain subscription also sends a text message to key people. All of this is completely customizable, to ensure that the right people are notified in the right way.
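As one hedged example, a Service Health alert is an activity log alert scoped to a subscription with a ServiceHealth condition; the alert and action group names below are placeholders:

{
  "type": "microsoft.insights/activityLogAlerts",
  "apiVersion": "2017-04-01",
  "name": "service-health-alert-production",
  "location": "Global",
  "properties": {
    "enabled": true,
    "scopes": [ "[subscription().id]" ],
    "condition": {
      "allOf": [
        { "field": "category", "equals": "ServiceHealth" }
      ]
    },
    "actions": {
      "actionGroups": [
        {
          // Action group that emails, texts, or calls a webhook into ServiceNow, PagerDuty, etc.
          "actionGroupId": "[resourceId('microsoft.insights/actionGroups', 'ops-notification-group')]"
        }
      ]
    }
  }
}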

Parity


All Azure users should know that Service Health is the one place to go for all service-impacting events. First, we ensure that this experience is consistent across all our different Azure services, each using Service Health to communicate any issues. As simple as this sounds, we are still navigating through some unique scenarios that make this complex. For example, most people using Azure DevOps don't interact with the Azure portal. Since DevOps does not have its own authenticated Service Health experience, we can't communicate updates directly to impacted customers for small DevOps outages that don't justify going to the public Status page. To support scenarios like this, we have stood up the Azure DevOps status page where smaller-scale DevOps outages can be communicated directly to the DevOps community.

Second, the Service Health experience is designed to communicate all impacting events across Azure—this includes maintenance events as well as service or feature retirements, and includes both widespread outages and isolated hiccups that only impact a single subscription. It is imperative that for any impact (whether it is potential, actual or upcoming) customers can expect the same experience and put in place a predictable action plan across all of their services on Azure.

Lastly, we are working towards expanding our philosophy of this pillar to extend to other Microsoft cloud products. We acknowledge that, at times, navigating through our different cloud products such as Azure, Microsoft 365, and Power Platform can sometimes feel like navigating technologies from three different companies. As we look to the future, we are invested in harmonizing across these products to bring about a more consistent, best-in-class experience.

Transparency


As we have mentioned many times in the Advancing Reliability blog series, we know that trust is earned and needs to be maintained. When it comes to outages, we know that being transparent about what is happening, what we know, and what we don’t know is critically important. The cloud shouldn’t feel like a black box. During service issues, we provide regular communications to all impacted customers and partners. Often, in the early stages of investigating an issue, these updates might not seem detailed until we learn more about what’s happening. Even though we are committed to sharing tangible updates, we generally try to avoid sharing speculation, since we know customers make business decisions based on these updates during outages.

In addition, an outage is not over once customer impact is mitigated. We could still be learning about the complexities of what led to the issue, so sometimes the message sent at or after mitigation is a fairly rudimentary summation of what happened. For major incidents, we follow this up with a PIR generally within three days, once the contributing factors are better understood.

For incidents that may have impacted fewer subscriptions, our customers and partners can request more information from within Service Health by requesting a PIR for the incident. We have heard feedback in the past that PIRs should be even more transparent, so we continue to encourage our incident managers and communications managers to provide as much detail as possible—including information about the issue impact, and our next steps to mitigate future risk. Ideally to ensure that this class of issue is less likely and/or less impactful moving forward.

While our industry will never be completely immune to service outages, we do take every opportunity to look at what happened from a holistic perspective and share our learnings. One of the future areas of investment we are looking at closely is how best to keep customers updated on the progress we are making on the commitments outlined in our PIR next steps. By linking our internal repair items to our external commitments in our next steps, customers and partners will be able to track the progress that our engineering teams are making to ensure that corrective actions are completed.

Our communications across all of these scenarios (outages, maintenance, service retirements, and health advisories) will continue to evolve, as we learn more and continue investing in programs that support these five pillars.

Reliability is a shared responsibility


While Microsoft is responsible for the reliability of the Azure platform itself, our customers and partners are responsible for the reliability of their cloud applications—including using architectural best practices based on the requirements of each workload. Building a reliable application in the cloud is different from traditional application development. Historically, customers may have purchased levels of redundant higher-end hardware to minimize the chance of an entire application platform failing. In the cloud, we acknowledge up front that failures will happen. As outlined several times above, we will never be able to prevent all outages. So, in addition to Microsoft working to prevent failures, your goal when building reliable applications in the cloud should be to minimize the effects of any single failing component.

To that end, we recently launched the Microsoft Azure Well-Architected Framework—a set of guiding tenets that can be used to improve the quality of a workload. Reliability is one of the five pillars of architectural excellence alongside Cost Optimization, Operational Excellence, Performance Efficiency, and Security. If you already have a workload running in Azure and would like to assess your alignment to best practices in one or more of these areas, try the Microsoft Azure Well-Architected Review.

Specifically, the Reliability pillar describes six steps for building a reliable Azure application:

1. Define availability and recovery requirements based on decomposed workloads and business needs.
2. Use architectural best practices to identify possible failure points in your proposed or existing architecture and determine how the application will respond to failure.
3. Test with simulations and forced failovers to validate both detection of and recovery from various failures.
4. Deploy the application consistently using reliable and repeatable processes.
5. Monitor application health to detect failures, monitor indicators of potential failures, and gauge the health of your applications.
6. Finally, respond to failures and disasters by determining how best to address them based on established strategies.

Returning to our core topic of outage communications, we are working to incorporate relevant Well-Architected guidance into our PIRs in the aftermath of each service incident. Customers running critical workloads will be able to learn about specific steps to improve reliability that would have helped to avoid and lessen impact from that particular outage. For example, if an outage only impacted resources within a single Availability Zone, we will call this out as part of the PIRs and encourage impacted customers to consider zonal redundancies for their critical workloads.

Going forward


We outlined how Azure approaches communications during and after service incidents like outages. We want to be transparent about our five communication pillars, to explain both our progress to date and the areas in which we’re continuing to invest. Just as our engineering teams endeavor to learn from each incident to improve the reliability of the platform, our communications teams endeavor to learn from each incident to be more transparent, to get customers and partners the right details to make informed decisions, and to support customers and partners as best as possible during each of these difficult situations.

We are confident that we are making the right investments to continue improving in this space, but we are increasingly looking for feedback on whether our communications are hitting the mark. We include an Azure post-incident survey at the end of each PIR we publish. We strive to review every response to learn from our customers and partners, to validate whether we are focusing on the right areas, and to keep improving the experience.

We continue to identify the inherent (and sometimes subtle) imperfections in the complex ways that our architectural designs, operational processes, hardware issues, software flaws, and human factors align to cause outages. Since trust is earned and needs to be maintained, we are committed to being as transparent as possible—especially during these infrequent but inevitable service issues.

Source: microsoft.com

Saturday 15 August 2020

Azure Firewall Manager is now generally available

Azure Firewall Manager is now generally available and includes Azure Firewall Policy, Azure Firewall in a Virtual WAN Hub (Secure Virtual Hub), and Hub Virtual Network. In addition, we are introducing several new capabilities to Firewall Manager and Firewall Policy to align with the standalone Azure Firewall configuration capabilities.

Key features in this release include:

◉ Threat intelligence-based filtering allow list in Firewall Policy is now generally available.

◉ Multiple public IP addresses support for Azure Firewall in Secure Virtual Hub is now generally available.

◉ Forced tunneling support for Hub Virtual Network is now generally available.

◉ Configuring secure virtual hubs with Azure Firewall for east-west traffic (private) and a third-party security as a service (SECaaS) partner of your choice for north-south traffic (internet bound).

◉ Integration of third-party SECaaS partners is now generally available in all Azure public cloud regions.

◉ Zscaler integration will be generally available on July 3, 2020. Check Point is a supported SECaaS partner and will be in preview on July 3, 2020. iboss integration will be generally available on July 31, 2020.

◉ Support for domain name system (DNS) proxy, custom DNS, and fully-qualified domain name (FQDN) filtering in network rules using Firewall Policy are now in preview.


Firewall Policy is now generally available

Firewall Policy is an Azure resource that contains network address translation (NAT), network, and application rule collections, as well as threat intelligence and DNS settings. It’s a global resource that can be used across multiple Azure Firewall instances in Secured Virtual Hubs and Hub Virtual Networks. Firewall policies work across regions and subscriptions.

You do not need Firewall Manager to create a firewall policy. There are many ways to create and manage a firewall policy, including using REST API, PowerShell, or command-line interface (CLI).

After you create a firewall policy, you can associate the policy with one or more firewalls using Firewall Manager, REST API, PowerShell, or CLI. Refer to the policy overview documentation for a more detailed comparison of rules and policy.

Migrating standalone firewall rules to Firewall Policy

You can also create a firewall policy by migrating rules from an existing Azure Firewall. You can use a script to migrate firewall rules to Firewall Policy, or you can use Firewall Manager in the Azure portal.

Importing rules from an existing Azure Firewall.

Firewall Policy pricing

If you just create a Firewall Policy resource, it does not incur any charges. Additionally, a firewall policy is not billed if it is associated with just a single Azure firewall. There are no restrictions on the number of policies you can create.

Firewall Policy pricing is fixed per Firewall Policy per region. Within a region, the price for Firewall Policy managing five firewalls or 50 firewalls is the same. The following example uses four firewall policies to manage 10 distinct Azure firewalls:

◉ Policy 1: cac2020region1policy—Associated with six firewalls across four regions. Billing is done per region, not per firewall.

◉ Policy 2: cac2020region2policy—Associated with three firewalls across three regions and is billed for three regions regardless of the number of firewalls per region.

◉ Policy 3: cac2020region3policy—Not billed because the policy is not associated with more than one firewall.

◉ Policy 4: cacbasepolicy—A central policy that is inherited by all three policies. This policy is billed for five regions. Once again, the pricing is lower compared to a per-firewall billing approach.

Firewall Policy billing example.
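
To make the counting rule above easier to follow, here is a small, illustrative Python sketch covering only the directly associated policies; the region names are invented placeholders (the article gives only the counts), and the inherited base policy cacbasepolicy is not modeled.

# Illustrative sketch of the per-region billing rule for directly associated
# policies: a policy associated with at most one firewall is not billed;
# otherwise it is billed once per distinct region, not per firewall.
from typing import Dict, List

# policy name -> region of each firewall the policy is associated with
# (region names r1..r4 are placeholders; only the counts come from the article)
associations: Dict[str, List[str]] = {
    "cac2020region1policy": ["r1", "r1", "r2", "r3", "r3", "r4"],  # 6 firewalls, 4 regions
    "cac2020region2policy": ["r1", "r2", "r3"],                    # 3 firewalls, 3 regions
    "cac2020region3policy": ["r1"],                                # 1 firewall, not billed
}

for policy, regions in associations.items():
    billable_regions = len(set(regions)) if len(regions) > 1 else 0
    print(f"{policy}: billed for {billable_regions} region(s)")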

Configure a threat intelligence allow list, DNS proxy, and custom DNS

With this update, Firewall Policy supports additional configurations, including custom DNS and DNS proxy settings (preview) and a threat intelligence allow list. SNAT private IP address range configuration is not yet supported but is on our roadmap.

While a firewall policy can typically be shared across multiple firewalls, NAT rules are firewall-specific and cannot be shared. You can still create a parent policy without NAT rules to share across multiple firewalls, and a locally derived policy on specific firewalls to add the required NAT rules.
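
To make the parent/child pattern concrete, the sketch below creates a derived policy that inherits a shared parent via its base_policy reference; firewall-specific DNAT rules would then be added only to this derived policy. The client setup mirrors the earlier sketch, and all names and IDs are hypothetical placeholders.

# Minimal sketch of a derived (child) firewall policy that inherits a shared
# parent policy, assuming the track 2 azure-mgmt-network SDK. Names and IDs
# are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

parent_policy_id = (
    "/subscriptions/<subscription-id>/resourceGroups/fw-policies-rg"
    "/providers/Microsoft.Network/firewallPolicies/shared-base-policy"
)

child_policy = client.firewall_policies.begin_create_or_update(
    "fw-policies-rg",                 # hypothetical resource group
    "fw1-local-policy",               # hypothetical per-firewall policy
    {
        "location": "westeurope",
        "base_policy": {"id": parent_policy_id},  # inherit the shared parent
    },
).result()

# The required DNAT rules are then added to this child policy only (for example,
# through its rule collection groups), and the child policy is the one that gets
# associated with that specific firewall.
print(child_policy.id)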

Firewall Policy now supports IP Groups

IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules.
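
The following is a hypothetical sketch of creating an IP Group with the Azure SDK for Python (azure-mgmt-network, track 2) so that firewall rules can reference it; the resource names and address ranges are placeholders.

# Minimal sketch: create an IP Group whose ID can then be referenced as a source
# or destination in Azure Firewall / Firewall Policy rules. Names and address
# ranges are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

ip_group = client.ip_groups.begin_create_or_update(
    "fw-policies-rg",                 # hypothetical resource group
    "branch-office-ips",              # hypothetical IP Group name
    {
        "location": "westeurope",
        "ip_addresses": ["10.10.0.0/24", "10.20.0.0/24"],
    },
).result()

# Rules can now reference ip_group.id instead of repeating the address ranges.
print(ip_group.id)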

Configure secured virtual hubs with Azure Firewall and a third-party SECaaS partner

You can now configure virtual hubs with Azure Firewall for private traffic (virtual network to virtual network/branch to virtual network) filtering and a security partner of your choice for internet (virtual network to internet/branch to internet) traffic filtering.

A security partner provider in Firewall Manager allows you to use your familiar, best-in-breed, third-party SECaaS offering to protect internet access for your users. With a quick configuration, you can secure a hub with a supported security partner, and route and filter internet traffic from your virtual networks (VNets) or branch locations within a region. This is done using automated route management, without setting up and managing User Defined Routes (UDRs).

You can create a secure virtual hub using Firewall Manager’s Create new secured virtual hub workflow. The following screenshot shows a new secure virtual hub configured with two security providers.

Creating a new secured virtual hub configured with two security providers.

Securing connectivity

After you create a secured hub, you need to update the hub security configuration and explicitly configure how you want internet and private traffic in the hub to be routed. For private traffic, you don’t need to specify prefixes if they fall within the RFC 1918 ranges. If your organization uses public IP addresses in virtual networks and branches, you need to add those IP prefixes explicitly.

To simplify this experience, you can now specify aggregate prefixes instead of individual subnets. Additionally, for internet security via a third-party security provider, you need to complete your configuration using the partner portal.

Selecting a third-party SECaaS for internet traffic filtering.

Secured virtual hub pricing

A secured virtual hub is an Azure Virtual WAN Hub with associated security and routing policies configured by Firewall Manager. Pricing for secured virtual hubs depends on the security providers configured.

Source: azure.microsoft.com

Thursday 13 August 2020

Running SD-WAN virtual appliances natively in Azure Virtual WAN

Today we’re announcing the preview of the ability to deploy and run third-party network virtual appliances (NVAs), such as SD-WAN appliances, natively within Azure Virtual WAN hubs.

As enterprises increasingly adopt the cloud while reducing their costs, IT teams looking to consolidate, accelerate, or even revamp connectivity models should consider Azure Virtual WAN. Modern enterprises require ubiquitous connectivity between hyper-distributed applications, data, and users across the cloud and on-premises.

Since we announced automated and integrated transit capabilities with Azure Virtual WAN, the first in the public cloud, back in 2018, interest in using Azure to build or re-architect branch networks for the cloud has grown quickly.

Azure Virtual WAN provides a service architecture that takes full advantage of Microsoft's fast global network. Virtual WAN makes it easy to connect Virtual Networks (VNets), Azure ExpressRoute, VPN, and now SD-WAN in "virtual hubs." This connectivity-agnostic approach allows full transit between branches, sites, mobile users, and services using Azure's global infrastructure.
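
As a rough illustration of that hub model, the sketch below adds a virtual hub to an existing Virtual WAN with the Azure SDK for Python (azure-mgmt-network, track 2); the Virtual WAN resource ID, names, region, and address prefix are placeholder assumptions, not values from this announcement.

# Rough sketch: create a virtual hub inside an existing Azure Virtual WAN,
# assuming the track 2 azure-mgmt-network SDK. All identifiers are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

virtual_wan_id = (
    "/subscriptions/<subscription-id>/resourceGroups/wan-rg"
    "/providers/Microsoft.Network/virtualWans/contoso-vwan"
)

hub = client.virtual_hubs.begin_create_or_update(
    "wan-rg",                          # hypothetical resource group
    "contoso-westeurope-hub",          # hypothetical hub name
    {
        "location": "westeurope",
        "virtual_wan": {"id": virtual_wan_id},
        "address_prefix": "10.100.0.0/24",   # hub address space
    },
).result()

# VNet connections, VPN/ExpressRoute gateways, and (in preview) SD-WAN NVAs are
# then attached to this hub for transitive connectivity between them.
print(hub.id)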

Barracuda is the first Virtual WAN partner to offer this deep integration, allowing customers to use SD-WAN (software-defined wide area network) to improve performance while building on existing investments and skills.

Networking trends, typically led by an internet-first approach, increasingly rely on technologies such as SD-WAN to improve performance through intelligent path selection and central policies. These technologies eliminate traditional networking backhauls by sending traffic directly from the branch to the cloud via local breakouts, and they let you use your chosen vendor's path selection and policy management.

With built-in ease of use, scale, monitoring, and a unified framework for networking and security in a hub-and-spoke architecture, customers looking to deploy various hybrid workloads can do so in a matter of minutes. Whether you want to simplify interconnectivity between internet (VPN) and private (ExpressRoute) users and sites, or you are considering large-scale branch/SD-WAN architectures, Azure Virtual WAN can be the foundation of your solution. The new virtual appliance-in-hub capability will start rolling out to select Azure regions on July 20, 2020.

"Azure Virtual WAN can now be the central network hub for all our traffic and a key pillar in our cloud strategy," said Leon Sevriens, IT Manager, at Humankind, a large organization in the Netherlands that offers daycare and after-school care, with over 3,000 employees and over 450 locations. "The combination of Virtual WAN and Barracuda CloudGen WAN will give us the ability to test a resilient and modern cloud-native network with the performance and security we need for productivity tools, applications and data in the cloud."

More capabilities now generally available


In addition to SD-WAN integration, we're happy to announce the general availability of hub-to-hub connectivity, custom routing, Firewall Manager, and several other connectivity capabilities in Azure Virtual WAN.

◉ New Virtual WAN partners: Cisco SD-WAN powered by Meraki and VMware now support automation of IPsec connectivity between their branch VPN/SD-WAN devices and the Azure Virtual WAN VPN service.

◉ Hub-to-hub connectivity, providing fully meshed virtual hubs.

◉ Custom routing, adding advanced routing enhancements: custom route tables and optimization of virtual network routing.

◉ Virtual network transit, with 50 Gbps transit speeds between virtual networks (VNets) connected to Virtual WAN.

◉ VPN and ExpressRoute transit, for seamless interconnectivity between VPN/SD-WAN and ExpressRoute connected sites and users.

◉ New VPN capabilities, supporting custom BGP IP (also known as APIPA, Automatic Private IP Addressing) for VPN site connections.

Tuesday 11 August 2020

Run high scale workloads on Blob storage with new 200 TB object sizes

Azure Blob storage is a massively scalable object storage solution that serves from small amounts to hundreds of petabytes of data per customer across a diverse set of data types, including logging, documents, media, genomics, seismic processing, and more.

Increasing file size support for Blob storage


Customers with on-premises workloads today use files whose size is limited only by the filesystem, with maximums reaching into the exabytes. Most usage doesn't approach the filesystem limit, but specific workloads that rely on large files do scale into the tens of terabytes. We recently announced the preview of our new maximum blob size of 200 TB (specifically 209.7 TB), a 40x increase over the current 5 TB limit. This new limit of over 200 TB per object is much larger than the 5 TB maximum object size offered by other vendors. The increase allows workloads that currently require multi-TB files to be moved to Azure without additional work to break up these large objects.

This increase in the object size limit unblocks workloads such as seismic analysis, backup files, and media and entertainment (video rendering and processing), along with other scenarios that use multi-TB objects. As an example, a media company trying to move from a private datacenter to Azure can now do so with support for files up to 200 TB in size. Increasing our object size removes the need to carefully inventory existing file sizes as part of a plan to migrate a workload to Azure. Given that many on-premises solutions store files ranging from tens to hundreds of terabytes, removing this gap simplifies migration to Azure.

With large file size support, being able to break an object into blocks to ease upload and download is critical. Every Azure blob is made up of up to 50,000 blocks, which allows a multi-terabyte object to be broken down into manageable pieces for writing. The previous maximum of 5 TB (4.75 TiB) was based on a maximum block size of 100 MiB x 50,000 blocks. The preview increases the block size to 4,000 MiB and keeps 50,000 blocks per object, for a maximum object size of 4,000 MiB x 50,000 = 190.7 TiB. Conceptually, in your application (or within the utility or SDK), the large file is broken into blocks, each block is written to Azure Storage, and, after all blocks have been successfully uploaded, the entire file (object) is committed.
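
To illustrate that flow, here is a minimal sketch using the azure-storage-blob (v12) Python SDK: each block is staged individually, and the object only becomes visible when the block list is committed. The connection-string environment variable, container, file name, and 64 MiB block size are illustrative assumptions, not part of the announcement.

# Minimal sketch of the stage-and-commit block upload flow with azure-storage-blob
# (v12). Connection string, container, file name, and block size are placeholders.
import os
import uuid

from azure.storage.blob import BlobBlock, BlobClient

blob = BlobClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],   # assumed environment variable
    container_name="backups",                        # hypothetical container
    blob_name="large-backup.bak",                    # hypothetical blob name
)

block_size = 64 * 1024 * 1024   # 64 MiB per block in this sketch
block_list = []

with open("large-backup.bak", "rb") as source:
    while True:
        chunk = source.read(block_size)
        if not chunk:
            break
        block_id = str(uuid.uuid4())                 # unique ID per block
        blob.stage_block(block_id=block_id, data=chunk)
        block_list.append(BlobBlock(block_id=block_id))

# Nothing is visible until the staged blocks are committed as a single object.
blob.commit_block_list(block_list)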

As an example of the overall relationship within a storage account, consider a storage account named Contososa that contains one container with two blobs. The first is a large blob made up of 50,000 blocks; the second is a small blob made up of a single block.

The 200 TB block blob preview is supported in all regions and in all tiers, including Premium, Hot, Cool, and Archive. There is no additional charge for this preview capability. Uploading very large objects through the Azure portal is not supported. The various methods to transfer data into Azure will be updated to make use of this new blob size. To get started today with your language of choice:

◉ .Net.
◉ Java.
◉ JavaScript.
◉ Python.
◉ REST.