Tuesday, 10 April 2018

Application Security Groups now generally available in all Azure regions

We are pleased to announce the general availability of Application Security Groups (ASG) in all Azure regions. This feature provides security micro-segmentation for your virtual networks in Azure.


Network security micro-segmentation


ASGs enable you to define fine-grained network security policies based on workloads and centered on applications, instead of explicit IP addresses. They provide the capability to group VMs with monikers and to secure applications by filtering traffic from trusted segments of your network.

Implementing granular traffic controls improves isolation of workloads and protects them individually. If a breach occurs, this technique limits the potential for lateral exploration of your networks by attackers.

Security definition simplified


With ASGs, filtering traffic based on application patterns is simplified, using the following steps:

◈ Define your application groups, providing a descriptive moniker that fits your architecture. You can use them for applications, workload types, systems, tiers, environments, or any other role.

◈ Define a single collection of rules using ASGs and Network Security Groups (NSGs); you can apply a single NSG to your entire virtual network, on all subnets. A single NSG gives you full visibility into your traffic policies and a single place for management (a CLI sketch follows this list).

◈ Scale at your own pace. When you deploy VMs, make them members of the appropriate ASGs. If your VM is running multiple workloads, just assign multiple ASGs. Access is granted based on your workloads. No need to worry about security definition again. More importantly, you can implement a zero-trust model, limiting access to the application flows that are explicitly permitted.
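The following is a minimal Azure CLI sketch of these steps. The resource names (myResourceGroup, myVnet, web-subnet, myNsg, WebServers, DbServers, web1-nic) and the eastus location are illustrative placeholders, not part of the announcement, and the rules shown are only a starting point for a zero-trust policy.

# Create application security groups for two workload roles
$az network asg create --resource-group myResourceGroup --name WebServers --location eastus
$az network asg create --resource-group myResourceGroup --name DbServers --location eastus

# Create a single NSG to hold all traffic policies for the virtual network
$az network nsg create --resource-group myResourceGroup --name myNsg

# Allow inbound web traffic from the internet to members of the WebServers group
$az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg --name AllowInternetToWeb --priority 100 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes Internet --destination-asgs WebServers --destination-port-ranges 80 443

# Allow only the WebServers group to reach the DbServers group on SQL port 1433
$az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg --name AllowWebToDb --priority 110 --direction Inbound --access Allow --protocol Tcp --source-asgs WebServers --destination-asgs DbServers --destination-port-ranges 1433

# When deploying a VM, make its NIC a member of the appropriate ASG
$az network nic create --resource-group myResourceGroup --name web1-nic --vnet-name myVnet --subnet web-subnet --application-security-groups WebServers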

Single network security policy


ASGs introduce the ability to deploy multiple applications within the same subnet, and isolate traffic based on ASGs. With ASGs you can reduce the number of NSGs in your subscription. In some cases, you can use a single NSG for multiple subnets of your virtual network. ASGs enable you to centralize your configuration, providing the following benefits in dynamic environments:

◈ Centralized NSG view: All traffic policies in a single place. It’s easy to operate and manage changes. If you need to allow a new port to or from a group of VMs, you can make a change to a single rule.

◈ Centralized logging: In combination with NSG flow logs, a single configuration for logs has multiple advantages for traffic analysis.

◈ Enforce policies: If you need to deny specific traffic, you can add a high-priority security rule to enforce administrative policies (see the sketch after this list).
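As a rough continuation of the sketch above (same placeholder names), associating the single NSG with multiple subnets and enforcing an administrative deny rule might look like the following; in NSGs, a lower priority number means the rule is evaluated first.

# Associate the single NSG with multiple subnets of the virtual network
$az network vnet subnet update --resource-group myResourceGroup --vnet-name myVnet --name web-subnet --network-security-group myNsg
$az network vnet subnet update --resource-group myResourceGroup --vnet-name myVnet --name db-subnet --network-security-group myNsg

# Enforce an administrative policy with a high-priority deny rule
$az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg --name DenyOutboundInternet --priority 150 --direction Outbound --access Deny --protocol '*' --source-address-prefixes '*' --destination-address-prefixes Internet --destination-port-ranges '*'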

Filtering east-west traffic


With ASGs, you can isolate multiple workloads and provide additional levels of protection for your virtual network.

In the following illustration, multiple applications are deployed into the same virtual network. Based on the security rules described, workloads are isolated from each other. If a VM from one of the applications is compromised, lateral exploration is limited, minimizing the potential impact of an attacker.

In this example, let’s assume one of the web server VMs from application1 is compromised. The rest of the application continues to be protected, and even critical workloads like database servers remain unreachable. This implementation provides multiple extra layers of security for your network, making an intrusion less harmful and easier to react to.


Filtering north-south traffic


In combination with additional NSG features, you can also isolate your workloads from on-premises networks and Azure services in different scenarios.

In the following illustration, a relatively complex environment is configured for multiple workload types within a virtual network. By describing their security rules, applications have the correct set of policies applied on each VM. Similar to the previous example, if one of your branches is compromised, exploration within the virtual network is limited, thereby minimizing the potential impact of an intruder.

In this example, let’s assume someone at one of your branches, connected over VPN, compromises a workstation and gains access to your network. Normally only a subset of your network is needed by that branch, so by isolating the rest of your network, all other applications continue to be protected and unreachable. ASGs add another layer of security to your entire network.

Another interesting scenario: assume you have detected a breach on one of your web servers. A good idea would be to isolate that VM for investigation. With ASGs, you can easily assign a special group, predefined for quarantined VMs, in your first (highest-priority) security policy. These VMs lose network access, providing an additional way to react to and mitigate such threats.
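A hedged sketch of such a quarantine policy, reusing the placeholder names from the earlier sketches plus an ASG named Quarantine and a default IP configuration name of ipconfig1: the deny rules sit at a higher priority (lower number) than the allow rules, so the moment a NIC is placed in the Quarantine group its VM is isolated.

# Create the quarantine group and deny all traffic to and from its members
$az network asg create --resource-group myResourceGroup --name Quarantine --location eastus
$az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg --name DenyQuarantineInbound --priority 90 --direction Inbound --access Deny --protocol '*' --source-address-prefixes '*' --destination-asgs Quarantine --destination-port-ranges '*'
$az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg --name DenyQuarantineOutbound --priority 95 --direction Outbound --access Deny --protocol '*' --source-asgs Quarantine --destination-address-prefixes '*' --destination-port-ranges '*'

# To isolate a compromised VM, move its NIC's IP configuration into the Quarantine group
$az network nic ip-config update --resource-group myResourceGroup --nic-name web1-nic --name ipconfig1 --application-security-groups Quarantine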


Friday, 6 April 2018

A flexible new way to purchase Azure SQL Database

We’re excited to announce the preview of an additional purchasing model for the Azure SQL Database Elastic Pool and Single Database deployment options. Like SQL Database Managed Instance, the vCore-based model reflects our commitment to customer choice by providing flexibility, control, and transparency. As with Managed Instance, the vCore-based model makes the Elastic Pool and Single Database options eligible for up to 30 percent savings* with the Azure Hybrid Benefit for SQL Server.


Optimize flexibility and performance with two new service tiers


The new vCore-based model introduces two service tiers, General Purpose and Business Critical. These tiers let you independently define and control compute and storage configurations, and optimize them to exactly what your application requires.  If you’re considering a move to the cloud, the new model also provides a straightforward way to translate on-premises workload requirements to the cloud. General Purpose is designed for most business workloads and offers budget-oriented, balanced, and scalable compute and storage options. Business Critical is designed for business applications with high IO requirements and offers the highest resilience to failures.

Choosing between DTU and vCore-based performance levels


You want the freedom to choose what’s right for your workloads and we’re committed to supporting the DTU-based model alongside the new vCore-based option. Looking for a simple way to purchase and configure resources? The DTU-based model provides preconfigured bundles of resources across a range of performance options. If you are not concerned with customizing the underlying resources and prefer the simplicity of paying a fixed amount each month, you may find the DTU-based model more suitable for your needs. However, if you need more insights into the underlying resources or need to scale them independently to achieve optimal performance, the vCore-based model is the best choice. The vCore-based model is also a good choice if you own SQL Server licenses that you would like to move to the cloud. Migration between DTU-based and vCore-based performance levels is a simple online operation and is similar to the current process of upgrading from the Standard to Premium service tiers.
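As a hedged illustration using the Azure CLI (server and database names are placeholders, and GP_Gen4_2 is just one example of a vCore-based service objective; the available names vary by hardware generation and region), such a migration might look like:

# Move an existing database from a DTU-based objective (for example, S3) to a vCore-based one
$az sql db update --resource-group myResourceGroup --server myserver --name mydb --service-objective GP_Gen4_2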

Save up to 30 percent* on vCore-based options with Azure Hybrid Benefit for SQL Server


Save more on vCore-based options when you use the Azure Hybrid Benefit for SQL Server. This benefit is exclusive to Azure and enables you to use your SQL Server Enterprise Edition or Standard Edition licenses with active Software Assurance to pay a reduced rate on a vCore-based Single Database, Elastic Pool or Managed Instance, with savings up to 30 percent.

Getting started!


The new vCore-based service tiers will be available in all Azure regions by April 6, 2018. If you already have an Azure SQL database, you can switch to the new service tiers and configure the database in the portal. Otherwise, you can create a new database in the General Purpose or Business Critical service tiers.
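If you prefer the command line to the portal, a minimal sketch with placeholder names (GP_Gen4_2 and BC_Gen4_2 are illustrative vCore-based service objectives) could be:

# Create a new General Purpose (vCore-based) database
$az sql db create --resource-group myResourceGroup --server myserver --name mygpdb --service-objective GP_Gen4_2

# Create a Business Critical database for workloads with high IO requirements
$az sql db create --resource-group myResourceGroup --server myserver --name mybcdb --service-objective BC_Gen4_2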


For more information about the vCore-based purchasing options visit our service tier documentation.

*Savings based on an 8 vCore Business Critical Managed Instance in the East US region, running 730 hours per month. Savings are calculated from the full price (license included) against the reduced price (applying Azure Hybrid Benefit for SQL Server), which includes the Software Assurance cost for SQL Server Enterprise edition. Actual savings may vary based on region, performance tier, and Software Assurance tier.

Tuesday, 3 April 2018

Ingest, prepare, and transform using Azure Databricks and Data Factory

Today’s business managers depend heavily on reliable data integration systems that run complex ETL/ELT workflows (extract, transform/load and load/transform data). These workflows allow businesses to ingest data in various forms and shapes from different on-premises/cloud data sources, transform/shape the data, and gain actionable insights to make important business decisions.

With the general availability of Azure Databricks comes support for doing ETL/ELT with Azure Data Factory. This integration allows you to operationalize ETL/ELT workflows (including analytics workloads in Azure Databricks) using data factory pipelines that do the following:

1. Ingest data at scale using 70+ on-prem/cloud data sources

2. Prepare and transform (clean, sort, merge, join, etc.) the ingested data in Azure Databricks as a Notebook activity step in data factory pipelines

3. Monitor and manage your E2E workflow.


Take a look at a sample data factory pipeline where we ingest data from Amazon S3 into Azure Blob storage, process the ingested data using a Notebook running in Azure Databricks, and move the processed data into Azure SQL Data Warehouse.


You can parameterize the entire workflow (folder name, file name, etc.) using rich expression support and operationalize by defining a trigger in data factory.

Get started today!


We are excited for you to try Azure Databricks and Azure Data Factory integration and let us know your feedback.

Get started by clicking the Author & Monitor tile in your provisioned v2 data factory blade.


Click on the Transform data with Azure Databricks tutorial and learn step by step how to operationalize your ETL/ELT workloads including analytics workloads in Azure Databricks using Azure Data Factory.


Saturday, 31 March 2018

BigDL Spark deep learning library VM now available on Microsoft Azure Marketplace

BigDL deep learning library is a Spark-based framework for creating and deploying deep learning models at scale. While it has previously been deployed on Azure HDInsight and Data Science VM, it is now also available on Azure Marketplace as a fixed VM image, representing a further step by Intel to reduce deployment complexity for users.

Because BigDL is an integral part of Spark, a user does not need to explicitly manage distributed computations. A BigDL application provides high-level control “knobs”, such as the number of compute nodes, cores, and batch size, and it also leverages stable Spark infrastructure for node communications and resource management during its execution. BigDL applications can be written in either Python or Scala and achieve high performance through both algorithm optimization and close integration with Intel’s Math Kernel Library (MKL).


What is the Microsoft Azure Marketplace? The Azure Marketplace is an online applications and services marketplace that enables start-ups, independent software vendors (ISVs), and MSP/SIs to offer their Azure-based solutions or services to customers around the world. 

Introduction


This post describes two use case scenarios to deploy BigDL_v0.4 in Azure VMs:

◈ First scenario: Deploying an Azure VM with a pre-built BigDL_v0.4 image and running a basic deep learning example.

◈ Second scenario: Deploying BigDL_v0.4 on a bare-bones Ubuntu VM (for advanced users).

First scenario: Deploying a pre-built BigDL_v0.4 VM image:


Log in to your Microsoft Azure account. BigDL requires you to have an Azure subscription. You can get a free trial by visiting the BigDL offering on Azure Marketplace and clicking Get it now.


You should see the following page. Click on the Create button at the bottom.


Enter the requested information in the fields at the prompts. Note that Azure imposes syntax limitations on some of the fields (such as using only alphanumeric characters and no CAPS). Use lowercase letters and digits and you will be fine. Use the following three screenshots for guidance.


Spark is memory-intensive, so select a machine with a larger amount of RAM. Note that not all VM types and sizes are available in certain regions. For simple tasks and testing, the virtual machine displayed in the following screenshot will meet requirements:


After the VM is provisioned, copy its public IP address. Note that this public IP address will change every time you stop and restart your VM. Keep this in mind if you are thinking of BigDL automation.


After deployment, you can modify the IP address provided in the resource group and set it up as a static IP address:


You are now ready to SSH into your BigDL VM. You can use your favorite SSH client. For this example, MobaXterm is used.

Enter the IP address and the username you selected when creating the VM.


Check the versions of installed dependencies:


Before using pre-installed BigDL, you will need to change ownership of the directory.


BigDL was pre-installed into the bigdlazrmktplc directory. Yet ‘testuser’ does not have full privileges to access it.

To change this, type:

$sudo chown -R testuser:testuser bigdlazrmktplc


Now ‘testuser’ owns the bigdlazrmktplc directory.

Finally, test that BigDL actually works in this VM by entering the following commands:

$cd bigdlazrmktplc/BigDL
$export SPARK_HOME=/usr/local/spark/spark-2.2.0-bin-hadoop2.7
$export BIGDL_HOME=/home/bigdlazrmktplc/BigDL
$BigDL/bin/pyspark-with-bigdl.sh --master local[*]

If the commands are successful you will see the following:


At the command prompt, copy and paste the following example code; the source can be found on GitHub.

from bigdl.util.common import *
from pyspark import SparkContext
from bigdl.nn.layer import *
import bigdl.version
# create sparkcontext with bigdl configuration
sc = SparkContext.getOrCreate(conf=create_spark_conf().setMaster("local[*]"))
init_engine() # prepare the bigdl environment
bigdl.version.__version__ # Get the current BigDL version
linear = Linear(2, 3) # Try to create a Linear layer

If the commands are successful, you will see the following:


BigDL is now ready for you to use.

Second Scenario: Deploying BigDL_v0.4 on a bare-bones Ubuntu Azure VM


First, you will need to create an Azure subscription. You can get a free trial by navigating to the BigDL offering on Azure Marketplace and clicking Get it now.

Log in to the Azure Portal, go to New, and select Ubuntu server 16.04 LTS VM (LTS = Long Term Support).


Enter the basic VM attributes using only lower-case letters and numbers.


For Spark jobs you want to select VMs with a large amount of RAM available.


After your VM has been created, you can SSH into it using the username and password which you created previously.

Copy the Public IP address of the VM:


This creates a very basic Ubuntu machine, so you must install the following additional components to run BigDL:

◈ Java Runtime Environment (JRE)
◈ Scala
◈ Spark
◈ Python packages
◈ BigDL

Installing the Java Runtime Environment (JRE)


At the command prompt, type the following commands:

$sudo add-apt-repository ppa:webupd8team/java
$sudo apt-get update
$sudo apt-get install oracle-java8-installer
$sudo apt-get install oracle-java8-set-default

Confirm the installation and JRE version by typing

$java -version

Installing Scala and confirming version


At the command prompt, type the following commands:

$sudo apt-get install scala
$scala -version

Installing Spark 2.2.x


At the command prompt, type the following commands:

$sudo wget http://mirrors.ibiblio.org/apache/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz
$sudo tar xvzf spark-2.2.0-bin-hadoop2.7.tgz
$rm spark-2.2.0-bin-hadoop2.7.tgz
$sudo mkdir /usr/local/spark
$sudo mv spark-2.2.0-bin-hadoop2.7 /usr/local/spark
Verify Spark installation:

$cd /usr/local/spark/spark-2.2.0-bin-hadoop2.7/
$./bin/spark-submit --version


Installing BigDL


Downloadable BigDL releases are available from the main BigDL release repository.

For Spark 2.2.0 and Scala 2.11.x, select dist-spark-2.2.0-scala-2.11.8-all-0.4.0-dist.zip.

At the command prompt, type the following commands:

$cd ~
$mkdir BigDL
$cd BigDL
$sudo wget https://s3-ap-southeast-1.amazonaws.com/bigdl-download/dist-spark-2.2.0-scala-2.11.8-all-0.4.0-dist.zip

$sudo apt-get install unzip
$unzip dist-spark-2.2.0-scala-2.11.8-all-0.4.0-dist.zip
$rm dist-spark-2.2.0-scala-2.11.8-all-0.4.0-dist.zip

Installing Python 2.7 packages


Ubuntu 16.04 on Azure comes with Python 2.7 pre-installed. However, there are a couple of additional packages that must be installed.
At the command prompt, type the following commands:

$sudo apt-get install python-numpy
$sudo apt-get install python-six

Update the package index by typing

$sudo apt-get update

Verify BigDL installation


Instructions to verify that BigDL was installed correctly are available in the BigDL documentation.

At the command prompt, type the following commands:

$export SPARK_HOME=/usr/local/spark/spark-2.2.0-bin-hadoop2.7
$export BIGDL_HOME=$HOME/BigDL

Launch PySpark from the BigDL directory:

$bin/pyspark-with-bigdl.sh --master local[*]

At the prompt, copy and paste the following code; this code can also be found on GitHub.

from bigdl.util.common import *
from pyspark import SparkContext
from bigdl.nn.layer import *
import bigdl.version
# create sparkcontext with bigdl configuration
sc = SparkContext.getOrCreate(conf=create_spark_conf().setMaster("local[*]"))
init_engine() # prepare the bigdl environment
bigdl.version.__version__ # Get the current BigDL version
linear = Linear(2, 3) # Try to create a Linear layer

You should see the following:

creating: createLinear
cls.getname: com.intel.analytics.bigdl.python.api.Sample
BigDLBasePickler registering: bigdl.util.common  Sample
cls.getname: com.intel.analytics.bigdl.python.api.EvaluatedResult
BigDLBasePickler registering: bigdl.util.common  EvaluatedResult
cls.getname: com.intel.analytics.bigdl.python.api.JTensor
BigDLBasePickler registering: bigdl.util.common  JTensor
cls.getname: com.intel.analytics.bigdl.python.api.JActivity
BigDLBasePickler registering: bigdl.util.common  JActivity
>>>

Finally, install Maven to allow you to build BigDL applications by typing the following:

$sudo apt-get install maven

Your VM is now ready for running deep learning examples at scale.

Friday, 30 March 2018

SQL Database Transparent Data Encryption with Azure Key Vault configuration checklist

Azure SQL Database and Data Warehouse offer encryption-at-rest by providing Transparent Data Encryption (TDE) for all data written to disk, including databases, log files and backups. This protects data in case of unauthorized access to hardware. TDE provides a TDE Protector that is used to encrypt the Database Encryption Key (DEK), which in turn is used to encrypt the data. The TDE protector is by default managed by the service in a fully transparent fashion, rotated every 90 days and maintained in archive for access to backups. Optionally management of the TDE Protector can be assumed by the customer if more control is desired. This requires storing the TDE protector in a customer-owned Azure Key Vault. If this option is chosen, it is important to fully understand all TDE implications and carefully plan for ongoing key management.

Overview of TDE with customer managed keys and Azure Key Vault integration:



In this scenario, customers must maintain Azure Key Vault, control SQL Database permissions to Azure Key Vault and maintain access to all TDE Protectors to open or restore databases or backups and enable all other operations that require database access to the TDE Protector. The following checklist will help to systematically plan all key management related duties in Azure Key Vault. In addition, we list the most important setup considerations and configuration requirements that must be followed to configure TDE with customer managed keys in Azure Key Vault.

General guidelines:

  • Ensure Azure Key Vault and Azure SQL Database are going to be in the same tenant. Cross-tenant key vault and server interactions are not supported.
  • Decide which subscriptions will be used for the required resources. Moving the server across subscriptions later requires a new setup of TDE with BYOKs.
  • When configuring TDE with BYOK, it is important to consider the load placed on the key vault by repeated wrap/unwrap operations. For example, since all databases associated with a logical server use the same TDE protector, a failover of that server will trigger as many key operations against the vault as there are databases in the server. Based on our experience and documented key vault service limits, we recommend associating at most 500 Standard or 200 Premium databases with one Azure Key Vault in a single subscription to ensure consistently high availability when accessing the TDE protector in the vault.
  • Recommended: Keep a copy of the TDE protector on premises. This requires an HSM device to create a TDE Protector locally and a key escrow system to store a local copy of the TDE Protector.

Guidelines for configuring Azure Key Vault:


  • Use a key vault with soft-delete enabled (required) to protect from data loss in case of accidental key or key vault deletion scenarios:
    • Soft deleted resources are retained for 90 days unless they are recovered or purged.
    • The recover and purge actions have their own permissions associated in a key vault access policy.
  • Grant the SQL server access to the key vault using its Azure Active Directory (Azure AD) identity (see the CLI sketch after this list). When using the Portal UI, the Azure AD identity will be automatically created and the key vault access permissions will be granted to the server. Using PowerShell, these steps must be completed individually in the correct order and need to be verified. See Configure TDE with BYOK for detailed step-by-step instructions when using PowerShell. Note: The server will lose access to the key vault if the Azure AD identity is accidentally deleted or the server’s permissions are revoked using the key vault’s access policy.
  • Enable auditing and reporting in Azure Key Vault on all encryption keys: Key Vault provides logs that are easy to inject into other security information and event management (SIEM) tools. Operations Management Suite (OMS) Log Analytics is one example of a service that is already integrated.
  • To ensure high-availability of encrypted databases, configure one logical server with two Azure Key Vaults in different regions.
  • For High Availability of a single SQL database, consider configuring two key vaults:

  • Use the Backup-AzureKeyVaultKey cmdlet to retrieve the key in encrypted format and then use the Restore-AzureKeyVaultKey cmdlet and specify a key vault in the second region.
  • For geo-replicated databases, the following Azure Key Vault configuration is required:

  • One primary database with a key vault in its region and one secondary database with a key vault in its region.
  • One secondary is required; up to four secondaries are supported.
  • Secondaries of secondaries (chaining) are not supported.
    • Note: when assigning the server identity, assign the identity for the secondary first, and for the primary second.
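The following is a minimal Azure CLI sketch of the soft-delete and access-policy guidelines above; the vault and resource group names are placeholders, <server-identity-object-id> stands for the SQL server's Azure AD identity, and the PowerShell steps in Configure TDE with BYOK remain the documented reference.

# Create a key vault with soft-delete enabled (required for TDE with customer-managed keys)
$az keyvault create --resource-group myResourceGroup --name myTdeKeyVault --location eastus --enable-soft-delete true

# Grant the SQL server's Azure AD identity the get, wrapKey, and unwrapKey permissions
$az keyvault set-policy --name myTdeKeyVault --object-id <server-identity-object-id> --key-permissions get wrapKey unwrapKey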

Guidelines for configuring TDE Protectors (asymmetric key) stored in Azure Key Vault:

  • Create your encryption key locally on a local HSM device. Ensure this is an asymmetric, RSA 2048 key so it is storable in Azure Key Vault. Larger key sizes are currently not supported by Azure SQL Database.
  • Escrow the key in a key escrow system.
  • Import the encryption key file (.pfx, .byok, or .backup) to Azure Key Vault.
    • (Note: For testing purposes, it is possible to create a key with Azure Key Vault, however this key cannot be escrowed, because the private key can never leave the key vault. A key used to encrypt production data should always be escrowed, as the loss of the key (accidental deletion in key vault, expiration etc.) will result in permanent data loss.)
  • Use a key without an expiration date and never set an expiration date on a key already in use: once the key expires, the encrypted databases lose access to their TDE Protector and are dropped within 24 hours.
  • Ensure the key is enabled and has permissions to perform get, wrap key, and unwrap key operations.
  • Create an Azure Key Vault key backup before using the key in Azure Key Vault for the first time. Learn more about the Backup-AzureKeyVaultKey command.
  • Create a new backup whenever any changes are made to the key (for example, add ACLs, add tags, add key attributes).
  • Keep previous versions of the key in the key vault when rotating keys, so that databases can still access their virtual log files that continue to be encrypted with the original keys. In addition, when the TDE Protector is changed for a database, old backups of the database are not updated to use the latest TDE Protector. Key rotations can be performed following the instructions at Rotate the Transparent Data Encryption Protector Using PowerShell.
  • Keep all previously used keys in Azure Key Vault after changing back to service-managed keys. This will ensure database backups can be restored with the TDE protectors stored in Azure Key Vault. TDE protectors will have to be maintained in Azure Key Vault until all needed backups have been created while using service-managed keys.
  • Make recoverable backup copies of these keys using Backup-AzureKeyVaultKey (a CLI sketch follows this list).
  • To remove a potentially compromised key during a security incident without the risk of data loss, follow the steps at Remove a potentially compromised key.
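A hedged CLI counterpart to the backup guidance in this list (vault, key, and file names are placeholders; Backup-AzureKeyVaultKey and Restore-AzureKeyVaultKey remain the PowerShell cmdlets referenced above):

# Back up the TDE protector key to an encrypted file
$az keyvault key backup --vault-name myTdeKeyVault --name myTdeProtector --file myTdeProtector.keybackup

# Restore the key into a vault in a second region for high availability
$az keyvault key restore --vault-name myTdeKeyVaultDr --file myTdeProtector.keybackup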

Tuesday, 27 March 2018

Microsoft creates industry standards for datacenter hardware storage and security

Today I’m speaking at the Open Compute Project (OCP) U.S. Summit 2018 in San Jose where we are announcing a next generation specification for solid state device (SSD) storage, Project Denali. We’re also discussing Project Cerberus, which provides a critical component for security protection that to date has been missing from server hardware: protection, detection and recovery from attacks on platform firmware. Both storage and security are the next frontiers for hardware innovation, and today we’re highlighting the latest advancements across these key focus areas to further the industry in enabling the future of the cloud.

A new standard for cloud SSD storage


Storage paradigms have performed well on-premises, but they haven’t resulted in innovation for increasing performance and cost efficiencies needed for cloud-based models. For this reason, we’re setting out to define a new standard for flash storage specifically targeted for cloud-based workloads and I’m excited to reveal Project Denali, which we’re establishing with CNEX Labs. Fundamentally, Project Denali standardizes the SSD firmware interfaces by disaggregating the functionality for software defined data layout and media management. With Project Denali, customers can achieve greater levels of performance, while leveraging the cost-reduction economics that come at cloud scale.

Project Denali is a standardization and evolution of Open Channel that defines the role of the SSD versus that of the host in a standard interface. Media management, error correction, mapping of bad blocks, and other functionality specific to the flash generation stays on the device, while the host receives random writes, transmits streams of sequential writes, maintains the address map, and performs garbage collection. Denali allows for support of FPGAs or microcontrollers on the host side.


This provides an architectural framework that is truly cloud first. The modular architecture proposed will enable agility for new non-volatile media adoption (both NAND and Storage class memory), along with improved workload performance, through closer integration between the application and the SSD device.  It also defines a model for using software-defined data placement on SSDs to disaggregate older, monolithic storage models. When management of data placement is separated from the NAND management algorithms, non-volatile storage media is freed up to follow its own schedule for innovation. Project Denali will allow hardware companies to build simpler, less complicated hardware which will lower costs, decrease time to market, allow for workload specific tuning and enable rapid development of new NAND and memory technologies.

After maturing Project Denali with a full array of ecosystem partners, we intend to contribute the Project Denali standard to the industry to help foster even broader adoption.


Enabling hardware security


Microsoft Azure represents the cutting edge of cloud security and privacy. Microsoft spends one billion dollars per year on cybersecurity, and much of that investment goes to fundamental improvements that make Azure a trusted cloud platform. With such an intense focus on security, we recognize the need for an industry standard for hardware security. Microsoft’s Project Cerberus has been developed with the intent of creating an open industry standard for platform security.

Project Cerberus is a security co-processor that establishes a root of trust in itself for all of the hardware devices on a computing platform and helps defend platform firmware from:

◈ Malicious insiders with administrative privilege or access to hardware
◈ Hackers and malware that exploit bugs in the operating system, application, or hypervisor
◈ Supply chain attacks (manufacturing, assembly, in-transit)
◈ Compromised firmware binaries

Cerberus consists of a cryptographic microcontroller running secure code which intercepts accesses from the host to flash over the SPI bus (where firmware is stored), so it can continuously measure and attest these accesses to ensure firmware integrity and hence protect against unauthorized access and malicious updates. This enables robust pre-boot, boot-time and runtime integrity for all the firmware components in the system.

The specification is CPU and I/O architecture agnostic and is intended to easily integrate into various vendor designs over time, thus enabling more secure firmware implementations on all platform types across the industry, ranging from datacenter to IoT devices. The specification also supports hierarchical root of trust so that platform security can be extended to all I/O peripherals using the same architectural principles.

Since the introduction of Project Cerberus in late 2017, the ecosystem supporting the standard has continued to grow and we’re on the verge of contributing the hardware implementation to the community for greater collaboration and adoption.


Since 2015, we’ve been sharing the server and datacenter designs that power Microsoft Azure with the OCP community, working to empower the industry to take advantage of innovations that improve datacenter performance, efficiency, and power consumption.

Saturday, 24 March 2018

Azure DNS Private Zones now available in public preview

We are pleased to announce the public preview of DNS Private Zones in all Azure Public cloud regions. This capability provides secure and reliable name resolution for your virtual networks in Azure. The Private Zones capability was announced as a managed preview in the fall of last year.


No more custom DNS server burden


Private Zones obviates the need to set up and manage custom DNS servers. You can bring DNS zones to your virtual network as you lift and shift applications to the Azure cloud, or if you are building cloud-native applications. You also have the flexibility to use custom domain names, such as your company’s domain name.

Name resolution across virtual networks and across regions


Private zones provide name resolution both within a virtual network and across virtual networks. Private zones can span not only virtual networks in the same region, but also virtual networks across regions and subscriptions. This feature is available in all Azure Public cloud regions.

Split-horizon support


You can configure zones with a split-horizon view, allowing for a private and a public DNS zone to share the same name. This is a common scenario when you want to validate your workloads in a local test environment, before rolling out in production for broader consumption. To realize this scenario, simply configure the same DNS zone as both a public zone and private zone in Azure DNS. Now for clients in a virtual network attached to the zone, Azure will return the DNS response from the private zone, and for clients on the internet, Azure will return the DNS response from the public zone. Since name resolution is confined to configured virtual networks, you can prevent DNS exfiltration.
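As a rough sketch using the preview-era Azure CLI syntax (the --zone-type and --resolution-vnets parameters reflect this preview and may change; names are placeholders, and the two zones are kept in separate resource groups here to avoid a naming collision), a split-horizon setup could look like:

# Public zone answers clients on the internet
$az network dns zone create --resource-group PublicDnsRG --name contoso.com

# Private zone with the same name answers clients inside the linked virtual network
$az network dns zone create --resource-group PrivateDnsRG --name contoso.com --zone-type Private --resolution-vnets myVnet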


Dynamic DNS Registration


We are introducing two concepts to DNS zones with this update: Registration virtual networks and Resolution virtual networks. When you designate a virtual network as a Registration virtual network, either when creating a private zone or later when updating the zone, Azure will dynamically register DNS A records in the private zone for the virtual machines within that virtual network, and it will keep track of virtual machine additions and removals to keep your private zone updated. This requires no work on your part.

You can also designate up to 10 virtual networks as Resolution virtual networks when creating or updating a private zone. Forward DNS queries will resolve against the private zone records from any of these virtual networks. There is no dependency or requirement that the virtual networks be peered for DNS resolution to work across virtual networks.
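A hedged sketch of the registration and resolution concepts with the same preview-era CLI caveats (zone, resource group, and virtual network names are placeholders):

# Create a private zone that auto-registers A records for VMs in the registration virtual network
$az network dns zone create --resource-group MyDnsRG --name private.contoso.com --zone-type Private --registration-vnets myHubVnet --resolution-vnets mySpokeVnet1 mySpokeVnet2

# List the record sets in the private zone, including dynamically registered VM entries
$az network dns record-set list --resource-group MyDnsRG --zone-name private.contoso.com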

Azure DNS Private Zones also supports Reverse DNS queries for the private IP address space of the Registration virtual network.