Sunday 29 April 2018

Organizing subscriptions and resource groups within the Enterprise

One of the questions members of the Core Services Engineering and Operations (CSEO) teams frequently hear from customers at the Executive Briefing Center here in Redmond is: how do our engineering teams secure our Azure footprint for our Line of Business applications while still giving developers the freedom to go fast, maintain visibility into our environment, and use the capabilities of Visual Studio Team Services for CI/CD, Release, and much more?

At the core of this answer is how we use the combination of subscriptions, resource groups, and Role Based Access Control to ensure compliance with a set of guidelines.

Let's start at the top level: Azure subscriptions. CSEO, as you can imagine, has a lot of Line of Business applications, currently over a thousand.

The business unit pattern

In particular, many of our teams have adopted a common mapping of the pattern above to enterprise/federal/state/local. This common vocabulary provides practical constructs that everyone understands and can relate to, ensuring we're all on the same page.

What does this translation look like in reality with examples for subscription organization? It looks like this from the top down:

◈ Enterprise - This stays the same as Enterprise in the Azure scaffold for us. Enterprise level items are common concerns across the entire company – it might be ensuring we don't count internal Azure consumption as public revenue, or how secure we are across all Azure subscriptions in our tenants, or other high-level strategic objectives that we care about regardless of level. Another way to think of this might be how Microsoft reports our global quarterly earnings – it's across the entire company.

◈ Federal - Our major departments are named Federal. For example, CSEO is one of the federal groups. At this level, we may have additional policies and procedures, automation that runs against our footprint, or other things specific to that department. Typically, this is where large budgets have roll-up views, etc.

◈ State - A group of related services or service offerings. For example, the Tax Service Offering within the Finance IT organization. Here you may have additional policies and procedures, for example HIPAA, PCI, or SOX controls and procedures.

◈ Local – This is where a subscription lives and is associated with a service. Each subscription contains multiple applications that are related to delivering the set of functionality that makes up the service. Each application is typically contained in an explicit resource group. The resource group becomes the container for that application, which is part of the service (the subscription). There may sometimes be a shared or common application in the service. The application/resource group level is where the team of application developers lives, and they're accountable for their footprint in Azure in everything they do, from security to optimal Azure spend. A great development team operating at this level solves most of the concerns and roll-up reporting questions that are typically asked from higher levels. If each development team looked at the Azure Security Center blade, pinned dashboards built from Azure Log Analytics, and the Azure Advisor blade on a daily basis, there would be no need for division-wide efforts to reduce spend or bring up patch compliance.
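
As a sketch of the mapping (all names below are illustrative, not real CSEO services), the hierarchy can be modeled as nested containers with local-level concerns rolling up to higher levels:

```python
# Hypothetical sketch of the enterprise/federal/state/local hierarchy.
# All names are illustrative; only the structure mirrors the pattern above.
hierarchy = {
    "enterprise": "Contoso",
    "federal": {
        "CSEO": {                        # department
            "Finance IT": {              # state: related service offerings
                "Tax": {                 # local: one subscription per service
                    "subscription": "tax-prod-sub",
                    "resource_groups": [  # one application per resource group
                        "rg-data-warehouse",
                        "rg-ask-tax",
                        "rg-calc-engine",
                        "rg-common",      # shared/common services
                    ],
                }
            }
        }
    },
}

def count_resource_groups(node):
    """Roll up a local-level concern (here, resource-group count) to any level."""
    if isinstance(node, dict):
        if "resource_groups" in node:
            return len(node["resource_groups"])
        return sum(count_resource_groups(v) for v in node.values())
    return 0

print(count_resource_groups(hierarchy["federal"]))  # 4
```

The same walk works at any level, which is the point of the shared vocabulary: a roll-up question asked at the federal level is just an aggregation over the states and locals beneath it.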

This hierarchical construct allows differing levels of controls and policies while still letting developers go fast. Below is a typical subscription configuration:

An example CSEO Subscription for the Tax Service

Within the resource groups above, the typical components you would see in each of the production resource groups (applications) are the Azure components used to build that specific service, such as:

◈ Azure HDInsight cluster
◈ Storage Accounts
◈ SQL Database
◈ Log Analytics
◈ Application Insights
◈ Etc., etc.

Each resource group has the components specific to that application. On occasion, the subscription might have a Common or Shared Services resource group. These are items that are used across the applications in the service, for example:

◈ Common Log Analytics workspace
◈ Common Blob Storage accounts where files are dumped for processing by the other services
◈ An ExpressRoute VNet

In our CSEO Tax Service there are multiple applications, each in its own resource group, such as the data warehouse, Ask Tax (the help web portal and some bots), a calculation engine, an archiving application, a reporting application, and more.

Within the subscription and resource groups, we use least-privilege access principles to ensure that only the people who need to do the work have access to resources. Therefore, only the engineering owners of the service are the owners of the subscription. No contributors exist on the subscription. Some specific identities are added to the reader role; these are typically accounts used by automated tooling.

Each resource group has only the identities necessary added with the minimum required permissions. We try to avoid creating custom roles, which over the passage of time and with scale create management headaches.

Within the resource groups, the owner and reader roles are inherited from the subscription. The VSTS Build Account identity is added in as a contributor to the resource group for automated deployments. This means that only the service owner and the build identities can touch the production service on a continuous basis.

In the pre-production resource group, the engineering team is added to the reader role. This still means that only the service owner and build accounts can touch pre-production on a continuous basis, but the engineering team can see what's going on in the resource group. If a developer needs to do some work for testing, they can't simply drop it into the pre-prod or prod environments.

There are some variations on this, but they're not common. For instance, some teams might want someone from the security team as the subscription owner, and some teams even remove people from the equation entirely and use a form of service account as the subscription owner. Some teams might give engineers contributor access on pre-prod if their automation isn't fully mature yet. It all depends on the needs of the team.
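
The baseline model described above can be sketched as follows (identities, group names, and role data are made up; real assignments are Azure RBAC role assignments):

```python
# Hypothetical sketch of the least-privilege model described above.
# Role assignments on the subscription are inherited by its resource groups.
subscription_roles = {
    "svc-owner@contoso.com": "Owner",    # engineering owner of the service
    "audit-automation-id": "Reader",     # automated tooling accounts
}

resource_group_roles = {
    "rg-ask-tax-prod": {"vsts-build-account": "Contributor"},
    "rg-ask-tax-ppe":  {"vsts-build-account": "Contributor",
                        "dev-team@contoso.com": "Reader"},
}

def effective_roles(rg):
    """Subscription-level assignments plus the resource group's own."""
    roles = dict(subscription_roles)
    roles.update(resource_group_roles.get(rg, {}))
    return roles

def can_modify(identity, rg):
    return effective_roles(rg).get(identity) in ("Owner", "Contributor")

# Only the owner and the build identity can touch production;
# developers can see pre-production but not change it.
assert can_modify("vsts-build-account", "rg-ask-tax-prod")
assert not can_modify("dev-team@contoso.com", "rg-ask-tax-prod")
assert not can_modify("dev-team@contoso.com", "rg-ask-tax-ppe")
```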

So now that we have these together, what does it mean for typical roles in the organization?

Developers have access to the pre-production resource group to see what's going on in dev/pre-production/uat/whatever-you-want-to-call-non-production-in-your-company, but must get used to using the telemetry mechanisms in the platform for debugging, just as they would in production. While teams are maturing to this level, you may see developers with contributor-level access on the pre-production resource groups. The result of this discipline is typically much richer Application Insights portals and Azure Log Analytics dashboards. As teams mature, they switch to deploying from a CI/CD system such as Visual Studio Team Services, which uses Microsoft Release Management, and get really good at creating build and release definitions. Developers also script out and automate operational concerns like key rotation.

Security & Operations have access via identities with Just-in-Time VM Access through Azure Security Center for IaaS and Privileged Identity Management through Azure AD. Some operations teams use automated tools to interrogate our Azure subscription footprint, looking for configurations that pose risk, for example a resource group containing both a public internet endpoint (PIP) and an ExpressRoute circuit. These teams use the audit account identities added at the subscription level.

Another thing this model implicitly drives is the shift of accountability from any central IT team to the development team. This does not mean a shift of concerns, as security and operations teams still care about compliance and risk. But if the local development team makes looking at billing, Security Center, and the Azure Advisor tools part of their daily standup, then the cost-optimization and security-compliance questions inevitably asked from the enterprise, federal, and state layers will already be addressed.

Friday 27 April 2018

What’s brewing in Visual Studio Team Services: April 2018 Digest

This post series provides the latest updates and news for Visual Studio Team Services and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure. With the rapidly expanding list of features in Team Services, teams can start to leverage it more efficiently for all areas of their Azure workflow, for apps written in any language and deployed to any OS.

Chain related builds together using build completion triggers


Large products have several components that are dependent on each other. These components are often independently built. When an upstream component (a library, for example) changes, the downstream dependencies have to be rebuilt and revalidated. Teams typically manage these dependencies manually.

Now you can trigger a build upon the successful completion of another build. Artifacts produced by an upstream build can be downloaded and used in the later build, and you can also get data from these variables: Build.TriggeredBy.BuildId, Build.TriggeredBy.BuildDefinitionId, Build.TriggeredBy.BuildDefinitionName.
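
Inside a script step of the triggered build, these variables surface as environment variables with dots replaced by underscores and names upper-cased. A small sketch of consuming them (the fallback message is just for local runs):

```python
import os

# In VSTS, pipeline variables are exposed to scripts as environment
# variables with dots replaced by underscores and names upper-cased,
# e.g. Build.TriggeredBy.BuildId -> BUILD_TRIGGEREDBY_BUILDID.
def triggered_by():
    return {
        "build_id": os.environ.get("BUILD_TRIGGEREDBY_BUILDID"),
        "definition_id": os.environ.get("BUILD_TRIGGEREDBY_BUILDDEFINITIONID"),
        "definition_name": os.environ.get("BUILD_TRIGGEREDBY_BUILDDEFINITIONNAME"),
    }

info = triggered_by()
if info["build_id"] is None:
    print("Not triggered by an upstream build (manual or CI trigger).")
else:
    print(f"Triggered by build {info['build_id']} "
          f"from definition {info['definition_name']}")
```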

This feature was prioritized based on what is currently the second-highest voted suggestion with 1,129 votes.

Keep in mind that in some cases, a single multi-phase build could meet your needs. However, a build completion trigger is useful if your requirements include different configuration settings, options, or a different team to own the dependent process.

Trigger CI builds from YAML


You can now define your continuous integration (CI) trigger settings as part of your YAML build definition file. By default, when you push a new .vsts-ci.yml file to your Git repository, CI will be configured automatically for all branches.

To limit the branches that trigger a build, add the following to your file; this triggers builds on pushes to master or to any branch matching the releases/* pattern.

trigger:
- master
- releases/*

If you want to disable the trigger or override the trigger settings in the YAML files, you can do so on the definition.
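
The branch filters behave like simple wildcard patterns. A conceptual sketch of the matching (this mimics the behavior, it is not the VSTS implementation):

```python
from fnmatch import fnmatch

trigger_branches = ["master", "releases/*"]  # mirrors the YAML above

def should_trigger(branch):
    """A push triggers CI if the branch matches any listed pattern."""
    return any(fnmatch(branch, pattern) for pattern in trigger_branches)

assert should_trigger("master")
assert should_trigger("releases/2018.04")
assert not should_trigger("feature/new-ui")
```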

Streamline deployment to Kubernetes using Helm


Helm is a tool that streamlines installing and managing Kubernetes applications, and it has gained a lot of popularity and community support in the last year. A Helm task in Release is now available for packaging and deploying Helm charts to Azure Container Service (AKS) or any other Kubernetes cluster.

VSTS already has support for Kubernetes and Docker containers. With the addition of this Helm task, now you can set up a Helm based CI/CD pipeline for delivering containers into a Kubernetes cluster.

Continuously deploy to Azure Database for MySQL


You can now continuously deploy to Azure Database for MySQL - Azure’s MySQL database as a service. Manage your MySQL script files in version control and continuously deploy as part of a release pipeline using a native task rather than PowerShell scripts.

Configure Go and Ruby applications using Azure DevOps Projects


Azure DevOps Projects makes it easy to get started on Azure. It helps you launch an application on the Azure service of your choice in just a few steps. DevOps Projects sets up everything you need for developing, deploying, and monitoring your app. Now you can set up an entire DevOps pipeline for Go and Ruby applications too. See the Deploy to Azure documentation for more information.

Deploy Ruby on Rails applications


A new Azure App Service release definition template now includes the tasks needed for deploying Ruby on Rails applications to Azure Web App on Linux. When this release definition template is used, the App Service Deploy task is pre-populated with an inline deployment script that has bundler (the Ruby dependency manager) install the application's dependencies.

Build applications written in Go


Now you can build your Go applications in VSTS. Use the Go Tool Installer task to install one or more versions of Go Tool on the fly. This task acquires a specific version of Go Tool needed by your project and adds it to the PATH of the build agent. If the targeted Go Tool version is already installed on the agent, this task will skip the process of downloading and installing it again. The Go task helps you download dependencies, build, or test your application. You can also use this task to run a custom Go command of your choice.

Deployment Groups are generally available


We are excited to announce that Deployment Groups is out of preview and is now generally available. Deployment Groups is a robust out-of-the-box multi-machine deployment feature of Release Management in VSTS/TFS.

With Deployment Groups, you can orchestrate deployments across multiple servers and perform rolling updates, while ensuring high availability of your application throughout. You can also deploy to servers on-premises or virtual machines on Azure or any cloud, plus have end-to-end traceability of deployed artifact versions down to the server level.
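
The rolling-update idea can be sketched as deploying in fixed-size batches so most targets stay in service at any moment (server names, batch size, and the deploy callback are illustrative):

```python
def rolling_deploy(servers, batch_size, deploy):
    """Deploy to `batch_size` servers at a time; the rest keep serving traffic."""
    for i in range(0, len(servers), batch_size):
        batch = servers[i:i + batch_size]
        for server in batch:
            deploy(server)  # e.g. run the deployment phase's tasks on the target

deployed = []
servers = [f"web{n:02d}" for n in range(1, 7)]  # web01 .. web06
rolling_deploy(servers, batch_size=2, deploy=deployed.append)
assert deployed == servers  # every target updated, two at a time
```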

Agent-based deployment relies on the same build and deployment agents your builds and releases already use, which means you can use the full task catalog on your target machines in the Deployment Group phase. From an extensibility perspective, you can also use the REST APIs for deployment groups and targets for programmatic access.

Improve code quality with the latest extensions from SonarSource


SonarSource recently released an updated SonarQube extension and a new SonarCloud extension, which enable static code analysis for numerous languages. The VSTS Gradle and Maven tasks take advantage of these extensions for Java builds in particular. Just enable Run SonarQube or SonarCloud Analysis on version 2.* of the Gradle or Maven task, then add the Prepare and Publish SonarQube/SonarCloud tasks as shown below.

Publish markdown files from a Git repository as a Wiki


Developers often create documentation for APIs, SDKs, and code in their repositories, and readers then need to sift through code to find it. Now you can simply publish markdown files from code repositories and host them in Wiki.

From within Wiki, start by clicking Publish code as wiki. Next, you can specify a folder in a Git repository that should be promoted.

Once you click Publish, all the markdown files under the selected folder are published as a wiki. The head of the branch is mapped to the wiki, so any changes you push to the Git repo are reflected immediately.

Integrate Power BI with VSTS Analytics using views


We are excited to announce an easy-to-use solution for integrating Power BI with the VSTS Analytics extension. You don't have to know how to write OData queries anymore! Our new Analytics views feature makes getting VSTS work tracking data into Power BI simple, and it works for the largest accounts. Similar to a work item query, an Analytics view specifies filters that scope the work items returned and the columns included. Additionally, views allow you to report on past revisions of work items and easily create trend reports.

We provide a set of default Analytics views that work well for smaller accounts and basic scenarios. Larger accounts might need to scope down the data they pull into Power BI, and Analytics views let you do just that: scope your data and history to exactly what you want to report on. Views you create in the Analytics hub in VSTS are immediately available to select from the VSTS Power BI Data Connector, and you can edit the default views or create new ones to fine-tune the records, fields, and history returned to Power BI.
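
Conceptually, an Analytics view is a saved filter plus a column selection over work item records. A toy sketch of that idea (field names are illustrative, and the real connector works over OData, not Python):

```python
# Illustrative work item records; field names are made up for the sketch.
work_items = [
    {"id": 1, "type": "Bug",  "area": "Payments", "state": "Active"},
    {"id": 2, "type": "Task", "area": "Payments", "state": "Closed"},
    {"id": 3, "type": "Bug",  "area": "Web",      "state": "Active"},
]

def apply_view(items, filters, fields):
    """Scope records with the filters, then project only the selected columns."""
    scoped = [it for it in items
              if all(it.get(k) == v for k, v in filters.items())]
    return [{f: it[f] for f in fields} for it in scoped]

bugs_view = apply_view(work_items,
                       filters={"type": "Bug", "area": "Payments"},
                       fields=["id", "state"])
assert bugs_view == [{"id": 1, "state": "Active"}]
```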

Wednesday 25 April 2018

Spring Data Azure Cosmos DB: NoSQL data access on Azure

We are pleased to announce that Spring Data Azure Cosmos DB is now available to provide essential Spring Data support for Azure Cosmos DB using SQL API. Azure Cosmos DB is Microsoft’s globally distributed, multi-model database service with exceptional scalability and performance.

With Spring Data Azure Cosmos DB, Java developers can now get started quickly building NoSQL data access for their apps on Azure. It offers a Spring-based programming model for data access while keeping the special traits of the underlying data store in Azure Cosmos DB. Features of Spring Data Azure Cosmos DB include a POJO-centric model for interacting with an Azure Cosmos DB Collection and an extensible, repository-style data access layer.

Getting started


Download the Spring Data Azure Cosmos DB Sample Project to get started. The sample illustrates how to use annotations to interact with a Collection, customize a query operation with specific fields, and expose a discoverable REST API for clients.

Create a new database instance

To get started, first create a new database instance using the Azure portal. You can find Azure Cosmos DB under Databases; choose SQL (DocumentDB) for the API. When your database has been created, you can find the URI and keys on the overview page; these values will be used to configure your Spring Boot application.

Configure your project

Create a simple Spring Boot application using Spring Initializr, then locate the pom.xml file in your app's directory and add spring-data-cosmosdb to the list of dependencies. spring-data-cosmosdb is published in the Maven Central Repository.

<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>spring-data-cosmosdb</artifactId>
    <version>2.0.3</version>
</dependency>

Features of Spring Data Azure Cosmos DB

Using Spring Data Azure Cosmos DB, you can get started quickly building NoSQL data access for your apps on Azure.

Use annotations to interact with a Collection

@Id annotation: annotate a field in the domain class with @Id and it will be mapped to the document id in Azure Cosmos DB.

@Document annotation: by default, the collection name is the name of the domain class. To customize it, add the annotation @Document(collection="yourCollectionName").

@Document(collection = "mycollection")
public class User {
    @Id
    private String id;
    private String email;
    private String name;
    private Address address;
    private List<Role> roleList;
   ...
}

Customize query operation


Custom queries are useful for building constrained queries over the entities of the repository. You can extend the basic DocumentDbRepository for different business logic.

public interface UserRepository extends DocumentDbRepository<User, String> {

    List<User> findByName(String firstName);
    List<User> findByEmailAndAddress(String email, Address address);
    ...
}

Expose a discoverable REST API


@RepositoryRestResource annotation: exposes a discoverable REST API for your domain model.

@Repository
@RepositoryRestResource(collectionResourceRel = "user", path = "user")
public interface UserRepository extends DocumentDbRepository<User, String> {

    List<User> findByName(String firstName);
    ...

}

Saturday 21 April 2018

Azure DDoS Protection for virtual networks generally available

We are excited to announce the general availability of the Azure DDoS Protection Standard service in all public cloud regions. This service is integrated with Azure Virtual Networks (VNet) and provides protection and defense for Azure resources against the impacts of DDoS attacks.

Distributed Denial of Service (DDoS) attacks are intended to disrupt a service by exhausting its resources (e.g., bandwidth, memory). DDoS attacks are one of the top availability and security concerns voiced by customers moving their applications to the cloud. With extortion and hacktivism being the common motivations behind DDoS attacks, they have been consistently increasing in type, scale, and frequency of occurrence as they are relatively easy and cheap to launch.

These concerns are justified, as the number of documented DDoS amplification attacks increased by more than 357 percent in the fourth quarter of 2017 compared to 2016, according to data from Nexusguard. Further, more than 56 percent of all attacks exploit multiple vector combinations. In February 2018, GitHub was attacked via a reflection exploit in Memcached, generating 1.35 terabits per second of attack traffic, the largest DDoS attack ever recorded.

As the types and sophistication of network attacks increase, Azure is committed to providing customers with solutions that continue to protect the security and availability of applications on Azure. Security and availability in the cloud are a shared responsibility: Azure provides platform-level capabilities and design best practices for customers to adopt and apply in application designs that meet their business objectives.

Azure DDoS Protection Service offerings


Azure has two DDoS service offerings that provide protection from network attacks (Layers 3 and 4): DDoS Protection Basic and DDoS Protection Standard.

Azure DDoS Protection Basic service


Basic protection is integrated into the Azure platform by default and at no additional cost. The full scale and capacity of Azure’s globally deployed network provides defense against common network layer attacks through always-on traffic monitoring and real-time mitigation. No user configuration or application changes are required to enable DDoS Protection Basic. Basic protection also defends against the most common, frequently occurring Layer 7 DNS Query Floods and volumetric attacks that target your Azure DNS zones. This service also has a proven track record in protecting Microsoft’s enterprise and consumer services from large scale attacks.

Azure DDoS Protection Standard Service


Azure DDoS Protection Standard provides enhanced DDoS mitigation capabilities for your application and resources deployed in your virtual networks. Protection is simple to enable on any new or existing virtual network and requires no application or resource changes. DDoS Protection Standard utilizes dedicated monitoring and machine learning to configure DDoS protection policies tuned to your virtual network traffic profiles. Attack telemetry is available through Azure Monitor, enabling alerting when your application is under attack. Integrated Layer 7 application protection can be provided by Application Gateway WAF.

Azure DDoS Protection Standard service features


Native platform integration and turn-key protection

DDoS Protection Standard is natively integrated into the Azure platform and is configured through the Azure portal or PowerShell: you create a DDoS protection plan and enable DDoS Standard on a virtual network. Simplified provisioning immediately protects all resources in the virtual network, with no additional application changes required.

Always-on monitoring and adaptive tuning

When DDoS Protection Standard is enabled, your application traffic patterns are continuously monitored for indicators of attacks. DDoS Protection understands your resources and resource configuration and customizes the DDoS Protection policy to your virtual network. Machine learning algorithms set and adjust protection policies as traffic patterns change over time.
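
The tuning idea can be pictured with a toy detector that keeps a moving baseline of traffic and flags rates far above it (the real service's algorithms are far more sophisticated; the alpha and factor values here are arbitrary):

```python
def make_detector(alpha=0.2, factor=5.0):
    """Toy adaptive threshold: EWMA baseline, alert when traffic >> baseline."""
    baseline = None
    def observe(packets_per_sec):
        nonlocal baseline
        if baseline is None:
            baseline = packets_per_sec   # seed the baseline from first sample
            return False
        alert = packets_per_sec > factor * baseline
        if not alert:                    # only learn from normal-looking traffic
            baseline = alpha * packets_per_sec + (1 - alpha) * baseline
        return alert
    return observe

observe = make_detector()
normal = [1000, 1100, 950, 1050]         # ordinary traffic adjusts the baseline
assert not any(observe(x) for x in normal)
assert observe(500_000)                  # a sudden flood stands out
```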

L7 protection with Application Gateway

Azure DDoS Protection service in combination with Application Gateway Web Application Firewall provides DDoS Protection for common web vulnerabilities and attacks.

◈ Request rate-limiting
◈ HTTP protocol violations
◈ HTTP protocol anomalies
◈ SQL injection
◈ Cross site scripting

DDoS Protection Standard enabled on a Web application firewall VNet

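
Request rate-limiting, the first item in the list above, can be pictured as a per-client sliding-window counter. A toy sketch, not how Application Gateway actually implements it (limit and window values are arbitrary):

```python
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Toy rate limiter: allow at most `limit` requests per `window` seconds."""
    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)   # client -> timestamps of recent hits

    def allow(self, client, now):
        q = self.hits[client]
        while q and now - q[0] >= self.window:
            q.popleft()                  # drop hits that fell out of the window
        if len(q) < self.limit:
            q.append(now)
            return True
        return False                     # over the limit: drop or challenge

limiter = SlidingWindowLimiter(limit=3, window=1.0)
results = [limiter.allow("10.0.0.5", t) for t in (0.0, 0.1, 0.2, 0.3, 1.2)]
assert results == [True, True, True, False, True]
```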

DDoS Protection telemetry, monitoring, and alerting

Rich telemetry is exposed via Azure Monitor, including detailed metrics for the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with Splunk (via Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.

SLA guarantee and cost protection

The DDoS Protection Standard service is covered by a 99.99% SLA, and cost protection provides resource credits for scale-out during a documented attack.

Protection planning


Planning and preparing for a DDoS attack is crucial to understanding the availability and responsiveness of an application during an actual attack. Organizations should also establish a well-vetted DDoS incident management response plan.

To assist in this planning, we have published an end-to-end DDoS Protection Best Practices & Reference Architecture guide, and we encourage all customers to apply those practices when designing applications for resiliency against DDoS attacks in Azure.

We have also partnered with BreakingPoint Cloud to offer tooling for Azure customers to generate traffic load against DDoS Protection-enabled public endpoints to simulate attacks. BreakingPoint Cloud simulation will allow you to:

◈ Validate how Microsoft Azure DDoS Protection protects your Azure resources from DDoS attacks
◈ Optimize your incident response process while under DDoS attack
◈ Document DDoS compliance
◈ Train your network security teams

Friday 20 April 2018

Announcing new Azure Security Center capabilities at RSA 2018

Migrating your workloads to the cloud can enable some inherent security benefits. With cloud scale machine learning and security analytics, you can mitigate threats quickly, making your environment more secure and your organization more productive.

Azure Security Center provides centralized visibility into the security state of your resources and uses the collective intelligence from machine learning and advanced analytics to not only detect threats quickly but to help you prevent them. Its agent-based approach helps you gain deeper security insights from your workloads and extends these protections to workloads running on-premises as well as in other clouds, providing unified security management.

Today we are excited to announce several capabilities in Azure Security Center that will provide enhanced protection to help you keep pace with the evolving cybersecurity landscape:

Visibility and governance at the organizational level


Take advantage of a new overview dashboard to gain visibility into your security state at an organizational level instead of a subscription level. To help address the challenges of managing an organization-wide security posture, you can now set security policies for management groups in your organization and monitor them with an organization-wide compliance score, as well as a breakdown score per subscription and management group.

Improve your productivity


Integrated security configuration in the virtual machine experience: securing your IaaS resources is important, which is why we've made it even simpler for you to do. As you create virtual machines in Azure, security configuration is now integrated into the virtual machine experience. In just a few clicks, you can enable Security Center, quickly assess the security state of your virtual machine, get actionable recommendations, and mitigate risks.

An Identity & Access Management section will make it easier to discover if you have enabled access controls, such as multifactor authentication, for your applications and data. You can also discover identity and access issues and receive instructions for remediation.

Reduce your exposure to threats


Just-in-time VM access general availability: previously in preview, just-in-time VM access is generally available today. It protects against threats such as brute-force attacks by allowing access to virtual machine management ports only when it is needed.

Adaptive application controls: using machine learning, Security Center recommends applications that should be whitelisted. Two new improvements are available in preview today. First, you can get recommendations for new file types such as MSIs and scripts. Second, you can group virtual machines based on the similarity of the applications running on them. Both enhancements improve the accuracy of the whitelisting policy that Security Center recommends for the virtual machines in a specific workload and make it even easier for you to block unwanted applications and malware.
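
The grouping idea can be pictured with a set-similarity measure such as the Jaccard index. A toy sketch (VM names, application lists, and the threshold are made up; Security Center's actual clustering is more sophisticated):

```python
def jaccard(a, b):
    """Similarity of two application sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

vms = {
    "web01": {"nginx", "python", "supervisord"},
    "web02": {"nginx", "python", "supervisord", "curl"},
    "sql01": {"sqlservr", "backup-agent"},
}

def group_similar(vms, threshold=0.5):
    """Greedily place each VM into the first group it is similar enough to."""
    groups = []
    for name, apps in vms.items():
        for group in groups:
            if any(jaccard(apps, vms[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

assert group_similar(vms) == [["web01", "web02"], ["sql01"]]
```

Machines that end up in the same group can then share one whitelisting policy, which is the accuracy gain the feature is after.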

Interactive network security monitoring: Get visibility into the network components within your virtual networks in Azure from a new interactive topology. You can explore the connections between your virtual networks, subnets and nodes. You get actionable recommendations if vulnerabilities such as missing network security groups or web application firewalls are detected so you can take the appropriate next step.

File integrity monitoring (FIM): to help protect the integrity of your system and application software, Security Center continuously monitors the behavior of your registry and configuration files. If an abnormal change to the files or malicious behavior is detected, Security Center alerts you so that you can stay in control of your files.
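
Conceptually, file integrity monitoring boils down to comparing current content hashes against a recorded baseline. A minimal sketch of that idea (not Security Center's implementation; the file here is a throwaway stand-in):

```python
import hashlib
import os
import tempfile

def fingerprint(path):
    """Content hash used to detect any change to a monitored file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def detect_changes(baseline):
    """Return the monitored paths whose current hash differs from the baseline."""
    return [p for p, digest in baseline.items() if fingerprint(p) != digest]

# Demonstrate with a temporary file standing in for a config file.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".conf") as f:
    f.write("port = 443\n")
    path = f.name

baseline = {path: fingerprint(path)}
assert detect_changes(baseline) == []      # nothing has changed yet

with open(path, "a") as f:                 # simulate tampering
    f.write("port = 8080\n")
assert detect_changes(baseline) == [path]  # the change is flagged
os.remove(path)
```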

Extending threat protection to containers: you can now get visibility into the security posture of your container environments and monitor for insecure configurations on the container engine.

New secure configuration assessments for servers: A new web security configuration assessment helps you find vulnerabilities in your IIS web servers running on IaaS VMs and provides actionable recommendations to mitigate the risks.

Quickly detect and respond to threats


Integration with Windows Defender Advanced Threat Protection for servers (WDATP): Security Center now harnesses the power of WDATP to provide improved threat detection for Windows Servers. Microsoft’s vast threat intelligence enables WDATP to identify and notify you of attackers’ tools and techniques, so you can understand threats and respond. To uncover more information about a breach, you can explore the details in the interactive Investigation Path within the Security Center blade. To get started, WDATP is automatically enabled for Azure and on-premises Windows Servers that have been onboarded to Security Center.


Fileless Attack Detection: Security Center uses a variety of advanced memory forensic techniques to identify malware that persists only in memory and is not detected via traditional means. You can use the rich set of contextual information for alert triage, correlation, analysis and pattern extraction.

Threat analytics for admin activity: Security Center can now detect threats targeting your admin activity by analyzing the Azure Resource Management logs. If something abnormal is attempted or permissive privileges have been granted, you will be alerted and can investigate the activity.

Security Center is also extending its threat detection capabilities to PaaS resources. It can now detect threats targeting Azure App Services and provide recommendations to protect your applications.

New partner integrations


Security Center integrates with many partner solutions. We are excited to announce the integration with new partner solutions from Palo Alto and McAfee.

Palo Alto: This integration lets you provision the Palo Alto VM-Series Next-Generation Firewall directly from the Security Center blade, with integrated threat detection and unified health monitoring of the firewall, simplifying deployment.

McAfee: Security Center also supports the discovery and health status reporting of McAfee anti-malware on Windows machines, so you can now receive recommendations from another third-party antimalware service and mitigate potential issues.

Azure Security Center released several new capabilities today that will provide you with better insight, more control, and confidence to keep pace in this ever-changing cybersecurity landscape. Learn how one of our customers, Icertis, achieved better security and gained more productivity by saving 30 percent of operations time by using Azure Security Center.

Wednesday 18 April 2018

Automatic tuning introduces Automatic plan correction and T-SQL management

Azure SQL Database automatic tuning, the industry’s first truly auto-tuning database service based on artificial intelligence, now provides more flexibility and power with the global introduction of the following new features:

◈ Automatic plan correction
◈ Management via T-SQL
◈ Index created by automatic tuning flag

Automatic tuning is capable of seamlessly tuning hundreds of thousands of databases without affecting performance of the existing workloads. The solution has been globally available since 2016 and proven to enable performant and stable workloads while reducing resource consumption on Azure.

Automatic plan correction


Automatic plan correction, a feature introduced in SQL Server 2017, is now making its way to Azure SQL Database as the tuning option Force Last Good Plan. This decision was made after rigorous testing on hundreds of thousands of SQL databases, ensuring an overall positive performance gain for workloads running on Azure. This feature shines when managing hundreds of thousands of databases and heavy workloads.

The automatic tuning feature continuously monitors SQL Database workloads and, with the automatic plan correction option Force Last Good Plan, automatically tunes regressed query execution plans by forcing the last known good plan. Because the system continuously monitors workload performance, it dynamically adjusts to force the best-performing query execution plan as workloads change.

The system automatically validates every tuning action performed to ensure that each one results in a positive performance gain. If a tuning action degrades performance, the system learns from it and promptly reverts the action. Tuning actions performed by automatic tuning can be viewed in the list of recent tuning recommendations through the Azure portal and T-SQL queries.

Manage automatic tuning via T-SQL


Recognizing the needs of the large community of professionals who use T-SQL and scripted procedures to manage their databases, the automatic tuning team has developed a new feature that makes it possible to enable, disable, configure, and view current and historical tuning recommendations using T-SQL. This makes it possible to develop custom solutions for managing automatic tuning, including custom monitoring, alerting, and reporting capabilities.

In the upcoming sections, this blog post outlines a few examples on how to use some of the T-SQL capabilities we have made available to you.

Viewing the current automatic tuning configuration via T-SQL


To view the current state of the automatic tuning options configured on an individual database, connect to the SQL database using a tool such as SSMS and execute the following query against the system view sys.database_automatic_tuning_options:

SELECT * FROM sys.database_automatic_tuning_options

The resulting output contains the values of the automatic tuning options system view, as shown in the following example:


The name column shows that there are three automatic tuning options available:

◈ FORCE_LAST_GOOD_PLAN
◈ CREATE_INDEX
◈ DROP_INDEX

Column desired_state indicates settings for an individual tuning option, with its description available in the column desired_state_desc. Possible values for desired_state are 0 = OFF and 1 = ON for custom settings, and 2 = DEFAULT for inheriting settings from the parent server or Azure platform defaults.

Values of the column desired_state_desc indicate if an individual automatic tuning option is set to ON, OFF, or inherited by DEFAULT (corresponding to their numerical values in the column desired_state). The column actual_state indicates if the automatic tuning option is actually working on a database with value 1 indicating it is, and with value 0 indicating it is not working.

Please note that even if you have one of the automatic tuning options set to ON, the system might temporarily disable automatic tuning if it deems this necessary to protect workload performance. Likewise, if Query Store is not enabled on a database, or if it is in a read-only state, automatic tuning is temporarily disabled. In either case, the view of the current state will indicate “Disabled by the system” and the value of the column actual_state will be 0.

The last part of the sys.database_automatic_tuning_options system view indicates in the columns reason and reason_desc if each of the individual automatic tuning options is configured through defaults from Azure, defaults from the parent server, or if it is custom configured. In case of inheriting Azure platform defaults, column reason will have value 2 and column reason_desc value AUTO_CONFIGURED. In case of inheriting parent server defaults, column reason will have value 1 and column reason_desc value INHERITED_FROM_SERVER. In case of a custom setting for an individual automatic tuning option, both columns reason and reason_desc will have the value NULL.
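Putting the columns described above together, a query that projects only the descriptive columns gives a compact, human-readable summary of the configuration. The following is a minimal sketch against the same system view:

```sql
SELECT name,                -- FORCE_LAST_GOOD_PLAN, CREATE_INDEX, or DROP_INDEX
       desired_state_desc,  -- ON, OFF, or DEFAULT (inherited)
       actual_state_desc,   -- whether the option is actually working right now
       reason_desc          -- AUTO_CONFIGURED, INHERITED_FROM_SERVER, or NULL for custom settings
FROM sys.database_automatic_tuning_options;
```

Comparing desired_state_desc with actual_state_desc in this output is a quick way to spot options that are set to ON but have been temporarily disabled by the system.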

Enable automatic tuning via T-SQL


In order to enable automatic tuning on a single database with inheriting Azure configuration defaults, execute a query such as this one:

ALTER DATABASE current SET AUTOMATIC_TUNING = AUTO /* possible values AUTO, INHERIT and CUSTOM */

Possible values to enable automatic tuning are AUTO, INHERIT and CUSTOM. Setting the automatic tuning value to AUTO will apply Azure configuration defaults for automatic tuning. Using the value INHERIT will inherit the default configuration from the parent server. This is especially useful if you would like to customize automatic tuning configuration on a parent server, and have all the databases on such server INHERIT these custom settings.

Please note that for the inheritance to work, the three individual tuning options FORCE_LAST_GOOD_PLAN, CREATE_INDEX, and DROP_INDEX need to be set to DEFAULT. This is because one or more of these individual tuning options could have been custom configured, leaving a mix of DEFAULT and explicitly forced ON or OFF settings in place.

Using the value CUSTOM, you will need to manually custom configure each of the automatic tuning options available.

Custom configuring automatic tuning options via T-SQL


The automatic tuning options that can be custom configured, independent of the Azure platform and parent server defaults, are FORCE_LAST_GOOD_PLAN, CREATE_INDEX, and DROP_INDEX. These options can be custom configured by executing a query such as this one:

ALTER DATABASE current
SET AUTOMATIC_TUNING (
    FORCE_LAST_GOOD_PLAN = [ON | OFF | DEFAULT],
    CREATE_INDEX = [ON | OFF | DEFAULT],
    DROP_INDEX = [ON | OFF | DEFAULT]
)

Possible values for each of the three options are ON, OFF, and DEFAULT. Setting an individual tuning option to ON will custom configure it to be explicitly turned on, while setting it to OFF will custom configure it to be explicitly turned off. Setting an individual tuning option to DEFAULT will make that option inherit its default value from the Azure platform or the parent server, depending on whether AUTOMATIC_TUNING is set to AUTO or INHERIT, as described above.

If you are using Azure defaults, please note that the current Azure defaults have FORCE_LAST_GOOD_PLAN and CREATE_INDEX turned ON, while DROP_INDEX is turned OFF by default. We made this decision because the DROP_INDEX option, when turned ON, drops unused or duplicated user-created indexes. Our aim was to protect user-defined indexes and let users explicitly choose whether they would like automatic tuning to manage dropping indexes as well.

If you would like to use the DROP_INDEX tuning option, please set the DROP_INDEX option to ON through executing the following query:

ALTER DATABASE current SET AUTOMATIC_TUNING (DROP_INDEX = ON) /* Possible values DEFAULT, ON, OFF */

The resulting output will denote in the columns desired_state and desired_state_desc that the DROP_INDEX option has been set to ON:


Reverting back automatic tuning from custom to inheriting defaults via T-SQL


Please note that once you set AUTOMATIC_TUNING to CUSTOM and manually configure each of the three automatic tuning options (FORCE_LAST_GOOD_PLAN, CREATE_INDEX, and DROP_INDEX) to ON or OFF, reverting back to the default inheritance requires you to:

◈ set AUTOMATIC_TUNING back to AUTO or INHERIT, and also
◈ set each tuning option (FORCE_LAST_GOOD_PLAN, CREATE_INDEX, and DROP_INDEX) back to DEFAULT.

This is because automatic tuning always respects decisions users explicitly made while customizing options and never overrides them. You will therefore first need to set the preference to inherit the values from Azure or the parent server, and then set each of the three available tuning options to DEFAULT, by executing the following query:

ALTER DATABASE current SET AUTOMATIC_TUNING = AUTO /* Possible values AUTO for Azure defaults and INHERIT for server defaults */
ALTER DATABASE current SET AUTOMATIC_TUNING (
FORCE_LAST_GOOD_PLAN = DEFAULT,
CREATE_INDEX = DEFAULT,
DROP_INDEX = DEFAULT
)

Executing the above query sets all three automatic tuning options to inherit defaults from Azure, with the following output:


View tuning recommendations and history via T-SQL


In order to view the history of recent automatic tuning recommendations, you can retrieve this information from the system view sys.dm_db_tuning_recommendations through executing the following query:

SELECT * FROM sys.dm_db_tuning_recommendations 

Output from this view provides detailed information on the current state of automatic tuning:


To highlight some of the values available in this view, we will start with the type of recommendation and the reason it was made. The column type indicates the type of tuning recommendation, and the column reason indicates why that particular recommendation was made.

The columns valid_since and last_refresh indicate when a tuning recommendation was made and the most recent time at which the system still considered the recommendation beneficial to workload performance.

The column state provides a JSON document with details of the automatically applied recommendation, including a wealth of information related to index management and query execution plans.

If a tuning recommendation can be executed automatically, the column is_executable_action will be populated with a bit value of 1. Recommendations flagged with 1 in the column is_revertable_action can be automatically reverted by the system if required. The column execute_action_start_time provides the timestamp at which a tuning recommendation was applied.
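Because the state column contains JSON, you can combine this view with the built-in JSON functions to produce a focused report on recent tuning activity. The following query is a sketch; the $.currentValue path applies to the state document, and the exact shape of the JSON can vary by recommendation type:

```sql
SELECT name,
       type,                                        -- type of tuning recommendation
       reason,                                      -- why the recommendation was made
       JSON_VALUE(state, '$.currentValue') AS current_state,
       is_executable_action,                        -- 1 = can be executed automatically
       is_revertable_action,                        -- 1 = can be reverted by the system
       execute_action_start_time                    -- when the recommendation was applied
FROM sys.dm_db_tuning_recommendations;
```

A query like this can feed custom monitoring or alerting, for example by filtering on current_state to surface only recommendations that are still active.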

System created indexes column


The automatic tuning team has also added a new column, auto_created, to the system view sys.indexes. It contains a bit indicating whether an index was created by automatic tuning, making it possible to clearly distinguish between system-created and user-created indexes on a database.

The column auto_created holds a bit value of 0 or 1, where 0 indicates that the index was created by a user and 1 indicates that it was created by automatic tuning.

With this flag, customers can differentiate between user-created and automatic tuning-created indexes, which behave differently. When an index created by automatic tuning exists on a column that a user wants to drop, the automatic tuning index moves out of the way; a user-created index would block the operation.
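To list only the indexes that automatic tuning has created on the current database, you can filter sys.indexes on this new flag. A minimal sketch:

```sql
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       i.type_desc              AS index_type
FROM sys.indexes AS i
WHERE i.auto_created = 1;       -- 1 = created by automatic tuning
```

This is a convenient way to audit which indexes on a database are managed by automatic tuning rather than by your own deployment scripts.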

Monday 16 April 2018

SQL Database: Long-term backup retention preview includes major updates

The preview for long-term backup retention in Azure SQL Database was announced in October 2016, providing you with a way to easily manage long-term retention for your databases – up to 10 years – with backups stored in your own Azure Backup Service Vault.

Based upon feedback gathered during the preview, we are happy to announce a set of major enhancements to the long-term backup retention solution. With this update we have eliminated the need for you to deploy and manage a separate Backup Service Vault. Instead, SQL Database will utilize Azure Blob Storage under the covers to store and manage your long-term backups. This new design will enable flexibility for your backup strategy, and overall more control over costs.

This update brings you the following additional benefits:

◈ More regional support – Long-term retention will be supported in all Azure regions and national clouds.
◈ More flexible backup policies – You can customize the frequency of long-term backups for each database with policies covering weekly, monthly, yearly, and specific week-within-a-year backups.
◈ Management of individual backups – You can delete backups that are not critical for compliance.
◈ Streamlined configuration – No need to provision a separate backup service vault.

What happens with your existing long-term backup retention policies?


Your existing backups will be automatically transitioned to the SQL Database managed RA-GRS storage containers.

◈ All existing long-term backups are already copied from your recovery vaults to the new storage containers free of charge.

◈ The new API that supports the enhanced feature set will be available in parallel with the existing API until May 31, 2018. You are expected to update your configuration scripts to the new API by that deadline.

Note: backups associated with servers that have already been dropped are not migrated.

The portal experience is updated to support the additional LTR capabilities as illustrated by the following image. If you configured your long-term retention policy using the portal, no actions are expected from you.

The following diagram illustrates how you can configure a new long-term retention policy for a database.


The long-term policies for individual databases are shown in a single table as illustrated by the next diagram.


The next diagram illustrates how you can restore a specific long-term backup.


How will this impact your bill?


If you are using the existing LTR preview you will notice a new charge on your bill with the name LTR backup storage. At the same time, you no longer will be billed for the backups in recovery vaults. The new LTR solution is more cost efficient, which can mean lower overall long-term backup retention storage costs. In addition, the added flexibility in the backup retention policy helps you reduce costs even further by letting you select less frequent backups, e.g. once a month or once a year, or by deleting individual backups that you don’t need. If you are new to LTR and just configured your first LTR policy, your next monthly bill will include the LTR backup storage charges.

Does long-term retention impact my GDPR compliance?


If a backup contains personal data that is subject to the General Data Protection Regulation (GDPR), you are required to apply enhanced security measures to protect the data from unauthorized access. In order to comply with GDPR, you need a way to manage the data requests of data owners without having to access backups. This layer of protection for the personal data stored in backups can be achieved by storing only "pseudonymized" data in backups. That way, if data about a person needs to be deleted or updated, doing so will not require deleting or updating the existing backups.

Friday 13 April 2018

Three common analytics use cases with Microsoft Azure Databricks

Data science and machine learning can be applied to solve many common business scenarios, yet there are many barriers preventing organizations from adopting them. Collaboration between data scientists, data engineers, and business analysts and curating data, structured and unstructured, from disparate sources are two examples of such barriers - and we haven’t even gotten to the complexity involved when trying to do these things with large volumes of data.

Recommendation engines, churn analysis, and intrusion detection are common scenarios that many organizations are solving across multiple industries. They require machine learning, streaming analytics, and utilize massive amounts of data processing that can be difficult to scale without the right tools. Companies like Lennox International, E.ON, and renewables.AI are just a few examples of organizations that have deployed Apache Spark™ to solve these challenges using Microsoft Azure Databricks.

Your company can enable data science with high-performance analytics too. Designed in collaboration with the original creators of Apache Spark, Azure Databricks is a fast, easy, and collaborative Apache Spark™ based analytics platform optimized for Azure. Azure Databricks is integrated with Azure through one-click setup and provides streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts. Native integration with Azure Blob Storage, Azure Data Factory, Azure Data Lake Store, Azure SQL Data Warehouse, and Azure Cosmos DB allows organizations to use Azure Databricks to clean, join, and aggregate data no matter where it sits.

Learn how your organization can improve and scale your analytics solutions with Azure Databricks, a high-performance processing engine optimized for Azure. Now is the perfect time to get started. Not sure how? Sign up for our webinar on April 12, 2018 and we’ll walk you through the benefits of Spark on Azure, and how to get started with Azure Databricks.

Get started with Azure Databricks today!

Recommendation engine



As mobile apps and other advances in technology continue to change the way users choose and utilize information, recommendation engines are becoming an integral part of applications and software products.

Churn analysis



Churn, also known as customer attrition, customer turnover, or customer defection, is the loss of clients or customers. Predicting and preventing customer churn is vital to a range of businesses.

Intrusion detection



Intrusion detection monitors network or system activity for malicious actions or policy violations and produces reports for a management station.