Sunday, 30 September 2018

Strengthen your security posture and protect against threats with Azure Security Center

In my recent conversations with customers, they have shared the security challenges they are facing on-premises. These challenges include recruiting and retaining security experts, quickly responding to an increasing number of threats, and ensuring that their security policies are meeting their compliance requirements.

Moving to the cloud can help solve these challenges. Microsoft Azure provides a highly secure foundation for you to host your infrastructure and applications while also providing you with built-in security services and unique intelligence to help you quickly protect your workloads and stay ahead of threats. Microsoft’s breadth of security tools spans identity, networking, data, and IoT, and can even help you protect against threats and manage your security posture. One of our integrated, first-party services is Azure Security Center.

Security Center is built into the Azure platform, making it easy for you to start protecting your workloads at scale in just a few steps. Our agent-based approach allows Security Center to continuously monitor and assess your security state across Azure, other clouds, and on-premises. It’s helped customers like Icertis and Stanley Healthcare strengthen and simplify their security monitoring. Security Center gives you instant insight into issues and the flexibility to solve these challenges with integrated first-party or third-party solutions. In just a few clicks, you can have peace of mind knowing Security Center is enabled to help you reduce the complexity involved in security management.

Today we are announcing several capabilities that will help you strengthen your security posture and protect against threats across hybrid environments.

Strengthen your security posture


Improve your overall security with Secure Score: Secure Score gives you visibility into your organizational security posture. Secure Score prioritizes all of your recommendations across subscriptions and management groups, guiding you on which vulnerabilities to address first. When you quickly remediate the most pressing issues, you can see how your actions greatly improve your Secure Score and thus your security posture.


Interact with a new network topology map: Security Center now gives you visibility into the security state of your virtual networks, subnets and nodes through a new network topology map. As you review the components of your network, you can see recommendations to help you quickly respond to detected issues in your network. Also, Security Center continuously analyzes the network security group rules in the workload and presents a graph that contains the possible reachability of every VM in that workload on top of the network topology map.


Define security policies at an organizational level to meet compliance requirements: You can set security policies at an organizational level to ensure all your subscriptions are meeting your compliance requirements. To make things even simpler, you can also set security policies for management groups within your organization. To easily understand if your security policies are meeting your compliance requirements, you can quickly view an organizational compliance score as well as scores for individual subscriptions and management groups and then take action.

Monitor and report on regulatory compliance using the new regulatory compliance dashboard: The Security Center regulatory compliance dashboard helps you monitor the compliance of your cloud environment. It provides you with recommendations to help you meet compliance standards such as CIS, PCI, SOC and ISO.


Customize policies to protect information in Azure data resources: You can now customize and set an information policy to help you discover, classify, label and protect sensitive data in your Azure data resources. Protecting data can help your enterprise meet compliance and privacy requirements as well as control who has access to highly sensitive information. To learn more on data security, visit our documentation.

Assess the security of containers and Docker hosts: You can gain visibility into the security state of your containers running on Linux virtual machines. Specifically, you get insight into the virtual machines running Docker as well as security assessments based on the CIS Docker Benchmark.

Protect against evolving threats


Integration with Windows Defender Advanced Threat Protection (WDATP) for servers: Security Center can detect a wide variety of threats targeting your infrastructure. With the integration of WDATP, you now get endpoint threat detection (i.e., server EDR) for your Windows Servers as part of Security Center. Microsoft’s vast threat intelligence enables WDATP to identify and notify you of attackers’ tools and techniques, so you can understand threats and respond. To uncover more information about a breach, you can explore the details in the interactive Investigation Path within the Security Center blade. To get started, WDATP is automatically enabled for Azure and on-premises Windows Servers that have been onboarded to Security Center.

Threat detection for Linux: Security Center’s advanced threat detection capabilities are available across a wide variety of Linux distros to help ensure that whatever operating system your workloads run on, and wherever they run, you gain the insights you need to respond to threats quickly. Capabilities include detecting suspicious processes, dubious login attempts, and kernel module tampering.

Adaptive network controls: One of the biggest attack surfaces for workloads running in the public cloud is connections to and from the public internet. Security Center can now learn the network connectivity patterns of your Azure workload and provide you with a set of recommendations for your network security groups on how to better configure your network access policies and limit your exposure to attack. These recommendations also use Microsoft’s extensive threat intelligence reports to make sure that access from known bad actors is not recommended.

Threat detection for Azure Storage blobs and Azure PostgreSQL: In addition to detecting threats targeting your virtual machines, Security Center can detect threats targeting data in Azure Storage accounts and Azure PostgreSQL servers. This helps you respond to unusual attempts to access or exploit data and quickly investigate the problem.

Security Center can also detect threats targeting Azure App Services and provide recommendations to protect your applications.

Fileless Attack Detection: Security Center uses a variety of advanced memory forensic techniques to identify malware that persists only in memory and is not detected through traditional means. You can use the rich set of contextual information for alert triage, correlation, analysis and pattern extraction.

Adaptive application controls: Adaptive application controls help you audit and block unwanted applications from running on your virtual machines. To help you respond to suspicious behavior detected in your applications, or to deviations from the policies you set, it will now generate an alert in Security alerts if there is a violation of your whitelisting policies. You can now also enable adaptive application controls for groups of virtual machines that fall under the “Not recommended” category to ensure that you whitelist all applications running on your Windows virtual machines in Azure.

Just-in-Time VM Access: With Just-in-Time VM Access, you can limit your exposure to brute force attacks by locking down management ports, so they are only open for a limited time. You can set rules for how users can connect to these ports, and when someone needs to request access. You can now ensure that the rules you set for Just-in-Time VM access will not interfere with any existing configurations you have already set for your network security group.

File Integrity Monitoring (FIM): To help protect your operating system and application software from attack, Security Center continuously monitors the behavior of your Windows files, Windows registry, and Linux files. For Windows files, you can now detect changes through recursion, wildcards, and environment variables. If an abnormal change to the files or malicious behavior is detected, Security Center will alert you so that you can stay in control of your files.

Start using Azure Security Center’s new capabilities today


The following capabilities are generally available: enterprise-wide security policies, adaptive application controls, Just-in-Time VM Access for a specific role, adjusting network security group rules in Just-in-Time VM Access, File Integrity Monitoring (FIM), threat detection for Linux, detecting threats on Azure App Services, Fileless Attack Detection, alert confidence score, and integration with Windows Defender Advanced Threat Protection (WDATP).

These features are available in public preview: security state of containers, network visibility map, information protection for Azure SQL, threat detection for Azure Storage blobs and Azure PostgreSQL, and Secure Score.

Thursday, 27 September 2018

Eight new features in Azure Stream Analytics

We are excited to announce eight new features in Azure Stream Analytics (ASA). These new features include:

◈ Support for query extensibility with C# custom code in ASA jobs running on Azure IoT Edge.

◈ Custom de-serializers in ASA jobs running on Azure IoT Edge.

◈ Live data Testing in Visual Studio.

◈ High throughput output to SQL.

◈ ML based Anomaly Detection on IoT Edge.

◈ Managed Identities for Azure Resources (formerly MSI) based authentication for egress to Azure Data Lake Storage Gen 1.

◈ Blob output partitioning by custom date/time formats.

◈ User defined custom re-partition count.

The features that are generally available and those in public preview will start rolling out imminently.

General availability features


Parallel write operations to Azure SQL

Azure Stream Analytics now supports high performance and efficient write operations to Azure SQL DB and Azure SQL Data Warehouse to help customers achieve four to five times higher throughput than what was previously possible. To achieve fully parallel topologies, ASA will transition SQL writes from serial to parallel operations while simultaneously allowing for batch size customizations.

Configuring high-throughput write operations to SQL

Public previews


Query extensibility with C# UDF on Azure IoT Edge

Azure Stream Analytics offers a SQL-like query language for performing transformations and computations over streams of events. Though there are many powerful built-in functions in the currently supported SQL language, there are instances where a SQL-like language doesn't provide enough flexibility or tooling to tackle complex scenarios.

Developers creating Stream Analytics modules for Azure IoT Edge can now write custom C# functions and invoke them right in the query through User Defined Functions.  This enables scenarios like complex math calculations, importing custom ML models using ML.NET and programming custom data imputation logic. Full fidelity authoring experience is made available in Visual Studio for these functions. 

Definition of the C# UDF in Visual Studio

Calling the C# UDF from ASA Query

Output partitioning to Azure Blob Storage by custom date and time formats

Azure Stream Analytics users can now partition output to Azure Blob storage based on custom date and time formats. This feature greatly improves downstream data-processing workflows by allowing fine-grained control over the blob output especially for scenarios such as dashboarding and reporting. In addition, partition by custom date and time formats enables stronger alignment with downstream Hive supported formats and conventions when consumed by services such as Azure HDInsight or Azure Databricks.

Partition by custom date or time on Azure portal

The ability to partition output to Azure Blob storage by a custom field or attribute continues to be in private preview.

Setting partition by custom attribute on Azure portal

Live data testing in Visual Studio

Available immediately, Visual Studio tooling for Azure Stream Analytics further enhances the local testing capability to help users test their queries against live data or event streams from cloud sources such as Azure Event Hubs or IoT Hub. This includes full support for Stream Analytics time policies in a locally simulated Visual Studio IDE environment.

This significantly shortens development cycles as developers no longer need to start/stop their job to run test cycles. Also, this feature provides a fluent experience for checking the live output data while the query is running.

Live Data Testing in Visual Studio IDE

User defined custom re-partition count

We are extending our SQL language to optionally let users specify the number of partitions of a stream when performing repartitioning. This enables better performance tuning for scenarios where the partition key can’t be changed due to upstream constraints, where the output has a fixed number of partitions, or where partitioned processing is needed to scale out to a larger processing load. Once repartitioned, each partition is processed independently of the others.

With this new language feature, a query developer can simply use the newly introduced keyword INTO after a PARTITION BY clause. For example, the query below reads from the input stream (regardless of whether it is naturally partitioned), repartitions the stream into 10 partitions based on the DeviceID dimension, and flushes the data to the output.

SELECT * INTO [output] FROM [input] PARTITION BY DeviceID INTO 10

Private previews


Built-in models for Anomaly Detection on Azure IoT Edge and cloud

By providing ready-to-use ML models right within our SQL-like language, we empower every developer to easily add Anomaly Detection capabilities to their ASA jobs, without requiring them to develop and train their own ML models. This in effect reduces the whole complexity associated with building ML models to a simple single function call.

Currently, this feature is available for private preview in the cloud, and we are happy to announce that these ML functions for built-in anomaly detection are also being made available for ASA modules running on the Azure IoT Edge runtime. This will help customers who demand sub-second latencies, or who operate in scenarios where connectivity to the cloud is unreliable or expensive.

In this latest round of enhancements, we have been able to reduce the number of functions from five to two while still detecting all five kinds of anomalies: spike, dip, slow positive increase, slow negative decrease, and bi-level change. Also, our tests show a remarkable five to ten times improvement in performance.

Sedgwick, a global provider of technology-enabled risk, benefits, and integrated business solutions, has been engaged with us as an early adopter of this feature.

“Sedgwick has been working directly with Stream Analytics engineering team to explore and operationalize compelling scenarios for Anomaly Detection using built-in functions in the Stream Analytics Query language. We are convinced this feature holds a lot of potential for our current and future scenarios”.

Custom de-serializers in Stream Analytics module on Azure IoT Edge


Today, Azure Stream Analytics supports input events in JSON, CSV, or Avro data formats out of the box. However, millions of IoT devices are often optimized to generate data in other formats that encode structured data in a more efficient yet extensible way.

Going forward, IoT devices sending data in any format, be it Parquet, Protobuf, XML, or any binary format, can leverage the power of Azure Stream Analytics. Developers can now implement custom de-serializers in C# which can then be used to de-serialize events received by Azure Stream Analytics.

Configuring input with a custom serialization format

Managed identities for Azure resources (formerly MSI) based authentication for egress to Azure Data Lake Storage Gen 1

Users of Azure Stream Analytics will now be able to operationalize their real-time pipelines with MSI based authentication while writing to Azure Data Lake Storage Gen 1.

Previously, users depended on Azure Active Directory based authentication for this purpose, which had several limitations. Now, users will be able to automate their Stream Analytics pipelines through PowerShell. Secondly, this allows users to have long-running jobs without being interrupted periodically for sign-in renewals. Finally, this makes the user experience consistent across almost all ingress and egress services that are integrated out of the box with Stream Analytics.


Configuring MSI based authentication to Data Lake Storage

Sunday, 23 September 2018

How Security Center and Log Analytics can be used for Threat Hunting

Organizations today are constantly under attack. Azure Security Center (ASC) uses advanced analytics and global threat intelligence to detect malicious threats, and the new capabilities that our product team is adding everyday empower our customers to respond quickly to these threats.

However, just having great tools that alert on threats and attacks is not enough. The reality is that no security tool can detect 100 percent of attacks. In addition, many of the tools that raise alerts are optimized for low false positive rates; hence, they might miss some suspicious outlier activity in your environment that could have been flagged and investigated. This is something that the Security Center and Azure Log Analytics teams understand. The product has built-in features that you can use to launch your investigations and hunting campaigns in addition to responding to the alerts it triggers.

In the real world, if you want to do threat hunting, there are several factors to consider. You not only need a good analyst team, you need an even larger team of service engineers and administrators who worry about deploying an agent to collect the investigation-related data, parsing it into a format that can be queried, building tools that help query this data, and lastly indexing the data so that your queries run faster and actually return results. ASC and Log Analytics take care of all of this and make hunting for threats much easier. What organizations need is a change in mindset: instead of being just alert driven, they should also incorporate active threat hunting into their overall security program.

What is Threat Hunting?


Loosely defined, it is the process of proactively and iteratively searching through your varied log data with the goal of detecting threats that evade existing security solutions. If you think about it, threat hunting is a mindset: instead of just reacting to alerts, you are proactive about securing your organization’s environment and are looking for signs of malicious activity within your enterprise, without prior knowledge of those signs. Threat hunting involves hypothesizing about attackers’ behavior, researching those hypotheses and techniques to determine the artifacts they would leave in the logs, checking whether your organization collects and stores those logs, and then testing your hypotheses against your environment’s logs.

Hunting teaches you how to find data, how to distinguish between normal activity and an outlier, gives a better picture of your network and shows you your detection gaps. Security analysts who do regular hunting are better trained to respond and triage during an actual incident.

Today, we are going to look at some examples of simple hunts that an analyst can start with. In our previous posts, we have already touched on this a little. You can read more about detecting malicious activity and finding hidden techniques commonly deployed by attackers, and about how Azure Security Center helps analyze attacks using Investigation and Log Search.

Scenario 1


A lot of security tools look for abnormally large data transfers to an external destination. To evade these tools and to reduce the amount of data sent over the network, attackers often compress the collected data prior to exfiltration. The popular tools of choice for compression are typically 7-Zip, WinZip, WinRAR, and the like. Attackers have also been known to use their own custom programs for compressing data.

For example, when using WinRAR to compress data, a few of the switches that seem to be most commonly used are "a -m5 -hp". The "a" switch specifies adding the file to an archive, the "-m5" switch specifies the level of compression where "5" is the maximum compression level, and the "-hp" switch encrypts both content and header information. With knowledge of these command line switches, we may be able to detect some of this activity.

Below is a simple query to run this logic where we are looking for these command line switches. In this example, if we look at the result we can see that all the command line data looks benign except where Svchost.exe is using the command line switches associated with WinRAR. In addition, the binary Svchost.exe is running from a temp directory when ideally it should be running from %windir%/system32 folder. Threat actors have been known to rename their tools to a well-known process name to hide in plain sight. This is considered suspicious and a good starting point for an investigation. An analyst can take one of the many approaches from here to uncover the entire attack sequence. They can pivot into logon data to find what happened in this logon session or focus on what user account was used. They can also investigate what IP addresses may have connected to this machine or what IP address this machine connected to during the given time frame.

SecurityEvent
| where TimeGenerated >= ago(2d)
| search CommandLine : " -m5 " and CommandLine : " a "
| project NewProcessName , CommandLine


Another good example is the use of popular Nirsoft tools, such as the mail password viewer or IE password viewer, being used maliciously by attackers to gather passwords from email clients as well as passwords stored in a browser. Knowing the command lines for these tools, one may find interesting log entries by searching for command line parameters such as /stext or /scomma, which allows discovery of potentially malicious activity without needing to know the process name. To revisit the earlier pattern, seeing a command line like “notepad.exe /stext output.txt” is a good indication that notepad might be a renamed Nirsoft tool and likely malicious activity.

Scenario 2


Let’s build on the earlier example where we saw Rar.exe renamed to svchost.exe. Malware writers often use Windows system process names for their malicious processes so that they blend in with other legitimate commands the Windows system executes. If an analyst is familiar with the well-known Windows processes, they can easily spot the bad ones. For example, svchost.exe is a system process that hosts many Windows services and is generally the one most abused by attackers. For the svchost.exe process, it is common knowledge that:

◈ It runs from %windir%/system32 or %windir%/SysWOW64.
◈ It runs under NT AUTHORITY\SYSTEM, LOCAL SERVICE, or NETWORK SERVICE accounts.

Based on this knowledge, an analyst can create a simple query looking for a process named Svchost.exe. It is recommended to filter out well-known security identifiers (SIDs) that are used to launch the legitimate svchost.exe process. The query also filters out the legitimate locations from which svchost.exe is launched.

SecurityEvent
| where TimeGenerated >= ago(2d)
| where ProcessName contains "svchost.exe"
| where SubjectUserSid != "S-1-5-18"
| where SubjectUserSid != "S-1-5-19"
| where SubjectUserSid != "S-1-5-20"
| where NewProcessName !contains "C:\\Windows\\System32"
| where NewProcessName !contains "C:\\Windows\\Syswow64"


Additionally, from the returned results we also check whether svchost.exe is a child of services.exe or whether it is launched with a command line that has the -k switch (e.g., svchost.exe -k defragsvc). Filtering on these conditions will often give interesting results that an analyst can dig into further to determine whether this is normal activity or part of a compromise or attack.

There is nothing new or novel here. Security analysts know about it. In fact, a lot of security tools including ASC will detect this. However, the goal here is a change in mindset of not only responding to alerts but proactively looking for anomalies and outliers in your environment.

Scenario 3


After initial compromise, whether through a brute force attack, spear phishing, or other methods, attackers often move to the next step, which can loosely be called the network propagation stage. The goal of the network propagation phase is to identify and move to desired systems within the target environment with the intention of discovering credentials and sensitive data. As part of this, one might see a single account being used to log in to an unusually high number of machines in the environment, or a lot of different account authentication requests coming from one machine. For the second case, where we want to find machines on which more accounts have authenticated than our desired threshold, we could write a query like the one below:

SecurityEvent
    | where EventID == 4624
    | where AccountType == "User"
    | where TimeGenerated >= ago(1d)
    | summarize IndividualAccounts = dcount(Account) by Computer
    | where IndividualAccounts > 4

If we also wanted to see what alerts fired on these machines, we could extend the above query and join it with the SecurityAlert table.

SecurityEvent
    | where EventID == 4624
    | where AccountType == "User"
    | where TimeGenerated >= ago(1d)
    | extend Computer = toupper(Computer)
    | summarize IndividualAccounts = dcount(Account) by Computer
    | where IndividualAccounts > 4
    | join (
        SecurityAlert
        | extend ExtProps = parsejson(ExtendedProperties)
        | extend Computer = toupper(tostring(ExtProps["Compromised Host"]))
    ) on Computer

If you want a more hands-on guide, or are interested in validating ASC security detections against attacks, you can also look at the Azure Security Center Hunting Threats Playbook. This playbook presents some nice hunting examples for a post-breach scenario that you can work through using Log Analytics and Security Center.

These are just a few examples. The possibilities are endless. With practice, a good analyst knows when to dig deeper and when to move on to the next item on their hunting journey. Nothing in the world of cyber gets better unless victims start defending themselves more holistically. Enabling our customers on this journey and providing them with the tools to protect themselves when they move to Azure is what drives us every day.

Saturday, 22 September 2018

Deep dive into Azure Artifacts

Azure Artifacts manages the dependencies used in your codebase and provides easy tools to ensure the immutability and performance of those components. Released as one of the new services available for developers in Azure DevOps, the current features in Artifacts will help you and your users produce and consume artifacts. For teams that use or produce binary packages, Azure Artifacts provides a secure, highly performant store and an easy feed.

Getting started with Artifacts: Package feeds


Azure Artifacts groups packages into feeds, which are containers that help you consume and publish packages.


We’ve optimized default settings to be most useful to feed users, such as making your feed visible to your whole account so you can easily share a single source of packages across your entire team. However, if you’d like to customize your settings, simply access the settings tab to update your preferences.

New feature: Universal Packages


Azure Artifacts is a universal store for all the artifacts you use as part of development and deployment. In addition to NuGet, npm, and Maven packages, feeds now support Universal Packages, which can store any file or set of files. You create and consume Universal Packages via the Visual Studio Team Services (VSTS) CLI. Consider using them to store deployment inputs like installers, large datasets or binary files that you need during development, or as a versioned container for your pipeline outputs. To try them out, look for the Universal Packages toggle in your preview features panel by clicking your profile image in the upper right, followed by clicking on “Preview features”.

Next up, enabling Views


The views in Azure Artifacts enable you to share subsets of the NuGet and npm package-versions in your feed with consumers. A common use for views is to share package-versions that have been tested, validated, or deployed but hold back packages still under development and not ready for public consumption.


Views and upstream sources are designed to work together to make it easy to produce and consume packages at enterprise scale.

Control your dependencies with Upstream Sources


Upstream sources enable you to use a single feed to store both the packages you produce and the packages you consume from "remote feeds". This includes both public feeds, such as npmjs.com and nuget.org, and authenticated feeds, such as other Azure DevOps feeds in your organization. Once you've enabled an upstream source, any user connected to your feed can install a package from the remote feed, and your feed will save a copy. 

Note: For each component served from the upstream, a copy will always be available to consume, even if the original source is down or, for TFS users, your internet connection isn’t available.


In short, enabling upstream sources to public sources makes it easy to use your favorite or most used dependencies, and can also give you additional protection against outages and corrupted or compromised packages.

Easy to use Symbols and the Symbol Server


To debug compiled executables, especially executables compiled from native code languages like C++, you need symbol files that contain debugging information. Artifacts makes Symbol support and publishing quick and simple.

The updated “Index Sources and Publish Symbols” task now publishes symbols to the Azure DevOps Symbol Server with a single checkbox. No advanced configuration or file share setup is required.


We also have made it simple to consume symbols from Visual Studio:

1. With VS2017 Update 4.1 (version 15.4.1) or later, type “debugging symbols” in Quick Launch and press Enter.

2. Click the “New Azure DevOps Symbol Server Location…” button (marked in red below). In the dialog that appears, select your Azure DevOps account and click “Connect”.

When you are done, it should look like this:


If you prefer debugging with the new UWP version of WinDbg, these docs will help you configure your Azure DevOps account on the WinDbg sympath.

Credential Provider authentication for NuGet in Azure Artifacts


Azure Artifacts secures all the artifacts you publish. However, historically it’s been a challenge to get through security to use your NuGet packages, especially on Mac and Linux. Today, that changes with the new Azure Artifacts Credential Provider. We’ve automated the acquisition of credentials needed to restore NuGet packages as part of your .NET development workflow, whether you’re using MSBuild, dotnet, or NuGet(.exe) on Windows, Mac, or Linux. Any time you want to use packages from an Azure Artifacts feed, the Credential Provider will automatically acquire and store a token on behalf of the NuGet client you're using.

Supported protocol versions and compatibility


Some package management services are only compatible with specific versions of TFS. The table below provides the information needed to understand version compatibility.

Feature                     Azure DevOps Services   TFS
NuGet                       Yes                     TFS 2017
npm                         Yes                     TFS 2017 Update 1 and newer
NuGet.org upstream source   Yes                     TFS 2018 Update 2 and newer
Maven                       Yes                     TFS 2018

Thursday, 20 September 2018

Deep dive into Azure Test Plans

Azure Test Plans, a service launched with Azure DevOps earlier this month, provides a browser-based test management solution for exploratory, planned manual, and user acceptance testing. Azure Test Plans also provides a browser extension for exploratory testing and gathering feedback from stakeholders.

Manual and exploratory testing continue to be important techniques for evaluating quality of a product/service, alongside the DevOps emphasis on automated testing. In modern software development processes, everybody in the team contributes to or owns quality - including developers, testers, managers, product owners, user experience advocates, and more. Azure Test Plans addresses all these needs. Let’s take a closer look.

Note: For automated testing as part of your CI/CD workflow, consider leveraging Azure Pipelines. It provides mechanisms for continuous build, test, and deployment to any platform and cloud.

Testing is integral to DevOps and Agile teams


A common practice is to base tests on user stories, features, or scenarios that are managed on a Kanban board as in Azure Boards. With Azure Test Plans, a team can leverage manual testing right from within their Kanban board. This provides end-to-end traceability because tests and defects are automatically linked to the requirements and builds being tested, which also helps you track the quality of the requirements.

Add, view, and interact with test cases directly from the cards on the Kanban board, and progressively monitor status directly from the card. Developers and testers can use this capability to maximize quality within their teams.


Quality is a team sport through exploratory testing


Exploratory testing is an approach to software testing that is described as simultaneous learning, test design and test execution. It complements planned testing by being completely unscripted yet being driven by themes/tours. Quality becomes a shared responsibility as exploratory testing can be leveraged by all team members including developers, testers, managers, product owners, user experience advocates, and more.


The Test & Feedback extension enables exploratory testing techniques in Azure Test Plans. It allows you to spend more time finding issues, and less time filing them. Using the extension is simple:

◈ Capture your findings along with rich diagnostic data. This includes comments, screenshots with annotations, and audio/video recordings that describe your findings and highlight issues. In the background, the extension captures additional information such as user actions via image action log, page load data, and system information about the browser, operating system, and more that later help in debugging or reproducing the issue.

◈ Create work items such as bugs, tasks, and test cases from within the extension. The captured information automatically becomes part of the filed work item and helps with end-to-end traceability.

◈ Collaborate with your team by sharing your findings. Export your session report or connect to Azure Test Plans for a fully integrated experience.


The extension also helps in soliciting feedback from stakeholders who may reside outside the development team, such as marketing, sales teams, and others. Feedback can be requested from these stakeholders on user stories and features. Stakeholders can then respond to feedback requests - not just to rate and send comments, but also file bugs and tasks directly. 


Planned manual testing for larger teams


Testing from within the Kanban board suffices when your testing needs are simple. However, for larger teams with more complex needs such as creating and tracking all testing efforts within a test plan scope, testing across multiple configurations, distributing the tests across multiple testers, tracking the progress against the test plan, etc., you need a full-scale test management solution and Azure Test Plans fulfils this need. 


Planned manual testing in Azure Test Plans lets you organize tests into test plans and test suites. Test suites can be dynamic (requirements-based-suites and query-based-suites) to help you understand the quality of associated requirements under development, or static to help you cover regression tests. Tests can be authored using an Excel-like grid view or other means available. Testers execute tests assigned to them using a runner to test your app(s). The runner can execute in a browser or as a client on your desktop, enabling you to test on any platform or test any app. During execution, rich diagnostic data is collected to help with debugging or reproducing the issue later. Bugs filed during the process automatically include the captured diagnostic data.


To track overall progress and outcomes, leverage lightweight charts, which can be pinned to your dashboard for easy monitoring.


Tuesday, 18 September 2018

Announcing Azure Pipelines with unlimited CI/CD minutes for open source

With the introduction of Azure DevOps today, we’re offering developers a new CI/CD service called Azure Pipelines that enables you to continuously build, test, and deploy to any platform or cloud. It has cloud-hosted agents for Linux, macOS, and Windows, powerful workflows with native container support, and flexible deployments to Kubernetes, VMs, and serverless environments.

Microsoft is committed to fueling open source software development. Our next step in this journey is to provide the best CI/CD experience for open source projects. Starting today, Azure Pipelines provides unlimited CI/CD minutes and 10 parallel jobs to every open source project for free. All open source projects run on the same infrastructure that our paying customers use. That means you’ll have the same fast performance and high quality of service. Many of the top open source projects are already using Azure Pipelines for CI/CD, such as Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript – and the list is growing every day.

Below, you can see Atom running parallel jobs on Linux, macOS, and Windows for its CI.


Azure Pipelines app on GitHub Marketplace


Azure Pipelines has an app in the GitHub Marketplace so it’s easy to get started. After you install the app in your GitHub account, you can start running CI/CD for all your repositories.


Pull Request and CI Checks


When the GitHub app is set up, you’ll see CI/CD checks on each commit to your default branch and on every pull request.


Our integration with the GitHub Checks API makes it easy to see build results in your pull request. If there’s a failure, the call stack is shown as well as the impacted files.


More than just open source


Azure Pipelines is also great for private repositories. It is the CI/CD solution for companies like Columbia, Shell, Accenture, and many others. It’s also used by Microsoft’s biggest projects like Azure, Office 365, and Bing. Our free offer for private projects includes a cloud-hosted job with 1,800 minutes of CI/CD a month, or you can run unlimited minutes of CI/CD on your own hardware, whether hosted in the cloud or on-premises. You can purchase parallel jobs for private projects from Azure DevOps or the GitHub Marketplace.

In addition to CI, Azure Pipelines offers flexible deployments to any platform and cloud, including Azure, Amazon Web Services, and Google Cloud Platform, as well as any of your on-premises servers running Linux, macOS, or Windows. There are built-in tasks for Kubernetes, serverless, and VM deployments. There’s also a rich ecosystem of extensions for the most popular languages and tools. The Azure Pipelines agent and tasks are open source, and we’re always reviewing feedback and accepting pull requests on GitHub.

Sunday, 16 September 2018

Deep dive into Azure Boards

Azure Boards is a service for managing the work for your software projects. Teams need tools that flex and grow, and Azure Boards does just that, bringing you a rich set of capabilities including native support for Scrum and Kanban, customizable dashboards, and integrated reporting.


In this post I’ll walk through a few core features in Azure Boards and give some insight in to how you can make them work for your teams and projects.

Work items


All work in Azure Boards is tracked through an artifact called a work item. Work items are where you and your team describe the details of what’s needed. Each work item uses a state model to track and communicate progress. For example, a common state model might be: New > Active > Closed. As work progresses, items are updated accordingly, allowing everyone who works on the project to have a complete picture of where things are at. Below is a picture of the work items hub in Azure Boards. This page is the home for all work items and provides quick filters to allow you to find the items you need.


Opening a work item brings you to a much richer view, including the history of all changes, any related discussion, and links to development artifacts including branches, pull requests, commits, and builds. Work items are customizable, supporting the ability to add new fields, create rules, and modify aspects of the layout. 


Boards, Backlogs, and Sprints


Azure Boards provides a variety of choices for planning and managing work. Let’s look at a few of the core experiences.

Boards


Each project comes with a pre-configured Kanban board perfect for managing the flow of your work. Boards are highly customizable allowing you to add the columns you need for each team and project. Boards support swim lanes, card customization, conditional formatting, filtering, and even WIP limits.


Backlogs


Backlogs help you keep things in order of priority, and to understand the relationships between your work. Drag and drop items to adjust the order, or quickly assign work to an upcoming sprint.


Sprints


Finally, sprints give you the ability to create increments of work for your team to accomplish together. Each sprint comes equipped with a backlog, taskboard, burndown chart, and capacity planning view to help you and your team deliver your work on time.


Dashboards


In any project, it’s critical that you have a clear view of what’s happening. Azure Boards comes complete with a rich canvas for creating dashboards. Add widgets as needed to track progress and direction.


Queries


And finally, one of the most powerful features in Azure Boards is the query engine. Queries let you tailor exactly what you’re tracking, creating easy to monitor KPIs. It’s simple to create new queries and pin them to dashboards for quick monitoring and status. 


Thursday, 13 September 2018

How to extract building footprints from satellite images using deep learning

I work with our partners and other researchers inside Microsoft to develop new ways to use machine learning and other AI approaches to solve global environmental challenges. In this post, we highlight a sample project of using Azure infrastructure for training a deep learning model to gain insight from geospatial data. Such tools will finally enable us to accurately monitor and measure the impact of our solutions to problems such as deforestation and human-wildlife conflict, helping us to invest in the most effective conservation efforts.


Applying machine learning to geospatial data


When we looked at the most widely-used tools and datasets in the environmental space, remote sensing data in the form of satellite images jumped out.

Today, subject matter experts working on geospatial data go through such collections manually with the assistance of traditional software, performing tasks such as locating, counting and outlining objects of interest to obtain measurements and trends. As high-resolution satellite images become readily available on a weekly or daily basis, it becomes essential to engage AI in this effort so that we can take advantage of the data to make more informed decisions.

Geospatial data and computer vision, an active field in AI, are natural partners: tasks involving visual data that cannot be automated by traditional algorithms, an abundance of labeled data, and even more unlabeled data waiting to be understood in a timely manner. The geospatial data and machine learning communities have joined efforts on this front, publishing several datasets, such as Functional Map of the World (fMoW) and the xView Dataset, for people to create computer vision solutions on overhead imagery.

An example of infusing geospatial data and AI into applications that we use every day is using satellite images to add street map annotations of buildings. In June 2018, our colleagues at Bing announced the release of 124 million building footprints in the United States in support of the Open Street Map project, an open data initiative that powers many location based services and applications. The Bing team was able to create so many building footprints from satellite images by training and applying a deep neural network model that classifies each pixel as building or non-building. Now you can do exactly that on your own!

With the sample project that accompanies this blog post, we walk you through how to train such a model on an Azure Deep Learning Virtual Machine (DLVM). We use labeled data made available by the SpaceNet initiative to demonstrate how you can extract information from visual environmental data using deep learning. For those eager to get started, you can head over to our repo on GitHub to read about the dataset, storage options and instructions on running the code or modifying it for your own dataset.

Semantic segmentation


In computer vision, the task of masking out pixels belonging to different classes of objects such as background or people is referred to as semantic segmentation. The semantic segmentation model (a U-Net implemented in PyTorch, different from what the Bing team used) we are training can be used for other tasks in analyzing satellite, aerial or drone imagery – you can use the same method to extract roads from satellite imagery, infer land use and monitor sustainable farming practices, as well as for applications in a wide range of domains such as locating lungs in CT scans for lung disease prediction and evaluating a street scene.

Illustration from slides by Tingwu Wang, University of Toronto
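
To make this concrete, here is a minimal sketch, in PyTorch, of the kind of encoder/decoder segmentation network being described. It is illustrative only: the U-Net in the accompanying sample project is deeper and differs in its details, and the three output classes (background, building interior, building boundary) follow the labeling scheme described in the next section.

import torch
from torch import nn

def conv_block(in_ch, out_ch):
    # two 3x3 convolutions with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=3):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                            # features at full resolution
        e2 = self.enc2(self.pool(e1))                # features at half resolution
        d1 = self.up(e2)                             # upsample back to full resolution
        d1 = self.dec1(torch.cat([d1, e1], dim=1))   # skip connection from the encoder
        return self.head(d1)                         # N x num_classes x H x W logits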

Satellite imagery data


The data from SpaceNet is 3-channel, high resolution (31 cm) satellite imagery over four cities where buildings are abundant: Paris, Shanghai, Khartoum, and Vegas. In the sample code we make use of the Vegas subset, consisting of 3,854 images of 650 x 650 pixels. About 17.37 percent of the training images contain no buildings. Since this is a reasonably small percentage of the data, we did not exclude or resample images. In addition, 76.9 percent of all pixels in the training data are background, 15.8 percent are interior of buildings, and 7.3 percent are border pixels.

Original images are cropped into nine smaller chips with some overlap using utility functions provided by SpaceNet. The labels are released as polygon shapes defined using well-known text (WKT), a markup language for representing vector geometry objects on maps. These are transformed to 2D labels of the same dimension as the input images, where each pixel is labeled as one of background, boundary of building or interior of building.
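
As a rough illustration of that transformation, the sketch below rasterizes the WKT building polygons of a single chip into a three-class label mask. It assumes the polygons are expressed in pixel coordinates and uses shapely and rasterio; the chip size and boundary width are illustrative assumptions, and the SpaceNet utilities used in the sample project differ in their details.

import numpy as np
from shapely import wkt
from rasterio import features

CHIP_SIZE = 650       # side length of a chip, in pixels (assumed)
BOUNDARY_WIDTH = 2    # thickness of the building-boundary class, in pixels (assumed)

def wkt_to_label_mask(wkt_strings):
    # 0 = background, 1 = building interior, 2 = building boundary
    polygons = [wkt.loads(s) for s in wkt_strings if s and s != "POLYGON EMPTY"]
    mask = np.zeros((CHIP_SIZE, CHIP_SIZE), dtype=np.uint8)
    if not polygons:
        return mask   # an empty chip stays all background
    interiors = features.rasterize(
        [(p, 1) for p in polygons], out_shape=mask.shape, fill=0)
    boundaries = features.rasterize(
        [(p.boundary.buffer(BOUNDARY_WIDTH), 1) for p in polygons],
        out_shape=mask.shape, fill=0)
    mask[interiors == 1] = 1
    mask[boundaries == 1] = 2    # the boundary band overrides interior pixels along edges
    return mask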


Some chips are partially or completely empty like the examples below, which is an artifact of the original satellite images and the model should be robust enough to not propose building footprints on empty regions.


Training and applying the model


The sample code contains a walkthrough of carrying out the training and evaluation pipeline on a DLVM. The following segmentation results are produced by the model at various epochs during training for the input image and label pair shown above. This image features buildings with roofs of different colors, roads, pavements, trees and yards. We observe that initially the network learns to identify edges of building blocks and buildings with red roofs (different from the color of roads), followed by buildings of all roof colors after epoch 5. After epoch 7, the network has learnt that building pixels are enclosed by border pixels, separating them from road pixels. After epoch 10, smaller, noisy clusters of building pixels begin to disappear as the shape of buildings becomes more defined.


A final step is to produce the polygons: all pixels predicted to be building boundary are reassigned to background, isolating blobs of building pixels. Blobs of connected building pixels are then described in polygon format, subject to a minimum polygon area threshold, a parameter you can tune to reduce false positive proposals.
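
A hedged sketch of that post-processing step is shown below. It vectorizes the predicted building-interior pixels into polygons using rasterio and shapely and drops anything below a minimum area; the class ids match the labeling assumed earlier, and the 200 square-pixel default simply reflects the tuning result discussed further down.

import numpy as np
from rasterio import features
from shapely.geometry import shape

MIN_POLYGON_AREA = 200   # square pixels; tune to trade recall against false positives

def mask_to_polygons(pred_mask):
    # Boundary pixels (class 2) are treated as background, so adjacent buildings
    # separate into distinct blobs of interior (class 1) pixels.
    interior = (pred_mask == 1).astype(np.uint8)
    polygons = []
    for geom, value in features.shapes(interior):
        if value == 1:
            poly = shape(geom)
            if poly.area >= MIN_POLYGON_AREA:
                polygons.append(poly)
    return polygons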

Training and model parameters


There are a number of parameters for the training process, the model architecture and the polygonization step that you can tune. We chose a learning rate of 0.0005 for the Adam optimizer (default settings for other parameters) and a batch size of 10 chips, which worked reasonably well.
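
For reference, a minimal training loop with that configuration might look like the sketch below. The model, dataset, and epoch count are supplied by the caller; in the sample project these come from its own U-Net and SpaceNet data loaders, so the names here are assumptions rather than the repo’s actual API.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader

def train(model, train_dataset, num_epochs, device="cpu"):
    loader = DataLoader(train_dataset, batch_size=10, shuffle=True)   # 10 chips per batch
    optimizer = optim.Adam(model.parameters(), lr=0.0005)             # other Adam settings left at defaults
    criterion = nn.CrossEntropyLoss()
    model.to(device).train()
    for epoch in range(num_epochs):
        for images, labels in loader:           # images: N x 3 x H x W, labels: N x H x W
            images, labels = images.to(device), labels.to(device).long()
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model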

Another parameter, unrelated to the CNN part of the procedure, is the minimum polygon area threshold below which blobs of building pixels are discarded. Increasing this threshold from 0 to 300 square pixels causes the false positive count to decrease rapidly as noisy false segments are excluded. The optimum threshold is about 200 square pixels.

The weight for the three classes (background, boundary of building, interior of building) in computing the total loss during training is another parameter to experiment with. It was found that giving more weight to the interior of buildings helps the model detect significantly more small buildings (see the figure below).
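
Concretely, that weighting can be expressed as a weighted cross-entropy loss, sketched below with the 1:8:1 ratio in the class order background, building interior, building boundary; the class indexing is an assumption about how the labels are encoded.

import torch
from torch import nn

class_weights = torch.tensor([1.0, 8.0, 1.0])         # background : interior : boundary
criterion = nn.CrossEntropyLoss(weight=class_weights)

# toy example: logits for a batch of two 650 x 650 chips with three classes
logits = torch.randn(2, 3, 650, 650)
labels = torch.randint(0, 3, (2, 650, 650))
loss = criterion(logits, labels)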


Each plot in the figure is a histogram of building polygons in the validation set by area, from 300 square pixels to 6000. The count of true positive detections in orange is based on the area of the ground truth polygon to which the proposed polygon was matched. The top histogram is for weights in ratio 1:1:1 in the loss function for background : building interior : building boundary; the bottom histogram is for weights in ratio 1:8:1. We can see that towards the left of the histogram where small buildings are represented, the bars for true positive proposals in orange are much taller in the bottom plot.

Last thoughts


Building footprint information generated this way could be used to document the spatial distribution of settlements, allowing researchers to quantify trends in urbanization and perhaps the developmental impact of climate change such as climate migration. The techniques here can be applied in many different situations and we hope this concrete example serves as a guide to tackling your specific problem.

Another piece of good news for those dealing with geospatial data is that Azure already offers a Geo Artificial Intelligence Data Science Virtual Machine (Geo-DSVM), equipped with ESRI’s ArcGIS Pro Geographic Information System. We also created a tutorial on how to use the Geo-DSVM for training deep learning models and integrating them with ArcGIS Pro to help you get started.

Finally, if your organization is working on solutions to address environmental challenges using data and machine learning, we encourage you to apply for an AI for Earth grant so that you can be better supported in leveraging Azure resources and become a part of this purposeful community.