Sunday 30 June 2019

Leveraging complex data to build advanced search applications with Azure Search

Data is rarely simple. Not every piece of data we have fits nicely into a single Excel worksheet of rows and columns. Data has many diverse relationships, such as the multiple locations and phone numbers for a single customer or the multiple authors and genres of a single book. Of course, relationships are typically even more complex than this, and as we start to leverage AI to understand our data, the insights we gain only add to the complexity of those relationships. For that reason, expecting customers to flatten their data so it can be searched and explored is often unrealistic. We heard this often, and it quickly became our number one most requested Azure Search feature. Because of this, we were excited to announce the general availability of complex types support in Azure Search. In this post, I want to take some time to explain what complex types add to Azure Search and the kinds of things you can build using this capability.


Azure Search is a platform as a service that helps developers create their own cloud search solutions.

What is complex data?


Complex data consists of data that includes hierarchical or nested substructures that do not break down neatly into a tabular rowset. For example, a book with multiple authors, where each author can have multiple attributes, can’t be represented as a single row of data unless there is a way to model the authors as a collection of objects. Complex types provide this capability, and they can be used when the data cannot be modeled in simple field structures such as strings or integers.

Complex types applicability


At Microsoft Build 2019, we demonstrated how complex types can be leveraged to build an effective search application. In the session, we looked at the Travel Stack Exchange site, one of the many online communities supported by StackExchange.

The StackExchange data was modeled in a JSON structure to allow easy ingestion into Azure Search. If we look at the first post made to this site and focus on the first few fields, we see that all of them can be modeled using simple data types, including tags, which can be modeled as a collection (an array) of strings.

{
    "id": "1",
    "CreationDate": "2011-06-21T20:19:34.73",
    "Score": 8,
    "ViewCount": 462,
    "BodyHTML": "<p>My fiancée and I are looking for a good Caribbean cruise in October and were wondering which…",
    "Body": "my fiancée and i are looking for a good caribbean cruise in october and were wondering which islands…",
    "OwnerUserId": 9,
    "LastEditorUserId": 101,
    "LastEditDate": "2011-12-28T21:36:43.91",
    "LastActivityDate": "2012-05-24T14:52:14.76",
    "Title": "What are some Caribbean cruises for October?",
    "Tags": [
        "caribbean",
        "cruising",
        "vacations"
    ],
    "AnswerCount": 4,
    "CommentCount": 4,
    "CloseDate": "0001-01-01T00:00:00",

However, as we look further down this dataset, we see that the data quickly gets more complex and cannot be mapped into a flat structure. For example, there can be numerous comments and answers associated with a single document. Even votes is defined here as a complex type (technically it could have been flattened, but that would have added work to transform the data).

"CloseDate": "0001-01-01T00:00:00",
    "Comments": [
        {
            "Score": 0,
            "Text": "To help with the cruise line question: Where are you located? My wife and I live in New Orlea
            "CreationDate": "2011-06-21T20:25:14.257",
           "UserId": 12
        },
        {
            "Score": 0,
            "Text": "Toronto, Ontario. We can fly out of anywhere though.",
            "CreationDate": "2011-06-21T20:27:35.3",
            "UserId": 9
        },
        {
            "Score": 3,
            "Text": "\"Best\" for what?  Please read [this page](http://travel.stackexchange.com/questions/how-to
            "UserId": 20
        },
        {
            "Score": 2,
            "Text": "What do you want out of a cruise? To relax on a boat? To visit islands? Culture? Adventure?
            "CreationDate": "2011-06-24T05:07:16.643",
            "UserId": 65
        }
    ],
    "Votes": {
        "UpVotes": 10,
        "DownVotes": 2
    },
    "Answers": [
        {
            "IsAcceptedAnswer": "True",
            "Body": "This is less than an answer, but more than a comment…\n\nA large percentage of your travel b
            "Score": 7,
            "CreationDate": "2011-06-24T05:12:01.133",
            "OwnerUserId": 74

All of this data is important to the search experience. For example, you might want to:

◈ Search for and highlight phrases not only in the original question, but also in any of the comments.

◈ Limit documents to those where an answer was provided by a specific user.

◈ Boost certain documents higher in the search results when they have a higher number of up votes.

In fact, we could even improve on the existing StackExchange search interface by leveraging Cognitive Search to extract key phrases from the answers to supply potential phrases for autocomplete as the user types in the search box.

All of this is now possible because not only can you map this data to a complex structure, but the search queries can support this enhanced structure to help build out a better search experience.
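
To make this concrete, here is a minimal sketch of how part of the structure above might be declared in an Azure Search index definition. The index name and the attribute choices (searchable, filterable, and so on) are illustrative; complex fields use the Edm.ComplexType type and carry their own nested fields.

{
    "name": "travel-posts",
    "fields": [
        { "name": "id", "type": "Edm.String", "key": true },
        { "name": "Title", "type": "Edm.String", "searchable": true },
        { "name": "Tags", "type": "Collection(Edm.String)", "searchable": true, "filterable": true },
        { "name": "Votes", "type": "Edm.ComplexType", "fields": [
            { "name": "UpVotes", "type": "Edm.Int32", "filterable": true },
            { "name": "DownVotes", "type": "Edm.Int32", "filterable": true }
        ]},
        { "name": "Answers", "type": "Collection(Edm.ComplexType)", "fields": [
            { "name": "Body", "type": "Edm.String", "searchable": true },
            { "name": "OwnerUserId", "type": "Edm.Int32", "filterable": true }
        ]}
    ]
}

With an index shaped like this, a filter such as Answers/any(a: a/OwnerUserId eq 74) limits results to documents answered by a specific user, and searchable subfields like Answers/Body participate in full-text search and hit highlighting.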

Saturday 29 June 2019

A solution to manage policy administration from end to end

Legacy systems can be a nightmare for any business to maintain. In the insurance industry, carriers struggle not only to maintain these systems but to modify and extend them to support new business initiatives. The insurance business is complex: every state and nation has its own unique set of rules, regulations, and demographics. Creating a new product, such as an automobile policy, has traditionally required the coordination of many different processes, systems, and people. The monolithic systems traditionally used to create new products are inflexible, and creating a new product can be an expensive proposition.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner, Sunlight Solutions, uses Azure to solve a unique problem.

Monolithic systems and their problems


Insurers have long been restricted by complex digital ecosystems created by single-service solutions. Those tasked with maintaining such legacy, monolithic systems struggle as the system ages and becomes more unwieldy. Upgrades and enhancements often require significant new development, large teams, and long-term planning which are expensive, unrealistic, and a drain on morale. Worse, they restrict businesses from pursuing new and exciting opportunities.

A flexible but dedicated solution


An alternative is a single solution provider that is well versed in the insurance business but able to create a dedicated and flexible solution, one that overcomes the problems of a monolith. Sunlight is such a provider. It allows insurance carriers to leverage the benefits of receiving end-to-end insurance administration functionality from a single vendor. At the same time, their solution provides greater flexibility, speed-to-market, and fewer relationships to manage with lower integration costs.

Sunlight’s solution is a single system which manages end-to-end functionality across policy, billing, claims, forms management, customer/producer CRM, reporting and much more. According to Sunlight:

“We are highly flexible, managed through configuration rather than development. This allows for rapid speed to market for the initial deployment and complete flexibility when you need to make changes or support new business initiatives. Our efficient host and continuous delivery models address many of the industry’s largest challenges with respect to managing the cost and time associated with implementation, upgrades, and product maintenance.”

In order to achieve their goals of being quick but pliable, the architecture of the solution is a mixture of static and dynamic components. Static components are fields that do not change; dynamic components, such as lists, populate at run time. As the graphic below conveys, the solution uses static elements but lets users configure dynamic parts as needed. The result is a faster cycle that maintains familiarity while allowing a variety of data types.

Figure: static components combined with dynamically configured elements, which vary by product.

In the figure above, the data that appears depends on the product. When products are acquired, for example through mergers, the static data can be mapped. A tab appears only if it exists for the product; for example, “benefits” and “deductibles” are not part of every product.

Benefits


In brief, here are the key gains made by using Sunlight:


◈ End-to-end functionality: Supports all products/coverages/lines of business
◈ Cloud-based and accessible anywhere
◈ Supports multiple languages and currencies
◈ Globally configurable for international taxes and regional regulatory controls
◈ Highly configurable by non-IT personnel
◈ Reasonable price-point

Azure services


◈ Azure Virtual Machines are used to implement the entire project life cycle quickly.
◈ Azure Security Center continuously assesses the infrastructure and strengthens its security posture.
◈ Azure Site Recovery plans are simple to implement for the production layer.
◈ Azure Functions is used to quickly replicate environments.
◈ Azure Storage keeps the application light, with a range of storage options whose access times vary by storage type.

Friday 28 June 2019

Solving the problem of duplicate records in healthcare

As the U.S. healthcare system continues to transition away from paper to a more digitized ecosystem, the ability to link all of an individual’s medical data together correctly becomes increasingly challenging. Patients move, marry, divorce, change names, and visit multiple providers throughout their lifetime. Each visit creates new records, and the potential for inconsistent or duplicate information grows. Duplicate medical records often occur as a result of multiple name variations, data entry errors, and lack of interoperability (or communication) between systems. Poor patient identification and duplicate records in turn lead to diagnosis errors, redundant medical tests, skewed reporting and analytics, and billing inaccuracies.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we will describe how one Microsoft partner, NextGate, uses Azure to solve a unique problem.

Patient matching


The process of reconciling electronic health records is called “patient matching,” and it is a major obstacle to improving the quality of care coordination and patient safety. Further, duplicate records are financially crippling, costing the average hospital $1.5 million and our nation’s healthcare system over $6 billion annually. As data sharing matures and the industry pivots toward value, an enterprise view of patient information is essential for informed clinical decision-making, effective episodic care, and a seamless patient-provider experience during every encounter.

As more data is generated and more applications are introduced into the health IT environment, today’s organizations must engage in more comprehensive patient matching approaches.

The puzzle of disjointed electronic health records


While electronic health records (EHRs) have become commonplace, the disjointed, competitive nature of IT systems contributes to a proliferation of siloed, disconnected information. Many EHR systems make sharing data arduous, even in a single-system electronic medical record environment. Further, master patient indexes (MPI) within EHR systems were designed for a single vendor-based environment and lack the sophisticated algorithms for linking data across various settings of care and disparate systems. When sent downstream, duplicate and disjointed patient demographics trigger further harm including increased waste and inefficiencies, suboptimal outcomes, and lost revenue. Without common technical standards in place, EHR systems continue to collect information in various formats that only serve to exacerbate the issue of duplicate record creation.

Solution


NextGate’s Enterprise Master Patient Index (EMPI) platform is a significant step towards improving a health system’s data management and governance framework. This solution manages patient identities for more than two-thirds of the U.S. population, and one-third of the U.K. population. It empowers clinicians and their organizations to make informed, life-saving decisions by seamlessly linking medical records from any given system and reconciling data discrepancies across multiple sites of care. The automated identity matching platform uses both probabilistic and deterministic matching algorithms to account for minor variations in patient data to generate a single best record that follows the patient throughout the care journey.


Benefits


◈ Enhanced clinical decision-making.
◈ Improved patient safety (reduced medical errors).
◈ Decreased number of unnecessary or duplicate testing/procedures.
◈ Improved interoperability and data exchange.
◈ Trusted and reliable data quality.
◈ Reduced number of denied claims and other reimbursement delays.
◈ Improved administrative efficiencies.
◈ Higher patient and provider satisfaction.

Azure services


◈ Azure Security Center reinforces the security posture of the NextGate solution against threats, and provides recommendations to harden the security.

◈ Azure Monitor provides telemetry data about the NextGate application to ensure its health.

◈ Azure Virtual Machines provide compute power, enabling auto-scaling and supporting Linux and open source services.

◈ Azure SQL Database and Azure Database for PostgreSQL enable NextGate solutions to easily scale with more compute power (scale-up) or more database units (scale-out).

Thursday 27 June 2019

Using natural language processing to manage healthcare records

The next time you see your physician, consider the times you fill in a paper form. It may seem trivial, but the information could be crucial to making a better diagnosis. Now consider the other forms of healthcare data that permeate your life, and that of your doctor, nurses, and the clinicians working to keep patients thriving. Forms and diagnostic reports are just two examples. The volume of such information is staggering, yet fully utilizing this data is key to reducing healthcare costs, improving patient outcomes, and meeting other healthcare priorities. Now, imagine if artificial intelligence (AI) could be used to help the situation.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how SyTrue, a Microsoft partner focusing on healthcare, uses Azure to empower healthcare organizations to improve efficiency, reduce costs, and improve patient outcomes.

Billions of records


Valuable insights remain locked in unstructured medical records, such as scanned documents in PDF format, that are human-readable but present a major obstacle to automation and analytics. Over four billion medical notes are created every year. The clinical and financial insights embodied within these records are needed by an average of 20+ roles and processes downstream of the record generation. Currently, healthcare providers and payors require an army of professionals to read, understand, and extract healthcare data from the flood of clinical documents generated every day. But success has been elusive.

It's not for lack of trying. In the last decade, an effort was made to accumulate and upload data into electronic health records (EHR) systems. Meaningful Use is a government-led incentive program that aims to accelerate the movement from hard-copy filing systems to electronic health records. Still, the problem persists: the sheer volume of data, and the lack of time and resources to assimilate it.

Note: the Meaningful Use program has a number of goals. An important one is to “Ensure adequate privacy and security protection for personal health information.” Data security is a prime value for Azure services; data services such as Azure SQL Database encrypt data at rest and in transit.

Moving the needle on healthcare


As costly and extensive as this effort was, many believe that we have yet to see evidence of any significant impact from the digitization of healthcare data on the quality or cost of care. One way to radically improve this is to use AI for natural language processing (NLP), specifically to automate the reading of documents. That enables subsequent analytics, surfacing the most relevant actionable information from mountains of documents to the medical professional in near real-time. It empowers them to deliver better quality care, more efficiently, at lower cost.

In action


A Microsoft partner, SyTrue is leading the way. In the words of their Founder and CEO, Kyle Silvestro, “At SyTrue, the next big challenge is accessing this vast pool of accumulated patient data in a serviceable way. We’ve created a platform that transforms healthcare documentation into actionable information. The focus is on three main features: speed, context, and adaptability. Our technology consumes thousand-paged medical records in sub-seconds. The innovation is built on informational models that can ingest data from multiple types of clinical and financial health care organizations. This allows diverse healthcare stakeholders to use the system. The main objective for the technology is to present key clinical and financial insights to healthcare stakeholders in order to reduce waste and improve clinical outcomes.”

Informed by natural language processing and machine learning


SyTrue relies on NLP and machine learning (ML) as the underlying technology. Using their own proprietary methods, they perform “context-driven information extraction.” In other words, they connect the dots. The graphic below shows their processes.


Improving healthcare


SyTrue offers the NLP OS (Operating System) for healthcare. It aids in several ways.

◈ It unlocks healthcare records and enables healthcare professionals to interact with medical record data and its clinical and financial implications. Specifically, it eliminates the need for professionals to hunt for the same key observations. This enables professionals to spend more time focused on patient care.

◈ NLP OS also bridges the communication between a specialist provider and a primary care physician regarding the care of a shared patient. The system extracts and highlights continuity of care recommendations generated within the patient’s care team.

◈ A large healthcare organization installed SyAudit, powered by SyTrue NLP OS, at the front of their medical chart review process. Before the charts reach a nurse-reviewer, they are processed through this solution. The system interprets the documentation to determine if a nurse review is in fact needed, or if the documentation lacks actionable information. This potentially decreases the time spent by nurse reviewers.

◈ A healthcare provider used SyReview, another SyTrue solution powered by the SyTrue NLP OS, for their quality capturing and reporting process. The particular process is related to an incentive program which directly ties quality to Medicare payment. Automating the quality-capturing process strengthens the feedback loop to providers that needed to show improvement. The organization also eliminated its manual quality-capture process, which was slow, expensive, and often inaccurate.

Tuesday 25 June 2019

Azure Cosmos DB: A competitive advantage for healthcare ISVs

CitiusTech is a specialist provider of healthcare technology services which helps its customers accelerate innovation in healthcare. CitiusTech used Azure Cosmos DB to simplify the secure, real-time collection and movement of healthcare data from a variety of sources. With the proliferation of patient information from established and newer sources, accompanied by stringent regulations, healthcare systems today are gradually shifting toward near real-time data integration. To achieve such performance, healthcare systems not only need low latency and high availability, but must also be highly responsive. Furthermore, they need to scale effectively to manage the inflow of high-speed, large volumes of healthcare data.

The situation


The rise of the Internet of Things (IoT) has enabled ordinary medical devices, wearables, and traditional hospital-deployed medical equipment to collect and share data. Within a wide area network (WAN) there are well-defined standards and protocols, but with the ever-increasing number of devices getting connected to the internet, there is a general lack of standards compliance and consistency of implementation. Moreover, collating data generated by IoT-enabled medical and mobile devices requires specialized applications that can cope with increasing volumes of data.

Document-oriented stores take a free-form approach that provides a great deal of flexibility, since different data can be stored as business requirements change. Relational databases aren’t efficient at performing CRUD operations on such data, but they are essential for handling transactional data where consistent data integrity is necessary. Different databases are designed to solve different problems; using a single database engine for multiple purposes usually leads to non-performant solutions, while managing multiple types of databases adds operational overhead.

Developing distributed, global-scale solutions is challenging because of the complexity of scaling databases across multiple regions without compromising performance, all while complying with data sovereignty needs. This often leads to inefficient management of multiple regional databases, underperformance, or both.

Solution


Azure Cosmos DB supports polyglot persistence, which allows it to use a mix of data store models without compromising performance. It is a multi-model, highly available, globally scalable database which supports proven low-latency reads and writes. Azure Cosmos DB has enterprise-grade security features and keeps all data encrypted at rest.

Azure Cosmos DB is suited for distributed global scale solutions as it not only provides a turnkey global distribution feature but can geo-fence a database to specific regions to manage data sovereignty compliance. Its multi-master feature allows writes to be made and synchronized across regions with guaranteed consistency. In addition, it supports multi-document transactions with ACID guarantees.

Use cases in healthcare


Azure Cosmos DB works very well for the following workloads.

1. Global scale secure solutions

Organizations like CitiusTech that offer a mission-critical, global-scale solution should consider Azure Cosmos DB a critical component of their solution stack. For example, an ISV developing a non-drug treatment delivered to patients through a medical device at a facility can develop web or mobile applications which store the treatment information and medical device metadata in Azure Cosmos DB. Treatment information can then be pushed to medical devices at facilities around the world, and the ISV can meet data sovereignty requirements by using the geo-fencing feature.

Azure Cosmos DB can also be used as a multi-tenant database with a carefully designed strategy. For instance, if a tenant has different scaling requirements, a separate Azure Cosmos container can be created for that tenant. In Azure Cosmos DB, containers serve as logical units of distribution and scalability. Multi-tenancy may also be possible at the partition level within an Azure Cosmos container, but it needs to be designed carefully to avoid creating hot spots and compromising overall performance.

2. Real-time location system, Internet of Things

Azure Cosmos DB is effective for building a solution for real-time tracking and management of medical devices and patients, which often requires rapid data velocity, scale, and resilience. Azure Cosmos DB supports low-latency writes and reads, and all data is replicated across multiple fault and update domains in each region for high availability and resilience. Among its five consistency levels, it supports session consistency, which is suitable for such scenarios because it guarantees strong consistency within a session.

Azure Cosmos DB also allows processing power to be scaled by programmatically adjusting request units (RUs) to match the workload, which is useful for burst scenarios, and it provides elastic scale up to petabytes of storage.

CitiusTech worked with a leading provider of medical grade vital signs and physiological monitoring solution to build a medical IoT based platform with the following requirements:

◈ Monitor vitals with medical quality

◈ Provide solutions for partners to integrate custom solutions

◈ Deliver personalized, actionable insights

◈ Messages and/or device-generated data don’t have a fixed structure and may change in the future

◈ Data producer(s) to simultaneously upload data for at least 100 subjects in less than two seconds per subject, receiving no more than 40*21=840 data points per subject, per request

◈ Data consumer(s) to read simultaneously, data of at least 100 subjects in less than two seconds, producing no more than 15,000 data points per data consumer

◈ Data for the most recent 14 days shall be ready to be queried, and data older than 14 days shall be moved to cold storage

CitiusTech used Azure Cosmos DB as hot storage for health data, since it enabled low-latency writes and reads of the health data generated continuously by the wearable sensor. Azure Cosmos DB provided schema-agnostic, flexible storage for documents of different shapes and sizes at scale, and enterprise-grade security backed by Azure compliance certifications.

The time-to-live (TTL) feature in Azure Cosmos DB automatically deleted expired items based on the TTL value, and the database was geo-distributed using the geo-fencing feature to address data sovereignty compliance requirements.
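
As an illustration, the 14-day hot window described above maps naturally to a container-level default TTL, which Cosmos DB expresses in seconds. A minimal sketch of such a container definition follows; the container id and partition key are hypothetical.

{
    "id": "vitalsTelemetry",
    "partitionKey": {
        "paths": [ "/subjectId" ],
        "kind": "Hash"
    },
    "defaultTtl": 1209600
}

Here 1209600 seconds is 14 × 24 × 3600, so items expire automatically after 14 days, while a separate process can move older data to cold storage before expiry.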

Solution architecture


Architecture of data flow in CitiusTech’s solution using Azure Cosmos DB

Key insights


Azure Cosmos DB unlocks the potential of polyglot persistence for healthcare systems to integrate healthcare data from multiple systems of record. It also addresses healthcare’s need for flexibility, adaptability, speed, security, and scale while maintaining low operational overhead and high performance.

About CitiusTech


CitiusTech is a specialist provider of healthcare technology services and solutions to healthcare technology companies, providers, payers and life sciences organizations. CitiusTech helps customers accelerate innovation in healthcare through specialized solutions, healthcare technology platforms, proficiencies and accelerators.

Sunday 23 June 2019

Optimize performance using Azure Database for PostgreSQL Recommendations

As a developer today, you have a lot of things to worry about. There are new technologies, frameworks, and deployment models that you need to keep up with so that you can build game-changing apps for customers. At the end of the day though, one thing that remains unchanged is the need to focus on developing features that add value while ensuring that your database remains functional and performant. In the past, a DBA would take care of your database performance, but today that responsibility can be shared across many roles.

You no longer have to be a database expert to optimize your database. Make your job easier and start taking advantage of the recommendations feature for Azure Database for PostgreSQL today.

Analyzing workloads + making recommendations = heart


By analyzing the workloads on your server, the recommendations feature gives you daily insights about the Azure Database for PostgreSQL resources that you can optimize for performance. These recommendations are tightly integrated with Azure Advisor to provide you with best practices directly within the Azure portal. Azure Advisor is a personalized cloud consultant that helps you follow guidelines to optimize your Azure deployments.

If you’re running your Postgres application on Azure today and want to see the recommendations we’ve already made to help you optimize your Azure Database for PostgreSQL resources, it’s easy! Just check out the performance recommendations tab in Azure Advisor.

Types of settings we make recommendations about


The recommendation feature continuously monitors and analyzes your database servers' telemetry to determine if your workload performance can be improved by configuring one or more of the resource settings. Currently, we monitor, analyze, and make recommendations for four different types of settings for your Azure Database for PostgreSQL deployment.

◈ CPU bottlenecks – Very high utilization of the CPU over an extended period can cause slow query performance for your workload. Increasing the CPU size is likely to help optimize the runtime of the database queries and improve overall performance. We will identify servers with high CPU utilization that are likely running CPU-constrained workloads and recommend scaling up your compute.

◈ Connection constraints – Each new connection to Postgres occupies some memory, and the database server's performance degrades if connections to your server are failing because it has run up against memory limits. We will identify servers with many connection failures and recommend increasing your server's connection limits by scaling up compute or using Memory Optimized SKUs, which have more memory per core.

◈ Memory constraints – A low cache hit ratio can result in slower query performance and increased IOPS. This could be due to a bad query plan or a memory-intensive workload. Fixing the query plan or increasing the memory of your server will help optimize the execution of the database workload. The recommendations feature identifies servers affected by this high buffer pool churn and recommends either fixing the query plan, moving to a higher SKU with more memory, or increasing storage size to get more IOPS.

◈ Read-intensive workloads – The new Azure Database for PostgreSQL recommendations feature leverages workload-based heuristics, such as the ratio of reads to writes on the server over the past seven days. A PostgreSQL resource with a very high read/write ratio can suffer CPU and/or memory contention, leading to slow query performance. Adding a replica helps scale out reads to the replica server, preventing CPU and/or memory constraints on the primary server. The feature will identify servers with such read-intensive workloads and recommend adding a read replica to offload some of the read workload.

Getting started with the Recommendation feature


The only prerequisite for getting started with the recommendations feature is an instance of Azure Database for PostgreSQL running with some load.

Access Azure Database for PostgreSQL Recommendation in the Azure portal

You can access the recommendations feature by default at no additional cost. The recommendations provided for your server are accessible either via the Azure Advisor page or the server’s overview page, as shown in Figures 1 and 2 below.


Fig 1: Azure Database for PostgreSQL Recommendation on the Azure Advisor’s blade on the portal


Fig 2: Azure Database for PostgreSQL Recommendation from the overview blade of your PostgreSQL resource

Go deeper with the recommendation on Azure portal


After opening a recommendation type, select a specific recommendation. Based on the recommendation type, you can take specific actions to address the performance constraints for the server. Selecting the recommendation type shows its details (Label 1 in Figure 3), which provide the information you need to optimize your Azure resource. Additionally, you can act on a recommendation by selecting the recommended actions (Label 2 in Figure 3).


Fig 3: Gather further recommendation details and take actions

Azure Advisor provides inline actions, a convenient way to select and implement recommendations without leaving the Azure Advisor portal.

If you don’t intend to act immediately, you can postpone a recommendation for a period. You can also dismiss a recommendation so that it no longer appears. If you do not want to receive recommendations for a specific subscription or resource group, you can configure Azure Advisor, via the portal or the CLI, to generate recommendations only for the subscriptions and resource groups that you specify.

After implementing any recommendation, be sure to evaluate performance to measure the impact of the changes you made.

Saturday 22 June 2019

New to Azure? Follow these easy steps to get started

Today, many organizations are leveraging digital transformation to deliver their applications and services in the cloud. At Microsoft Build 2019, we announced the general availability of Azure Quickstart Center and received positive feedback from customers. Azure Quickstart Center brings together the step-by-step guidance you need to easily create cloud workloads. The power to easily set up, configure, and manage cloud workloads while being guided by best practices is now built right into the Azure portal.

How do you access Azure Quickstart Center?


There are two ways to access Azure Quickstart Center in the Azure portal. Go to the global search and type in Quickstart Center or select All services on the left nav and type Quickstart Center. Select the star button to save it under your favorites.


Get started


Azure Quickstart Center is designed with you in mind. We created setup guides, a start a project experience, and curated online training for self-paced learning so that you can manage cloud deployment according to your business needs.


Setup guides


To help you prepare your organization for moving to the cloud, our Azure setup and Azure migration guides in the Quickstart Center give you a comprehensive view of best practices for your cloud ecosystem. The setup guides were created by our FastTrack for Azure team, who have supported customers in cloud deployment and turned those valuable insights into easy reference guides for you.

The Azure setup guide walks you through how to:

◈ Organize resources: Set up a management hierarchy to consistently apply access control, policy, and compliance to groups of resources and use tagging to track related resources.

◈ Manage access: Use role-based access control to make sure that users have only the permissions they really need.

◈ Manage costs: Identify your subscription type, understand how billing works, and learn how you can control costs.

◈ Governance, security, and compliance: Enforce and automate policies and security settings that help you follow applicable legal requirements.

◈ Monitoring and reporting: Get visibility across resources to help find and fix problems, optimize performance, and get insight into customer behavior.

◈ Stay current with Azure: Track product updates so you can take a proactive approach to change management.

The Azure migration guide is focused on re-hosting, also known as lift and shift, and gives you a detailed view of how to migrate applications and resources from your on-premises environment to Azure. Our migration guide covers:

◈ Prerequisites: Work with your internal stakeholders to understand the business reasons for migration, determine which assets like infrastructure, apps, and data are being migrated and set the migration timeline.

◈ Assess the digital estate: Assess the workload and each related asset such as infrastructure, apps, and data to ensure the assets are compatible with cloud platforms.

◈ Migrate assets: Identify the appropriate tools to reach a "done state" including native tools, third-party tools, and project management tools.

◈ Manage costs: Cost discussion is a critical step in migration. Use the guidance in this step to drive the discussion.

◈ Optimize and transform: After migration, review the solution for possible areas of optimization. This could include reviewing the design of the solution, right-sizing the services, and analyzing costs.

◈ Secure and manage: Enforce and set up policies to manage the environment to ensure operational efficiency and legal compliance.

◈ Assistance: Learn how to get the right support at the right time to continue your cloud journey in Azure.

Start a project


Compare frequently used Azure services available for different solution types, and discover the best fit for your cloud project. We’ll help you quickly launch and create workloads in the cloud. Pick one of the five common scenarios shown below to compare the deployment options and evaluate high-level architecture overviews, prerequisites, and associated costs.


After you select a scenario, choose an option, and understand the requirements, select Create.


We’ll take you to the create resource page where you’ll follow the steps to create a resource.


Take an online course


Our recommended online learning options let you take a hands-on approach to building Azure skills and knowledge.


Thursday 20 June 2019

Announcing the preview of Microsoft Azure Bastion

For many customers around the world, securely connecting from the outside to workloads and virtual machines on private networks can be challenging. Exposing virtual machines to the public Internet to enable connectivity through Remote Desktop Protocol (RDP) and Secure Shell (SSH) expands your perimeter, rendering your critical networks and attached virtual machines more open and harder to manage.

RDP and SSH are both fundamental approaches through which customers connect to their Azure workloads. To connect to their virtual machines, most customers either expose them to the public Internet or deploy a bastion host, such as a jump server or jump box.

Azure Bastion is a new managed PaaS service that provides seamless RDP and SSH connectivity to your virtual machines over Secure Sockets Layer (SSL), without exposing any public IPs on your virtual machines. Azure Bastion provisions directly in your Azure Virtual Network, providing a bastion host, or jump server, as a service, with integrated connectivity to all virtual machines in your virtual network using RDP/SSH directly from and through your browser and the Azure portal experience. This can be done with just two clicks and without the need to worry about managing network security policies.

Leading up to the preview, we worked with hundreds of customers across a wide range of industries. Interest in joining the preview has been immense, and, similar to other unique Azure services such as Azure Firewall, the feedback has been very consistent: we need an easy and integrated way to deploy, run, and scale jump servers or bastion hosts within our Azure infrastructure.

For example, the cloud foundation team manager for a German premium car manufacturer told us they had concerns about exposing cloud virtual machines with RDP/SSH ports directly to the Internet due to the potential for security and connectivity issues. During the preview of Azure Bastion, they were able to use RDP/SSH over SSL to their virtual machines, which allowed them to traverse corporate firewalls effortlessly and, at the same time, restrict their Azure virtual machines to private IPs only.

Deploying a stand-alone dedicated jump server often entails manually deploying and managing specialized IaaS-based solutions and workloads, such as a Remote Desktop Services (RDS) gateway; configuring and managing authentication, security policies, and access control lists (ACLs); and managing the availability, redundancy, and scalability of the solution. Additionally, monitoring and auditing, along with the ongoing requirement to remain compliant with corporate policies, can quickly make the setup and management of jump servers an involved, costly, and less desirable task.

Azure Bastion is deployed in your virtual network providing RDP/SSH access for all authorized virtual machines connected to the virtual network.


Key features available with the preview include:

◈ RDP and SSH from the Azure portal: Initiate RDP and SSH sessions directly in the Azure portal with a single-click seamless experience.

◈ Remote session over SSL and firewall traversal for RDP/SSH: HTML5-based web clients are automatically streamed to your local device, providing the RDP/SSH session over SSL on port 443. This allows easy and secure traversal of corporate firewalls.

◈ No public IP required on Azure Virtual Machines: Azure Bastion opens the RDP/SSH connection to your Azure virtual machine using a private IP, limiting exposure of your infrastructure to the public Internet.

◈ Simplified secure rules management: Simple one-time configuration of Network Security Groups (NSGs) to allow RDP/SSH from only Azure Bastion (see the sketch after this list).

◈ Increased protection against port scanning: The limited exposure of virtual machines to the public Internet will help protect against threats, such as external port scanning.

◈ Hardening in one place to protect against zero-day exploits: Azure Bastion is a managed service maintained by Microsoft. It’s continuously hardened by automatically patching and keeping up to date against known vulnerabilities.
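
As a rough illustration of that one-time NSG configuration, an inbound rule scoped to the Bastion subnet might look like the following ARM snippet. The rule name and the 10.0.1.0/27 address prefix are hypothetical; substitute the actual range of your AzureBastionSubnet.

{
    "name": "AllowRdpSshFromBastionSubnet",
    "properties": {
        "priority": 100,
        "direction": "Inbound",
        "access": "Allow",
        "protocol": "Tcp",
        "sourceAddressPrefix": "10.0.1.0/27",
        "sourcePortRange": "*",
        "destinationAddressPrefix": "*",
        "destinationPortRanges": [ "22", "3389" ]
    }
}

With a rule like this in place, the NSGs on your workload subnets can deny RDP/SSH from everywhere else, including the public Internet.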

Azure Bastion–The road ahead


As with all other Azure networking services, we look forward to building out Azure Bastion and adding more great capabilities as we march toward general availability.

The future brings Azure Active Directory integration, adding seamless single sign-on using Azure Active Directory identities and Azure Multi-Factor Authentication, effectively extending two-factor authentication to your RDP/SSH connections. We are also looking to add support for native RDP/SSH clients so that you can use your favorite client applications to securely connect to your Azure virtual machines using Azure Bastion, while at the same time enhancing the auditing experience for RDP sessions with full session video recording.

Tuesday 18 June 2019

Azure Shared Image Gallery now generally available

At Microsoft Build 2019, we announced the general availability of Azure Shared Image Gallery, making it easier to manage, share, and globally distribute custom virtual machine (VM) images in Azure.

Shared Image Gallery provides a simple way to share your applications with others in your organization, within or across Azure Active Directory (AD) tenants and regions. This enables you to expedite regional expansion or DevOps processes and simplify your cross-region HA/DR setup.

Shared Image Gallery also supports larger deployments. You can now deploy up to 1,000 virtual machine instances in a scale set, up from 600 with managed images.

Here is what one of our customers had to say about the feature:

“Shared Image Gallery enables us to build all our VM images from a single Azure DevOps pipeline and to deploy IaaS VMs from these images in any subscription in any tenant in any region, without the added complexity of managing and distributing copies of managed images or VHDs across multiple subscriptions or regions.”

Regional availability


Shared Image Gallery now supports all Azure public cloud regions as target regions, and all generally available Azure public cloud regions, with the exception of the South Africa regions, as source regions. Check the list of source and target regions.

In the coming months, this feature will also be available in sovereign clouds.

Quota


The default quotas for Shared Image Gallery resources are:

◈ 100 shared image galleries per subscription per region
◈ 1,000 image definitions per subscription per region
◈ 10,000 image versions per subscription per region

Users can request a higher quota based on their requirements.

Pricing


There is no extra charge for using the Shared Image Gallery service. You will only pay for the following:

1. Storage charges for image versions and replicas in each region, source and target
2. Network egress charges for replication across regions

Getting started


◈ CLI
◈ PowerShell
◈ Azure portal
◈ API
◈ Quickstart templates
◈ .NET
◈ Java

Let’s take a quick look at what you can do with Shared Image Gallery.

Manage your images better


We introduced three new Azure Resource Manager resources as part of this feature (gallery, image definition, and image version) which help you organize images in logical groups. You can also publish multiple versions of your images as and when you update or patch the applications.
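
To show how the three resources nest, here is a rough ARM template sketch. The names, regions, replica counts, and managed image reference are illustrative, and exact property names should be checked against the current template reference.

{
    "type": "Microsoft.Compute/galleries",
    "apiVersion": "2019-03-01",
    "name": "myGallery",
    "location": "westus2"
},
{
    "type": "Microsoft.Compute/galleries/images",
    "apiVersion": "2019-03-01",
    "name": "myGallery/myImageDefinition",
    "location": "westus2",
    "properties": {
        "osType": "Linux",
        "osState": "Generalized",
        "identifier": {
            "publisher": "contoso",
            "offer": "myApp",
            "sku": "1.0"
        }
    }
},
{
    "type": "Microsoft.Compute/galleries/images/versions",
    "apiVersion": "2019-03-01",
    "name": "myGallery/myImageDefinition/1.0.0",
    "location": "westus2",
    "properties": {
        "publishingProfile": {
            "source": { "managedImage": { "id": "<managed image resource ID>" } },
            "targetRegions": [
                { "name": "westus2", "regionalReplicaCount": 2 },
                { "name": "eastus", "regionalReplicaCount": 1 }
            ]
        }
    }
}

Each image version can be updated independently, so patching an application means publishing a new version (for example, 1.0.1) under the same image definition.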


Share images across subscriptions and Azure Active Directory tenants


One of the key capabilities that Shared Image Gallery provides is a way to share your images across subscriptions. Since all three newly introduced constructs are Azure Resource Manager resources, you can use Azure role-based access control (RBAC) to share your galleries or image definitions with other users who can then deploy VMs in their subscriptions, even across Azure Active Directory tenants.

A few common scenarios where sharing images across tenants becomes useful are:

1. A company acquires another and suddenly the Azure infrastructure is spread across Azure AD tenants.
2. A company with multiple subsidiaries that use Azure is likely to have multiple Azure AD tenants.


Distribute your images globally


We understand that business happens at a global scale and you don’t want your organization to be limited by the platform. Shared Image Gallery provides a way to globally distribute your images based on your organizational needs. You only need to specify the target regions and Shared Image Gallery will replicate your image versions to the regions specified.


Scale your deployments


With Shared Image Gallery, you can now deploy up to 1,000 VM instances in a VM scale set, an increase from 600 with managed images. We also introduced the concept of image replicas for better deployment performance, reliability, and consistency. You can set a different replica count in each target region based on your regional scale needs. Since each replica is a deep copy of your image, you can scale your deployments linearly with each extra replica, compared with a managed image.


Make your images highly available


With the general availability of Shared Image Gallery, you can choose to store your images in zone-redundant storage (ZRS) accounts in regions with Availability Zones. You can also specify the storage account type for each of the target regions.

Sunday 16 June 2019

Transforming Azure Monitor Logs for DevOps, granular access control, and improved Azure integration

Logs are critical for many scenarios in the modern digital world. They are used in tandem with metrics for observability, monitoring, troubleshooting, usage and service level analytics, auditing, security, and much more. Any plan to build an application or IT environment should include a plan for logs.

Logs architecture


There are two main paradigms for logs:

◈ Centralized: All logs are kept in a central repository. In this scenario, it is easy to search across resources and cross-correlate logs, but since these repositories get big and include logs from all kinds of sources, it's hard to maintain access control on them. Some organizations completely avoid centralized logging for that reason, while other organizations that use centralized logging restrict access to very few admins, which prevents most of their users from getting value out of the logs.

◈ Siloed: Logs are either stored within a resource or stored centrally but segregated per resource. In these instances, the repository can be kept secure, and access control is coherent with the resource access, but it's hard or impossible to cross-correlate logs. Users who need a broad view of many resources cannot generate insights. In modern applications, problems and insights span across resources, making the siloed paradigm highly limited in its value.

To accommodate the conflicting needs of security and log correlation, many organizations have implemented both paradigms in parallel, resulting in a complex, expensive, and hard-to-maintain environment with gaps in log coverage. This leads to lower usage of log data in the organization and results in decision-making that is not based on data.

New access control options for Azure Monitor Logs


We have recently announced a new set of Azure Monitor Logs capabilities that allow customers to benefit from the advantages of both paradigms. Customers can now have their logs centralized while seamlessly integrated into Azure and its role-based access control (RBAC) mechanisms. We call this resource-centric logging. It is added to the existing Azure Monitor Logs experience automatically while maintaining the existing experiences and APIs. Delivering a new logs model is a journey, but you can start using this new experience today. We plan to enhance and complete the alignment of all of Azure Monitor's components over the next few months.

The basic idea behind resource-centric logs is that every log record emitted by an Azure resource is automatically associated with that resource. Logs are sent to a central workspace container that respects scoping and RBAC based on the resources. Users will have two options for accessing the data:

1. Workspace-centric: Query all data in a specific workspace (the Azure Monitor Logs container). Workspace access permissions apply. This mode will be used by centralized teams that need access to logs regardless of resource permissions. It can also be used for components that don't support resource-centric mode or for off-Azure resources, though a new option for them will be available soon.

2. Resource-centric: Query all logs related to a resource. Resource access permissions apply. Logs are served from all workspaces that contain data for that resource, without the need to specify them. If workspace access control allows it, there is no need to grant users access to the workspace itself. This mode works for a specific resource, all resources in a specific resource group, or all resources in a specific subscription. Most application teams and DevOps will use this mode to consume their logs.

The Azure Monitor experience automatically decides on the right mode depending on the scope the user chooses. If the user selects a workspace, queries are sent in workspace-centric mode. If the user selects a resource, resource group, or subscription, resource-centric mode is used. The scope is always presented in the top left section of the Log Analytics screen.


You can also query all logs of resources in a specific resource group from the resource group screen.


Soon, Azure Monitor will also be able to scope queries for an entire subscription.

To make logs more prevalent and easier to use, they are now integrated into many Azure resource experiences. When log search is opened from a resource menu, the search is automatically scoped to that resource and resource-centric queries are used. This means that if users have access to a resource, they'll be able to access their logs. Workspace owners can block or enable such access using the workspace access control mode.

Another capability we're adding is the ability to set permissions on the individual tables that store the logs. By default, users granted access to workspaces or resources can read all of their log types. The new table-level RBAC allows admins to use Azure custom roles to define limited access for users, so they're only able to access some of the tables, or to block users from accessing specific tables. You can use this, for example, if you want the networking team to be able to access only the networking-related table in a workspace or a subscription.
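
As a sketch of how such a table-level permission can be expressed, a custom role might grant query access to a single table. The role name, table name, and subscription ID below are hypothetical, and the action format should be verified against the current Azure Monitor documentation.

{
    "Name": "Networking Table Reader",
    "IsCustom": true,
    "Description": "Can run queries against only the networking-related log table.",
    "Actions": [
        "Microsoft.OperationalInsights/workspaces/read",
        "Microsoft.OperationalInsights/workspaces/query/read",
        "Microsoft.OperationalInsights/workspaces/query/AzureNetworkAnalytics_CL/read"
    ],
    "NotActions": [],
    "AssignableScopes": [ "/subscriptions/<subscription ID>" ]
}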

As a result of these changes, organizations will have simpler models with fewer workspaces and more secure access control. Workspaces now assume the role of a manageable container, allowing administrators to better govern their environments. Users are now empowered to view logs in their natural Azure context, helping them leverage the power of logs in their day-to-day work.

The improved Azure Monitor Logs access control lets you enjoy both worlds at once without compromising usability or security. Central teams can have full access to all logs, while DevOps teams can access logs only for their resources. This comes on top of the powerful log analytics, integration, and scalability capabilities that are used by tens of thousands of customers.

Saturday 15 June 2019

How one Azure IoT partner is building connected experiences

We recently spent time with Mesh Systems, a Microsoft Gold Cloud platform partner based in Carmel, Indiana, to understand what a day in the life of an Azure IoT partner looks like. They shared some of their recent IoT customer engagements and talked about the types of everyday challenges Azure IoT partners face, like building an IoT solution with legacy endpoints, tracking assets through a supply chain, and integrating an IoT solution with a business application. Finally, we discussed the best practices that have driven the success of their IoT practice.

Connected coffee: building an IoT solution with legacy endpoints


Mesh’s experience in the beverage category caught the interest of a large European company that provides coffee beans and routine maintenance to thousands of coffee machines. The company wanted to innovate by providing their bean supplier with robust consumption data using an IoT solution.

But there was a catch. The company managed machines made by many different manufacturers across many different classes of machines. It would be cost prohibitive to build a custom integration for each machine type. There was no way to connect them to the cloud without expensive rework.


“This is a typical brownfield use case,” said Doyle Baxter, Manager of Strategic Alliances, Mesh Systems. "The client understands their business case but the cost of connecting legacy endpoints is sometimes higher than the value of the data. It was a tough nut to crack."

For this use case, Mesh came up with an innovative proposal. Their concept was to identify unique electrical current signatures for different coffee machine processes: the signature of a double shot of espresso would be different from that of a single shot. Using this current analysis, Mesh could determine the amount of coffee being dispensed.

“There’s work to match up coffee machine actions with current consumption, but the enablement hardware is really inexpensive compared to other connected coffee applications,” he said. “Additionally, the same enablement hardware has potential application across other beverage equipment—not just coffee machines.”

Connected assets: improving supply chain efficiency


A manufacturer of glass products approached Mesh to investigate an IoT solution for tracking shipping racks. The customer ships its fragile products on expensive, custom-made racks. Unfortunately, the racks often go missing: all told, the customer writes off more than half a million dollars of lost racks each year.

“We always look for the most cost efficient and easily deployed endpoints, especially in the case of asset tracking,” said Baxter. “In this case, our team specified a small, battery-operated Bluetooth beacon for each rack.” The beacons communicate to low-cost cellular gateways each covering 125,000 to 200,000 square feet.

“Our team designed and manufactured both the beacons and gateways and wrote the embedded software. We built the cloud solution with Azure IoT Central,” Baxter explained. The Mesh team leveraged the continuous data export functionality of IoT Central, configuring the architecture to export data continuously to Azure Blob Storage, Azure Functions, Data Factory, and Azure SQL.
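To give a feel for the wiring (the container name and connection setting below are hypothetical, not Mesh's actual configuration), an Azure Function that processes each exported blob needs little more than a blob-trigger binding in its function.json:

{
    "bindings": [
        {
            "name": "exportBlob",
            "type": "blobTrigger",
            "direction": "in",
            "path": "iotc-export/{name}",
            "connection": "ExportStorageConnection"
        }
    ]
}

Each time IoT Central's continuous export writes a new blob to the iotc-export container, the function runs with the blob's contents, ready to shape the telemetry for Data Factory and Azure SQL downstream.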

The customer viewed rack movement in a detailed report within a Microsoft Power BI dashboard. With this information, they identified the end customers responsible for the shrinkage and coached them on best practices for managing racks, reducing lost-rack expenses.

Connected construction: integration into business applications


Mesh worked with a construction company that needed to track which employees and contractors were on its construction sites on any given day. The data was critical to meeting compliance requirements, which meant the company needed to track the whereabouts of thousands of people over the course of a project. The customer was looking to build a single, unified solution for both access control and real-time location.

Mesh proposed a badge access system in which employee badges had Bluetooth beacons that communicated to local gateways and then into Azure over a cellular backhaul. Mesh built its solution with Azure IoT Central, leveraging the continuous data export function.

“A challenge in this project was designing the interface to the project management system already in use by the client,” said Baxter. “Sometimes a solution can provide value with its own user interface, but more often than not, the IoT data needs to be integrated into existing business systems.” Mesh worked with its customer to define the integration points and test out communication.

The result was the ability to see, natively within the company's existing project management system, which employees and contractors were present or absent. They used a Power BI dashboard to analyze detailed historical trends.

Partner best practices


Mesh has had a strong pipeline of IoT projects and success moving customers to production. They pointed to their company’s philosophy on proof-of-concept engagements and best practices. “When we engage with a client on a project, we start with the end in mind,” said Baxter. “We don’t look at proof of concepts as a ‘throw away,’ but rather as a milestone on the journey to scale implementation.”

“Partnership is the name of the IoT game. The IoT stack is simply too deep for one company to provide a turnkey solution without good ecosystem partners. We realize that we are only as successful as our partnerships,” he said. The company has developed strong partnerships with cloud infrastructure, connectivity, and silicon providers.

Mesh brings deep technical skills and a wealth of experience. “We understand the reality of implementing IoT on a large scale – from thousands of sensors and devices being shipped, unboxed, installed and activated to architecting, piloting, and deploying IoT cloud solutions with the latest Azure IoT services,” said Baxter.

Friday 14 June 2019

Three ways to get notified about Azure service issues

Preparing for the unexpected is part of every IT professional’s and developer’s job. Although rare, service issues like outages and planned maintenance do occur. There are many ways to stay informed, but we’ve identified three effective approaches that have helped our customers respond quickly to service issues and mitigate downtime. All three take advantage of Azure Service Health, a free Azure service that lets you configure alerts to notify you automatically about service issues that might have an impact on your availability.

1. Start simple with an email alert to catch all issues


If you’re new to setting up Service Health alerts, you’ll notice that there are many choices to make. Who should I alert? About which services and regions? For which types of health events? Outages? Planned maintenance? Health advisories? And what type of notification should I use? Email, SMS, push notification, webhook, or something else?

The best way to get started with Service Health alerts is to start simple. Set up an alert that will email your key operations professionals about any service issue that could affect any service or region. Since Service Health is personalized, the alert will only fire if there’s an impact to a service or region you use, so you don’t have to worry about unnecessary notifications.
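Under the hood, a Service Health alert is an activity log alert rule scoped to a subscription. A minimal sketch of one in ARM template form follows (the rule name, subscription ID, and action group path are placeholders); with only the category condition, it catches outages, planned maintenance, and health advisories alike:

{
    "type": "Microsoft.Insights/activityLogAlerts",
    "apiVersion": "2017-04-01",
    "name": "all-service-health-events",
    "location": "Global",
    "properties": {
        "enabled": true,
        "scopes": [
            "/subscriptions/<subscription-id>"
        ],
        "condition": {
            "allOf": [
                {
                    "field": "category",
                    "equals": "ServiceHealth"
                }
            ]
        },
        "actions": {
            "actionGroups": [
                {
                    "actionGroupId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/ops-email-group"
                }
            ]
        }
    }
}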

Once you’ve set up your email alert, see how it goes. Maybe it’s all you need. Simple is good. But if you find that you’re frequently routing information from the notifications you receive to other teams, consider setting up additional alerts for those teams. You can also explore more sophisticated methods of alerting like the following scenarios.

2. Set up a mobile push alert for urgent issues


Not all service issues are created equal. If there’s a potential impact to a critical production workload, you’ll want to find out and respond as quickly as possible. In those situations, email might be insufficient. Instead, we recommend configuring Service Health alerts to send mobile push notifications through the Azure mobile app.

When you’re setting up a new alert, you’ll see an option in the UI for Azure app push notifications and SMS. We recommend push notifications over SMS because push notifications can contain more information and will provide you with more substantial updates when there’s a service issue.
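In template form, a push notification is just another receiver type on an action group. A hedged sketch follows (the group name and email address are placeholders); the azureAppPushReceivers entry targets the Azure mobile app account signed in with that email:

{
    "type": "Microsoft.Insights/actionGroups",
    "apiVersion": "2019-06-01",
    "name": "oncall-push-group",
    "location": "Global",
    "properties": {
        "groupShortName": "oncall",
        "enabled": true,
        "azureAppPushReceivers": [
            {
                "name": "OnCallEngineer",
                "emailAddress": "oncall@contoso.com"
            }
        ]
    }
}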

With a push notification, you’ll learn about critical service issues right on your mobile device and be able to act immediately to start mitigating any impact to your workloads.

3. Connect alerts with your IT service management tools


Finally, many customers already have ticketing systems and IT service management (ITSM) tools in place. If you use one of these tools to manage your teams and work, we recommend setting up Service Health alerts using the webhook or ITSM integration, which allows you to automatically create and assign tickets for Azure service issues.
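For the webhook route, the action group simply posts the alert payload to your endpoint. A minimal sketch with a placeholder URI (your ticketing system's inbound API); useCommonAlertSchema standardizes the payload shape, which makes parsing on the ITSM side easier:

{
    "type": "Microsoft.Insights/actionGroups",
    "apiVersion": "2019-06-01",
    "name": "itsm-webhook-group",
    "location": "Global",
    "properties": {
        "groupShortName": "itsm",
        "enabled": true,
        "webhookReceivers": [
            {
                "name": "TicketingSystem",
                "serviceUri": "https://itsm.contoso.com/api/azure-alerts",
                "useCommonAlertSchema": true
            }
        ]
    }
}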

Two key considerations when setting up a Service Health alert are the appropriate team to notify and the urgency of the message. You may wish to route alerts for certain services to specific teams, for example sending Azure SQL Database issues to your database team. You can also route alerts by region, for example sending issues in West Europe to your Europe lead. You may even wish to distinguish by subscription, for example dev/test versus production.

Another important consideration is the urgency of the message. You'll have more time to respond to planned maintenance and health advisories, which are communicated weeks or months in advance, than to outages, which by their nature can only be communicated at the time of the event. Depending on the urgency, you may wish to flag the communication differently in your system so that the right on-call teams are alerted.
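These routing and urgency decisions map directly onto the alert rule's condition block. As a sketch (the service and region values are examples; the field names follow the ServiceHealth activity log schema, and the containsAny operator requires a recent API version of the alert rules), a rule that fires only for outage-type incidents affecting SQL Database in West Europe might use:

"condition": {
    "allOf": [
        {
            "field": "category",
            "equals": "ServiceHealth"
        },
        {
            "field": "properties.incidentType",
            "equals": "Incident"
        },
        {
            "field": "properties.impactedServices[*].ServiceName",
            "containsAny": [ "SQL Database" ]
        },
        {
            "field": "properties.impactedServices[*].ImpactedRegions[*].RegionName",
            "containsAny": [ "West Europe" ]
        }
    ]
}

Routing lower-urgency events is the same idea: swap the incidentType for Maintenance or Informational and point the rule at a different action group.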