
Saturday, 22 June 2019

New to Azure? Follow these easy steps to get started

Today, many organizations are leveraging digital transformation to deliver their applications and services in the cloud. At Microsoft Build 2019, we announced the general availability of Azure Quickstart Center and received positive feedback from customers. Azure Quickstart Center brings together the step-by-step guidance you need to easily create cloud workloads. The power to easily set up, configure, and manage cloud workloads while being guided by best practices is now built right into the Azure portal.

How do you access Azure Quickstart Center?


There are two ways to access Azure Quickstart Center in the Azure portal: type Quickstart Center in the global search, or select All services in the left nav and type Quickstart Center. Select the star button to save it under your favorites.


Get started


Azure Quickstart Center is designed with you in mind. We created setup guides, a Start a project experience, and curated online training for self-paced learning so that you can manage cloud deployments according to your business needs.


Setup guides


To help you prepare your organization for moving to the cloud, the Azure setup and Azure migration guides in Quickstart Center give you a comprehensive view of best practices for your cloud ecosystem. The setup guides were created by our FastTrack for Azure team, who have supported customers through cloud deployments and turned those insights into easy reference guides for you.

The Azure setup guide walks you through how to:

◈ Organize resources: Set up a management hierarchy to consistently apply access control, policy, and compliance to groups of resources and use tagging to track related resources.

◈ Manage access: Use role-based access control to make sure that users have only the permissions they really need.

◈ Manage costs: Identify your subscription type, understand how billing works, and how you can control costs.

◈ Governance, security, and compliance: Enforce and automate policies and security settings that help you follow applicable legal requirements.

◈ Monitoring and reporting: Get visibility across resources to help find and fix problems, optimize performance, or get insight into customer behavior.

◈ Stay current with Azure: Track product updates so you can take a proactive approach to change management.

The Azure migration guide focuses on rehosting, also known as lift and shift, and gives you a detailed view of how to migrate applications and resources from your on-premises environment to Azure. Our migration guide covers:

◈ Prerequisites: Work with your internal stakeholders to understand the business reasons for migration, determine which assets like infrastructure, apps, and data are being migrated and set the migration timeline.

◈ Assess the digital estate: Assess the workload and each related asset such as infrastructure, apps, and data to ensure the assets are compatible with cloud platforms.

◈ Migrate assets: Identify the appropriate tools to reach a "done state" including native tools, third-party tools, and project management tools.

◈ Manage costs: Cost discussion is a critical step in migration. Use the guidance in this step to drive the discussion.

◈ Optimize and transform: After migration, review the solution for possible areas of optimization. This could include reviewing the design of the solution, right-sizing the services, and analyzing costs.

◈ Secure and manage: Enforce and set up policies to manage the environment to ensure operations efficiency and legal compliance.

◈ Assistance: Learn how to get the right support at the right time to continue your cloud journey in Azure.

Start a project


Compare frequently used Azure services available for different solution types, and discover the best fit for your cloud project. We’ll help you quickly launch and create workloads in the cloud. Pick one of the five common scenarios shown below to compare the deployment options and evaluate high-level architecture overviews, prerequisites, and associated costs.


After you select a scenario, choose an option, and understand the requirements, select Create.


We’ll take you to the create resource page where you’ll follow the steps to create a resource.


Take an online course


Our recommended online learning options let you take a hands-on approach to building Azure skills and knowledge.


Saturday, 13 October 2018

Data models within Azure Analysis Services and Power BI

In a world where self-service and speed of delivery are key priorities in any reporting BI solution, the importance of creating a comprehensive and performant data semantic model layer is sometimes overlooked.

I have seen quite a few cases where the relational data store, such as Azure SQL Database or Azure SQL Data Warehouse, is well structured and the reporting tier, whether SQL Server Reporting Services or Power BI, is well presented, but the performance is still not as expected.

Before we drill down to the data semantic model, I always advise that understanding your data, and how you want to present and report on it, is key. Taking the end consumer through a data journey is the difference between a good report and a bad one. Report design should take into account who is consuming the report and what they want to achieve from it. For example, if only a small number of consumers need to view a lower level of hierarchical data with additional measures or KPIs, it may not be suitable to visualize this on the first page, because the majority of consumers may only want a more aggregated view of the data. Otherwise, the first page of the report takes longer to return the data, giving the majority of consumers the perception of a slow-running report. To provide a better experience, we can take consumers through a data journey so that the detail level does not impact the higher-level data points.

Setting the correct service level agreements and performance expectations is key. An SLA of less than two seconds may be achievable if the data is two million rows, but would it still be met at two billion rows? Many factors determine what is achievable, and data set size is only one of them. Network, compute, architecture patterns, data hierarchies, measures, KPIs, consumer device, consumer location, and real-time versus batch reporting can all affect the perception of performance for the end consumer.

However, creating and/or optimizing your data semantic model layer will have a drastic impact on the overall performance.

The basics


The question I sometimes get is, with more computing power and the use of Azure, why can I not just report directly from my data lake or operational SQL Server? The answer is that reporting from data is very different from writing and reading data in an online transaction processing (OLTP) approach.

Dimensional modeling, developed by Kimball, has been a proven data warehouse methodology, widely used for more than 20 years. The idea behind dimensional modeling is to enable interactive reporting where consumers can retrieve calculations, aggregate data, and show business KPIs.

Because dimensional models are built within a star or snowflake schema, you can retrieve data in a more performant way: the schema is designed for retrieval and reads, rather than the reads and writes commonly associated with an OLTP database design.

In a star or snowflake schema, you have a fact table and many dimension tables. The fact table contains the foreign keys relating them to the dimension tables along with metrics and KPIs. The dimension tables contain the attributes associated with that particular dimension table. An example of this could be a date dimension table that contains month, day, and year as the attributes.

Creating a dimensional model in a data warehouse such as SQL Server or Azure SQL Data Warehouse will assist in what you are trying to achieve from a reporting solution. Again, traditional warehouses have been deployed widely over the last 20 years. Even with this approach, creating a data semantic model on top of the warehouse can improve performance, improve concurrency, and add an additional layer of security between the end consumers and the source warehouse data. SQL Server Analysis Services, and now Azure Analysis Services, have been designed for this purpose. We essentially serve the required data from the warehouse into a model that can then be consumed, as shown below.


Traditionally, the architecture was exactly that: ingest data from the warehouse into a cube on an instance of Analysis Services, process the cube, and then serve it to SQL Server Reporting Services, Excel, and other tools. The landscape of data ingestion has changed over the last five to ten years with the adoption of big data, and end consumers now want to consume a wider range of data sources, whether SQL, Spark, CSV, JSON, or others.

Modern BI reporting needs to ingest, mash up, clean, and then present this data. The data semantic layer therefore needs to be agile while still delivering on performance. However, the principles of designing a model that aligns to dimensional modeling remain key.

In a world of data services in Azure, Analysis Services and Power BI are good candidates for building data semantic models on top of a dimensionally modeled data warehouse. The fundamental principles of these services have formed the foundation of Microsoft BI solutions historically, even though they have evolved and now use a modern in-memory architecture that allows agility for self-service.

Power BI and Analysis Services Tabular Model


The SQL Server Analysis Services Tabular model, Azure Analysis Services, and Power BI share the same underlying fundamentals and principles: they are all built on the tabular model, first released with SQL Server 2012.

SQL Server Analysis Services Multidimensional is a different architecture, and the server mode is chosen in the server configuration section at install time.

The Analysis Services tabular model (also used in Power BI) is built on a columnar in-memory architecture, which forms the VertiPaq engine.


At processing time, the rows of data are converted into columns, encoded, and compressed, allowing more data to be stored. Because the data is held in memory, analytical reporting delivers high performance compared with retrieving data from disk-based systems. The purpose of in-memory tabular models is to minimize read times for reporting. Again, understanding consumer reporting behavior is key: tabular models are designed to retrieve a small number of columns. The trade-off is that when you retrieve a high number of columns, the engine needs to reassemble the rows and decompress the data, which impacts compute.

Best practices for data modeling


The best practices below are some of the key observations I have seen over the last several years, particularly when creating data semantic models in SQL Server Analysis Services, Azure Analysis Services, or Power BI.

◈ Create a dimensional model (star and/or snowflake schema), even if you are ingesting data from different sources.

◈ Ensure that you create integer surrogate keys on dimension tables. Natural keys are not best practice and can cause issues if you need to change them at a later date. They are generally strings, so they are larger and can perform poorly when joining to other tables. The key point for tabular model performance is that natural keys are not optimal for compression. With natural keys:

    ◈ The dimension key is hash (dictionary) encoded.

    ◈ The corresponding foreign keys on the fact table are also hash (dictionary) encoded.

    ◈ The relationships are then built.

◈ This impacts performance and reduces the memory available for data, because a proportion of it is needed for the dictionary encoding.

◈ Only bring the integer surrogate keys into the model (these are value encoded) and exclude the natural keys from the dimension tables.

◈ On the fact table, only bring in the foreign keys that reference the dimension tables' surrogate keys.

◈ Only bring columns into your model that are required for analysis; exclude columns that are not needed, or filter the data so that only the rows being analyzed are loaded.

◈ Reduce cardinality where you can, so that values are less unique and compress far better.

◈ Add a date dimension into your model (see the DAX sketch after this list).

◈ Ideally, run calculations at the compute layer where possible.
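As a minimal illustration of the date dimension point above, the following DAX sketch creates a calculated date table. The date range and column names are purely illustrative; adjust them to the span of your fact data.

Date =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2015, 1, 1 ), DATE ( 2020, 12, 31 ) ),  -- one row per day in the range
    "Year", YEAR ( [Date] ),                                  -- calendar year attribute
    "Month Number", MONTH ( [Date] ),                         -- useful for sorting the month name
    "Month", FORMAT ( [Date], "MMM" )                         -- short month name attribute
)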

The best practices noted above have all been used, in part or collectively, to improve performance for the consumer experience. Once the data semantic models have been created to align with best practices, performance expectations can be gauged and aligned with SLAs. The key focus of the best practices above is to ensure that we utilize the VertiPaq in-memory architecture. A large part of this is ensuring that data is compressed as much as possible, so that we can store more data within the model and also report on it efficiently.

Wednesday, 17 January 2018

Azure Analysis Services new modeling and tooling features

Following the announcement a few weeks ago that 1400 models are available in Azure Analysis Services, we haven't stopped there. We are pleased to announce the following additional features for 1400 models in Azure.

◈ Shared M expressions are shown in the SSDT Tabular Model Explorer and can be maintained using the Query Editor.
◈ Dynamic Management View (DMV) improvements.
◈ Opening a file with the .MSDAX extension in SSDT enables non-model-related DAX IntelliSense.

Shared M expressions


Shared M expressions are shown in the Tabular Model Explorer! By right clicking the Expressions node, you can edit the expressions in the Query Editor. This should seem familiar to Power BI Desktop users.
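As a hedged illustration (the server and database names are hypothetical), a shared expression is typically a base query that individual table queries then reference, for example:

// Shared expression "DW Source": table queries reference this instead of repeating the connection
let
    Source = Sql.Database("myserver.database.windows.net", "AdventureWorksDW")
in
    Source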


DMV improvements

DMVs expose information about server operations and server health, settings and model structure. They are used for server monitoring, model documentation and various other reasons.

DISCOVER_CALC_DEPENDENCY

M expression dependencies are included in DISCOVER_CALC_DEPENDENCY. The following query returns partition and M expression dependencies; M expressions and structured data sources are included for 1400 models.

SELECT * FROM $System.DISCOVER_CALC_DEPENDENCY
WHERE OBJECT_TYPE = 'PARTITION' OR OBJECT_TYPE = 'M_EXPRESSION';


The output represents the same information that is shown by the Query Dependencies visual, which is now available in SSDT from the Query Editor. This visual should seem familiar to Power BI Desktop users.


MDSCHEMA_MEASUREGROUP_DIMENSIONS

This release provides a fix for MDSCHEMA_MEASUREGROUP_DIMENSIONS. This DMV is used by various client tools to show measure dimensionality. For example, the Explore feature in Excel Pivot Tables allows the user to cross-drill to dimensions related to the selected measures.


Prior to this release, some rows were missing in the output for 1200 models, which meant the Explore feature did not work correctly. This is now fixed for 1200 and 1400 models.

DAX file editing

Opening a file with the .MSDAX extension allows DAX editing with non-model related IntelliSense such as highlighting, statement completion and parameter info. As you can imagine, we intend to use this for interesting features to be released in the future!


Saturday, 13 January 2018

1400 compatibility level in Azure Analysis Services

We are excited to announce the public preview of the 1400 compatibility level for tabular models in Azure Analysis Services! This brings a host of new connectivity and modeling features for comprehensive, enterprise-scale analytic solutions delivering actionable insights. The 1400 compatibility level will also be available in SQL Server 2017 Analysis Services, ensuring a symmetric modeling capability across on-premises and the cloud.

Here are just some highlights of the new features available to 1400 models.
  • New infrastructure for data connectivity and ingestion into tabular models with support for TOM APIs and TMSL scripting. This enables:
    • Support for additional data sources, such as Azure Blob storage.
    • Data transformation and data mashup capabilities.
  • Support in BI tools such as Microsoft Excel for drilling down to detailed data from an aggregated report. For example, when end users view total sales for a region and month, they can view the associated order details.
  • Object-level security to secure table and column names in addition to the data within them.
  • Enhanced support for ragged hierarchies such as organizational charts and chart of accounts.
  • Various other improvements for performance, monitoring and consistency with the Power BI modeling experience.
In SSDT, you can select the new 1400 compatibility level when creating new tabular model projects. Alternatively, you can upgrade an existing tabular model by selecting the Model.bim file in Solution Explorer and setting the Compatibility Level to 1400 in the Properties window. Models at the 1400 compatibility level cannot be downgraded to lower compatibility levels.


New Infrastructure for Data Connectivity


1400 models introduce a new infrastructure for data connectivity and ingestion into tabular models with support for TOM APIs and TMSL scripting. This is based on similar functionality in Power BI Desktop and Microsoft Excel 2016. At this point, only the following cloud-based data sources are supported with the 1400 compatibility level in Azure Analysis Services. We intend to add support for more data sources soon.

◈ Azure SQL Data Warehouse
◈ Azure SQL Database
◈ Azure Blob Storage
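For illustration, partition sources in 1400 models are expressed as M queries. A minimal sketch against one of the sources above (the server, database, and table names are hypothetical) looks like this:

let
    // Connect to the Azure SQL Database or Azure SQL Data Warehouse source
    Source = Sql.Database("myserver.database.windows.net", "AdventureWorksDW"),
    // Navigate to the table that feeds this partition
    FactInternetSales = Source{[Schema = "dbo", Item = "FactInternetSales"]}[Data]
in
    FactInternetSales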

Detail Rows


A much-requested feature for tabular models is the ability to define a custom row set contributing to a measure value. Multidimensional models achieve this by using drillthrough and rowset actions. This allows end-users to view information in more detail than the aggregated level.

For example, the following PivotTable shows Internet Total Sales by year from the Adventure Works sample tabular model. Users can right-click the cell for 2010 and then select the Show Details menu option to view the detail rows.


By default, all the columns in the Internet Sales table are displayed. This behavior is often not meaningful for the user because too many columns may be shown, and the table may not have the necessary columns to show useful information such as customer name and order information.

Detail Rows Expression Property for Measures

1400 models introduce the Detail Rows Expression property for measures. It allows the modeler to customize the columns and rows returned to the end user. The following example uses the DAX Editor in SSDT to define the columns to be returned for the Internet Total Sales measure.
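The expression itself is ordinary DAX that returns a table. As an illustrative sketch only (the column and table names follow the Adventure Works sample and will differ in your model), it might look like this:

SELECTCOLUMNS (
    'Internet Sales',
    "Sales Order Number", 'Internet Sales'[Sales Order Number],   -- order identifier
    "Order Date", 'Internet Sales'[Order Date],                   -- when the order was placed
    "Customer", RELATED ( Customer[Last Name] ),                  -- customer from the related dimension
    "Sales Amount", 'Internet Sales'[Sales Amount]                -- the detail-level amount
)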


With the property defined and the model deployed, the custom row set is returned when the user selects Show Details. It automatically honors the filter context of the cell that was selected. In this example, only the detail rows for 2010 are displayed.


Object-Level Security


Roles in tabular models already support a granular list of permissions, and row-level filters to help protect sensitive data.

1400 models introduce table- and column-level security allowing sensitive table and column names to be protected in addition to the data within them. Collectively these features are referred to as object-level security (OLS).

The current version requires that OLS is set using the JSON-based metadata, Tabular Model Scripting Language (TMSL), or Tabular Object Model (TOM). We plan to deliver SSDT support soon. The following snippet of JSON-based metadata from the Model.bim file secures the Base Rate column in the Employee table of the Adventure Works sample tabular model by setting the MetadataPermission property of the ColumnPermission class to None.

"roles": [

  {

    "name": "General Users",

    "description": "All allowed users to query the model",

    "modelPermission": "read",

    "tablePermissions": [

      {

        "name": "Employee",

        "columnPermissions": [

           {

              "name": "Base Rate",

              "metadataPermission": "none"

           }

        ]

      }

    ]

  }

Unauthorized users cannot access the Base Rate column using client tools like Power BI and Excel Pivot Tables. Additionally, such users cannot query the Base Rate column using DAX or MDX, or measures that refer to it.

Ragged Hierarchies


Tabular models with previous compatibility levels can be used to model parent-child hierarchies. Hierarchies with a differing number of levels are referred to as ragged hierarchies. An example of a ragged hierarchy is an organizational chart. By default, ragged hierarchies are displayed with blanks for levels below the lowest child. This can look untidy to users, as shown by this organizational chart in Adventure Works:


1400 models introduce the Hide Members property to correct this. Simply set the Hide Members property to Hide blank members.
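The same property can also be set in the JSON-based metadata. The following is a rough sketch of a hierarchy definition with the property applied (the hierarchy, level, and column names are illustrative; verify the property name and casing against your own Model.bim):

"hierarchies": [
  {
    "name": "Employees",
    "hideMembers": "hideBlankMembers",
    "levels": [
      { "ordinal": 0, "name": "Level 1", "column": "Level 1" },
      { "ordinal": 1, "name": "Level 2", "column": "Level 2" }
    ]
  }
]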


With the property set and the model deployed, the more presentable version of the hierarchy is displayed.


Other Features


Various other features such as the following are also introduced with the 1400 compatibility level. For more information, please refer to the Analysis Services Team blog for what's new in SQL Server 2017 CTP 2.0 and SQL Server vNext on Windows CTP 1.1 for Analysis Services.

◈ Transaction-performance improvements for a more responsive developer experience.
◈ Dynamic Management View improvements enabling dependency analysis and reporting.
◈ Hierarchy and column reuse to be surfaced in more helpful locations in the Power BI field list.
◈ Date relationships to easily create relationships to date dimensions based on date columns.
◈ DAX enhancements to make DAX more accessible and powerful. These include the IN operator and table/row constructors.

Friday, 29 December 2017

Hardening Azure Analysis Services with the new firewall capability

Azure Analysis Services (Azure AS) is designed with security in mind and takes advantage of the security features available on the Azure platform. For example, integration with Azure Active Directory (Azure AD) provides a solid foundation for access control. Any user creating, managing, or connecting to an Azure Analysis Services server must have a valid Azure AD user identity. Object-level security within a model enables you to define permissions at the table, row, and column levels. Moreover, Azure AS uses encryption to help safeguard data at rest and in transit within the local data center, across data centers, between data centers and on-premises networks, as well as across public Internet connections. The combination of Transport Layer Security (TLS), Perfect Forward Secrecy (PFS), and RSA-based 2,048-bit encryption keys provides strong protection against would-be eavesdroppers.

However, keeping in mind that Azure Analysis Services is a multi-tenant cloud service, it is important to note that the service accepts network traffic from any client by default. Do not forget to harden your servers by taking advantage of basic firewall support. In the Azure Portal, you can find the firewall settings when you display the properties of your Azure AS server. Click on the Firewall tab, as the following screenshot illustrates. You must be a member of the Analysis Services Admins group to configure the firewall.

Enabling the firewall without providing any client IP address ranges effectively closes the Azure AS server to all inbound traffic—except traffic from the Power BI cloud service. The Power BI service is whitelisted in the default "Firewall on" state, but you can disable this rule if desired. Click Save to apply the changes.


With the firewall enabled, the Azure AS server responds to blocked traffic with a 401 error code. The corresponding error message informs you about the IP address that the client was using. This can be helpful if you want to grant this IP address access to your Azure AS server. This error handling is different from a network firewall in stealth mode not responding to blocked traffic at all. Although the Azure AS firewall does not operate in stealth mode, it enables you to lock down your servers effectively. You can quickly verify the firewall behavior in SQL Server Management Studio (SSMS), as shown in the following screenshot.


You can also discover the client IP address of your workstation in the Azure Portal. On the Firewall page, click on Add client IP to add the current workstation IP address to the list of allowed IP addresses. Please note that the IP address is typically a public address, most likely assigned dynamically at your network access point to the Internet. Your client computer might not always use the same IP address. For this reason, it is usually advantageous to configure an IP range instead of an individual address. See the following table for examples. Note that you must specify addresses in IPv4 format.

Name                   Start IP Address  End IP Address   Comments
ClientIPAddress        192.168.1.1       192.168.1.1      Grants access to exactly one IP address.
ClientIPAddresses      192.168.1.0       192.168.1.254    Grants access to all IP addresses in the 192.168.1.x subnet.
US East 2 Data Center  23.100.64.1       23.100.71.254    This is the address range 23.100.64.0/21 from the US East 2 data center.

Besides Power BI and client computers in on-premises networks, you might also want to grant specific Azure-based solutions access to your Azure AS server. For example, you could be using a solution based on Azure Functions to perform automated processing or other actions against Azure AS. If the Azure AS firewall blocks your solution, you will encounter the error message, “System.Net.WebException: The remote server returned an error: (401) Unauthorized.” The following screenshot illustrates the error condition.


In order to grant the Azure App Service access to your Azure AS server, you must determine the IP address that your function app uses. In the properties of your function app, copy the outbound IP addresses (see the following screenshot) and add them to the list of allowed client IP addresses in your firewall rules.


Perhaps you are wondering at this point how to open an Azure AS server to an entire data center. This is slightly more complicated because the Azure data center address ranges are dynamic. You can download an XML file with the list of IP address ranges for all Azure data centers from the Microsoft Download Center. This list is updated on a weekly basis, so make sure you check for updates periodically.

Note that the XML file uses the classless inter-domain routing (CIDR) notation, while the Azure AS Firewall settings expect the ranges to be specified with start and end IP address. To convert the CIDR format into start and end IP addresses, you can use any of the publicly available IP converter tools. Alternatively, you can process the XML file by using Power Query, as the following screenshot illustrates.


Download the Excel workbook and make sure you update the XmlFilePath parameter to point to the XML file you downloaded. For your convenience, the workbook includes a column called Firewall Rule Added, which concatenates the data center information into firewall rules as they would be defined in an Azure Resource Manager (ARM) template. The following screenshot shows an ARM template with several rules that grant IP address ranges from the US East 2 data center access to an Azure AS server.
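For reference, here is a sketch of how such rules appear in an ARM template resource for an Azure AS server. The server name, location, SKU, rule names, and apiVersion are illustrative; the IP ranges reuse the examples from the table above.

{
  "type": "Microsoft.AnalysisServices/servers",
  "apiVersion": "2017-08-01",
  "name": "myazureasserver",
  "location": "East US 2",
  "sku": { "name": "S1" },
  "properties": {
    "ipV4FirewallSettings": {
      "firewallRules": [
        { "firewallRuleName": "ClientIPAddresses", "rangeStart": "192.168.1.0", "rangeEnd": "192.168.1.254" },
        { "firewallRuleName": "USEast2Range1", "rangeStart": "23.100.64.1", "rangeEnd": "23.100.71.254" }
      ],
      "enablePowerBIService": true
    }
  }
}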


The ARM template makes it easy to apply a large list of rules programmatically by using Azure PowerShell, the Azure Command Line Interface (CLI), the Azure portal, or the Resource Manager REST API. However, an excessively long list of IP addresses is hard to manage. Moreover, the Azure AS firewall must evaluate each rule for every incoming request. For this reason, it is recommended to limit the number of rules to the minimum necessary. For example, avoid adding approximately 3,500 rules for all IP ranges across all Azure data centers. Even if you limit the rules to your server's local data center, there may still be more than 400 subnets. As a best practice, build your Azure AS business solutions using technologies that support static IP addresses, or at least a small set of dynamic IP addresses, as is the case with the Azure App Service. The smaller the surface area, the more effective the hardening of your Azure AS server.

Friday, 1 December 2017

Azure Analysis Services integration with Azure Diagnostic Logs

We are pleased to announce that Azure Analysis Services is integrated with Azure Monitor Resource Diagnostic Logs. Diagnostic logging is a key feature for IT owned BI implementations. We have taken steps to ensure you can confidently run diagnostic logging on production Azure Analysis Services servers without a performance penalty.

Various scenarios are supported, including the following:

◉ Auditing
◉ Monitoring of server health
◉ Derivation of usage metrics
◉ Understanding which user groups are using which datasets and when
◉ Detection of long-running or problematic queries
◉ Detection of users experiencing errors

Traditionally, customers have used SSAS Extended Events (xEvents) on premises. This normally involved xEvent session management, output to a binary XEL file, use of special system functions in SQL Server to access the data within the files, and complex parsing of XML output. Having done all that, the data could be stored somewhere and subsequently consumed for analysis. It was often not automatically integrated with other usage data such as performance counter metrics and logs from other components of the architecture. Azure diagnostic logging makes this process simpler and easier for Azure Analysis Services.

Set up diagnostic logging


To set it up, select the “Diagnostic logs” blade for an Azure Analysis Services server in the Azure portal. Then click the add diagnostic setting link.


The diagnostic settings blade is displayed.


Here you can define up to 3 targets for diagnostic logs.

1. Archive to a storage account: Log files are stored in JSON format (not XEL files).

2. Stream to an event hub: This allows broad integration with, for example, big-data systems.

3. Send to Log Analytics: This leverages the particularly useful Azure Log Analytics, which provides built-in analysis, dashboarding, and notification capabilities.

The following log categories are available for selection.

◉ The engine category instructs Azure Analysis Services to log the following xEvents. Unlike xEvents in SSAS, it is not possible to select individual xEvents. The Log Analytics model assumes it is relatively inexpensive to log all the events and ask questions later. We have had feedback from the community that these are the most valuable xEvents, and we have excluded verbose events that can affect server performance. Further xEvents may of course be added in the future, especially when releasing new features.

(XEvent) Category   Event Name
Security Audit      Audit Login
Security Audit      Audit Logout
Security Audit      Audit Server Starts And Stops
Progress Reports    Progress Report Begin
Progress Reports    Progress Report End
Progress Reports    Progress Report Current
Queries             Query Begin
Queries             Query End
Commands            Command Begin
Commands            Command End
Errors & Warnings   Error
Discover            Discover End
Notification        Notification
Session             Session Initialize
Locks               Deadlock
Query Processing    VertiPaq SE Query Begin
Query Processing    VertiPaq SE Query End
Query Processing    VertiPaq SE Query Cache Match
Query Processing    Direct Query Begin
Query Processing    Direct Query End

◉ The service category includes the following service-level events.

Operation name     Occurs when
CreateGateway      User configures a gateway on the server
ResumeServer       Resume a server
SuspendServer      Pause a server
DeleteServer       Delete a server
RestartServer      Restart a server
GetServerLogFiles  User exports the server log through PowerShell
ExportModel        User exports a model in the Azure portal, for example "Open in Power BI Desktop" or "Open in Visual Studio"

◉ The All Metrics category logs events for metric readings. These are the same metrics displayed in the Metrics blade of the Azure portal for an Azure Analysis Services server.


Metrics and server events are integrated with xEvents in Log Analytics for side-by-side analysis. Log Analytics can also be configured to receive events from a range of other Azure services providing a holistic view of diagnostic logging data across customer architectures. Adding the diagnostic setting can be done from PowerShell using the Set-AzureRmDiagnosticSetting cmdlet.
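For example, a minimal PowerShell sketch along these lines should enable logging for a server and route it to a workspace. The resource IDs are placeholders for your own server and Log Analytics workspace, and available parameters can vary by AzureRM module version.

# Placeholders: substitute the resource IDs of your Azure AS server and Log Analytics workspace
$serverId    = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AnalysisServices/servers/<server-name>"
$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

# Enable diagnostic logging for the server and send it to the workspace
Set-AzureRmDiagnosticSetting -ResourceId $serverId -WorkspaceId $workspaceId -Enabled $true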

Consume diagnostic logs in Log Analytics


With some log data already generated, navigate to the Log Analytics section of the Azure portal and select the target “OMS workspace”. Then click on Log Search.


Click on all collected data to get started.


Then, click on AzureDiagnostics and Apply. AzureDiagnostics includes engine and service events.


Notice that a Log Analytics query is being constructed on the fly. The EventClass_s field contains xEvent names, which may look familiar if you have used xEvents on premises. Click EventClass_s or one of the event names and Log Analytics will continue constructing a query based on interaction in the user interface. Log Analytics has the ability to save searches for later reuse.

This post describes only the tip of the iceberg regarding consuming Log Analytics data. For example, Operations Management Suite provides a website with enhanced query, dashboarding, and alerting capabilities on Log Analytics data.

Search query sample


The following sample query returns queries submitted to Azure Analysis Services that took over 5 minutes (300,000 milliseconds) to complete. The generic xEvent columns are normally stored as strings and therefore end with the "_s" suffix. In order to filter on the queries that took over 5 minutes, it is necessary to cast Duration_s to a numeric value. This can be achieved using the toint() syntax.

search * | where ( Type == "AzureDiagnostics" ) | where ( EventClass_s == "QUERY_END" ) | where toint(Duration_s) > 300000
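Using only the same fields, a small variation counts completed queries and averages their duration, which can help spot trends at a glance:

search * | where ( Type == "AzureDiagnostics" ) | where ( EventClass_s == "QUERY_END" ) | summarize count(), avg(toint(Duration_s))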

Query scale out


When using scale out, you can identify read-only replicas because the ServerName_s field values have the replica instance number appended to the name. The resource field contains the Azure resource name, which matches the server name that the users see. Additionally, the IsQueryScaleoutReadonlyInstance_s field equals true for replicas.


Consume diagnostic logs in Power BI


The feature that probably unlocks Log Analytics data for most BI professionals is the Power BI button. Simply click to download a text file that contains an M expression, which can be pasted into a blank query in Power BI Desktop. The expression contains the current Log Analytics query and consumes from the Log Analytics REST API.


This enables a variety of analytical reports, such as the following one, which shows that:

◉ The S4 server is not hitting the 100 GB memory limit.

◉ During the time range, the QPU is maxed out. S4 servers are limited to 400 QPUs.

◉ Long running queries were taking place during processing/data refresh operations.

◉ Users received timeout errors due to contention between long running queries and processing operations.

◉ The server may be a good candidate for query scale out.


We hope you’ll agree that Azure Analysis Services integration with Azure Monitor Resource Diagnostic Logs provides a rich capability for auditing and monitoring, side-by-side analysis of xEvent data with other data such as metrics data, and is easier to set up than xEvents on premises. 

Tuesday, 7 November 2017

Introducing query replica scale-out for Azure Analysis Services

Today at the SQL PASS Summit, Microsoft announced the scale-out feature for Azure Analysis Services. With scale-out, client queries can be distributed among multiple query replicas in a query pool, reducing response times during high query workloads. You can also separate processing from the query pool, ensuring client queries are not adversely affected by processing operations. With Azure Analysis Services, we have made setting up scale-out as easy as possible. Scale-out can be configured in the Azure portal, PowerShell (coming soon), or by using the Analysis Services REST API.

How it works


In a typical server deployment, one server serves as both processing server and query server. If the number of client queries against models on your server exceeds the Query Processing Units (QPU) for your server's plan, or model processing occurs at the same time as high query workloads, performance can decrease.

With scale-out, you can create a query pool with up to seven additional query replicas (eight total, including your server). You can scale the number of query replicas to meet QPU demands at critical times and you can separate a processing server from the query pool at any time.

Regardless of the number of query replicas you have in a query pool, processing workloads are not distributed among query replicas. A single server serves as the processing server. Query replicas serve only queries against the models synchronized between each replica in the query pool. When processing operations are completed, a synchronization must be performed between the processing server and the query replica servers. When automating processing operations, it's important to configure a synchronization operation upon successful completion of processing operations.

Note: Scale-out does not increase the amount of available memory for your server. To increase memory, you need to upgrade your plan.

Monitor QPU usage


To determine if scale-out for your server is necessary, monitor your server in Azure portal by using Metrics. If your QPU regularly maxes out, it means the number of queries against your models is exceeding the QPU limit for your plan. The query pool job queue length metric also increases when the number of queries in the query thread pool queue exceeds available QPU.

Configure scale-out


1. In the portal, click Scale-out. Use the slider to select the number of query replica servers. The number of replicas you choose is in addition to your existing server.

2. In Separate the processing server from the querying pool, select yes to exclude your processing server from query servers.


3. Click Save to provision your new query replica servers.

Tabular models on your primary server are synchronized with the replica servers. When synchronization is complete, the query pool begins distributing incoming queries among the replica servers.

Note: You can also change these settings programmatically using Azure ARM.

Synchronization


When you provision new query replicas, Azure Analysis Services automatically replicates your models across all replicas. You can also perform a manual synchronization. When you process your models, you should perform a synchronization so updates are synchronized among query replicas.

In Overview, click the synchronize icon on the right of a model.


Synchronization can also be triggered programmatically by using the Azure Analysis Services REST API.
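As a rough sketch only (the endpoint shape and token handling are assumptions; verify against the current REST API documentation), a synchronization could be triggered from PowerShell roughly like this:

# Assumed endpoint format: https://<region>.asazure.windows.net/servers/<server>/models/<database>/sync
$uri = "https://westus.asazure.windows.net/servers/myasserver/models/AdventureWorks/sync"

# $token must be a valid Azure AD access token for the Azure Analysis Services resource
Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" }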

Connections


On your server's overview page, there are two server names. Once you configure scale-out for a server, you will need to specify the appropriate server name depending on the connection type.

For end-user client connections like Power BI Desktop, Excel and custom apps, use Server name.

For SSMS, SSDT, and connection strings in PowerShell, Azure Function apps, and AMO, use Management server name. The management server name includes a special :rw (read-write) qualifier.
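For example, assuming a server named myasserver in the West US region (names illustrative), the two names look like this:

Server name (client connections):       asazure://westus.asazure.windows.net/myasserver
Management server name (SSMS, SSDT):    asazure://westus.asazure.windows.net/myasserver:rw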
