DP-600: Implementing Analytics Solutions Using Microsoft Fabric

As a candidate for this certification, you should have subject matter expertise in designing, creating, and deploying enterprise-scale data analytics solutions.

Your responsibilities for this role include transforming data into reusable analytics assets by using Microsoft Fabric components, such as:

◉ Lakehouses
◉ Data warehouses
◉ Notebooks
◉ Dataflows
◉ Data pipelines
◉ Semantic models
◉ Reports

You implement analytics best practices in Fabric, including version control and deployment.

To implement solutions as a Fabric analytics engineer, you partner with other roles, such as:

◉ Solution architects
◉ Data engineers
◉ Data scientists
◉ AI engineers
◉ Database administrators
◉ Power BI data analysts

In addition to in-depth work with the Fabric platform, you need experience with:

◉ Data modeling
◉ Data transformation
◉ Git-based source control
◉ Exploratory analytics
◉ Languages, including Structured Query Language (SQL), Data Analysis Expressions (DAX), and PySpark

Implementing Analytics Solutions Using Microsoft Fabric Exam Summary:


Exam Name: Microsoft Certified - Fabric Analytics Engineer Associate
Exam Code: DP-600
Exam Price: $165 (USD)
Exam Duration: 120 mins
Number of Questions: 40-60
Passing Score: 700 / 1000
Books / Training: DP-600T00-A: Microsoft Fabric Analytics Engineer
Sample Questions: Implementing Analytics Solutions Using Microsoft Fabric Sample Questions
Practice Exam: Microsoft DP-600 Certification Practice Exam

Microsoft DP-600 Exam Syllabus Topics:


Plan, implement, and manage a solution for data analytics (10-15%)

Plan a data analytics environment
- Identify requirements for a solution, including components, features, performance, and capacity stock-keeping units (SKUs)
- Recommend settings in the Fabric admin portal
- Choose a data gateway type
- Create a custom Power BI report theme

Implement and manage a data analytics environment
- Implement workspace and item-level access controls for Fabric items
- Implement data sharing for workspaces, warehouses, and lakehouses
- Manage sensitivity labels in semantic models and lakehouses
- Configure Fabric-enabled workspace settings
- Manage Fabric capacity

Manage the analytics development lifecycle
- Implement version control for a workspace
- Create and manage a Power BI Desktop project (.pbip)
- Plan and implement deployment solutions
- Perform impact analysis of downstream dependencies from lakehouses, data warehouses, dataflows, and semantic models
- Deploy and manage semantic models by using the XMLA endpoint
- Create and update reusable assets, including Power BI template (.pbit) files, Power BI data source (.pbids) files, and shared semantic models 

Prepare and serve data (40-45%)

Create objects in a lakehouse or warehouse
- Ingest data by using a data pipeline, dataflow, or notebook
- Create and manage shortcuts
- Implement file partitioning for analytics workloads in a lakehouse
- Create views, functions, and stored procedures
- Enrich data by adding new columns or tables
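
As a sketch of the notebook side of these skills, the following PySpark cell ingests raw files, enriches them with a derived column, writes a partitioned Delta table, and defines a view. The paths, table, and column names are hypothetical, and `spark` refers to the session Fabric notebooks provide:

```python
from pyspark.sql import functions as F

# Read raw CSV files from the lakehouse Files area (hypothetical path and schema).
df = spark.read.format("csv").option("header", "true").load("Files/raw/sales/")

# Enrich the data with a derived column to partition on.
df = df.withColumn("OrderYear", F.year(F.to_date("OrderDate")))

# Write a partitioned Delta table: one folder per OrderYear value,
# so year-filtered queries scan fewer files.
(df.write.format("delta")
   .mode("overwrite")
   .partitionBy("OrderYear")
   .saveAsTable("sales_partitioned"))

# Expose curated data through a view.
spark.sql("""
    CREATE OR REPLACE VIEW vw_sales_current AS
    SELECT * FROM sales_partitioned
    WHERE OrderYear >= 2023
""")
```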

Copy data
- Choose an appropriate method for copying data from a Fabric data source to a lakehouse or warehouse
- Copy data by using a data pipeline, dataflow, or notebook
- Add stored procedures, notebooks, and dataflows to a data pipeline
- Schedule data pipelines
- Schedule dataflows and notebooks

Transform data
- Implement a data cleansing process
- Implement a star schema for a lakehouse or warehouse, including Type 1 and Type 2 slowly changing dimensions
- Implement bridge tables for a lakehouse or a warehouse
- Denormalize data
- Aggregate or de-aggregate data
- Merge or join data
- Identify and resolve duplicate data, missing data, or null values
- Convert data types by using SQL or PySpark
- Filter data 
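
Several of the bullets above map directly onto DataFrame operations. A minimal PySpark sketch of a cleansing pass, assuming a hypothetical staging table and columns:

```python
from pyspark.sql import functions as F

# Hypothetical staging table with typical quality issues.
df = spark.read.table("stg_customers")

clean = (
    df.dropDuplicates(["CustomerID"])        # resolve duplicate rows
      .na.drop(subset=["CustomerID"])        # drop rows missing the key
      .na.fill({"Country": "Unknown"})       # replace missing values
      .withColumn("SignupDate",
                  F.col("SignupDate").cast("date"))  # convert data types
      .filter(F.col("IsActive"))             # filter data (boolean column assumed)
)

clean.write.format("delta").mode("overwrite").saveAsTable("dim_customer")
```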

Optimize performance
- Identify and resolve data loading performance bottlenecks in dataflows, notebooks, and SQL queries
- Implement performance improvements in dataflows, notebooks, and SQL queries
- Identify and resolve issues with Delta table file sizes 
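
Small-file buildup is the usual cause of Delta table file-size issues. Fabric Spark supports the Delta Lake OPTIMIZE and VACUUM commands; a sketch of compaction in a notebook, assuming a hypothetical table name:

```python
# Compact small Delta files into fewer, larger ones.
spark.sql("OPTIMIZE sales_partitioned")

# Remove files no longer referenced by the table (default retention applies).
spark.sql("VACUUM sales_partitioned")

# Check file count and total size to confirm the compaction helped.
spark.sql("DESCRIBE DETAIL sales_partitioned") \
     .select("numFiles", "sizeInBytes") \
     .show()
```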

Implement and manage semantic models (20-25%)

Design and build semantic models
- Choose a storage mode, including Direct Lake
- Identify use cases for DAX Studio and Tabular Editor 2
- Implement a star schema for a semantic model
- Implement relationships, such as bridge tables and many-to-many relationships
- Write calculations that use DAX variables and functions, such as iterators, table filtering, windowing, and information functions
- Implement calculation groups, dynamic strings, and field parameters
- Design and build a large format dataset
- Design and build composite models that include aggregations
- Implement dynamic row-level security and object-level security
- Validate row-level security and object-level security

Optimize enterprise-scale semantic models
- Implement performance improvements in queries and report visuals
- Improve DAX performance by using DAX Studio
- Optimize a semantic model by using Tabular Editor 2
- Implement incremental refresh

Explore and analyze data (20-25%)

Perform exploratory analytics
- Implement descriptive and diagnostic analytics
- Integrate prescriptive and predictive analytics into a visual or report
- Profile data
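
Data profiling in a notebook often starts with summary statistics and null counts. A minimal PySpark sketch, assuming a hypothetical table:

```python
from pyspark.sql import functions as F

df = spark.read.table("dim_customer")

# Count, mean, stddev, min, max, and quartiles per column.
df.summary().show()

# Null count per column.
df.select([
    F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns
]).show()
```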

Query data by using SQL
- Query a lakehouse in Fabric by using SQL queries or the visual query editor
- Query a warehouse in Fabric by using SQL queries or the visual query editor
- Connect to and query datasets by using the XMLA endpoint
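
As one illustration of programmatic model access, a sketch using the semantic link (sempy) library available in Fabric notebooks, which queries a published semantic model much as XMLA clients do. The dataset, table, and measure names are hypothetical, and evaluate_dax is assumed from sempy's API:

```python
import sempy.fabric as fabric

# Run a DAX query against a semantic model (hypothetical names throughout).
result = fabric.evaluate_dax(
    dataset="Sales Model",
    dax_string="""
        EVALUATE
        SUMMARIZECOLUMNS(
            'Date'[Year],
            "Total Sales", [Total Sales]
        )
    """,
)
display(result)
```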
