As a candidate for this exam, you should have subject matter expertise in designing, implementing, and monitoring cloud-native applications that store and manage data.
Your responsibilities for this role include:
◉ Designing and implementing data models and data distribution.
◉ Loading data into an Azure Cosmos DB database.
◉ Optimizing and maintaining the solution.
As a professional in this role, you integrate the solution with other Azure services. You also design, implement, and monitor solutions that consider security, availability, resilience, and performance requirements.
As a candidate for this exam, you must have solid knowledge and experience with:
◉ Developing apps for Azure.
◉ Working with Azure Cosmos DB database technologies.
◉ Creating server-side objects with JavaScript.
You should be proficient at developing applications that use the Azure Cosmos DB for NoSQL API. You should be able to:
◉ Write efficient SQL queries for the API.
◉ Create appropriate indexing policies.
◉ Interpret JSON.
◉ Read C# or Java code.
◉ Use PowerShell.
Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB Exam Summary:
Exam Name | Microsoft Certified - Azure Cosmos DB Developer Specialty
Exam Code | DP-420
Exam Price | $165 (USD)
Exam Duration | 120 mins
Number of Questions | 40-60
Passing Score | 700 / 1000
Books / Training | Course DP-420T00: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB
Sample Questions | Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB Sample Questions
Practice Exam | Microsoft DP-420 Certification Practice Exam
Microsoft DP-420 Exam Syllabus Topics:
Design and Implement Data Models (35-40%)
Design and implement a non-relational data model for Azure Cosmos DB for NoSQL:
- Develop a design by storing multiple entity types in the same container
- Develop a design by storing multiple related entities in the same document
- Develop a model that denormalizes data across documents
- Develop a design by referencing between documents
- Identify primary and unique keys
- Identify data and associated access patterns
- Specify a default time to live (TTL) on a container for a transactional store
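To make the first and last of these skills concrete, here is a minimal sketch, assuming the Azure Cosmos DB Java SDK v4 (com.azure:azure-cosmos): two entity types share one container through a type discriminator property, and the container carries a default TTL. The endpoint, key, and the appdb/sales names are placeholders, not anything prescribed by the exam.

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CosmosContainerProperties;

public class DataModelSetup {
    // A single container can hold several entity types; a discriminator
    // property lets queries filter by type (e.g. WHERE c.type = "order").
    public static class CustomerDoc {
        public String id;
        public String customerId;   // also the partition key value
        public String type = "customer";
        public String name;
    }

    public static class OrderDoc {
        public String id;
        public String customerId;   // same partition key -> customer and its orders co-located
        public String type = "order";
        public double total;
    }

    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<account>.documents.azure.com:443/") // placeholder
                .key("<primary-key>")                                   // placeholder
                .buildClient();

        CosmosDatabase database = client.getDatabase("appdb");

        // Partition on /customerId so related entities share a logical partition.
        CosmosContainerProperties props =
                new CosmosContainerProperties("sales", "/customerId");

        // Default TTL for the transactional store: documents expire after 30 days
        // unless an individual document overrides it with its own "ttl" property.
        props.setDefaultTimeToLiveInSeconds(30 * 24 * 60 * 60);

        database.createContainerIfNotExists(props);
        client.close();
    }
}
```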
Design a data partitioning strategy for Azure Cosmos DB for NoSQL:
- Choose a partitioning strategy based on a specific workload
- Choose a partition key
- Plan for transactions when choosing a partition key
- Evaluate the cost of using a cross-partition query
- Calculate and evaluate data distribution based on partition key selection
- Calculate and evaluate throughput distribution based on partition key selection
- Construct and implement a synthetic partition key
- Design and implement a hierarchical partition key
- Design partitioning for workloads that require multiple partition keys
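A synthetic partition key is simply a property your application composes before writing the document. A short sketch; the tenant-plus-month scheme is an illustrative assumption, not something mandated by the syllabus:

```java
// Synthetic partition key: combine two properties into one to spread load,
// here tenant + month so a single hot tenant cannot saturate one partition.
public class EventDoc {
    public String id;
    public String tenantId;
    public String month;        // e.g. "2024-06"
    public String partitionKey; // synthetic value: "<tenantId>-<month>"

    public static EventDoc create(String tenantId, String month, String id) {
        EventDoc doc = new EventDoc();
        doc.id = id;
        doc.tenantId = tenantId;
        doc.month = month;
        doc.partitionKey = tenantId + "-" + month; // container is partitioned on /partitionKey
        return doc;
    }
}
```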
Plan and implement sizing and scaling for a database created with Azure Cosmos DB:
- Evaluate the throughput and data storage requirements for a specific workload
- Choose between serverless and provisioned models
- Choose when to use database-level provisioned throughput
- Design for granular scale units and resource governance
- Evaluate the cost of the global distribution of data
- Configure throughput for Azure Cosmos DB by using the Azure portal
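The syllabus item names the Azure portal, but the same manual-versus-autoscale decision shows up in code. A sketch assuming the Java SDK v4; the container names and RU/s figures are illustrative:

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ThroughputProperties;

public class ThroughputSetup {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<account>.documents.azure.com:443/") // placeholder
                .key("<primary-key>")                                   // placeholder
                .buildClient();

        CosmosDatabase database = client.getDatabase("appdb");

        // Manual (standard) provisioned throughput: a fixed 400 RU/s.
        ThroughputProperties manual = ThroughputProperties.createManualThroughput(400);

        // Autoscale: scales between 10% of the ceiling and the 4000 RU/s maximum.
        ThroughputProperties autoscale = ThroughputProperties.createAutoscaledThroughput(4000);

        database.createContainerIfNotExists(
                new CosmosContainerProperties("orders", "/customerId"), manual);
        database.createContainerIfNotExists(
                new CosmosContainerProperties("events", "/partitionKey"), autoscale);

        client.close();
    }
}
```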
Implement client connectivity options in the Azure Cosmos DB SDK:
- Choose a connectivity mode (gateway versus direct)
- Implement a connectivity mode
- Create a connection to a database
- Enable offline development by using the Azure Cosmos DB emulator
- Handle connection errors
- Implement a singleton for the client
- Specify a region for global distribution
- Configure client-side threading and parallelism options
- Enable SDK logging
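A minimal singleton client covering several of these bullets (direct mode, preferred regions, consistency), again assuming the Java SDK v4; endpoint, key, and regions are placeholders:

```java
import com.azure.cosmos.ConsistencyLevel;
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;

import java.util.List;

public final class CosmosClientFactory {
    // The SDK client is thread-safe and expensive to create, so the usual
    // guidance is one instance per application lifetime (a singleton).
    private static final CosmosClient CLIENT = new CosmosClientBuilder()
            .endpoint("https://<account>.documents.azure.com:443/") // placeholder
            .key("<primary-key>")                                   // placeholder
            .directMode()                 // direct TCP to replicas; gatewayMode() proxies over HTTPS
            .preferredRegions(List.of("West US 2", "East US"))      // region priority for this client
            .consistencyLevel(ConsistencyLevel.SESSION)
            .contentResponseOnWriteEnabled(false) // skip returning bodies on writes to save bandwidth
            .buildClient();

    private CosmosClientFactory() {}

    public static CosmosClient get() {
        return CLIENT;
    }
}
```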
Implement data access by using the SQL language for Azure Cosmos DB for NoSQL:
- Implement queries that use arrays, nested objects, aggregation, and ordering
- Implement a correlated subquery
- Implement queries that use array and type-checking functions
- Implement queries that use mathematical, string, and date functions
- Implement queries based on variable data
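A hedged example of a parameterized query that combines an array function with ordering, issued through the Java SDK v4; the property names (type, tags, total) are hypothetical:

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.SqlParameter;
import com.azure.cosmos.models.SqlQuerySpec;
import com.azure.cosmos.util.CosmosPagedIterable;
import com.fasterxml.jackson.databind.JsonNode;

import java.util.List;

public class QueryExamples {
    public static void run(CosmosContainer container) {
        // Parameterized query: ARRAY_CONTAINS filters on an array property,
        // ORDER BY sorts server-side, and parameters avoid string concatenation.
        SqlQuerySpec spec = new SqlQuerySpec(
                "SELECT c.id, c.total "
              + "FROM c "
              + "WHERE c.type = @type AND ARRAY_CONTAINS(c.tags, @tag) "
              + "ORDER BY c.total DESC",
                List.of(new SqlParameter("@type", "order"),
                        new SqlParameter("@tag", "priority")));

        CosmosPagedIterable<JsonNode> results =
                container.queryItems(spec, new CosmosQueryRequestOptions(), JsonNode.class);

        results.forEach(item -> System.out.println(item.toString()));
    }
}
```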
Implement data access by using Azure Cosmos DB for NoSQL SDKs:
- Choose when to use a point operation versus a query operation
- Implement a point operation that creates, updates, and deletes documents
- Implement an update by using a patch operation
- Manage multi-document transactions using SDK Transactional Batch
- Perform a multi-document load using Bulk Support in the SDK
- Implement optimistic concurrency control using ETags
- Override default consistency by using query request options
- Implement session consistency by using session tokens
- Implement a query operation that includes pagination
- Implement a query operation by using a continuation token
- Handle transient errors and 429s
- Specify TTL for a document
- Retrieve and use query metrics
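A sketch touching a few of these operations (point read, ETag-guarded replace, patch, continuation-token pagination) with the Java SDK v4; the ids, partition key values, and paths are placeholders:

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosItemResponse;
import com.azure.cosmos.models.CosmosPatchOperations;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.FeedResponse;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.JsonNode;

public class SdkDataAccess {
    public static void run(CosmosContainer container) {
        PartitionKey pk = new PartitionKey("customer-42"); // placeholder key value

        // Point read: the cheapest way to fetch one document (id + partition key).
        CosmosItemResponse<JsonNode> read =
                container.readItem("order-1", pk, JsonNode.class);

        // Optimistic concurrency: this replace succeeds only if the ETag still
        // matches, i.e. nobody changed the document since the read above.
        CosmosItemRequestOptions ifMatch =
                new CosmosItemRequestOptions().setIfMatchETag(read.getETag());
        container.replaceItem(read.getItem(), "order-1", pk, ifMatch);

        // Partial update via a patch operation instead of replacing the whole
        // document (the /tags/0 add assumes a tags array already exists).
        CosmosPatchOperations patch = CosmosPatchOperations.create()
                .replace("/total", 99.95)
                .add("/tags/0", "discounted");
        container.patchItem("order-1", pk, patch, JsonNode.class);

        // Paginated query driven by a continuation token.
        String continuation = null;
        do {
            FeedResponse<JsonNode> page = container
                    .queryItems("SELECT * FROM c", new CosmosQueryRequestOptions(), JsonNode.class)
                    .iterableByPage(continuation, 100) // 100 items per page
                    .iterator().next();
            page.getResults().forEach(doc -> System.out.println(doc.get("id")));
            continuation = page.getContinuationToken(); // null once the query is drained
        } while (continuation != null);
    }
}
```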
Implement server-side programming in Azure Cosmos DB for NoSQL by using JavaScript:
- Write, deploy, and call a stored procedure
- Design stored procedures to work with multiple documents transactionally
- Implement and call triggers
- Implement a user-defined function
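Server-side objects are written in JavaScript but deployed and invoked from the client SDK. A sketch assuming the Java SDK v4 scripts API; the procedure name and document shape are hypothetical:

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosStoredProcedureProperties;
import com.azure.cosmos.models.CosmosStoredProcedureRequestOptions;
import com.azure.cosmos.models.CosmosStoredProcedureResponse;
import com.azure.cosmos.models.PartitionKey;

import java.util.List;
import java.util.Map;

public class StoredProcedureExample {
    // The body is JavaScript; it executes transactionally within one partition.
    private static final String SPROC_BODY =
            "function createDoc(doc) {"
          + "  var ctx = getContext();"
          + "  var accepted = ctx.getCollection().createDocument("
          + "      ctx.getCollection().getSelfLink(), doc,"
          + "      function (err, created) {"
          + "        if (err) throw err;"
          + "        ctx.getResponse().setBody(created.id);"
          + "      });"
          + "  if (!accepted) throw new Error('request not accepted');"
          + "}";

    public static void run(CosmosContainer container) {
        // Deploy (register) the stored procedure on the container.
        container.getScripts().createStoredProcedure(
                new CosmosStoredProcedureProperties("createDoc", SPROC_BODY));

        // Call it, scoped to a single partition key as stored procedures require.
        CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
        options.setPartitionKey(new PartitionKey("customer-42")); // placeholder

        Map<String, Object> newDoc = Map.of("id", "order-9", "customerId", "customer-42");
        CosmosStoredProcedureResponse response = container.getScripts()
                .getStoredProcedure("createDoc")
                .execute(List.of(newDoc), options);

        System.out.println("sproc returned: " + response.getResponseAsString());
    }
}
```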
Design and Implement Data Distribution (5-10%)
Design and implement a replication strategy for Azure Cosmos DB:
- Choose when to distribute data
- Define automatic failover policies for regional failure for Azure Cosmos DB for NoSQL
- Perform manual failovers to move single master write regions
- Choose a consistency model
- Identify use cases for different consistency models
- Evaluate the impact of consistency model choices on availability and associated request unit (RU) cost
- Evaluate the impact of consistency model choices on performance and latency
- Specify application connections to replicated data
Design and implement multi-region write:
- Choose when to use multi-region write
- Implement multi-region write
- Implement a custom conflict resolution policy for Azure Cosmos DB for NoSQL
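On the client side, enabling multi-region write is mostly a connection-policy setting. A sketch assuming the Java SDK v4; the account itself must already have multi-region writes enabled, and the regions listed are examples:

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;

import java.util.List;

public class MultiRegionWriteClient {
    public static CosmosClient build() {
        // With multi-region writes enabled on the account, this flag lets the
        // SDK route writes to the nearest write region instead of a single hub.
        return new CosmosClientBuilder()
                .endpoint("https://<account>.documents.azure.com:443/") // placeholder
                .key("<primary-key>")                                   // placeholder
                .multipleWriteRegionsEnabled(true)
                .preferredRegions(List.of("West Europe", "East US"))    // example regions
                .buildClient();
    }
}
```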
Integrate an Azure Cosmos DB Solution (5-10%)
Enable Azure Cosmos DB analytical workloads:
- Enable Azure Synapse Link
- Choose between Azure Synapse Link and Spark Connector
- Enable the analytical store on a container
- Implement custom partitioning in Azure Synapse Link
- Enable a connection to an analytical store and query from Azure Synapse Spark or Azure Synapse SQL
- Perform a query against the transactional store from Spark
- Write data back to the transactional store from Spark
Implement solutions across services:
- Integrate events with other applications by using Azure Functions and Azure Event Hubs
- Denormalize data by using Change Feed and Azure Functions
- Enforce referential integrity by using Change Feed and Azure Functions
- Aggregate data by using Change Feed and Azure Functions, including reporting
- Archive data by using Change Feed and Azure Functions
- Implement Azure Cognitive Search for an Azure Cosmos DB solution
Optimize an Azure Cosmos DB Solution (15-20%)
Optimize query performance when using the API for Azure Cosmos DB for NoSQL:
- Adjust indexes on the database
- Calculate the cost of the query
- Retrieve request unit cost of a point operation or query
- Implement Azure Cosmos DB integrated cache
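Request unit charges come back on every response, which is the starting point for cost tuning. A small probe assuming the Java SDK v4; the query and ids are illustrative:

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemResponse;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.FeedResponse;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.JsonNode;

public class RequestChargeProbe {
    public static void run(CosmosContainer container) {
        // RU cost of a point operation.
        CosmosItemResponse<JsonNode> read =
                container.readItem("order-1", new PartitionKey("customer-42"), JsonNode.class);
        System.out.println("point read RU: " + read.getRequestCharge());

        // RU cost per page of a query; consistently high charges usually point
        // at a missing index or an avoidable cross-partition fan-out.
        for (FeedResponse<JsonNode> page : container
                .queryItems("SELECT * FROM c WHERE c.total > 100",
                        new CosmosQueryRequestOptions(), JsonNode.class)
                .iterableByPage(100)) {
            System.out.println("query page RU: " + page.getRequestCharge());
        }
    }
}
```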
Design and implement change feeds for Azure Cosmos DB for NoSQL:
- Develop an Azure Functions trigger to process a change feed
- Consume a change feed from within an application by using the SDK
- Manage the number of change feed instances by using the change feed estimator
- Implement denormalization by using a change feed
- Implement referential enforcement by using a change feed
- Implement aggregation persistence by using a change feed
- Implement data archiving by using a change feed
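Consuming the change feed from application code usually means the change feed processor. A sketch with the Java SDK v4, which works against async containers and needs a separate lease container for checkpointing and load balancing; the host name is arbitrary:

```java
import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncContainer;
import com.fasterxml.jackson.databind.JsonNode;

import java.util.List;

public class ChangeFeedWorker {
    // The processor needs the monitored (feed) container plus a lease container
    // that tracks progress and coordinates multiple worker instances.
    public static ChangeFeedProcessor start(CosmosAsyncContainer feedContainer,
                                            CosmosAsyncContainer leaseContainer) {
        ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
                .hostName("worker-1") // unique per instance so leases balance across workers
                .feedContainer(feedContainer)
                .leaseContainer(leaseContainer)
                .handleChanges((List<JsonNode> changes) ->
                        changes.forEach(doc ->
                                System.out.println("changed: " + doc.get("id"))))
                .buildChangeFeedProcessor();

        processor.start().subscribe(); // non-blocking; runs until stop() is called
        return processor;
    }
}
```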
Define and implement an indexing strategy for Azure Cosmos DB for NoSQL:
- Choose when to use a read-heavy versus write-heavy index strategy
- Choose an appropriate index type
- Configure a custom indexing policy by using the Azure portal
- Implement a composite index
- Optimize index performance
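The same custom indexing policy the portal exposes can be defined in code. A sketch with the Java SDK v4: a write-lean policy that indexes only the queried paths, plus one composite index; container name and paths are hypothetical:

```java
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CompositePath;
import com.azure.cosmos.models.CompositePathSortOrder;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ExcludedPath;
import com.azure.cosmos.models.IncludedPath;
import com.azure.cosmos.models.IndexingPolicy;

import java.util.List;

public class IndexingSetup {
    public static void create(CosmosDatabase database) {
        IndexingPolicy policy = new IndexingPolicy();

        // Write-heavy workloads often index only the paths queries actually use
        // and exclude everything else to cut per-write RU cost.
        policy.setIncludedPaths(List.of(new IncludedPath("/category/?"),
                                        new IncludedPath("/price/?")));
        policy.setExcludedPaths(List.of(new ExcludedPath("/*")));

        // Composite index to support ORDER BY c.category ASC, c.price DESC.
        policy.setCompositeIndexes(List.of(List.of(
                new CompositePath().setPath("/category").setOrder(CompositePathSortOrder.ASCENDING),
                new CompositePath().setPath("/price").setOrder(CompositePathSortOrder.DESCENDING))));

        CosmosContainerProperties props =
                new CosmosContainerProperties("products", "/category");
        props.setIndexingPolicy(policy);
        database.createContainerIfNotExists(props);
    }
}
```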
Maintain an Azure Cosmos DB Solution (25-30%)
Monitor and troubleshoot an Azure Cosmos DB solution:
- Evaluate response status code and failure metrics
- Monitor metrics for normalized throughput usage by using Azure Monitor
- Monitor server-side latency metrics by using Azure Monitor
- Monitor data replication in relation to latency and availability
- Configure Azure Monitor alerts for Azure Cosmos DB
- Implement and query Azure Cosmos DB logs
- Monitor throughput across partitions
- Monitor distribution of data across partitions
- Monitor security by using logging and auditing
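On the client side, status codes and per-request diagnostics complement the Azure Monitor metrics listed above. A hedged sketch with the Java SDK v4; ids are placeholders:

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.CosmosException;
import com.azure.cosmos.models.CosmosItemResponse;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.JsonNode;

public class DiagnosticsProbe {
    public static void run(CosmosContainer container) {
        try {
            CosmosItemResponse<JsonNode> response =
                    container.readItem("order-1", new PartitionKey("customer-42"), JsonNode.class);

            // Per-request diagnostics: latency, regions contacted, retries.
            System.out.println(response.getDiagnostics());
        } catch (CosmosException e) {
            if (e.getStatusCode() == 429) {
                // Throttled: the SDK retries automatically, but persistent 429s
                // mean the throughput or partition-key design needs revisiting.
                System.err.println("throttled, retry after: " + e.getRetryAfterDuration());
            } else {
                System.err.println("status " + e.getStatusCode() + ": " + e.getDiagnostics());
            }
        }
    }
}
```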
Implement backup and restore for an Azure Cosmos DB solution:
- Choose between periodic and continuous backup
- Configure periodic backup
- Configure continuous backup and recovery
- Locate a recovery point for a point-in-time recovery
- Recover a database or container from a recovery point
Implement security for an Azure Cosmos DB solution:
- Choose between service-managed and customer-managed encryption keys
- Configure network-level access control for Azure Cosmos DB
- Configure data encryption for Azure Cosmos DB
- Manage control plane access to Azure Cosmos DB by using Azure role-based access control (RBAC)
- Manage data plane access to Azure Cosmos DB by using keys
- Manage data plane access to Azure Cosmos DB by using Microsoft Entra ID
- Configure Cross-Origin Resource Sharing (CORS) settings
- Manage account keys by using Azure Key Vault
- Implement customer-managed keys for encryption
- Implement Always Encrypted
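Data-plane access with Microsoft Entra ID replaces key-based authentication with a TokenCredential. A sketch assuming the Java SDK v4 plus the azure-identity library; it presumes a Cosmos DB data-plane RBAC role assignment already exists for the identity:

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.identity.DefaultAzureCredentialBuilder;

public class EntraIdClient {
    public static CosmosClient build() {
        // Authenticate the data plane with Microsoft Entra ID instead of the
        // account keys; DefaultAzureCredential picks up managed identity,
        // environment variables, or developer tooling credentials.
        return new CosmosClientBuilder()
                .endpoint("https://<account>.documents.azure.com:443/") // placeholder
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();
    }
}
```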
Implement data movement for an Azure Cosmos DB solution:
- Choose a data movement strategy
- Move data by using client SDK bulk operations
- Move data by using Azure Data Factory and Azure Synapse pipelines
- Move data by using a Kafka connector
- Move data by using Azure Stream Analytics
- Move data by using the Azure Cosmos DB Spark Connector
- Configure Azure Cosmos DB as a custom endpoint for an Azure IoT Hub
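For the SDK bulk-operations route, a sketch with the Java SDK v4 bulk support, which runs on the async client; the document shape and counts are illustrative:

```java
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.models.CosmosBulkOperations;
import com.azure.cosmos.models.CosmosItemOperation;
import com.azure.cosmos.models.PartitionKey;
import reactor.core.publisher.Flux;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class BulkLoader {
    public static void load(CosmosAsyncContainer container) {
        List<CosmosItemOperation> ops = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            Map<String, Object> doc = Map.of(
                    "id", "item-" + i,
                    "customerId", "customer-" + (i % 10));
            ops.add(CosmosBulkOperations.getCreateItemOperation(
                    doc, new PartitionKey("customer-" + (i % 10))));
        }

        // The SDK groups and parallelizes the operations per partition.
        container.executeBulkOperations(Flux.fromIterable(ops))
                .doOnNext(r -> {
                    if (r.getException() != null) {
                        System.err.println("failed: " + r.getException().getMessage());
                    }
                })
                .blockLast(); // block only for this demo; stay reactive in real services
    }
}
```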
Implement a DevOps process for an Azure Cosmos DB solution:
- Choose when to use declarative versus imperative operations
- Provision and manage Azure Cosmos DB resources by using Azure Resource Manager templates
- Migrate between standard and autoscale throughput by using PowerShell or Azure CLI
- Initiate a regional failover by using PowerShell or Azure CLI
- Maintain indexing policies in production by using Azure Resource Manager templates