Friday, 16 August 2019

New for developers: Azure Cosmos DB .NET SDK v3 now available

The Azure Cosmos DB team is announcing the general availability of version 3 of the Azure Cosmos DB .NET SDK, released in July. Thank you to all who gave feedback during our preview.

In this post, we’ll walk through the latest improvements that we’ve made to enhance the developer experience in .NET SDK v3.

You can get the latest version of the SDK through NuGet and contribute on GitHub.

# Using the .NET CLI
dotnet add package Microsoft.Azure.Cosmos

# Using the NuGet Package Manager console
Install-Package Microsoft.Azure.Cosmos

What is Azure Cosmos DB?


Azure Cosmos DB is a globally distributed, multi-model database service that enables you to read and write data from any Azure region. It offers turnkey global distribution, guarantees single-digit millisecond latencies at the 99th percentile, 99.999 percent high availability, and elastic scaling of throughput and storage.

What is new in Azure Cosmos DB .NET SDK version 3?


Version 3 of the SDK contains numerous usability and performance improvements, including a new intuitive programming model, support for stream APIs, built-in support for change feed processor APIs, the ability to scale non-partitioned containers, and more. The SDK targets .NET Standard 2.0 and is open sourced on GitHub.

For new workloads, we recommend starting with the latest version 3.x SDK for the best experience. We have no immediate plans to retire version 2.x of the .NET SDK.

Targets .NET Standard 2.0

We’ve unified the existing Azure Cosmos DB .NET Framework and .NET Core SDKs into a single SDK, which targets .NET Standard 2.0. You can now use the .NET SDK in any platform that implements .NET Standard 2.0, including your .NET Framework 4.6.1+ and .NET Core 2.0+ applications.

Open source on GitHub

The Azure Cosmos DB .NET v3 SDK is open source, and our team is planning to do development in the open. To that end, we welcome any pull requests and will be logging issues and tracking feedback on GitHub.

New programming model with fluent API surface

Since the preview, we’ve continued to improve the object model for a more intuitive developer experience. We’ve created a new top-level CosmosClient class to replace DocumentClient and split its methods into modular Database and Container classes. Our usability studies have shown that this hierarchy makes it easier for developers to learn and discover the API surface.

using Microsoft.Azure.Cosmos;
using System;
using System.Threading.Tasks;

//Getting started with .NET SDK V3
namespace CosmosDotnetSDKGetStarted
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // Create a new CosmosClient to communicate with Azure Cosmos DB
            using (var cosmosClient = new CosmosClient("endpoint", "key"))
            {
                // Create new database
                Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync("databaseId");

                // Create new container
                Container container = await database.CreateContainerIfNotExistsAsync("containerId", "/PartitionKey");

                // Add item to container
                var todoItem = new TodoItem()
                {
                    id = Guid.NewGuid().ToString(),
                    PartitionKey = Guid.NewGuid().ToString(),
                    Task = "Get started with Azure Cosmos DB!"
                };
                var todoItemResponse = await container.CreateItemAsync<TodoItem>(todoItem, new PartitionKey(todoItem.PartitionKey));
            }
        }
    }
}
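
The snippet above references a TodoItem class that isn’t shown in the original sample. A minimal sketch of such a class might look like the following; the property names are chosen to match the calls above, with a lowercase id property mapping to the required Cosmos DB id field.

// Minimal TodoItem class assumed by the samples in this post (illustrative only)
public class TodoItem
{
    public string id { get; set; }
    public string PartitionKey { get; set; }
    public string Task { get; set; }
}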

We’ve also added fluent builder APIs, which make it easier to create CosmosClient, Container, and ChangeFeedProcessor classes with custom options.

using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Fluent; // Use the new fluent namespace
using System.Threading.Tasks;

// Creating a new CosmosClient and Container using the fluent builder pattern
namespace CosmosDotnetSDKGetStarted
{
    class Program
    {
        static async Task Main(string[] args)
        {
            //Create new instance of CosmosClient using builder pattern
            CosmosClient cosmosClient = new CosmosClientBuilder("endpoint", "key")
                .WithApplicationRegion("West US 2")
                .WithApplicationName("ProductReviewsApp")
                .Build();

            //Define and create a new container using builder pattern
            Container container = await cosmosClient.GetDatabase("databaseId").DefineContainer("containerId", "/PartitionKey")
                // Define indexing policy with included and excluded paths
                .WithIndexingPolicy()
                    .WithIncludedPaths()
                        .Path("/username/*")
                        .Path("/productName/*")
                        .Path("/rating/*")
                        .Attach()
                    .WithExcludedPaths()
                        .Path("/*")
                        .Attach()
                    .Attach()
                // Define time to live (TTL) in seconds on container
                .WithDefaultTimeToLive(30)
                .CreateAsync();
        }
    }
}

View all samples on GitHub.

Stream APIs for high performance


Previous versions of the Azure Cosmos DB .NET SDK always serialized and deserialized data on its way to and from the network. In the context of an ASP.NET Web API, this can add unnecessary performance overhead. With the new stream APIs, when you read an item or run a query you can get the raw stream and pass it along to the response without paying that deserialization cost, using the new GetItemQueryStreamIterator and ReadItemStreamAsync methods.
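
As a minimal sketch (assuming an existing Container instance and the same usings as the earlier snippets, plus System.IO), reading and querying as streams might look like this:

// Read an item as a raw stream and hand it back without deserializing (sketch only)
public static async Task<Stream> ReadItemAsStreamAsync(Container container, string id, string partitionKeyValue)
{
    ResponseMessage response = await container.ReadItemStreamAsync(id, new PartitionKey(partitionKeyValue));
    return response.Content; // pass this stream straight to the HTTP response
}

// Query items page by page as raw streams
public static async Task QueryAsStreamAsync(Container container)
{
    FeedIterator iterator = container.GetItemQueryStreamIterator("SELECT * FROM c");
    while (iterator.HasMoreResults)
    {
        using (ResponseMessage page = await iterator.ReadNextAsync())
        {
            // page.Content is the raw JSON stream for this page of results
        }
    }
}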

Easier to test and more extensible


In .NET SDK version 3, all APIs are mockable, making for easier unit testing.
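
For example, with a mocking library such as Moq (assumed here), Container and ItemResponse<TodoItem> can be mocked directly; the sketch below stubs ReadItemAsync so the code under test never touches a real account.

using Microsoft.Azure.Cosmos;
using Moq;
using System.Threading;
using System.Threading.Tasks;

public class TodoServiceTests
{
    public async Task ReadsItemFromMockedContainer()
    {
        // Stub the response the mocked container will return
        var itemResponse = new Mock<ItemResponse<TodoItem>>();
        itemResponse.Setup(r => r.Resource).Returns(new TodoItem { id = "1", Task = "Test" });

        // Container is abstract in v3, so it can be mocked without a real endpoint
        var container = new Mock<Container>();
        container
            .Setup(c => c.ReadItemAsync<TodoItem>(
                It.IsAny<string>(),
                It.IsAny<PartitionKey>(),
                It.IsAny<ItemRequestOptions>(),
                It.IsAny<CancellationToken>()))
            .ReturnsAsync(itemResponse.Object);

        // Code under test takes a Container dependency and can be exercised in isolation
        ItemResponse<TodoItem> result = await container.Object.ReadItemAsync<TodoItem>("1", new PartitionKey("1"));
    }
}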

We also introduced an extensible request pipeline, so you can pass in custom handlers that run whenever requests are sent to the service. For example, you can use these handlers to log request information in Azure Application Insights, define custom retry policies, and more. You can also now pass in a custom serializer, another commonly requested developer feature.
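
A hedged sketch of a custom handler: derive from RequestHandler, override SendAsync, and register the handler when building the client (the console logging call is a placeholder for your telemetry of choice).

using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Fluent;
using System;
using System.Threading;
using System.Threading.Tasks;

// A simple handler that runs on every request and logs the outcome (sketch only)
public class LoggingRequestHandler : RequestHandler
{
    public override async Task<ResponseMessage> SendAsync(RequestMessage request, CancellationToken cancellationToken)
    {
        ResponseMessage response = await base.SendAsync(request, cancellationToken);
        Console.WriteLine($"{request.Method} {request.RequestUri} -> {response.StatusCode}"); // swap in Application Insights, etc.
        return response;
    }
}

public static class ClientFactory
{
    // Register the handler through the fluent builder
    public static CosmosClient CreateClientWithLogging() =>
        new CosmosClientBuilder("endpoint", "key")
            .AddCustomHandlers(new LoggingRequestHandler())
            .Build();
}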

Use the Change Feed Processor APIs directly from the SDK


One of the most popular features of Azure Cosmos DB is the change feed, which is commonly used in event-sourcing architectures, stream processing, data movement scenarios, and to build materialized views. The change feed enables you to listen to changes on a container and get an incremental feed of its records as they are created or updated.

The new SDK has built-in support for the Change Feed Processor APIs, which means you can use the same SDK for building your application and change feed processor implementation. Previously, you had to use the separate change feed processor library.
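
A minimal sketch of wiring up the processor (the database, container, and lease container names, as well as the TodoItem type, are placeholders):

using Microsoft.Azure.Cosmos;
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class ChangeFeedSample
{
    public static async Task<ChangeFeedProcessor> StartProcessorAsync(CosmosClient cosmosClient)
    {
        Container monitoredContainer = cosmosClient.GetDatabase("databaseId").GetContainer("containerId");
        Container leaseContainer = cosmosClient.GetDatabase("databaseId").GetContainer("leases");

        // Build a processor that invokes HandleChangesAsync for each batch of changes
        ChangeFeedProcessor processor = monitoredContainer
            .GetChangeFeedProcessorBuilder<TodoItem>("todoItemProcessor", HandleChangesAsync)
            .WithInstanceName("host-1")
            .WithLeaseContainer(leaseContainer)
            .Build();

        await processor.StartAsync();
        return processor;
    }

    private static Task HandleChangesAsync(IReadOnlyCollection<TodoItem> changes, CancellationToken cancellationToken)
    {
        foreach (TodoItem item in changes)
        {
            Console.WriteLine($"Detected change for item {item.id}");
        }
        return Task.CompletedTask;
    }
}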

Ability to scale non-partitioned containers

We’ve heard from many customers with non-partitioned, or “fixed,” containers that they want to scale them beyond the 10 GB storage and 10,000 RU/s provisioned throughput limits. With version 3 of the SDK, you can now do so without having to create a new container and move your data.

All non-partitioned containers now have a system partition key “_partitionKey” that you can set to a value when writing new items. Once you begin using the _partitionKey value, Azure Cosmos DB will scale your container as its storage volume increases beyond 10 GB. If you want to keep your container as is, you can use the PartitionKey.None value to read and write existing data without a partition key.

// Scale fixed container using partition key in .NET V3 SDK
public static async Task ScaleNonPartitionedContainer()
{   
    using (var cosmosClient = new CosmosClient(endpoint, key))
    {
        //Get reference to existing fixed container
        var container = cosmosClient.GetDatabase(databaseId).GetContainer(containerId);             

        // Add item to container without partition key
        var user = new User()
        {
            Id = "bob",
            Status = "Learning Azure Cosmos DB!"
        };
        await container.CreateItemAsync<User>(user, PartitionKey.None);

        // Now start taking advantage of partitioning! Create and add a new item with a partition key value equal to the user's Id
        var userWithPartitionKey = new User()
        {
            Id = "alice",
            PartitionKey = "alice",
            Status = "Partitioning all the things"
        };

        await container.CreateItemAsync<User>(userWithPartitionKey, new PartitionKey(userWithPartitionKey.PartitionKey));

        // Scale throughput beyond the 10,000 RU/s limit of fixed containers
        var throughputResponse = await container.ReplaceThroughputAsync(15000);
    }

}

// Example of the User class (the JsonProperty attributes require the Newtonsoft.Json namespace)
public class User
{
    public User()
    {
    }

    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    [JsonProperty(PropertyName = "_partitionKey", NullValueHandling = NullValueHandling.Ignore)] // Enbales optional PartitionKey value
    public string PartitionKey { get; set; }

    [JsonProperty(PropertyName = "status")]
    public string Status { get; set; }

}

Easier APIs for scaling throughput


We’ve redesigned the APIs for scaling provisioned throughput (RU/s) up and down. You can now use the ReadThroughputAsync method to get the current throughput and ReplaceThroughputAsync to change it. View sample.
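
As a minimal sketch (reusing the same namespaces as the scaling sample above; the new RU/s value is a placeholder):

// Read the current provisioned throughput and then change it (sketch only)
public static async Task ScaleThroughputAsync(Container container)
{
    // Returns the current RU/s provisioned on the container
    int? currentThroughput = await container.ReadThroughputAsync();
    Console.WriteLine($"Current throughput: {currentThroughput} RU/s");

    // Replace with a new value, for example 1,000 RU/s
    ThroughputResponse throughputResponse = await container.ReplaceThroughputAsync(1000);
    Console.WriteLine($"New throughput: {throughputResponse.Resource.Throughput} RU/s");
}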
