Saturday, 29 April 2023

Managing IP with Microsoft Azure and Cliosoft


Intellectual Property (IP) is a critical asset for technology companies, encompassing a company's patents, trademarks, copyrights, and trade secrets. For technology companies, IP often refers to source code or design, which is the foundation of their products and services. Therefore, protecting IP is crucial to a company's success, and proper management of IP data and processes is equally essential.

As technology companies have grown and become more geographically diverse, managing IP data and processes has become more complex. Companies now need to build geographically diverse teams and design centers, adding new complexity and challenges to IP data and process management. With the advent of cloud computing, companies are leveraging the power of the cloud for high-performance computing (HPC) workloads. This move to the cloud has made effective and efficient IP data management a necessity.

Managing revisions of IPs is a fundamental requirement, but IP data governance, traceability, and security are equally important. These requirements have led to the development of IP management platforms, which are designed to meet the unique needs of technology companies. Ultimately, using an IP management system enables companies to collaborate efficiently with accuracy, secure their IP, track the expertise of their employees, and comply with regulatory requirements.

Companies that work on hardware, software, or both must manage IP data and processes holistically on a Unified IP Management Platform. This platform must provide revision and release control for geographically and functionally diverse teams of engineers and managers contributing to the company's intellectual property development process. This is where the power of using HPC platforms on the cloud is most apparent.

Knowing your key requirements for an IP management system


During the product design process, several features are needed to ensure companies are getting the most out of their IP management system. If you’re looking to implement an IP management system, ensure it has the following elements.

◉ Version control tools that provide three main features: reversibility, concurrency, and annotation, which help engineering teams manage their source code or design. Integrating version control with IP editor tools improves productivity for engineering teams by making data sharing more efficient and reducing errors. A centralized repository helps engineers synchronize their work more frequently and improves collaboration by allowing engineers to see who is working on what and stay up to date on changes.
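The three version-control features named above can be illustrated with a minimal sketch: reversibility (restore any earlier revision), concurrency (independent check-ins from multiple authors), and annotation (who changed what, and why). All class and method names here are invented for illustration, not part of any real tool.

```python
class TinyRepo:
    """A toy revision store illustrating reversibility, concurrency, and annotation."""

    def __init__(self):
        self._history = []  # list of (revision, author, message, content)

    def check_in(self, author, message, content):
        # Concurrency: any author can add the next revision.
        rev = len(self._history) + 1
        self._history.append((rev, author, message, content))
        return rev

    def head(self):
        return self._history[-1][3]

    def revert_to(self, rev):
        # Reversibility: restore an earlier revision as a new check-in.
        _, author, _, content = self._history[rev - 1]
        return self.check_in(author, f"revert to r{rev}", content)

    def annotate(self):
        # Annotation: who produced each revision, and why.
        return [(r, a, m) for r, a, m, _ in self._history]


repo = TinyRepo()
repo.check_in("alice", "initial RTL", "module top; endmodule")
repo.check_in("bob", "add port", "module top(input clk); endmodule")
repo.revert_to(1)
print(repo.head())          # back to revision 1's content
print(repo.annotate()[-1])  # (3, 'alice', 'revert to r1')
```

Real version-control systems layer merging, locking, and access control on top of this basic model.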

◉ Technology design often involves large files, so an IP management system needs to provide viable options for managing data at scale, powered by technology such as network storage optimization.

◉ IP traceability is a key feature of an IP management system that helps with Bill of Materials (BOM) management, IP provenance tracking, and data analytics.

◉ IP Reuse: To maximize the company's return on invested time, money, and efforts, engineering teams need a central catalog of IPs to search and browse for all available IPs. Engineers need access to real-time, accurate information about the IP. The catalog of IPs must provide comprehensive dependency tracking. Engineers and managers must be able to report the Bill of Materials (BOM) for every project or IP in the catalog and provide an IP consumers' report, including detailed information on the products using the IP.
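The dependency tracking described above can be sketched as a small graph traversal: a full BOM is the transitive closure of an IP's dependencies, and a consumers' report inverts that relationship. The catalog data below is made up for illustration.

```python
# ip -> IPs it directly depends on (illustrative catalog)
deps = {
    "soc_a": ["cpu_core", "ddr_phy"],
    "cpu_core": ["fpu"],
    "ddr_phy": [],
    "fpu": [],
    "soc_b": ["cpu_core"],
}

def bom(ip):
    """All IPs transitively required to build `ip` (its Bill of Materials)."""
    seen, stack = set(), [ip]
    while stack:
        for d in deps[stack.pop()]:
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return sorted(seen)

def consumers(ip):
    """Every catalog entry whose BOM includes `ip` (the consumers' report)."""
    return sorted(p for p in deps if ip in bom(p))

print(bom("soc_a"))      # ['cpu_core', 'ddr_phy', 'fpu']
print(consumers("fpu"))  # ['cpu_core', 'soc_a', 'soc_b']
```

A production catalog would also attach versions and release states to each edge, but the reporting logic follows the same traversal.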

◉ Long-term return on investment (ROI) stemming from a system that can track IP over time. Companies spend a lot of time, money, and effort developing IP, so having a system that provides real-time and historical knowledge about that IP is paramount.

Using Cliosoft on Microsoft Azure


Cliosoft’s Data and IP Management platform ensures teams can create, manage, and store their documents, scripts, methodologies, and ideas seamlessly. In addition, Cliosoft HUB addresses the size and complexities of today’s IP management. Used in combination, users can accurately manage their intellectual property.

Microsoft Azure’s purpose-built HPC platform offers industry-leading security, scalability, and flexible use models to meet the demands of the most complex high-performance computing workloads. The availability of Azure’s virtual machines means having large amounts of compute available, which gives companies the flexibility to scale up and down depending on their IP management needs.

By implementing a well-designed Cliosoft IP Source Code Management Platform on Microsoft Azure, enterprise customers can leverage the power of both systems to achieve high scalability, collaboration, availability, and reliability in their product development cycles. This solution provides companies with the tools they need to manage their IP data and processes holistically, ensuring that they can develop high-quality intellectual property while reducing development times and costs.

Moving Forward


IP is a critical asset for technology companies, and proper management of IP data and processes is essential for a company's success. The move to cloud computing has made effective and efficient IP data management a necessity. Cliosoft and Microsoft Azure provide a solution for managing IP source code in technology development, offering a platform that addresses the complexity and challenges of IP governance, traceability, and security. Companies that leverage this solution can achieve high scalability, availability, and reliability in their product development cycles, giving them a competitive advantage in achieving success.

Source: microsoft.com

Thursday, 27 April 2023

Choose the best global distribution solution for your applications with Azure

As part of your cloud journey, critical applications need to be deployed in multiple Azure regions to ensure high availability for your global customer base. When reviewing Azure’s various global traffic distribution solutions, ask yourself, “Which option is the best one for my application?”.

In this blog, you will learn about each global traffic distribution solution Azure offers, and which solution is the best one for your internet-facing cloud architecture. Currently, Azure offers different options for distributing global traffic. Microsoft Azure Front Door is a content delivery network (CDN) service with application layer load balancing capabilities. Azure cross-region Load Balancer is a global network layer load balancer. Finally, Azure Traffic Manager is a domain name service (DNS)-based traffic distribution solution. 

Choosing the right global traffic distribution solution


You will learn about three example companies—Contoso1, Contoso2, and Contoso3. For each company, we will dive into their application’s scenario and decide which global traffic distribution solution is the best one for them.

Customer scenario 1—wholesale distributor

Contoso1 is a large wholesale distributor that has locations all over the globe. Contoso1 has been going through a large technological transformation and has been migrating services to Azure. One of the applications being moved to Azure is their backend inventory management software. This application is responsible for providing users with information about inventory status and updating inventory records after a transaction has occurred. As part of their migration, the team at Contoso1 has strict requirements that need to be met by a global distribution solution.

◉ First, all traffic will be layer 4 and must be served with ultra-low latency. In addition, the application requires regional redundancy with automatic traffic fail-over in the event a region is down, to ensure high availability.
◉ Second, the application requires a static IP address that the application’s frontend will consistently ping.
◉ Finally, any updates made to regional deployments shouldn’t have an impact on the overall backend inventory application.

Given all the requirements laid out by Contoso1, Azure cross-region Load Balancer is a perfect solution for their application. Azure cross-region Load Balancer is highly optimized for serving layer-4 traffic with ultra-low latency. Furthermore, cross-region load balancer provides geo-proximity routing, which means traffic from all Contoso1's stores will be forwarded to the closest regional deployment. Azure cross-region Load Balancer also provides automatic failover: in the event one of Contoso1's regional deployments is unhealthy, all traffic will be serviced by the next healthy regional deployment. In addition, cross-region load balancer provides users with a static, globally anycast IP address, so Contoso1 doesn't have to worry about their IP address changing. Finally, Azure cross-region Load Balancer will allow Contoso1 to update its regional deployments behind a single global endpoint without any impact on its end users.
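The geo-proximity routing and automatic failover described above amount to a simple decision: pick the nearest healthy regional deployment. This conceptual sketch simulates that behavior; the region names and latencies are invented, and the real load balancer makes this decision in the network layer, not in application code.

```python
# region -> (round-trip time in ms from a given client, healthy?)
regions = {
    "eastus": (20, True),
    "westeurope": (90, True),
    "southeastasia": (180, True),
}

def pick_region(rtts, unhealthy=frozenset()):
    """Return the lowest-latency region that is currently healthy."""
    healthy = {r: ms for r, (ms, ok) in rtts.items()
               if ok and r not in unhealthy}
    if not healthy:
        raise RuntimeError("no healthy regional deployment")
    return min(healthy, key=healthy.get)

print(pick_region(regions))                        # nearest region wins
print(pick_region(regions, unhealthy={"eastus"}))  # failover to next-closest
```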


Customer scenario 2—social media company

Contoso2 is a global social media platform. As a social media site, they need to serve both interactive and static content to their users around the globe as quickly and reliably as possible. Most recently, due to Contoso2’s prominent status as a social media platform, they have experienced an outage with their on-premises hosted website because of a DDoS attack. As a result, Contoso2 has the following strict requirements as they migrate to Azure:

◉ A platform that can deliver both static and dynamic content to their consumers around the globe with the utmost performance and reliability.
◉ Ability to route content to both their mobile and desktop users as quickly as possible.
◉ Easily integrate with Azure’s DNS, Web Application, Storage, and Application Gateway products.
◉ DDoS protection.
◉ Reduce secure sockets layer (SSL) load on Contoso2’s application servers, and instead process SSL requests on the edge for faster user experience for Contoso2’s global clients.

Azure Front Door is an ideal solution to enable accelerated and highly resilient web application performance for optimal delivery of static and dynamic content around the globe:

◉ Static Content—Contoso2’s cached static content can be served from Azure Front Door’s 185 global edge points of presence (PoP) locations. To ensure the utmost performance and resiliency, Azure Front Door utilizes the Anycast protocol to ensure Contoso2’s clients’ requests are served from the nearest global edge locations.

◉ Dynamic Content—Azure Front Door has an arsenal of traffic acceleration features. Client-to-PoP traffic is again optimized via the Anycast protocol. For dynamic workloads specifically, edge PoP to customer-origin connections are optimized via split TCP. This technique terminates the client's TCP connection at the nearest edge PoP and uses long-lived connections over Microsoft’s global private wide area network (WAN) to reduce the round-trip time (RTT). Additionally, in the event Contoso2 deploys multiregional origins, Azure Front Door utilizes health probes to fetch content from the lowest-latency origin.

Moreover, Azure Front Door also has SSL offload capabilities which can improve performance further. In addition, Azure Front Door is highly optimized for HTTP and web-based applications. With Azure Front Door, customers are equipped with various layer 7 routing features. These features allow customers to apply business routing and advanced routing within Azure Front Door. For example, Azure Front Door can route requests to mobile or desktop versions of Contoso2’s web application based on the client device type. Additional examples include SSL offload, path-based routing, fast failover, caching, and more.
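The device-based routing described above is, in essence, a layer-7 rule evaluated against the request's User-Agent header. The sketch below illustrates that decision; the marker list and origin hostnames are assumptions for illustration, not Front Door's actual matching logic.

```python
# Substrings commonly found in mobile User-Agent strings (illustrative rule)
MOBILE_MARKERS = ("Mobile", "Android", "iPhone")

def route(user_agent):
    """Pick a mobile or desktop origin from the client's User-Agent header."""
    if any(marker in user_agent for marker in MOBILE_MARKERS):
        return "mobile-origin.contoso2.example"
    return "desktop-origin.contoso2.example"

print(route("Mozilla/5.0 (iPhone; CPU iPhone OS 16_0) Mobile Safari"))
print(route("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/112.0"))
```

In Front Door itself, rules like this are configured declaratively rather than coded, but the routing decision they express is the same.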

Today Azure provides end-to-end solutions for every aspect of application management. Azure Front Door provides seamless integration with other Azure services such as DNS, Web App, and Storage. These integrations allow customers to easily create powerful web applications built using the integration of multiple Azure services.

Finally, Azure Front Door provides built-in support for various security products to help protect customers’ web applications. For example, customers can secure their origins with layer 3, 4, and 7 DDoS mitigation, and seamlessly enable Azure Web Application Firewall protection.


Customer scenario 3—sustainable fashion retailer

Contoso3 is a large retail store focused on sustainable fashion items. Contoso3 has a large online presence and has historically been hosting all their applications on-premises. However, given the advantage of the cloud and Azure, Contoso3 has begun migrating their applications to Azure. One of these applications is their online store platform. As the team at Contoso3 is evaluating different Azure global traffic distribution solutions, they have outlined several requirements that must be addressed.

◉ First, the team at Contoso3 will be doing a rolling migration where part of their application will remain on-premises and the other part will be hosted on Azure. Any viable solution should be able to direct traffic to on-premises servers to support this rolling migration plan.
◉ Second, latency is critical for Contoso3 and client traffic needs to be routed to healthy endpoints in a timely manner. 
◉ Finally, the solution needs to be able to direct users to the correct backend type based on their geographical location. Contoso3 caters to a wide range of customers and often has clothing items specific to certain geographical areas.

Given all the requirements stated above, Azure Traffic Manager would be the optimal solution for Contoso3. With Azure Traffic Manager, users can add on-premises servers to the backend to support burst-to-cloud, failover-to-cloud, and migrate-to-cloud scenarios. In addition, Azure Traffic Manager provides automatic failover and multi-region support, which result in traffic being served with low latency. DNS name resolution is fast, and results are cached. The speed of the initial DNS lookup depends on the DNS servers the client uses for name resolution; typically, a client can complete a lookup within approximately 50 ms. The results of the lookup are cached for the duration of the DNS time-to-live (TTL). The default TTL for Traffic Manager is 300 seconds (about five minutes). Traffic Manager can also help Contoso3 with their geofencing needs, specifically with the geographic routing feature, which will allow Contoso3 to direct users to the correct backend instance based on their geographical location.
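The caching behavior described above can be sketched as follows: a Traffic Manager answer is re-used until its TTL expires, so most client requests skip the DNS lookup entirely. The 300-second TTL is from the text; the resolver itself is simulated, and the endpoint name is a placeholder.

```python
import time

TTL = 300  # Traffic Manager's default DNS TTL, in seconds

class CachingResolver:
    """Simulates a client-side DNS cache in front of Traffic Manager."""

    def __init__(self):
        self._cache = {}   # name -> (answer, expiry timestamp)
        self.lookups = 0   # how many real lookups were performed

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        answer, expiry = self._cache.get(name, (None, 0))
        if now < expiry:
            return answer              # served from cache, no lookup cost
        self.lookups += 1              # the ~50 ms authoritative lookup
        answer = "endpoint-eastus"     # placeholder Traffic Manager answer
        self._cache[name] = (answer, now + TTL)
        return answer


r = CachingResolver()
r.resolve("shop.contoso3.example", now=0)
r.resolve("shop.contoso3.example", now=120)  # within TTL: cached, no lookup
r.resolve("shop.contoso3.example", now=400)  # TTL expired: fresh lookup
print(r.lookups)  # 2
```

Lowering the TTL makes failover faster at the cost of more lookups; the default balances the two.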


Source: microsoft.com

Tuesday, 25 April 2023

Isovalent Cilium Enterprise in Azure Marketplace


In December 2022, Microsoft and Isovalent announced a collaboration to bring the Isovalent Cilium Enterprise offering to Azure Kubernetes Service (AKS) via Azure Marketplace. Today, we are happy to announce that the Isovalent Cilium Enterprise offering is now available on Azure Marketplace.

Isovalent Cilium Enterprise on Azure is a powerful tool for securing and managing Kubernetes workloads on Azure. Its range of advanced features and ease of deployment make it an ideal solution for organizations of all sizes looking to secure their cloud-native applications.

Enterprises realize they can leverage AKS to accelerate business growth. However, as applications scale, security and networking grow more complex. Isovalent Cilium Enterprise, which is built on top of the open-source Cilium project, addresses these gaps by providing additional functionality such as advanced observability and security policy enforcement across multiple layers of the stack. It uses eBPF technology to deliver network and application-layer security, while also providing observability and tracing for Kubernetes workloads. Isovalent Cilium Enterprise also provides seamless integration with popular Kubernetes platforms and tools, including Istio, Helm, and more, making it a trusted offering among organizations. Azure Marketplace is an online store for buying and selling cloud computing solutions that lets you deploy Isovalent Cilium Enterprise to Azure with ease.

With such a tight and native integration of Isovalent Cilium Enterprise with Azure Marketplace, customers can deploy and upgrade Isovalent Cilium Enterprise features to a new or existing AKS cluster (running Azure CNI powered by Cilium) with a few clicks. Isovalent Cilium Enterprise is built with native integration with the Azure networking platform to offer advanced features and capabilities with best-in-class performance and scale. Furthermore, customers can seamlessly perform lifecycle management of the application by receiving version updates, auto-upgrades, and vulnerability scans thus allowing them to achieve their business goals effectively. They also benefit from the simplified billing from Azure Marketplace. Whether customers are just getting started with AKS or have a large-scale production deployment, Isovalent Cilium Enterprise on Azure Marketplace will help them achieve better visibility, security, and compliance. The myriad benefits of a trusted and secure platform such as Azure Marketplace will accrue savings by limiting management overhead and driving productivity for enterprises at scale.

Key capabilities for Azure Kubernetes Services customers


To recap, Isovalent Cilium Enterprise provides a range of advanced features built on Kubernetes networking, security, and observability. Here are just a few of the key capabilities that Isovalent Cilium Enterprise offers AKS customers:

• Enhanced observability: Isovalent Cilium Enterprise provides deep visibility into your Kubernetes networking and security via Hubble. This includes detailed flow-level visibility, service mesh tracing, and more.

• Advanced security: Isovalent Cilium Enterprise provides multi-layer security policy enforcement, including layer 7 application security policies. This allows you to protect your Kubernetes environment from advanced threats while also ensuring compliance with industry regulations.

• Better Scalability: Isovalent Cilium Enterprise is designed to scale with your Kubernetes workloads, providing security and observability for large-scale Kubernetes clusters.

• Enterprise-grade support: Isovalent Cilium Enterprise includes enterprise-grade support from Isovalent, including everyday support and access to patches and updates.

Start your journey with Isovalent Cilium Enterprise on Azure


To get started with Isovalent Cilium Enterprise on Azure, simply navigate to the Azure Marketplace from the Azure Portal and search for "Cilium Enterprise." You can choose between different options based on the number of nodes in your cluster and the level of support you require. Once you have selected your subscription and accepted the terms, you can deploy Isovalent Cilium Enterprise to your Kubernetes cluster with just a few clicks.


The benefits of Isovalent Cilium Enterprise through Azure Marketplace


The key benefits of using Isovalent Cilium Enterprise in the Azure Marketplace include:

• Easy deployment: You can deploy Isovalent Cilium Enterprise on Azure with just a few clicks from the Azure Marketplace. You can either create a new AKS cluster or seamlessly upgrade an existing AKS cluster running Azure CNI powered by Cilium with the Isovalent Cilium Enterprise package.

• Zero data path downtime while upgrading Cilium OSS to Isovalent Cilium Enterprise via Azure Marketplace.

• Enhanced billing experience: Azure Marketplace provides a unified billing experience and an integrated experience for your Isovalent Cilium Enterprise usage.

• Limited management overhead for customers in maintaining the upgrades. Azure Marketplace supports configurable auto-upgrades for minor versions.

Source: microsoft.com

Saturday, 22 April 2023

4 cloud cost optimization strategies with Microsoft Azure


We have seen many businesses make significant shifts toward cloud computing in the last decade. The Microsoft Azure public cloud offers many benefits to companies, such as increased flexibility, scalability, and availability of resources. However, with the increased usage of resources, implementing best practices in cloud efficiency is a necessity to validate spending and avoid waste.

What is cloud efficiency? It is the capacity to utilize cloud resources in the best possible way, at the lowest possible cost, while minimizing the waste of resources, and thus of energy and carbon emissions. It’s a combination of cost—how you handle and govern your cloud infrastructure, carbon—how you keep carbon emissions at a minimum, and energy—how the application uses electricity—and how you optimize these three areas to build the cheapest, most modern, efficient, and sustainable application. In this post, we will explore why you should start your cloud cost management and governance process immediately.

Cloud cost optimization is essential for companies as it directly impacts their bottom line and operating expenses (OPEX). The cost of cloud computing can quickly add up, especially for businesses with a high volume of data, high traffic, and mission-critical applications.

Cloud cost optimization is what makes workloads more efficient, but what are its benefits?

◉ Understanding, measuring, optimizing, and tracking your cloud costs. Having full control of your monthly bill should be your primary goal.

◉ Reduce carbon emissions. Cloud computing consumes a significant amount of energy, and the increased usage of cloud resources has resulted in a substantial increase in carbon emissions. Cloud providers are taking steps to reduce their carbon footprint, but businesses can also play a significant role in reducing carbon emissions by optimizing their cloud resources.

◉ Improve the performance of applications. This can significantly impact user experience, as slow or unresponsive applications can lead to frustrated customers and lost revenue. By optimizing cloud resources, companies can ensure that their applications run smoothly, improving customer satisfaction, and decreasing cloud spend.

◉ Saving on your application’s cost in a systematic way can give you a budget for additional features, refactoring, and innovation.

The four main cloud cost optimization strategies are usually:

1. Right sizing


Right-sizing is probably the most important aspect of controlling cloud costs. The impact is not simply saving money: in many cases, there is a balance between performance and spending and, more specifically, between cost and meeting your internal customers' service-level agreements (SLAs) efficiently. You need to find this balance to keep your application managers, financial operations (FinOps) team, and cloud team happy.
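The balance described above can be sketched as a simple rule: choose the cheapest size whose capacity still covers observed peak utilization plus a safety headroom. The VM sizes, prices, and 20 percent headroom below are invented for illustration, not Azure's actual catalog.

```python
# (name, vCPUs, memory GiB, $/hour) -- illustrative numbers only
SIZES = [
    ("D2s", 2, 8, 0.10),
    ("D4s", 4, 16, 0.20),
    ("D8s", 8, 32, 0.40),
]

def right_size(peak_vcpus, peak_mem_gib, headroom=1.2):
    """Cheapest size covering peak usage plus headroom (SLA safety margin)."""
    need_cpu = peak_vcpus * headroom
    need_mem = peak_mem_gib * headroom
    for name, cpu, mem, _price in sorted(SIZES, key=lambda s: s[3]):
        if cpu >= need_cpu and mem >= need_mem:
            return name
    raise ValueError("no single size fits; consider scaling out")

# A VM peaking at 2.5 vCPUs / 10 GiB doesn't need the D8s it may be running on:
print(right_size(2.5, 10))  # D4s
```

In practice the peak figures come from weeks of monitoring data, and the headroom is negotiated with application owners against their SLAs.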

2. Clean-up


Another important part of cloud computing cost saving is cleanup operations. When dealing with many workloads or complex projects, lots of resources are created just as a transitional step and are often forgotten about and paid for. This is particularly valid during lift and shift migration where customers choose to initially match resources that were in a fixed, non-flexible environment, ending up with overallocated services. Cleaning up unused items—as a first approach—represents one of the short-term, quick wins for cost-saving. When inserted into a recurring process, this will also help you uncover any unassigned or unutilized infrastructure (with operational downfalls) and, in general, uncover gaps in your processes that might have a wider impact than costs. You should plan to periodically assess the evolution of your infrastructure for any resources that may have been left unassigned and add this to your technical debt management operations.
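The cleanup pass above can be sketched as a filter over a resource inventory: unattached resources are short-term quick wins, while attached-but-unowned ones go on the technical-debt list for review. The inventory format and field names are invented for illustration.

```python
# Illustrative resource inventory, e.g. exported from a cloud management tool
inventory = [
    {"name": "vm-prod-01",  "attached": True,  "owner": "payments"},
    {"name": "disk-old-7",  "attached": False, "owner": None},
    {"name": "ip-migr-tmp", "attached": False, "owner": None},
    {"name": "vm-dev-03",   "attached": True,  "owner": None},
]

def cleanup_candidates(resources):
    """Unattached resources are quick wins; attached-but-unowned ones
    need a human review before deletion."""
    quick_wins = [r["name"] for r in resources if not r["attached"]]
    review = [r["name"] for r in resources
              if r["attached"] and r["owner"] is None]
    return quick_wins, review

wins, review = cleanup_candidates(inventory)
print(wins)    # ['disk-old-7', 'ip-migr-tmp']
print(review)  # ['vm-dev-03']
```

Running a report like this on a recurring schedule is what turns a one-off cleanup into the ongoing process the text recommends.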

3. Azure reservations and savings plans


These are 1- or 3-year commitments to specific Microsoft Azure services or compute usage. In exchange, significant cloud computing cost savings are granted. This is a very important area of cost governance, as it can amount to very large savings, even though it has practically zero impact on the carbon footprint. We recommend using reservations and savings plans once the right-sizing and cleanup processes have successfully started, and periodically tracking and adjusting their usage to match up to 100 percent of your requirements.
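A back-of-the-envelope calculation shows why this matters: commit only to the baseline you are confident you will keep running for the term. The pay-as-you-go rate and the ~40 percent discount below are assumptions for illustration, not quoted Azure rates.

```python
PAYG_RATE = 0.20          # $/hour pay-as-you-go (illustrative)
RESERVED_DISCOUNT = 0.40  # assumed 1-year reservation discount (illustrative)

def annual_cost(baseline_vms, reserved_vms, hours=8760):
    """Yearly compute cost with some VMs covered by reservations."""
    reserved = min(reserved_vms, baseline_vms)
    on_demand = baseline_vms - reserved
    return hours * PAYG_RATE * (on_demand + reserved * (1 - RESERVED_DISCOUNT))

no_reservation = annual_cost(baseline_vms=10, reserved_vms=0)
full_coverage = annual_cost(baseline_vms=10, reserved_vms=10)
print(round(no_reservation))                   # 17520
print(round(full_coverage))                    # 10512
print(round(no_reservation - full_coverage))   # 7008 saved per year
```

The same arithmetic also shows the risk: a reservation for a VM you later delete keeps costing its committed share, which is why right-sizing and cleanup should come first.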

4. Database and application tuning


We often see customers migrate applications that rely on legacy databases. Sometimes, even cloud-native applications are developed using old data handling patterns, mostly because companies have a history that needs to be retained and cannot be wiped out by switching to a new database. But a large, stratified database that performed well in an on-premises environment has immediate drawbacks in the cloud: queries may be slow and resource-intensive, and large quantities of data are exchanged needlessly, all of which adds to the monthly bill. Optimizing the database so that the application is leaner and faster will also save you money by downsizing the original infrastructure and consuming fewer data and networking resources.

Having fully optimized your databases can sometimes not be enough. Your freshly migrated application came from one of the cloud migration patterns: lift and shift, refactor, rearchitect, or rebuild. Cloud efficiency is highest when applications are designed for the cloud, as they utilize all the flexibility and scaling of infrastructure as a service (IaaS) and platform as a service (PaaS) offerings, resulting in higher performance and lower costs. Investing some of the savings from your cloud cost reduction exercise will not only improve your application's performance but ultimately improve your overall cloud resource optimization.


What can you do to kickstart your cloud computing efficiency today:

Start your recurrent cloud cost management meeting this week. Make sure to invite all the stakeholders—the cloud and finops teams, your finance controller, and anyone in your company who is dealing with cloud costs directly or indirectly.

Search for quick wins (cleaning up, downsizing, optimizing logs or backups, and more) so that this will fund the upcoming wave of cost-saving tasks and the refactoring and innovation of your applications.

In conclusion, cloud computing efficiency is a crucial element for any company that is operating in the cloud. By adopting cloud spend optimization practices, businesses can reduce their overall cloud spend and carbon emissions, improve the performance of their applications, and finance future elements of innovation.

Source: microsoft.com

Prove Your Azure Expertise with Microsoft DP-420 Certification

What Is the Microsoft DP-420 Certification Exam?

The DP-420 certification exam is a professional certification exam designed to test an individual's proficiency in designing and implementing cloud-native applications using Microsoft Azure Cosmos DB. Microsoft offers the certification, and it is intended for individuals who want to demonstrate their knowledge and expertise in building, deploying, and maintaining applications on Azure Cosmos DB. The exam assesses the candidate's ability to design data models, configure data distribution and partitioning, integrate Cosmos DB with other Azure services, and optimize and maintain Cosmos DB solutions.

Who Should Take the Microsoft DP-420 Certification Exam?

To take this exam, the candidate is expected to have a strong understanding and hands-on experience in developing applications for Azure and working with Azure Cosmos DB database technologies. Specifically, the candidate should have the following knowledge and skills.

1. Proficiency in Developing Applications for Azure

The candidate should be well-versed in developing cloud-based applications using Azure services and tools such as Azure Functions, Azure App Service, Azure Storage, and Azure Event Grid.

2. Solid Understanding of Azure Cosmos DB Database Technologies

The candidate should have a thorough knowledge of the different data models supported by Cosmos DB, such as documents, graphs, and key-value. They should also be familiar with the features and capabilities of Cosmos DB, such as global distribution, partitioning, and indexing.

3. Experience Working with Azure Cosmos DB for NoSQL API

The candidate should have hands-on experience developing applications that use Azure Cosmos DB for NoSQL API. This includes knowledge of creating and managing databases and containers, performing CRUD (Create, Read, Update, Delete) operations, and executing queries using the SQL API.
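The CRUD operations named above can be illustrated conceptually with an in-memory stand-in for a container. In the real service these calls go through the azure-cosmos SDK against a live endpoint; this mimic only shows the operation shapes, and all names here are invented for illustration.

```python
class ContainerMimic:
    """In-memory stand-in for a Cosmos DB container, for illustration only."""

    def __init__(self):
        self._items = {}  # (partition_key, id) -> item

    def create_item(self, item, pk):
        key = (pk, item["id"])
        if key in self._items:
            raise KeyError("conflict: item already exists")
        self._items[key] = item

    def read_item(self, item_id, pk):       # point read by id + partition key
        return self._items[(pk, item_id)]

    def upsert_item(self, item, pk):        # create-or-replace
        self._items[(pk, item["id"])] = item

    def delete_item(self, item_id, pk):
        del self._items[(pk, item_id)]

    def query(self, predicate):             # stand-in for a SQL-style query
        return [i for i in self._items.values() if predicate(i)]


c = ContainerMimic()
c.create_item({"id": "1", "category": "bike", "price": 200}, pk="bike")
c.upsert_item({"id": "1", "category": "bike", "price": 180}, pk="bike")
print(c.read_item("1", pk="bike")["price"])     # 180
print(len(c.query(lambda i: i["price"] < 200))) # 1
c.delete_item("1", pk="bike")
```

Note how every point operation needs both the item id and the partition key, which is exactly the access pattern the exam expects candidates to reason about.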

Having the above skills and knowledge, the candidate can prepare effectively for the exam and increase their chances of passing it successfully. This can also help them professionally by demonstrating their expertise in developing Azure Cosmos DB applications.

Microsoft DP-420 Exam Format

The exam is called Microsoft Certified - Azure Cosmos DB Developer Specialty, with the exam code DP-420. The exam costs $165 (USD), and the duration is 120 minutes. There will be between 40 and 60 questions on the exam, and the passing score is 700 out of 1000 points. This exam is designed for individuals who want to demonstrate their expertise in developing solutions using Azure Cosmos DB.

Exam Topics

The Microsoft Certified Azure Cosmos DB Developer Specialty exam (DP-420) covers various topics related to developing solutions using Azure Cosmos DB. Some of the critical areas that are covered in the exam include:

  • Design and Implement Data Models (35-40%)
  • Design and Implement Data Distribution (5-10%)
  • Integrate an Azure Cosmos DB Solution (5-10%)
  • Optimize an Azure Cosmos DB Solution (15-20%)
  • Maintain an Azure Cosmos DB Solution (25-30%)

These are some topics that may be covered on the exam. Reviewing the full exam objectives and the study materials provided by Microsoft is essential to ensure you are fully prepared for the exam.

Why is Microsoft DP-420 Certification Necessary?

The DP-420 exam evaluates your proficiency in various technical areas related to Azure Cosmos DB, a globally distributed, multi-model database service offered by Microsoft Azure. The exam measures your ability to perform the following tasks.

1. Design and Implement Data Models

You should be able to create and optimize data models to store and manage data in Azure Cosmos DB. This includes understanding the different data models supported by Cosmos DB, such as documents, graphs, and key-value.

2. Design and Implement Data Distribution

You should be able to configure and manage the replication and partitioning of data across multiple regions and availability zones to ensure high availability and disaster recovery.
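The partitioning piece of this objective can be sketched as hash-based placement: each item's partition key is hashed, and the result determines which physical partition holds that item, so a given key's data always lands in the same place. The hash function and partition count below are illustrative, not Cosmos DB's internal scheme.

```python
import hashlib

N_PARTITIONS = 4  # illustrative; the service manages this internally

def partition_for(partition_key):
    """Deterministically map a partition key to one of N partitions."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % N_PARTITIONS

placements = {k: partition_for(k) for k in ("user-1", "user-2", "user-3")}
print(placements)

# The same key always maps to the same partition:
assert partition_for("user-1") == placements["user-1"]
```

This determinism is why choosing a high-cardinality, evenly distributed partition key matters: a skewed key funnels traffic into one partition and defeats the scale-out.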

3. Integrate an Azure Cosmos DB Solution

You should be able to integrate Cosmos DB with other Azure services and tools to create a complete end-to-end solution for your application.

4. Optimize an Azure Cosmos DB Solution

You should be able to fine-tune and optimize the performance of your Cosmos DB solution by selecting appropriate configurations and using features such as indexing and query optimization.
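
Index tuning in Cosmos DB is expressed as a JSON indexing policy on the container: by default every path is indexed, and excluding write-heavy paths you never query reduces the request-unit cost of writes. A sketch of such a policy as a Python dict (the excluded path is a made-up example):

```python
import json

# Index everything except a large payload path that is never queried.
# Excluding unqueried paths lowers the RU charge of each write.
indexing_policy = {
    "indexingMode": "consistent",
    "automatic": True,
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": "/rawPayload/*"}],
}

# The policy round-trips as plain JSON, which is how it is applied
# to a container via the portal, ARM templates, or the SDKs.
assert json.loads(json.dumps(indexing_policy))["indexingMode"] == "consistent"
```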

5. Maintain an Azure Cosmos DB Solution

You should be able to monitor and manage the health and performance of your Cosmos DB solution, troubleshoot issues, and implement backup and restore procedures.
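
One routine maintenance concern is throttling: when a container exceeds its provisioned throughput, Cosmos DB rejects requests with HTTP 429 and a suggested retry delay. The Azure SDKs handle this automatically, but the underlying pattern looks roughly like this (simulated locally, no live service involved):

```python
class Throttled(Exception):
    """Stand-in for an HTTP 429 response carrying a suggested retry delay."""
    def __init__(self, retry_after: float):
        self.retry_after = retry_after

def call_with_retry(operation, max_attempts: int = 5):
    """Retry an operation that signals throttling, honoring the suggested delay."""
    waited = 0.0
    for _ in range(max_attempts):
        try:
            return operation()
        except Throttled as exc:
            waited += exc.retry_after  # real code would time.sleep(exc.retry_after)
    raise RuntimeError(f"still throttled after {max_attempts} attempts")

# Simulate a service that throttles the first two calls, then succeeds.
attempts = {"n": 0}
def flaky_write():
    attempts["n"] += 1
    if attempts["n"] <= 2:
        raise Throttled(retry_after=0.05)
    return "ok"

assert call_with_retry(flaky_write) == "ok"
assert attempts["n"] == 3
```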

By passing this exam, you can show your expertise in designing, implementing, and maintaining robust and scalable data solutions using Azure Cosmos DB. This can lead to career advancement opportunities and increased recognition in the industry.

Microsoft DP-420 Exam Preparation

1. DP-420 Exam Study Materials

To prepare for the DP-420 certification exam, candidates can use various study materials, including Microsoft documentation, Azure Cosmos DB samples, whitepapers, and Microsoft Official Courseware (MOC).

2. Microsoft DP-420 Exam Practice Tests and Sample Questions

Practice tests and sample questions are essential for preparing for the DP-420 certification exam. Candidates can find sample questions and practice tests on Microsoft Learn or from third-party providers.

3. Microsoft DP-420 Exam Study Tips

Here are some study tips to help candidates prepare for the DP-420 certification exam.

  • Review the exam objectives and topics thoroughly.
  • Create a study plan and allocate time for each topic.
  • Practice hands-on exercises using the Azure Cosmos DB service.
  • Collaborate with other professionals or join study groups to discuss exam topics and share knowledge.
  • Take regular breaks to avoid burnout.

4. Recommended DP-420 Exam Study Resources

Here are some recommended study resources for the DP-420 certification exam:

  • Microsoft Azure Cosmos DB documentation
  • Azure Cosmos DB samples and tutorials
  • Microsoft Official Courseware (MOC)
  • Online learning platforms such as Udemy, Coursera, and edX
  • Microsoft DP-420 Exam Study Guide from Microsoft Press

Exam Tips and Tricks

1. Time Management

Time management is crucial when taking the DP-420 certification exam. Candidates should allocate their time wisely and ensure enough time to answer all the questions. They should also monitor the clock and pace themselves throughout the exam.

2. Answering Strategies

When answering exam questions, candidates should read each question carefully and understand what is being asked before attempting to answer. They should also eliminate incorrect answers to increase their chances of selecting the correct answer. If unsure, they should use their best judgment to choose the most suitable option.

3. Common Mistakes to Avoid

Here are some common mistakes that candidates should avoid when taking the DP-420 certification exam.

  • Not taking the time to understand the exam objectives and topics.
  • Rushing through the exam without reading questions carefully.
  • Not managing time effectively.
  • Overthinking questions and second-guessing answers.
  • Not reviewing answers before submitting the exam.

4. Best Practices for DP-420 Exam Preparation

Here are some best practices to help candidates prepare for the DP-420 certification exam.

  • Use a variety of study materials, including practice tests and sample questions.
  • Create a study schedule and stick to it.
  • Practice hands-on exercises using the Azure Cosmos DB service.
  • Take breaks and avoid burnout.
  • Join study groups or collaborate with other professionals to share knowledge and insights.

Conclusion

In conclusion, the DP-420 certification exam is valuable for professionals seeking to demonstrate their proficiency in designing, implementing, and maintaining applications built on Azure Cosmos DB. By passing this exam, candidates can validate their knowledge and skills, enhance their career prospects, and differentiate themselves from other job candidates. However, passing the exam requires thorough preparation, including reviewing exam objectives and topics, practicing with sample questions and exercises, and developing effective time management and answering strategies. With dedication and effort, candidates can succeed in the DP-420 certification exam and take the next step in their careers as Azure Cosmos DB developers.

Thursday, 20 April 2023

Unleash the power of APIs: Strategies for innovation


Modern businesses increasingly rely on technology to drive growth and deliver innovative experiences to their customers. Application programming interfaces (APIs) are the building blocks that power these connected digital experiences. And more than ever, effective API management has become critical to accelerate time-to-market and deliver compelling customer and partner interactions.

We are excited to announce, "Unleash the Power of APIs: Strategies for Innovation", the latest in the Microsoft Azure webinar series on April 26, 2023, from 10:00 AM to 11:30 AM PT. In this 90-minute interactive virtual session, you will hear from analysts, product leaders, and Microsoft Azure API Management customers on how API management can maximize your investments and accelerate your API programs.

From security to development, first-hand customer accounts to analyst insights, this event will cover why APIs are so important today and in the future. Here’s a preview of what we’ll have featured at the webinar:

API-first business transformation


Amanda Silver, Corporate Vice President and Head of the Product Developer Division at Microsoft

We’re all hearing a lot about API-first development these days, and for good reason. The impact of API-first development is huge: an API-first approach promotes faster development, better collaboration, scalability, reusability, and enhanced security for developers. But what does API-first mean for businesses? And how do you implement this strategy?

The event will kick off with Amanda sharing insights on what API-first means and how it’s a game changer for businesses to achieve faster time-to-market, better integration, and accelerated innovation. She will also discuss the role of the Azure API Management platform in supporting an API-first strategy.

Market trends and API-driven innovation


Shari Lava, Research Director, Automation, IDC, and Ashmi Chokshi, General Manager, Azure Digital and Application Innovation, Microsoft

Businesses are increasingly adopting digital-first strategies to stay competitive in today’s fast-moving market and economy, and as a result, interest in APIs is surging as they become critical to these strategies.

Join Shari and Ashmi in a conversation about what’s driving the adoption of APIs now and why APIs are critical to driving competitive innovation and business differentiation. Shari will also discuss why it is crucial to invest in an API management solution, the market trends in the adoption of API management tools, and the factors to consider when choosing one based on your business needs.

Enterprise scale API management with Azure


Balan Subramanian, Partner Director of Product Management Azure App Platform, Microsoft

If you’ve been curious about how Azure API Management is empowering our customers to drive superior business outcomes, don’t miss this overview on Azure API Management from Balan.

With Azure API Management, organizations can manage every aspect of an API's lifecycle, from its inception to productization, across their API footprint, whether it's on-premises, on Azure, or on other clouds. Additionally, the developer portal and customizations allow platform engineering teams to create their API platforms on top of Azure API Management, tailoring it to their unique business requirements. Azure API Management is also fully integrated into Azure, making it an ideal solution for organizations migrating application workloads to the cloud without any overhead of using disparate solutions for building and managing their APIs.

API-first approach in the mortgage industry


Matt Cesarz, Chief Technology Officer, Optimal Blue, and Ali Powell, Vice President, Customer Success Digital and Application Innovation, Microsoft

In this customer session, Optimal Blue, a leading mortgage provider in the United States, talks about their successful partnership and journey with Azure API Management. Matt explains how adopting an API-first mindset enabled them to create frictionless customer experiences, deliver innovations faster, and drive growth.

Comprehensive defense-in-depth security with Azure API Management


APIs have become a popular attack vector, making defense-in-depth a crucial strategy for protecting enterprise data vaults against security threats. Without that level of protection, organizations are leaving themselves vulnerable to a range of security threats, including malicious attacks and data exfiltration.

Attend this session and learn how Azure API Management enables a defense-in-depth strategy through multiple layers of protection to prevent, detect, and respond to API threats. Balan will also cover the latest innovations that can further strengthen the security posture of your APIs.

Customer-centric healthcare with APIs


Blake Wilson, Integration and Site Reliability Engineering Manager, Technology, Bupa, and Ali Powell, Vice President, Customer Success Digital and Application Innovation, Microsoft

In this customer session, Bupa, one of the largest global medical insurance providers—with a large presence in Australia—talks about their successful innovations built with Azure API Management. Blake explains how leading with APIs has enabled them to enhance partner collaboration, improve security posture, and increase developer productivity.

Line of business innovation with Azure API Management


For businesses, low-code development and enterprise integration are two key strategic areas of investment that can drive innovation. By enabling seamless collaboration among application, integration, and low-code developers, these integrations can help accelerate innovation across all areas of your organization.

Hear from Balan about the Azure API Management integrations with Microsoft Power Platform to facilitate low-code development. You will also learn that Azure API Management is a core component of Azure Integration Services and is tightly integrated with other services such as Azure Logic Apps, Azure Functions, Azure Service Bus, and managed connectors enabling API-centered integration.

Source: microsoft.com

Saturday, 15 April 2023

How 5G and wireless edge infrastructure power digital operations with Microsoft


As enterprises continue to adopt Internet of Things (IoT) solutions and AI to analyze processes and data from their equipment, the need for high-speed, low-latency wireless connections is rapidly growing. Companies are already seeing benefits from deploying private 5G networks to enable their solutions, especially in the manufacturing, healthcare, and retail sectors.

The potential of 5G and multi-access edge computing (MEC) has evolved substantially. Now that these technologies are ready to enable the next generation of digital operations, it is worth highlighting some recent successful deployments that deliver high speeds and ultra-low latency.

These findings have been included in the latest Digital Operations Signals report. Where our previous industry trends report, IoT Signals, gave audiences insight into IoT, we thought it was important for this latest report to go beyond IoT and into the world of digital operations. The report now encompasses the business outcomes that organizations are pursuing to unlock the next level of improvements in efficiency, agility, and sustainability in their physical operations utilizing AI, machine learning, digital twins, 5G, and more.

As 5G connections and mobile edge computing continue to advance, so does the demand for their adoption. Interestingly, the Digital Operations Signals report found that cloud radio access networks (C-RAN), private Wi-Fi networks, and MEC technologies are not just continuing to develop, but they are also likely to converge. This means we could see more unified on-site network architectures with faster, more powerful computing.

What can 5G infrastructure deliver?


Traditionally, local connectivity in business sites—such as hospitals, clinics, warehouses, and factories—was provided by Ethernet and Wi-Fi. While Wi-Fi is still in common use for enterprise on-premises connections, it doesn’t always offer the bandwidth, latency, security, and reliability needed for demanding IoT solutions, particularly for rugged operational environments. The wider availability of 5G connectivity is spurring growth in new edge solutions and an increasing number of IoT device connections. It is now possible to achieve higher throughput, with latency of 100 milliseconds or less for a device to respond to a hosting server’s request.

But the adoption of 5G is more than just a network upgrade. Instead, it’s ushering in a new category of network-intelligent applications that can solve problems that were once out of reach. With 5G, you can deploy edge applications based on cloud-native distributed architecture for solutions that demand low latency and dedicated quality of service. By using 5G and leveraging APIs to interact with networks, these applications can deliver high-performing, optimized experiences.

How is 5G being used by enterprises today?


In factory settings, for example, AI requires low latency to improve control processes and robotic systems, recognize objects through advanced computer vision, and effectively manage warehouse and supply chain operations. In this scenario, 5G and MEC can help power computer vision-assisted product packing and gather near-real-time data on any mistakes. This opens the potential to improve on-site quality assurance for logistics and supply chain companies and reduce processing times.

In healthcare, 5G connections support AI’s use in medical diagnoses, health monitoring, predictive maintenance and monitoring of medical systems, and telemedicine applications. In retail operations, low-latency connections allow AI to help with real-time inventory management, in-store video traffic, and in-store real-time offers.

The 5G architecture consists of three different network tiers—low band, midband, and millimeter wave (mmWave) high band—that offer different advantages and disadvantages in coverage distances and speed. Additionally, key 5G services specialize in providing different features:

◉ Enhanced mobile broadband (eMBB): By defining a minimum level of data transfer rate, eMBB can provide ultra-high wireless bandwidth capabilities, handling virtual reality, computer vision, and large-scale video streaming.

◉ Massive machine-type communications (mMTC): Designed for industrial scenarios and other environments requiring numerous devices to be connected to each other, mMTC could be used with IoT solutions or large spaces with a variety of devices that would need to communicate together.

◉ Ultra-reliable low-latency communications (URLLC): This is designed for use cases that require extremely low latency and high reliability. This would benefit situations where responsiveness is critical, such as public safety and emergency response uses, remote healthcare, industrial automation, smart energy grids, and controlling autonomous vehicles.
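
As a rough mental model, the three service classes above optimize for different properties: bandwidth, device density, or latency. The mapping below restates the descriptions above as a small lookup (the helper function and its names are purely illustrative):

```python
# Each 5G service class optimizes for a different network property,
# restating the eMBB / mMTC / URLLC descriptions above.
service_classes = {
    "eMBB":  {"optimizes_for": "bandwidth",      "example_use": "large-scale video streaming"},
    "mMTC":  {"optimizes_for": "device density", "example_use": "massive IoT sensor fleets"},
    "URLLC": {"optimizes_for": "latency",        "example_use": "industrial automation"},
}

def pick_service_class(need: str) -> str:
    """Choose a service class by the property a workload cares most about."""
    for name, profile in service_classes.items():
        if profile["optimizes_for"] == need:
            return name
    raise ValueError(f"no service class optimizes for {need!r}")

assert pick_service_class("latency") == "URLLC"
assert pick_service_class("device density") == "mMTC"
```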

Using these services to achieve high speeds and performance, however, requires businesses to upgrade network technology and update their older wireless and edge architectures. To help overcome these challenges, enterprises are turning to the right combination of hardware, software, and cloud services that can optimize 5G at the edge.

How are Microsoft and Intel empowering 5G solutions?


Microsoft and Intel understand the many challenges that enterprises face. By working with telecom hyperscalers, independent solution providers, and other partners, we are providing 5G infrastructure and network services that are easily adaptable for use cases in many sectors. Azure private multi-access edge compute (MEC) helps operators and system integrators simplify the delivery of ultra-low-latency solutions over 4G and 5G networks. By reducing integration complexity, enterprises can innovate new solutions and generate new revenue streams.

Intel has designed a range of hardware to power 5G edge network activities and improve content transmission and processing. By providing foundational technology to run 5G, Intel is working to help standardize and simplify its use and create more unified edge applications and services. By helping customers securely and efficiently deploy 5G across industries, Intel enables them to reap the benefits of 5G without complicated or extended timelines.

Source: microsoft.com

Thursday, 13 April 2023

Azure Space technologies advance digital transformation across government agencies


Since its launch, Microsoft Azure Space has been committed to enabling people to achieve more, both on and off the planet. This mission has transcended various industries, including agriculture, finance, insurance, and healthcare.

The announcements we’ve made thus far have helped showcase how our mission encompasses not only commercial industries but also empowers government missions through recent contract wins. By bringing new commercial technologies, such as Microsoft 365, Azure Government Cloud, and Azure Orbital, government agencies are increasing the speed, flexibility, and agility of their missions. Today, we are announcing additional momentum on this motion, including:

◉ Viasat RTE integration with Azure Orbital Ground Station, bringing high rate, low latency data streaming downlink from spacecraft directly to Azure.

◉ A partnership with Ball Aerospace and Loft Federal on the Space Development Agency’s (SDA) National Defense Space Architecture Experimental Testbed (NExT) program, which will bring 10 satellites with experimental payloads into orbit and provide the associated ground infrastructure.

◉ Advancements on the Hybrid Space Architecture for the Defense Innovation Unit, U.S. Space Force and Air Force Research Lab, with new partners and demonstrations that showcase the power, flexibility, and agility of commercial hybrid systems that work across multi-path, multi-orbit, and multi-vendor cloud enabled resilient capabilities.

◉ Azure powers Space Information Sharing and Analysis Center (ISAC) to deliver Space cybersecurity and threat intelligence operating capabilities. The watch center’s collaborative environment provides visualization of environmental conditions and threat information to rapidly detect, assess and respond to space weather events, vulnerabilities, incidents, and threats to space systems.

Viasat Real-Time Earth general availability on Azure Orbital Ground Station



Microsoft has partnered with Viasat Real-Time Earth (RTE) to offer customers new real-time capabilities to manage spacecraft and missions with Azure Orbital Ground Station as a service. This includes the ability to view, schedule, and modify passes at Viasat RTE sites for downlinking data to Azure, and to bring real-time streaming directly to Azure across the secure Microsoft WAN.

As commercial satellite operators require increasingly higher downlink rates to bring mission data such as hyperspectral or synthetic aperture radar imagery into Azure, this partnership with Viasat increases the opportunity to access an established global network of Ka-band antennas. This unlocks new business opportunities for missions that require fast time to insight while also maintaining a high level of security.

“Viasat Real-Time Earth is enabling remote sensing satellite operators who are pushing the envelope of high-rate downlinks. Our strong relationship with Azure Orbital enables those same customers, through increased access to our ground service over the Azure Orbital marketplace and a dependable, high-speed terrestrial network, to reduce the time it takes to downlink and deliver massive amounts of data.”—John Williams, Vice President Viasat Real-Time Earth.

True Anomaly


True Anomaly delivers a fully integrated technology platform that combines training and simulation tools, advanced spacecraft manufacturing infrastructure and autonomy systems to revolutionize space security and sustainability.

True Anomaly is using the Viasat RTE integration with Azure Orbital Ground Station via Microsoft APIs today to advance their business with the government.

"Together, True Anomaly, Viasat, and Microsoft will employ cutting-edge modeling, simulation, and visualization tools available to train Space Force guardians and other operators. Our partnership will extend to satellite control, leveraging Microsoft Azure Orbital to provide seamless and efficient satellite management solutions for our fleet of Autonomous Orbital Vehicles. By joining forces, we unlock a path to disrupt space operations and training for years to come."— Even Rogers, Co-founder and CEO of True Anomaly.

This partnership combines True Anomaly's innovative Mission Control System with Microsoft’s Azure Orbital and Viasat, offering a seamless satellite management solution for space security operations and training.

Microsoft, Loft Federal, and Ball Aerospace partner on Space Development Agency NExT


The Space Development Agency is charged with creating and sustaining effective and affordable military space capabilities that provide persistent, resilient, global, low-latency surveillance. The National Defense Space Architecture Experimental Testbed (NExT) program will carry 10 satellites with experimental payloads into orbit.

SDA NExT builds upon Microsoft’s Azure Space products and partnerships. Central to Microsoft’s solution for NExT is the combination of Azure Orbital Ground Station and Azure Government air-gapped clouds which will allow SDA to do their mission work in a secure cloud environment. 

Through NExT, the SDA and US Space Force will together securely operate a government-owned satellite constellation with Azure Orbital Ground Station’s global network for the first time. Additionally, Microsoft 365 will provide them with productivity tools that enable personnel to share information, helping ensure a coordinated response.

Microsoft Azure Government cloud will enable SDA to extract insights from spaceborne data, from the cloud to the ultimate edge, scale innovation faster, and better meet the critical needs of the Guardians while strengthening national security.

New advancements and partnerships for Hybrid Space Architecture


Last year, we announced our contract supporting the Department of Defense's (DoD) Defense Innovation Unit (in partnership with United States Space Force and Air Force Research Lab) on the Hybrid Space Architecture (HSA). The goal of the program is to bring our advanced, trusted cloud, and innovative Azure Space capabilities, alongside a space partner ecosystem, to serve as a foundation to realize their Hybrid Space Architecture vision.

This year, Microsoft completed the first demonstration for the program focused on resilient communication and data paths which showcased:

◉ Multi-orbit, multi-vendor, resilient, edge-to-cloud connectivity, including use of Azure Orbital Cloud Access through satellite communications partners SpaceX and SES.

◉ SpatioTemporal Asset Catalogs (STAC) standards for operating a private Planetary Computer to efficiently manage large geospatial datasets and enable space vehicle tasking across multiple providers.

◉ AI-enabled field user application to allow users to rapidly and easily discover and task satellite collection through an intuitive chat interface.
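
STAC, mentioned in the second point above, describes each satellite capture as a GeoJSON-based "item" so catalogs from different providers stay interoperable. A minimal illustrative item following the STAC 1.0.0 specification (the ID, coordinates, and asset URL are made up):

```python
import json

# A minimal STAC item: a GeoJSON Feature plus the STAC-required fields.
stac_item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "demo-scene-001",                      # illustrative ID
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-105.0, 39.0], [-104.0, 39.0],
                         [-104.0, 40.0], [-105.0, 40.0], [-105.0, 39.0]]],
    },
    "bbox": [-105.0, 39.0, -104.0, 40.0],        # required when geometry is set
    "properties": {"datetime": "2023-04-01T00:00:00Z"},
    "links": [],
    "assets": {
        "visual": {"href": "https://example.com/demo-scene-001.tif",
                   "type": "image/tiff; application=geotiff"},
    },
}

# Items serialize to plain JSON, which is what makes catalogs portable
# across vendors and queryable at scale.
assert json.loads(json.dumps(stac_item))["stac_version"] == "1.0.0"
```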

Microsoft is committed to a strong, and growing, partner ecosystem. As part of this first demonstration, the Hybrid Space Architecture ecosystem included capabilities from Umbra and BlackSky.

Future demonstrations will incorporate all Azure Space capabilities including Azure Orbital Cloud Access, Azure Orbital Ground Station, Azure Orbital Space SDK, our leading security solutions, and vast threat intelligence, as well as multiple leading space partners.

Azure powers ISAC to deliver Space cybersecurity and threat intelligence operating capabilities


As a society, we rely increasingly on space-based systems across commercial, government, and critical infrastructure sectors, which underscores the importance of sharing threat intelligence to safeguard space infrastructure that supports billions of people globally.

The Space Information Sharing and Analysis Center (ISAC) was established several years ago, with Microsoft as a founding member, to facilitate timely collaboration across the global space industry to enhance the ability to prepare for and respond to vulnerabilities, incidents, and cybersecurity threats.

On March 30, 2023, the Space ISAC’s Operational Watch Center reached its initial operational capability hosted in Azure. The watch center’s collaborative environment provides visualization of environmental conditions and threat information to rapidly detect, assess, and respond to space weather events, vulnerabilities, incidents, and threats to space systems. The Watch Center is supported by a dedicated team of 10 in-person analysts with additional virtual support enabled by Azure cloud architecture.

Operating one of the largest cloud infrastructures in the world, Microsoft has gained an exceptional vantage point and unique experience in what it takes to secure cloud workloads and containers. Microsoft has a unique view into emerging threats based on analysis of over 65 trillion threat signals daily across over 200 global consumer and commercial services, and shares this insight with the Space ISAC community.

Working with the Space ISAC Watch Center, we can rapidly share threat intelligence with the space community. In addition, the new Microsoft Security Copilot capability will be available to our Space ISAC partners, to enable cyber defense at machine speed and scale.

Source: microsoft.com

Tuesday, 11 April 2023

New Azure App Service plans fuel greater choice and savings

In times of economic uncertainty, organizations need flexible options to achieve their business outcomes. To meet this need, we are excited to announce new plans for Microsoft Azure App Service customers with two new offerings in the Premium v3 (Pv3) service tier, and expansion in the Isolated v2 tier, which powers the high-security App Service Environment v3.

The rise in App Modernization projects is driven by companies of all sizes and digital maturity levels seeking powerful cost efficiencies as they continue to innovate and unlock competitive advantages with a leaner workforce and limited budgets. A brand new cost-effective P0v3 plan and a new series of memory-optimized (P*mv3) plans are designed to help more customers thrive and grow with Azure platform as a service (PaaS).

The P0v3 plan provides one of the most compelling price-to-performance ratios for organizations looking to get started with Azure App Service. As part of the modern Premium v3 tier, these plans include access to environments with faster processors, Azure Virtual Network support, and higher scale via increased vCPU, memory, and instance counts. The P0v3 and P*mv3 plans help you achieve more—with more options than before and greater flexibility to support your organization’s workload and free your development team to focus on innovation.


Reduce your app modernization costs without compromising on performance


The P0v3 plan gives you access to all the modern features that you expect from our most popular Premium v3 service plan at an extremely competitive price point. If you’re currently using or considering the Standard plan or Premium v2 (Pv2) plan, the new P0v3 plan delivers significantly higher performance at a similar monthly cost. What’s more, the P0v3 plan allows you to take advantage of Azure savings plans and reserved instance (RI) pricing—only available on the Premium v3 tier—for up to 55 percent more savings compared to the pay-as-you-go plan. Start planning your deployments today; app modernization has never been more compelling.

More diverse offerings meet your memory-intensive workload needs


Not all workloads in the cloud need the same resources, which is why organizations regularly optimize their architecture with a mix of technologies and service plans that best meet their current needs. To meet this need for varied workload sizes, we’ve developed the new memory-optimized series of P*mv3 plans, labeled as P1mv3, P2mv3, P3mv3, and so on.

These offerings provide the flexibility to scale your memory configuration without having to pay for additional cores. They also complement our existing line-up by creating more cost-effective options for workloads that need more memory to cache data or render a large page quickly, while working well using existing compute horsepower. They range from two virtual cores with 16 GB RAM in P1mv3 (compared to two cores, 8 GB RAM in P1v3) to 32 virtual cores with 256 GB RAM in the P5mv3.

As an example of how the memory-optimized plans can drive greater value for your organization, consider that in the East US region, the Linux pay-as-you-go (PAYG) price for the two-virtual-core, 16 GB P1mv3 plan is 54 percent less than the current four-core, 14 GB P3v2 offering and 40 percent less than the four-virtual-core, 16 GB P2v3 offering, while providing equivalent or more memory. When you factor in the one- and three-year savings plan discounts, the savings of P1mv3 over P3v2 jump to 65 percent and 75 percent respectively.
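
Savings figures like these follow from simple unit-price arithmetic. A sketch with hypothetical monthly prices (illustrative placeholders, not actual Azure list prices) showing how such percentages are derived:

```python
def percent_savings(new_price: float, old_price: float) -> float:
    """How much cheaper new_price is relative to old_price, as a percentage."""
    return round((1 - new_price / old_price) * 100, 1)

# Hypothetical monthly list prices, for illustration only.
p3v2_payg = 100.0          # older 4-core / 14 GB plan
p1mv3_payg = 46.0          # newer 2-core / 16 GB plan
assert percent_savings(p1mv3_payg, p3v2_payg) == 54.0

# A multi-year savings-plan discount compounds the difference.
p1mv3_3yr = p1mv3_payg * 0.55   # hypothetical ~45% savings-plan discount
assert percent_savings(p1mv3_3yr, p3v2_payg) == 74.7
```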


Premium v3 is your one-stop-shop for savings and scalability


Whether your organization is looking for entry-level plans to get started with app modernization or ready to expand its footprint with larger, more demanding workloads, the Premium v3 tier has you covered. With cost-effective instances as low as one virtual core with 4 GB RAM on the one hand, to the largest workloads running on 32 virtual cores with 256 GB RAM on the other—and everything in between—Premium v3 plans give you the peace of mind to scale up or down within the same modern and dynamic environment as needed, and adapt your platform spending to your current needs.

If you’re using a different tier for your existing workloads, we recommend adding new workloads using Pv3 plans as they can easily coexist. Going forward, rather than adding another instance to a Pv2 or lower tier plan, consider the Premium v3 tier for greater savings and performance. As a fully managed application platform service, Azure App Service will ensure your workloads are configured to perform and scale at the highest levels no matter what plan you choose.

Announcing larger instances for App Service Environment


Customers with high security and compliance needs can also choose the App Service Environment which provides fine-grained control over inbound and outbound application network traffic. Unlike the shared, multi-tenant service, an App Service Environment hosts applications from only one customer and is powered by the Isolated v2 (Iv2) plans. We are excited to announce three new Isolated v2 offerings which expand the ability for organizations to run applications at high scale and with higher memory utilization in an isolated and secure environment. These new offerings are the I4v2, I5v2, and I6v2, providing new compute options ranging from 16 virtual cores with 64 GB memory to as much as 64 virtual cores with 256 GB memory.

These new offerings also provide additional reasons for customers currently using earlier versions of App Service Environment to migrate to Isolated v2. This blog describes the benefits of moving to Isolated v2.

Achieve more on the next step of your modernization journey


Availability of the new plans is rolling out in H1 2023 across the Azure regions. Check back regularly as availability expands in the coming weeks.

If you are new to this journey, learn more about Azure App Modernization and get started with free Azure credits. If you are ready to start, check out our partner portal to find Microsoft partners with advanced specialization in app modernization to help speed up your deployments.

Source: microsoft.com