Sunday, 31 July 2022

Accelerating capital markets workloads for Murex on Azure


The financial services industry is constantly evolving to meet customer and regulatory demands, and it faces challenges spanning people, processes, and technology. Financial institutions (FIs) need to continuously accelerate technology adoption and innovation while maintaining scale, quality, speed, and safety. Simultaneously, they need to handle evolving regulatory frameworks, manage risk, transform digitally, process growing financial transaction volumes, and accelerate cost reduction and restructuring efforts.

Murex is a leading global software provider of trading, risk management, processing operations, and post-trade solutions for capital markets. FIs around the world deploy Murex’s MX.3 platform to better manage risk, accelerate transformation, and simplify compliance while driving revenue growth.

Murex MX.3 on Azure

Murex MX.3 has been certified for Microsoft Azure since version 3.1.35. We have been collaborating with Murex and global strategic partners like Accenture and DXC to provide Murex customers with a simple way to create and scale MX.3 infrastructure and achieve agility in business transformation. With the recent version 3.1.48, SQL Server is supported, and customers can now benefit from the performance, scalability, resilience, and cost savings it facilitates. With the SQL Server IaaS Extension, Murex customers can run SQL Server virtual machines (VMs) in Azure with PaaS capabilities for Windows OS (with the automated patching setting disabled to prevent the installation of a cumulative update that may not yet be supported by MX.3).

Architecture

Murex customers can now refer to a reference architecture to implement the MX.3 application on Azure. Azure enables a secure, reliable, and efficient environment, significantly reducing the infrastructure cost needed to operate the MX.3 environment while providing scalability and high performance. Customers running MX.3 on Azure can take advantage of multilayered security provided by Microsoft across physical data centers, infrastructure, and operations in Azure. They can benefit from the Compliance Program that helps accelerate cloud adoption with proactive compliance assurance for highly critical and regulated workloads. Customers can maximize their existing on-premises investments using an effective hybrid approach. Azure provides a holistic, seamless, and more secure approach to innovation across customers’ on-premises, multicloud, and edge environments.

The architecture is designed to provide high availability and disaster recovery. Murex customers can achieve threat intelligence and traffic control using Azure Firewall, cost optimization using Reserved Instances and VM scale sets, and high storage throughput using Azure NetApp Files Ultra Storage.

“With the deployment of large scale—originally specialized platform-based—Murex workloads, Azure NetApp Files has proven to deliver the ideal Azure landing zone for storage-performance intensive, mission-critical enterprise applications and to live up to its promise to Migrate the Un-migratable," says Geert van Teylingen, Azure NetApp Files Principal Product Manager from NetApp.

Customers running Murex on Azure

Customers around the world are migrating the Murex platform from on-premises to Azure.

ABN AMRO has moved their MX.3 trading and treasury front-to-back-to-risk platform to Azure, achieving flexibility, agility, and improved time to market. ABN AMRO’s journey to Azure progressed from proof of concept to production, with the Murex MX.3 platform now entirely operational on Azure.

“The key focus for us was always to make sure that we could automate most processes while preserving its operational excellence and key features,” says Kees van Duin, IT Integrator at ABN AMRO.

“Thanks to Microsoft, we were able to preserve nearly 90 percent of our original design and move our platform to the cloud, while in-production, as efficiently as possible. We couldn’t be happier with the result,” he continues.

For Pavilion Energy, Upskills helped drive implementation for Murex Trading in Azure, helping reduce the risk of errors, increase the volume of trading activities, and optimize the management of their Murex MX.3 platform environments.

“We have been working on the Murex technology for over 10 years. Implementing Murex Trading Platform fully into Azure has proven to be the right decision to reduce the risk of delivery, optimize the environments management, and provide sustainable solutions and support to Pavilion Energy” says Thong Tran, Chief Executive Officer (CEO) of Upskills.

Strategic partners helping accelerate Murex workloads

Murex customers can modernize MX.3 workloads, reduce time-to-market and operational costs, and accelerate delivery by leveraging accelerators, scripts, and blueprints from our partners Accenture and DXC.

Accenture and Microsoft have decades of experience partnering with each other and building joint solutions that help customers achieve their goals. Leveraging our strategic alliance to better serve our customers, Accenture has designed and created specific accelerators, tools, and methodologies for MX.3 on Azure that could help organizations develop richer DevOps and become more agile while controlling costs.

Luxoft, a DXC Technology Company, with Microsoft as a global strategic partner for more than 30 years and Murex as a top-tier alliance partner for more than 13 years, helps modernize solutions to connect people, data, and processes with tangible business results. DXC has developed execution frameworks that adopt market best practices to accelerate and minimize risks of cloud migration of MX.3 to Azure.

Keeping pace with the changing regulatory and compliance constraints, financial innovation, computation complexity, and cyber threats is essential for FIs. FIs around the world are relying on Murex MX.3 to accelerate transformation and drive growth and innovation while complying with complex regulations. Customers are using Azure to enhance business agility and operation efficiency, reduce risk and total cost of ownership, and achieve scalability and robustness.

Source: microsoft.com

Thursday, 28 July 2022

Azure empowers easy-to-use, high-performance, and hyperscale model training using DeepSpeed

Large-scale transformer-based deep learning models trained on large amounts of data have shown great results in recent years in several cognitive tasks and are behind new products and features that augment human capabilities. These models have grown several orders of magnitude in size during the last five years, from the few million parameters of the original transformer model all the way to the latest 530-billion-parameter Megatron-Turing model (MT-NLG 530B), as shown in Figure 1. There is a growing need for customers to train and fine-tune large models at an unprecedented scale.

Figure 1: Landscape of large models and hardware capabilities.

Azure Machine Learning (AzureML) brings large fleets of the latest GPUs powered by the InfiniBand interconnect to tackle large-scale AI training. We already train some of the largest models including Megatron/Turing and GPT-3 on Azure. Previously, to train these models, users needed to set up and maintain a complex distributed training infrastructure that usually required several manual and error-prone steps. This led to a subpar experience both in terms of usability and performance.

Today, we are proud to announce a breakthrough in our software stack, using DeepSpeed and 1024 A100s to scale the training of a 2T parameter model with a streamlined user experience at 1K+ GPU scale. We are bringing these software innovations to you through AzureML (including a fully optimized PyTorch environment) that offers great performance and an easy-to-use interface for large-scale training.

Customers can now use DeepSpeed on Azure with simple-to-use training pipelines that utilize either the recommended AzureML recipes or via bash scripts for VMSS-based environments. As shown in Figure 2, Microsoft is taking a full stack optimization approach where all the necessary pieces including the hardware, the OS, the VM image, the Docker image (containing optimized PyTorch, DeepSpeed, ONNX Runtime, and other Python packages), and the user-facing Azure ML APIs have been optimized, integrated, and well-tested for excellent performance and scalability without unnecessary complexity.

Figure 2: Microsoft full-stack optimizations for scalable distributed training on Azure.

This optimized stack enabled us to efficiently scale training of large models using DeepSpeed on Azure. We are happy to share our performance results supporting 2x larger model sizes (2 trillion vs. 1 trillion parameters), scaling to 2x more GPUs (1024 vs. 512), and up to 1.8x higher compute throughput/GPU (150 TFLOPs vs. 81 TFLOPs) compared to those published on other cloud providers.

We offer near-linear scalability both in terms of an increase in model size as well as increase in number of GPUs. As shown in Figure 3a, together with the DeepSpeed ZeRO-3, its novel CPU offloading capabilities, and a high-performance Azure stack powered by InfiniBand interconnects and A100 GPUs, we were able to maintain an efficient throughput/GPU (>157 TFLOPs) in a near-linear fashion as the model size increased from 175 billion parameters to 2 trillion parameters. On the other hand, for a given model size, for example, 175B, we achieve near-linear scaling as we increase the number of GPUs from 128 all the way to 1024 as shown in Figure 3b. The key takeaway from the results presented in this blog is that Azure and DeepSpeed together are breaking the GPU memory wall and enabling our customers to easily and efficiently train trillion-parameter models at scale.


Figure 3: (a) Near-perfect throughput/GPU as we increase the model size from 175 billion to 2 trillion parameters (BS/GPU=8), (b) Near-perfect performance scaling with the increase in number of GPU devices for the 175B model (BS/GPU=16). The sequence length is 1024 for both cases.
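To put these numbers in perspective, here is a small Python sketch of aggregate throughput and scaling efficiency using the figures quoted in this post; the helper functions are purely illustrative and not part of any Azure SDK.

```python
def aggregate_tflops(tflops_per_gpu: float, num_gpus: int) -> float:
    """Total cluster throughput if the per-GPU throughput is sustained."""
    return tflops_per_gpu * num_gpus

def scaling_efficiency(base_gpus: int, base_tflops: float,
                       scaled_gpus: int, scaled_tflops: float) -> float:
    """Ratio of achieved speedup to ideal (linear) speedup."""
    achieved = (scaled_gpus * scaled_tflops) / (base_gpus * base_tflops)
    ideal = scaled_gpus / base_gpus
    return achieved / ideal

# Sustaining >157 TFLOPs/GPU across 1024 A100s, as in Figure 3a:
print(aggregate_tflops(157, 1024))  # 160768 total TFLOPs

# If per-GPU throughput holds steady from 128 to 1024 GPUs (Figure 3b),
# scaling is perfectly linear:
print(scaling_efficiency(128, 157, 1024, 157))  # 1.0
```

"Near-linear" in the post means this efficiency stays close to 1.0 as GPUs are added, rather than degrading due to communication overhead.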

Source: microsoft.com

Tuesday, 26 July 2022

Microsoft Cost Details API now generally available for EA and MCA customers


The Cost Details API is now generally available for use by Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) customers. This API provides on-demand download of the granular cost details (formerly referred to as usage details) associated with your Microsoft charges. The API replaces all existing Usage Details APIs and provides data for charges in your invoice. For customers with an MCA agreement this includes Microsoft 365, Dynamics 365, Power Platform, and Azure charges. This API is the go-to solution, along with Exports, for ingesting the raw cost data needed to build any custom reporting solution. To learn more about how to call the Cost Details API, please see our documentation on how to get small cost datasets on demand.
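The asynchronous call pattern behind this API can be sketched as follows. This is a minimal illustration assuming the documented generateCostDetailsReport flow (submit a request, poll the Location header until the report is ready, then download); the `http` callable is injected so the control flow can be exercised without network access, and error handling is omitted for brevity.

```python
def fetch_cost_details(http, scope: str, start: str, end: str, max_polls: int = 10):
    """Submit a cost-details report request and poll until it completes.

    `http` is any callable (method, url, payload) -> (status, headers, body),
    e.g. a thin wrapper around an HTTP client with a bearer token attached.
    """
    # 1. Submit the report-generation request.
    status, headers, body = http(
        "POST",
        f"https://management.azure.com/{scope}/providers/Microsoft.CostManagement"
        "/generateCostDetailsReport?api-version=2022-05-01",
        {"timePeriod": {"start": start, "end": end}},
    )
    if status == 200:                  # a cached report was returned directly
        return body
    poll_url = headers["Location"]     # 202 Accepted: poll this URL
    # 2. Poll until the report is ready, then return the manifest
    #    (which contains the blob download links).
    for _ in range(max_polls):
        status, headers, body = http("GET", poll_url, None)
        if status == 200 and body.get("status") == "Completed":
            return body
    raise TimeoutError("report not ready after polling")
```

A fake transport makes the flow easy to see: a POST returning 202 with a Location header, followed by a GET whose body reports `"Completed"`, yields the final manifest.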

Customers that are not using Exports or the Cost Details API should migrate to one of these solutions moving forward.

Benefits of the new solutions

Both the Cost Details API and Exports provide extensive benefits on top of existing solutions today.

◉ Security and stability—New solutions require Service Principal and/or user tokens in order to access data. For EA customers, keys used in the EA Reporting APIs are valid for six months. Going forward it is recommended to use token-based solutions through Service Principal or User authentication, consistent with Azure REST APIs.

◉ Scalability—The EA Reporting APIs (available only for EA customers) and the Consumption Usage Details (available for both EA and MCA customers) aren't built to scale well as your Microsoft and Azure costs increase. The number of Azure cost records in your cost details dataset can get exceedingly large as you deploy more resources into the cloud. The new solutions are asynchronous and have extensive infrastructure enhancements behind them to ensure successful downloads for any size dataset.

◉ Single dataset for all usage details—For EA customers, the existing EA Reporting APIs have separate endpoints for Azure usage charges versus Azure Marketplace charges. These datasets have been merged in the new solutions. A single dataset reduces the number of APIs that you need to call to see all your charges.

◉ Purchase amortization—Customers who purchase Reservations can see an amortized view of their costs using the new solutions. You can request amortized or actual cost datasets as part of the report configuration. Learn more about using amortized cost details datasets.

◉ Schema consistency—The Cost Details API and Exports provide files with matching fields. This allows you to easily move between solutions based on your scenario.

◉ Cost allocation integration—EA and MCA customers can use the new solutions to view charges in relation to the cost allocation rules that have been configured.

◉ Go-forward improvements—The new solutions are under active development and will receive all new features as they're released.

Migrating to the new solutions

The EA Reporting APIs and the Consumption Usage Details API are no longer actively being developed. We strongly recommend migrating off these APIs as soon as possible. A retirement announcement will be made in the future with formal timelines for when the APIs will be turned off. However, the APIs will be fully supported from a reliability standpoint until the formal retirement date is reached.

When assessing the migration off existing usage details solutions, please use the documents below as a reference point:

◉ Migrate from the Enterprise Reporting Usage Details API 

◉ Migrate from the Consumption Usage Details API

Please note that pay-as-you-go, MSDN, and Visual Studio customers who are not a part of either an Enterprise Agreement (EA) or Microsoft Customer Agreement (MCA) should continue using the Consumption Usage Details API.

Source: microsoft.com

Saturday, 23 July 2022

Digital transformation for manufacturers requires additional IT/OT security


While every industry is vulnerable to ransomware attacks, manufacturers are at particular risk. Digitization and automation have helped transform the industry, but they have simultaneously opened up new attack vectors within organizations. Now the most targeted industry, manufacturing has seen a 300 percent increase in cyberattacks in a single year.

Beyond the tremendous growth in attacks, manufacturing companies make an ideal target for hackers due to the high value of the companies themselves, the high costs of unplanned downtime, and the highly visible impact that downtime has on consumers’ daily lives. With the risks so high, an enterprise-level solution that provides visibility and protection like Microsoft Defender for IoT is essential.

Visibility is the first step to network protection

The number of connected industrial control system (ICS)/operational technology (OT) devices in manufacturing facilities continues to grow. The benefits for the operations side of the house are clear, but the lack of visibility into them poses serious security risks for chief information security officers (CISOs).

Manufacturers often have no way to identify and monitor what all their connected devices are doing and with whom or what they are communicating. Worse, all too often they lack even a simple inventory of all the connected devices they have in their facilities. In case of an attack, the lack of visibility means that they have no way of tracing the attack vector the hacker took, making them vulnerable to a second wave and delaying recovery and remediation.

Continuous monitoring without impacting productivity

Microsoft Defender for IoT not only creates asset maps within minutes of being turned on, but it also provides continuous monitoring of every device in every facility around the world. Microsoft’s Section 52 has access to tens of trillions of identity, endpoint, and other signals each day. The threat intelligence from this specialized IoT and ICS research team produces high-impact insights that help keep manufacturers safe from attacks.

The agentless nature of the system protects companies without impacting production, no matter the topology of the network or the regulations governing the industry. And, with round-the-clock protection, Microsoft Defender for IoT can alert the SecOps team about an intrusion any time, any place.

Security for networks in an age of IT and OT convergence

As their digital transformations have progressed, manufacturers have seen their IT and OT environments converge. The air gap between them that ensured production would continue even if IT assets were taken offline is increasingly a thing of the past. With these trendlines, forward-thinking CISOs and their boards are taking proactive steps to protect the entire company from cyber-physical attacks that could have huge costs to safety, production, reputation, and the bottom line.

Fortunately, Microsoft Defender for IoT can usually be deployed in less than a single day per facility and works right out of the box for large enterprises and small, niche facilities. With it, defenders of OT networks have a powerful new tool at their disposal to help keep hackers out and people, production, and profits safe.

Source: microsoft.com

Thursday, 21 July 2022

Azure Premium SSD v2 Disk Storage in preview


We are excited to announce the preview of Premium SSD v2, the next generation of Microsoft Azure Premium SSD Disk Storage. This new disk offering provides the most advanced block storage solution designed for a broad range of input/output (IO)-intensive enterprise production workloads that require sub-millisecond disk latencies as well as high input/output operations per second (IOPS) and throughput, at a low cost. With Premium SSD v2, you can now provision up to 64 TiB of storage capacity, 80,000 IOPS, and 1,200 MBPS throughput on a single disk. With best-in-class IOPS and bandwidth, Premium SSD v2 provides the most flexible and scalable general-purpose block storage in the cloud, enabling you to meet the ever-growing demands of production workloads such as SQL Server, Oracle, MariaDB, SAP, Cassandra, MongoDB, big data, analytics, and gaming, on virtual machines or stateful containers. Moreover, with Premium SSD v2, you can provision granular disk sizes, IOPS, and throughput independently based on your workload needs, giving you more flexibility in managing performance and costs.

With the launch of Premium SSD v2, our Azure Disk Storage portfolio now includes one of the most comprehensive sets of disk storage offerings to satisfy workloads ranging from Tier-1 IOPS intensive workloads such as SAP HANA to general purpose workloads such as RDMS and NoSQL databases and cost-sensitive Dev/Test workloads.

Benefits of Premium SSD v2

As customers transition their production workloads to the cloud or deploy new cloud-native applications, balancing performance and cost is top of mind. For example, transaction-intensive database workloads may require high IOPS on a small disk size or a gaming application may need very high IOPS during peak hours. Similarly, big data applications like Cloudera/Hadoop may require very high throughput at a low cost. Hence, customers need the flexibility to scale their IOPS and throughput independent of the disk size. With Premium SSD v2, you can customize disk performance to precisely meet your workload requirements or seasonal demands, without the need to provision additional storage capacity.

Premium SSD v2 also enables you to provision storage capacity ranging from 1 GiB up to 64 TiB with GiB increments. All Premium SSD v2 disks provide a baseline performance of 3,000 IOPS and 125 MB/sec. If your disk requires higher performance, you can provision the required IOPS and throughput at a low cost, up to the max limits shown below. You can dynamically scale up or scale down the IOPS and throughput as needed without downtime, allowing you to manage disk performance cost-effectively while avoiding the maintenance overhead of striping multiple disks to achieve more performance. Summarizing the key benefits:

◉ Granular disk size in 1 GiB increments.

◉ Independent provisioning of IOPS, throughput, and GiB.

◉ Consistent sub-millisecond latency.

◉ Easier maintenance with scaling performance up and down without downtime.
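The provisioning rules above can be sketched in a few lines. The limits are taken from this post (baseline 3,000 IOPS and 125 MBPS free on every disk, capacity from 1 GiB to 64 TiB, peaks of 80,000 IOPS and 1,200 MBPS); the validation helper itself is hypothetical, not an Azure API.

```python
# Premium SSD v2 limits as stated in this post.
BASELINE_IOPS, MAX_IOPS = 3_000, 80_000
BASELINE_MBPS, MAX_MBPS = 125, 1_200
MIN_GIB, MAX_GIB = 1, 64 * 1024  # 1 GiB up to 64 TiB, in 1 GiB increments

def validate_disk(size_gib: int, iops: int, mbps: int) -> bool:
    """True if the requested Premium SSD v2 configuration is within limits.

    Every disk gets at least the free baseline performance, so the
    provisioned IOPS/throughput can never fall below it.
    """
    return (MIN_GIB <= size_gib <= MAX_GIB
            and BASELINE_IOPS <= iops <= MAX_IOPS
            and BASELINE_MBPS <= mbps <= MAX_MBPS)

# A 100 GiB disk with 5,000 IOPS and 150 MBPS is a valid configuration:
print(validate_disk(100, 5_000, 150))  # True
```

Because size, IOPS, and throughput are independent dimensions, a small disk with high IOPS (or a large disk with baseline performance) is just as valid as any other in-range combination.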

Premium SSD v2, like all other Azure Disk Storage offerings, will provide our industry-leading data durability and high availability at general availability.

Following is a summary comparing Premium SSD v2 with the current Premium SSD and Ultra Disk.

                     Ultra Disk            Premium SSD v2       Premium SSD
Disk size            4 GiB - 64 TiB        1 GiB - 64 TiB       4 GiB - 32 TiB
Baseline IOPS        Varies by disk size   3,000 IOPS free      Varies by disk size
Baseline throughput  Varies by disk size   125 MBPS free        Varies by disk size
Peak IOPS            160,000 IOPS          80,000 IOPS          20,000 IOPS
Peak throughput      4,000 MBPS            1,200 MBPS           900 MBPS
Durability           99.999999999% (~0% annual failure rate) for all three


Supported Azure Virtual Machines


Premium SSD v2 can be used with any premium storage-enabled virtual machine size, enabling you to leverage a diverse set of virtual machine families. Currently, Premium SSD v2 disks can only be used as data disks; Premium SSDs and Standard SSDs can be used as OS disks for virtual machines that use Premium SSD v2 data disks.

Pricing


Premium SSD v2 disks are billed hourly based on the provisioned capacity, IOPS, and MBPS. Let’s take an example of a disk that you provision with 100 GiB capacity, 5000 IOPS, and 150 MB/sec throughput.

◉ The disks are billed per GiB of the provisioned capacity. Hence, you will be charged for 100 GiB of the provisioned capacity.

◉ The disks are billed for any additional IOPS provisioned over the free baseline of 3,000 IOPS. In this case, since you provisioned 5000 IOPS, you will be billed for the additional 2,000 IOPS.

◉ The disks are billed for any additional throughput over the free baseline throughput of 125 MB/s. In this case, since you provisioned 150 MB/sec throughput, you will be billed for the additional 25 MB/s throughput.
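The billing example above can be worked through with a short sketch. The free baselines come from this post; `billable_units` is an illustrative helper, not a Microsoft API, and actual unit prices vary by region.

```python
# Free baselines included with every Premium SSD v2 disk.
BASELINE_IOPS = 3_000
BASELINE_MBPS = 125

def billable_units(size_gib: int, iops: int, mbps: int) -> tuple:
    """Return (GiB, extra IOPS, extra MBPS) that are actually billed.

    Capacity is billed in full; IOPS and throughput are billed only
    for the amount provisioned above the free baseline.
    """
    return (size_gib,
            max(0, iops - BASELINE_IOPS),
            max(0, mbps - BASELINE_MBPS))

# The example disk: 100 GiB, 5,000 IOPS, 150 MBPS.
print(billable_units(100, 5_000, 150))  # (100, 2000, 25)
```

So for the example disk you pay for all 100 GiB of capacity, the 2,000 IOPS above the 3,000 IOPS baseline, and the 25 MBPS above the 125 MBPS baseline; a disk at or below the baselines pays only for capacity.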

Source: microsoft.com

Sunday, 17 July 2022

Gateway Load Balancer now generally available in all regions

We previously announced the public preview of Gateway Load Balancer (GWLB), a new SKU of Azure Load Balancer targeted at transparent NVA (network virtual appliance) insertion and supported by a growing list of NVA providers; GWLB is now generally available. Placing NVAs in the path of traffic is a growing need for customers as their workloads scale. Common use cases of NVAs we’ve seen are:

◉ Allowing or blocking specific IPs using virtual firewalls.

◉ Protecting applications from DDoS attacks.

◉ Analyzing or visualizing traffic patterns.

And GWLB now offers the following benefits for NVA scenarios:

◉ Source IP preservation.

◉ Flow symmetry.

◉ Lightweight NVA management at scale.

◉ Auto-scaling with Azure Virtual Machines Scale Sets (VMSS).

With GWLB, bump-in-the-wire service chaining becomes easy to add on to new or existing architectures in Azure. This means customers can easily “chain” a new GWLB resource to both Standard Public Load Balancers and individual virtual machines with Standard Public IPs, covering scenarios involving both highly available, zonally resilient deployments and simpler workloads.


Figure 1: GWLB can be associated to multiple consumer resources, including both Standard Public Load Balancers and Virtual Machines with Standard Public IPs. When GWLB is chained to the front-end configuration or VM NIC IP configuration, unfiltered traffic from the internet will first be directed to the GWLB and then reach the configured NVAs. The NVAs will then inspect the traffic and send the filtered traffic to the final destination, the consumer application hosted on either the load balancer or virtual machine.

What’s new with Gateway Load Balancer


GWLB shares most of the same concepts as the Standard Load Balancers that customers are familiar with today. You’ll have most of the same components, such as frontend IPs, load balancing rules, backend pools, health probes, and metrics, but you’ll also see a new component unique to GWLB: VXLAN tunnel interfaces.

VXLAN is an encapsulation protocol utilized by GWLB. This allows traffic packets to be encapsulated and decapsulated with VXLAN headers as they traverse the appropriate data path, all while maintaining their original source IP and flow symmetry without requiring Source Network Address Translation (SNAT) or other complex configurations like user-defined routes (UDRs).

The VXLAN tunnel interfaces are configured as part of the GWLB’s back-end pool and enable the NVAs to isolate “untrusted” traffic from “trusted” traffic. Tunnel interfaces can either be internal or external and each backend pool can have up to two tunnel interfaces. Typically, the external interface is used for “untrusted” traffic—traffic coming from the internet and headed to the appliance. Correspondingly, the internal interface is used for “trusted” traffic—traffic going from your appliances to your application.
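The VXLAN header itself is only 8 bytes (RFC 7348): a flags byte with the I bit set, a 24-bit VNI identifying the tunnel, and reserved fields. Here is a minimal sketch of building and parsing it; the VNI values are purely illustrative, since the actual identifiers for GWLB tunnel interfaces are configured on the backend pool.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build an 8-byte VXLAN header carrying the given 24-bit VNI."""
    assert 0 <= vni < 2**24
    flags_reserved = 0x08 << 24   # I flag set (valid VNI), reserved bits zero
    return struct.pack("!II", flags_reserved, vni << 8)

def vni_of(header: bytes) -> int:
    """Extract the VNI from a VXLAN header, as done on decapsulation."""
    _, second = struct.unpack("!II", header)
    return second >> 8

# Illustrative internal ("trusted") tunnel interface with VNI 800:
internal = vxlan_header(800)
print(vni_of(internal))  # 800
```

Because the original inner packet rides inside this encapsulation unchanged, the source IP survives the round trip through the NVAs; stripping the outer header at the destination is all it takes to restore the original flow, which is why no SNAT or UDRs are needed.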

Contoso case study


To better understand the use case of GWLB, let’s dive deeper into the example of Contoso, a retail company.

Who is Contoso?

Contoso is a retail company that uses Azure Load Balancer today to make their web servers supporting their retail platform regionally resilient. In the past few years, they’ve experienced exponential growth and now serve over 20 million visitors per month. When faced with the need to scale their retail platform, they chose Azure Load Balancer because of its high performance coupled with ultra-low latency. As a result of their success, they’ve begun to adopt stricter security practices to protect customer transactions and reduce the risk of harmful traffic reaching their platforms.

What does Contoso’s architecture look like today?

One of their load balancers supporting the eastus region is called contoso-eastus and has a front-end IP configuration with the public IP 101.22.462. Today, traffic headed to 101.22.462 on port 80 is distributed to the backend instances on port 80 as well.

What’s the problem?

The security team recently identified some potentially malicious IP addresses that have been attempting to access their retail platform. As a result, they’re looking to place a network-layer virtual firewall to protect their applications from IP addresses with poor reputations.

What’s the plan?

Contoso has decided to go with a third-party NVA vendor whose appliances the team has used in other contexts such as smaller scale applications or other internal-facing tools. The security team wants to keep the creation of additional resources to a minimum to simplify their NVA management architecture, so they decide to map one GWLB with an auto-scaling backend pool of NVAs using Azure VMSS to each group of load balancers deployed in the same region.

Deploying Gateway Load Balancer

The cloud infrastructure team at Contoso creates a GWLB with their NVAs deployed using Azure VMSS. Then, they chain this GWLB to their 5 Standard Public LBs for the eastus region. After verifying that their Data Path Availability and Health Probe Status metrics are 100 percent on both their GWLB and on each chained Standard Public LB, they run a quick packet capture to ensure everything is working as expected.

What happens now?

Now, traffic packets whose destination are any of the frontend IPs of the Standard Public LBs for eastus will be encapsulated using VXLAN and sent to the GWLB first. At this point, the firewall NVAs will decapsulate the traffic, inspect the source IP, and determine whether this traffic is safe to continue on towards the end application. The NVA will then re-encapsulate traffic packets that meet the firewall’s criteria and send it back to the Standard LB. When the traffic reaches the Standard LB, the packets will be decapsulated, meaning that the traffic will appear as if it came directly from the internet, with its original source IP intact. This is what we mean by transparent NVA insertion, as Contoso’s retail platform applications will behave exactly as they did before, without ever knowing that the packet was inspected or filtered by a firewall appliance prior to reaching the application server.

Gateway Load Balancer partners


Gateway Load Balancer supports a variety of NVA providers; you can learn more about each of our partners on our partners page.

  • Virtual firewalls
    • Check Point
    • Cisco
    • F5
    • Fortinet
    • Palo Alto Networks
  • Traffic observability
    • cPacket Networks
    • Glasnostic
  • Network security
    • Citrix
    • Trend Micro
    • Valtix
  • DDoS protection
    • A10 Networks

Source: microsoft.com

Saturday, 16 July 2022

Microsoft joins Jakarta EE and MicroProfile Working Groups at Eclipse Foundation

We’re excited to announce that Microsoft has joined the Eclipse Foundation Jakarta EE and MicroProfile Working Groups as an Enterprise and Corporate member, respectively. Our goal is to help advance these technologies to deliver better outcomes for our Java customers and the broader community. We’re committed to the health and well-being of the vibrant Java ecosystem, including Spring (Spring utilizes several key Jakarta EE technologies). Joining the Jakarta EE and MicroProfile groups complements our participation in the Java Community Process (JCP) to help advance Java SE.



Over the past few years, Microsoft has made substantial investments in offerings for Java, Jakarta EE, MicroProfile, and Spring technologies on Azure in collaboration with our strategic partners. With Red Hat, we’ve built a managed service for JBoss EAP on Azure App Service. We’re also collaborating with Red Hat to enable robust solutions for JBoss EAP on Virtual Machines (VMs) and Azure Red Hat OpenShift (ARO). With VMware, we jointly develop and support Azure Spring Apps (formerly Azure Spring Cloud), a fully managed service for Spring Boot applications. And with Oracle and IBM, we’ve been building solutions for customers to run WebLogic and the WebSphere product family on VMs, Azure Kubernetes Service, and ARO. Other work includes a first-party managed service to run Tomcat and Java SE (App Service) and Jakarta Messaging support in Azure Service Bus.


Our strategic partners


Microsoft is actively improving our support for running Quarkus on Azure, including on emerging platforms such as Azure Container Apps. The expanded investment in Jakarta EE and MicroProfile is a natural progression of our work to enable Java on Azure. Our broad and deep partnerships with key Java ecosystem stakeholders such as Oracle, IBM, Red Hat, and VMware power our Java on Azure work. These strategic partners share our enthusiasm for the Jakarta EE and MicroProfile journeys that Microsoft has embarked upon.

"We're thrilled to have an organization with the influence and reach of Microsoft joining the Jakarta EE Working Group. Microsoft has warmly embraced all things Java across its product and service portfolio, particularly Azure. Its enterprise customers can be confident that they will be actively participating in the further evolution of the Jakarta EE specifications which are defining enterprise Java for today's cloud-native world."—Mike Milinkovich, Executive Director, Eclipse Foundation.

“We welcome Microsoft to the Jakarta EE and MicroProfile Working Groups. We are pleased with our collaboration with Microsoft in delivering Oracle WebLogic Server solutions in Azure, which are helping customers to use Jakarta EE in the cloud. We look forward to more collaboration in the Jakarta EE and MicroProfile Working Groups.”—Tom Snyder, Vice President, Oracle Enterprise Cloud Native Java.

“IBM’s collaboration with Microsoft has shown Jakarta EE and MicroProfile running well in a number of Azure environments on the Liberty runtime, so it’s exciting to see Microsoft now joining the Jakarta EE and MicroProfile Working Groups. I look forward to seeing Microsoft bringing another perspective to the Working Groups based on their experience and needs for Azure customers.”—Ian Robinson, Chief Technology Officer, IBM Application Platform.

"It is great to see Microsoft officially join both MicroProfile and Jakarta EE as they'd been informally involved in these efforts for a long time. I hope to see Microsoft's participation bring experience from their many users and partners who have developed and deployed enterprise Java applications on Azure for several years."—Mark Little, Vice President, Software Engineering, Red Hat.

"We are excited to see Microsoft supporting the Jakarta EE Working Group. Jakarta EE serves as a key integration point for Spring applications and we look forward to the future evolution of common specifications like Servlet, JPA, and others. Microsoft delights developers with their continued support of the Java ecosystem along with their work with VMware on bringing a fully managed Spring service to Azure.”—Ryan Morgan, Vice President, Software Engineering, VMware.

Looking to the future


As part of the Jakarta EE and MicroProfile working groups, we’ll continue to work closely with our long-standing partners. We believe our experience with running Java workloads in the cloud will be valuable to the working groups, and we look forward to building a strong future for Java together with our customers, partners, and the community.

Source: microsoft.com

Thursday, 14 July 2022

MLOps Blog Series Part 4: Testing security of secure machine learning systems using MLOps

The growing adoption of data-driven and machine learning–based solutions is driving the need for businesses to handle growing workloads, exposing them to extra levels of complexities and vulnerabilities.

Cybersecurity is the biggest risk for AI developers and adopters. According to a survey released by Deloitte in July 2020, 62 percent of adopters saw cybersecurity risks as a significant or extreme threat, but only 39 percent said they felt prepared to address those risks.

In Figure 1, we can observe possible attacks on a machine learning system (in the training and inference stages).

Figure 1: Vulnerabilities of a machine learning system.

To know more about how these attacks are carried out, check out the Engineering MLOps book. Here are some key approaches and tests for securing your machine learning systems against these attacks:

Homomorphic encryption


Homomorphic encryption is a type of encryption that allows calculations to be performed directly on encrypted data. It ensures that the result, once decrypted, is identical to the result obtained by performing the same calculation on the unencrypted inputs.

For example, decrypt(encrypt(x) + encrypt(y)) = x + y.
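To make this concrete, here is a toy sketch of the Paillier cryptosystem, a classic additively homomorphic scheme: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The parameters below are deliberately tiny for illustration and are in no way secure.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# Tiny primes for illustration only -- NOT secure.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu modulo n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

x, y = 17, 25
cx, cy = encrypt(x), encrypt(y)
# Multiplying ciphertexts adds the underlying plaintexts:
assert decrypt((cx * cy) % n2) == x + y
```

This lets an untrusted party compute on the encrypted values without ever seeing x or y.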

Privacy by design


Privacy by design is a philosophy or approach for embedding privacy, fairness, and transparency in the design of information technology, networked infrastructure, and business practices. The concept brings an extensive understanding of the principles needed to achieve privacy, fairness, and transparency, and it helps prevent data breaches and attacks before they happen.

Figure 2: Privacy by design for machine learning systems.

Figure 2 depicts some core foundations to consider when building a privacy by design–driven machine learning system. Let’s reflect on some of these key areas:

◉ Maintaining strong access control is fundamental.

◉ Utilizing robust de-identification techniques (in other words, pseudonymization) for personal identifiers, along with data aggregation and encryption approaches, is critical.

◉ Securing personally identifiable information and data minimization are crucial. This involves collecting and processing the smallest amount of data possible in terms of the personal identifiers associated with the data.

◉ Data lineage tracking means understanding, documenting, and displaying data as it travels from data sources to consumers. This covers all of the data's changes along the journey: how the data was transformed, what changed, and why. In a data analytics process, data lineage provides visibility while considerably simplifying the ability to trace data breaches, mistakes, and their root causes.

◉ Explaining and justifying automated decisions when required is vital for compliance and fairness. Strong explainability mechanisms are needed to interpret automated decisions.

◉ Avoiding quasi-identifiers and non-unique identifiers (for example, gender, postcode, occupation, or languages spoken) is best practice, as they can be used to re-identify persons when combined.
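As a small illustration of the de-identification point above, here is a hypothetical sketch of pseudonymizing a personal identifier with a keyed hash (HMAC), so the raw value never enters the analytics pipeline. The record fields and secret handling are assumptions for the example.

```python
import hashlib
import hmac
import secrets

# Secret key ("pepper") kept outside the dataset, e.g. in a key vault.
PEPPER = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    # Keyed hash: stable for the same input, irreversible without PEPPER.
    return hmac.new(PEPPER, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchase": 42.0}
safe_record = {
    "user_id": pseudonymize(record["email"]),  # pseudonym replaces the identifier
    "purchase": record["purchase"],
}
```

The same input always maps to the same pseudonym, so joins and aggregations still work downstream, while re-identification requires access to the key.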

As artificial intelligence is fast evolving, it is critical to incorporate privacy and proper technological and organizational safeguards into the process so that privacy concerns do not stifle its progress but instead lead to beneficial outcomes.

Real-time monitoring for security


Real-time monitoring of data (inputs and outputs) can guard against backdoor attacks or adversarial attacks by:

◉ Monitoring data (inputs and outputs).
◉ Managing access efficiently.
◉ Monitoring telemetry data.

One key solution is to monitor inputs during training or testing. To sanitize the model input data (pre-processing, decryption, transformations, and so on), autoencoders or other classifiers can be used to monitor the integrity of the input data. Efficient monitoring of access management (who gets access, and when and where access is obtained) and of telemetry data can surface quasi-identifiers and help prevent suspicious attacks.
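As a minimal, dependency-free stand-in for the autoencoder-based integrity check described above, the sketch below flags inference inputs that deviate sharply from the training-data statistics. The feature values and threshold are made-up assumptions for illustration.

```python
import statistics

# Statistics of one input feature observed during training (example values).
train_feature = [0.1, -0.2, 0.05, 0.3, -0.1, 0.0, 0.2, -0.3, 0.15, -0.05]
mean = statistics.fmean(train_feature)
std = statistics.stdev(train_feature)

def is_suspicious(value, threshold=4.0):
    # A large z-score suggests an out-of-distribution (possibly adversarial) input.
    return abs(value - mean) / std > threshold

print(is_suspicious(0.1))  # typical training-range value
print(is_suspicious(9.9))  # extreme out-of-distribution spike
```

In production, the same idea generalizes to autoencoder reconstruction error: inputs the autoencoder cannot reconstruct well are quarantined for review rather than fed to the model.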

Source: microsoft.com

Tuesday, 12 July 2022

What is desktop as a service (DaaS) and how can it help your organization?


Today’s workers want the freedom to respond to email and collaborate with colleagues from anywhere, on any device—whether they’re working at their kitchen table, at the airport waiting for their flight to board, or in the carpool line waiting for their kids to get out of school. The pandemic proved that remote teams could succeed, no matter where they worked and how far-flung they were.

Even so, many companies are still scrambling to accommodate the technological needs of their hybrid and remote workers. Desktop as a service, sometimes known by the acronym DaaS, can help.

What is desktop as a service (DaaS)?

DaaS is a high-performing, secure, cost-effective type of desktop virtualization. DaaS frees businesses from tethering their computer operating systems and productivity software to any physical hardware. Instead, businesses can use DaaS to access virtual desktops over the internet from a cloud provider. Cloud providers that offer this service distribute and manage virtual desktops from their own datacenters. 

DaaS vs. on-premises

DaaS solutions differ from on-premises software in a number of ways, most notably:

Pricing. With DaaS, companies can avoid making advance purchases of hardware that they anticipate their employees needing, such as expensive desktops and laptops. Instead, companies pay cloud providers only for the data, resources, and services that they use.

Scalability. Cloud providers offer companies the freedom to use any number of desktops on a fluctuating basis. This gives companies instant access to the precise number of desktops they need, whenever and wherever they need them.

Management. Cloud providers offering DaaS conduct maintenance, data storage, updates, backup, and other desktop management for companies that outsource these solutions. DaaS providers often manage their customers’ desktops, applications, and security as well.

What are the benefits of DaaS?

The financial, performance, and administrative benefits of using DaaS are numerous. Let’s look at some of the biggest reasons businesses use this type of desktop virtualization.

Enables remote work. The rise of hybrid and remote workplaces calls for a different approach to accessing applications and data. With DaaS, IT teams can easily move data between different platforms and users can easily access the data they need from multiple machines, no matter where they work.

Supports BYOD. Besides freeing employees from physical offices, DaaS can free employees from solely working on company-issued devices or with one particular device. With DaaS, IT teams can more easily support bring your own device, or BYOD, policies that let employees work on their own phones, tablets, and laptops.

Simplifies desktop management. For IT teams, outsourcing the deployment, configuration, and management of virtual desktops helps reduce the administrative load. The ability to quickly scale up or down the use of desktops, applications, and data based on user need also helps to ease IT duties.

Helps increase security. DaaS poses fewer security risks because the data resides in the cloud provider’s datacenter, not on the laptops, tablets, and phones that employees use. If a computer or device is lost or stolen, it can easily be disconnected from the cloud service.

Reduces IT costs. DaaS solutions save businesses money by shifting IT costs from traditional on-premises hardware and software purchased up front and in bulk to cloud-based services and desktops purchased as needed. DaaS can run on devices that require far less computing power than a standard laptop or desktop machine, which helps companies save money. Allowing employees to use their own devices also helps save on hardware costs, as does reducing the workload of IT teams.

Extends the life of legacy machines. Companies that lack the immediate funds to upgrade all of their outdated machines can use DaaS to install a newer operating system on them. Serving the newer operating system from the cloud is a more affordable prospect than replacing an entire fleet of on-premises equipment all at once.

Real-world uses for DaaS

Cloud providers usually offer two flavors of DaaS, persistent desktop and nonpersistent desktop:

◉ Persistent desktop offers the greatest degree of application compatibility and personalization and is necessary for users that require elevated permissions. This usually results in a higher cost per user than a nonpersistent desktop. A persistent desktop is a good fit for developers and IT professionals.

◉ Nonpersistent desktop offers the lowest cost solution by separating the personalization layer from the underlying operating system. This enables any user to log onto any virtual machine (VM) and maintain a personalized environment. This option is a good fit for knowledge workers and task workers.

We’ve already looked at how DaaS benefits remote and hybrid workforces, BYOD programs, and companies looking to optimize their IT assets and costs. But there are many other business uses for DaaS, including:

◉ Modernizing call centers. Organizations with shift workers who require the same software to do task-based work can optimize IT resources by using nonpersistent desktops and remote applications.

◉ Accelerating deployment and decommissioning. Nonpersistent desktops can help seasonal businesses that routinely undergo staffing fluctuations reduce the time and costs associated with deploying and decommissioning desktop users.

◉ Granting contractors and partners secure data access. Companies can increase the login security of their contractors, vendors, and business partners by enabling them to work on virtual desktops from their own devices.

◉ Ensuring business continuity. Companies can help safeguard their data against natural disasters and other threats to daily operations by outsourcing desktop management to cloud providers that offer airtight data protection at remote datacenters.

◉ Increasing sustainability. By using cloud-based virtual desktops to reduce the amount of hardware used onsite, businesses can decrease their power consumption and electronic waste, thus reducing their environmental impact.

Explore the flexibility of Azure Virtual Desktop

Azure Virtual Desktop is a desktop and application solution that enables your remote workforce to stay productive regardless of location or device—all while being secure, scalable, and cost-effective. With Azure Virtual Desktop, you can:

Deliver Windows 10 and Windows 11 desktops virtually anywhere. Give employees the only virtual desktop solution that’s fully optimized for Windows 10, Windows 11, and Microsoft 365 with multisession capabilities—no matter what device they’re using, no matter where they’re using it.

Keep your applications and data secure and compliant. Use the built-in, reliable security features of Azure to stay ahead of potential threats and take remedial action against breaches.

Simplify deployment and management. The Azure portal enables you to configure your network settings, add users, deploy desktops and applications, and enable security with just a few clicks. Citrix and VMware customers also can streamline the delivery of virtual desktops and applications with Azure.

Reduce costs with multisession and existing licenses. Optimize costs with the eligible Microsoft 365 or Windows licenses that you already have. Use Windows 10 and Windows 11 multisession support to reduce infrastructure costs. Plus, take advantage of flexible, consumption-based pricing to pay for only what you use.

To explore how to get started with Azure Virtual Desktop, read the Quickstart Guide to Azure Virtual Desktop. In it, you’ll find:

◉ Guidance on planning a successful deployment of Azure Virtual Desktop.

◉ Steps to set up and optimize your virtual desktops with just a few clicks.

◉ Best practices, recommendations, and troubleshooting tips.

Source: microsoft.com

Thursday, 7 July 2022

How to choose the right Azure services for your applications—It’s not A or B

If you have been working with Azure for any period, you might have grappled with the question—which Azure service is best to run my apps on? This is an important decision because the services you choose will dictate your resource planning, budget, timelines, and, ultimately, the time to market for your business. It impacts the cost of not only the initial delivery, but also the ongoing maintenance of your applications.

Read More: PL-900: Microsoft Power Platform Fundamentals

Traditionally, organizations have thought that they must choose between two platforms, technologies, or competing solutions to build and run their software applications. For example, they ask questions like: Do we use WebLogic or WebSphere for hosting our Java Enterprise applications?, Should Docker Swarm be the enterprise-wide container platform or Kubernetes?, or Do we adopt containers or just stick with virtual machines (VMs)? They try to fit all their applications on platform A or B. This A or B mindset stems from outdated practices that were based on the constraints of the on-premises world, such as packaged software delivery models, significant upfront investments in infrastructure and software licensing, and long lead times required to build and deploy any application platform. With that history, it's easy to bring the same mindset to Azure and spend a lot of time building a single platform, based on a single Azure service, that can host as many applications as possible, if not all of them. Then companies try to force-fit all their applications into this single platform, introducing delays and roadblocks that could have been avoided.

There's a better approach possible in Azure that yields higher returns on investment (ROI). As you transition to Azure, where you provision and deprovision resources on an as-needed basis, you don't have to choose between A or B. Azure makes it easy and cost-effective to take a different—and better—approach: the A+B approach. An A+B mindset simply means instead of limiting yourself to a predetermined service, you choose the service(s) that best meet your application needs; you choose the right tool for the right job.

Figure 1: Azure enables you to shift your thinking from an A or B to an A+B mindset, which has many benefits.

With A+B thinking, you can:

◉ Select the right tool for the right job instead of force-fitting use cases to a predetermined solution.

◉ Innovate and go to market faster with the greater agility afforded by the A+B approach.

◉ Accelerate your app modernizations and build new cloud-native apps by taking a modular approach to picking the right Azure services for running your applications.

◉ Achieve greater process and cost efficiencies, and operational excellence.

◉ Build best-in-class applications tailor-fit for your business.

As organizations expand their decision-making process and technical strategy from an A or B mindset to encompass the possibilities and new opportunities offered with an A+B mindset, there are many new considerations. In our new book, we introduce the principles of the A+B mindset that you can use to choose the right Azure services for your applications. We have illustrated the A+B approach using two Azure services as examples in our book; however, you can apply these principles to evaluate any number of Azure services for hosting your applications. Azure Spring Apps, Azure App Service, Azure Container Apps, Azure Kubernetes Service, and Virtual Machines are commonly used Azure services for application hosting. The A+B mindset applies to any application, written in any language.

Source: microsoft.com

Tuesday, 5 July 2022

MLOps Blog Series Part 3: Testing scalability of secure machine learning systems using MLOps

The capacity of a system to adjust to changes by adding or removing resources to meet demand is known as scalability. Here are some tests to check the scalability of your model.

System testing

System tests are carried out to test the robustness of a system's design for given inputs and expected outputs (for example, an MLOps pipeline or inference service). Acceptance tests (to verify user requirements) can be performed as part of system tests.
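A system test exercises the whole pipeline end to end rather than individual units. The sketch below is a hypothetical example; the pipeline stages (ingest-clean-predict) and their logic are stand-ins, not a real MLOps pipeline.

```python
# Hypothetical end-to-end pipeline: clean raw records, then score them.
def clean(records):
    # Drop missing values before scoring.
    return [r for r in records if r is not None]

def predict(features):
    # Stand-in model: threshold classifier.
    return [1 if f > 0.5 else 0 for f in features]

def pipeline(raw):
    return predict(clean(raw))

def test_pipeline_end_to_end():
    # Given inputs and expected outputs for the system as a whole.
    assert pipeline([0.9, None, 0.2, 0.7]) == [1, 0, 1]

test_pipeline_end_to_end()
```

In practice such tests run in CI against a staging deployment, asserting on the responses of the deployed pipeline rather than on in-process functions.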

A/B testing

A/B testing is performed by sending production traffic to alternate systems that will be evaluated. Statistical hypothesis testing is used to decide which system is better.

Figure 1: A/B testing
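The hypothesis-testing step can be as simple as a two-proportion z-test on the success rates observed from the split traffic. The counts below are made-up example numbers, not real measurements.

```python
import math

# Observed successes out of requests routed to each system (example data).
success_a, total_a = 540, 1000
success_b, total_b = 590, 1000

p_a, p_b = success_a / total_a, success_b / total_b
p_pool = (success_a + success_b) / (total_a + total_b)

# Standard error of the difference under the null hypothesis (no difference).
se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
z = (p_b - p_a) / se
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level
```

Here the z statistic exceeds 1.96, so system B's higher success rate would be judged statistically significant at the 5 percent level.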

Canary testing

Canary testing is done by delivering the majority of production traffic to the current system while sending traffic from a small group of users to the new system we're evaluating.

Figure 2: Canary testing
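Canary routing can be sketched as a weighted traffic split. The 5 percent canary fraction below is an assumption for illustration; real deployments typically ramp this up gradually as confidence grows.

```python
import random

CANARY_FRACTION = 0.05  # fraction of traffic sent to the new system

def route(request):
    # ~5% of requests go to the canary; the rest to the current system.
    if random.random() < CANARY_FRACTION:
        return "new_system"
    return "current_system"
```

If error rates or latency regress for the canary cohort, the fraction is dialed back to zero; otherwise it is increased until the new system takes all traffic.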

Shadow testing

Sending the same production traffic to various systems is known as shadow testing. Shadow testing is simple to monitor and validates operational consistency.

Figure 3: Shadow testing
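In a shadow setup, every request is served by the current system while a copy is evaluated by the candidate, whose response is compared but never returned to the user. The systems below are hypothetical stand-ins with a deliberately planted regression.

```python
mismatches = []  # requests where the candidate disagreed with production

def current_system(x):
    return x * 2

def candidate_system(x):
    # Hypothetical regression for large inputs, planted for the demo.
    return x * 2 if x < 100 else x * 2 + 1

def handle(request):
    live = current_system(request)      # response the user actually receives
    shadow = candidate_system(request)  # evaluated silently, never returned
    if live != shadow:
        mismatches.append(request)
    return live
```

Because users only ever see the current system's output, shadow testing validates operational consistency at zero user-facing risk; the mismatch log shows exactly which inputs the candidate mishandles.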

Load testing

Load testing is a technique for simulating real-world load on software, applications, and websites. Load testing simulates numerous concurrent users to mimic the expected usage of the program. It measures the following:

• Endurance: Whether an application can withstand the processing load it is expected to endure for an extended period.
• Volume: The application is subjected to a large volume of data to test whether it performs as expected.
• Stress: Assessing the application's capacity to sustain a specified degree of efficacy in adverse situations.
• Performance: Determining how a system performs in terms of responsiveness and stability under a particular workload.
• Scalability: Measuring the application's ability to scale up or down in response to a change in the number of users.

Load tests can be performed to test the above factors using various software applications. Let’s look at an example of load testing an AI microservice using locust.io. The dashboard in Figure 4 reflects the total requests made to the microservice per second as well as the response times. Using these insights, we can gauge the performance of the AI microservice under a certain load.

Figure 4: Load testing using Locust.io
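The article uses locust.io for this; as a dependency-free stand-in, the sketch below fires concurrent requests at a fake inference function and collects the same kind of latency statistics a load-testing dashboard reports. The simulated 10 ms model latency and worker count are assumptions for the demo.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_inference(payload):
    time.sleep(0.01)  # simulated model latency of ~10 ms
    return {"score": 0.5}

def timed_call(payload):
    start = time.perf_counter()
    fake_inference(payload)
    return time.perf_counter() - start

# 20 concurrent "users" issuing 200 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(timed_call, range(200)))

print(f"p50={statistics.median(latencies) * 1000:.1f} ms, "
      f"max={max(latencies) * 1000:.1f} ms")
```

Against a real microservice you would replace fake_inference with an HTTP call and ramp the worker count to find the point where response times or error rates degrade.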

Source: microsoft.com