Thursday, 17 February 2022

Microsoft Azure’s first Azure HPC + AI Day


Welcome to our first Azure HPC + AI Day! We’re excited to host this inaugural event, which will become an annual tradition. Join us on February 24 and 25 to hear directly from the Microsoft Azure team and earn points by attending sessions to get one-of-a-kind Azure HPC + AI Day swag. Tune into a customer panel and learn how to stand up an environment with our hands-on lab. It’s bound to be a great time, and we can’t wait to meet you there.

Register now for Azure HPC + AI Day.

Join us from anywhere

For full access to the entire event, make sure to register for the full experience, where you can tune in to live sessions, watch our on-demand content, check out some additional resources, try to win some swag, and take part in our hands-on labs.

Can’t get to the full event? Many of our sessions on February 24 will also be broadcast on LinkedIn. Watch from anywhere, and still get a chance to hear what’s coming and have the opportunity to ask our customers questions in the panel.

Look forward to these sessions

Day one

Start time  Title
9:00 – 9:30 AM PT  Azure HPC + AI keynote
9:30 – 10:00 AM PT  AI Infra Platform strategy
10:00 – 10:30 AM PT  Azure HPC + AI software strategy
10:45 – 11:45 AM PT  Topic breakout sessions
12:00 – 1:00 PM PT  Customer panel

Topic breakouts

Title
Building a successful SaaS solution for Massive Scale Actuarial Modelling: An end-to-end Azure solution
Cloud versus on-premises considerations
How to onboard as an HPC partner

Day two

Start time  Title
9:00 – 9:45 AM PT  HPC Operations—Azure, the best place to run HPC in the cloud
10:00 AM – 12:00 PM PT  IT Operator hands-on lab using az-hop
10:00 – 11:30 AM PT  Journey from on-premises to cloud: Story by Ansys
12:15 – 2:15 PM PT  Engineer hands-on lab using az-hop

Hear directly from customers


Join us on February 24 at 12:00 PM PT to hear directly from our customers. The panel will be available on both platforms, so tune in to hear about their experiences working in the cloud, the incredible projects they’ve been working on, and what the future holds. Don’t forget to ask questions.

Customers present

◉ Jellyfish Pictures
◉ Vestas
◉ University of Bath
◉ Kensington Mortgages
◉ AMD

Time to roll up your sleeves—with help


On February 25, get ready to roll up your sleeves and work in Azure. We’ll have two hands-on labs guided by Microsoft experts. In the IT operator hands-on lab, you'll build an end-to-end high-performance computing (HPC) cluster in Azure from scratch, with autoscaling compute nodes, a web portal, a Lustre file system, and remote visualization nodes. Later in the day, in the Engineering hands-on lab, you’ll get the opportunity to run a typical HPC workload and visualize the end results.

Big moments in 2021

Jellyfish Pictures needed to enable secure remote access to immense computing power to render visual effects and animation sequences. They were able to use Azure to gain burst rendering on up to 90,000 processor cores in the cloud, gaining a 70 percent boost in capacity and the freedom to work from anywhere.

Global leader in sustainable energy solutions Vestas Wind Systems A/S wanted to optimize wind energy production by reducing the negative impact of turbine wakes. Working with Microsoft Azure and minds.ai, Vestas is now able to generate simulations to help wind farms mitigate wake effect, generate more wind energy, and build a more sustainable and prosperous energy future.

AMD needed more computing power and burst capacity to easily handle spikes in demand, optimize its own processes, and accelerate its time to market. Drawing on the power and flexibility it gained with Azure HPC, AMD has eliminated weeks and even months of delay, reducing procurement times and overall job times. It now matches or exceeds on-premises performance and has daily visibility into reports.

Amid COVID-19 and the need for faster test results, Sensyne Health developed its MagnifEye solution, a mobile app that uses a device’s camera to capture an image of the lateral flow test (LFT) stick and read it in tenths of a second with a stunning 99.6 percent accuracy rate.

Previous launches in 2021


NDm A100 v4 series virtual machines now generally available

In November, we announced the general availability of the new NDm A100 v4 series virtual machines (VMs). This new series built on the ND A100 v4 announced in June and expanded Azure’s leadership-class AI supercomputing scalability in the public cloud. The new high-memory NDm A100 v4 for data-intensive GPU compute workloads reaffirms Microsoft’s commitment to rapidly adopting and shipping the latest scale-up and scale-out GPU accelerator technologies to the public cloud.

Source: microsoft.com

Tuesday, 15 February 2022

Advancing Azure Virtual Machine availability monitoring with Project Flash

Flash, as the project is internally known, is a collection of efforts across Azure Engineering that aims to evolve Azure’s virtual machine (VM) availability monitoring ecosystem into a centralized, holistic, and intelligible solution customers can rely on to meet their specific observability needs. Today, we’re excited to announce the completion of the project’s first two milestones—the preview of VM availability data in Azure Resource Graph, and the private preview of a VM availability metric in Azure Monitor.

What is Project Flash?

Project Flash derives its name from our commitment to building robust and rapid ways to monitor virtual machine (VM) availability as comprehensively as possible—a key prerequisite for efficient application performance. It’s our mission to ensure you can:

◉ Consume accurate and actionable data on VM availability disruptions (for example, VM reboots and restarts, application freezes due to network driver updates, and 30-second host OS updates), along with precise failure details (for example, platform versus user-initiated, reboot versus freeze, planned versus unplanned).

◉ Analyze and alert on trends in VM availability for quick debugging and month-over-month reporting.

◉ Periodically monitor data at scale and build custom dashboards to stay updated on the latest availability states of all resources.

◉ Receive automated root cause analyses (RCAs) detailing impacted VMs, downtime cause and duration, consequent fixes, and similar—all to enable targeted investigations and post-mortem analyses.

◉ Receive instantaneous notifications on critical changes in VM availability to quickly trigger remediation actions and prevent end-user impact.

◉ Dynamically tailor and automate platform recovery policies, based on ever-changing workload sensitivities and failover needs.

With these goals in mind, we’ve divided our execution strategy into two phases—a near-term phase to meet critical current needs, and a long-term phase to deliver the best VM availability monitoring experience. This two-phased approach helps us continually bridge gaps, iterate on service quality, and learn from your feedback at every step along the way.

Announcing new monitoring options

For the first phase, we are providing different options for convenient access to VM availability data, addressing a range of observability needs. We aim to maintain data consistency and the same rigorous quality standards across these options and existing features such as Resource Health and Activity Log, to deliver a consistent view regardless of the solution you choose.

Introducing at-scale analysis for VM availability

Today, we’re excited to reach our first Project Flash milestone—with the preview release of VM availability states in Azure Resource Graph for at-scale programmatic consumption.

Azure Resource Graph is a service in Azure that is extensively adopted for its ability to query across many subscriptions at once and at low latency. We’re currently emitting VM availability states (Available, Unavailable, and Unknown) to the Health Resources table in Azure Resource Graph, so you can perform complex Kusto Query Language (KQL) queries to sift through large datasets at once. This functionality is handy for tracking historical changes in VM availability, building custom dashboards, and performing detailed investigations across numerous resource properties spread across multiple tables.

Figure 1: Azure Resource Graph Explorer Window with query and results, to demonstrate fetching data from the HealthResources table.

We are planning to add failure details and degraded VM scenarios to the Health Resources table in Azure Resource Graph later this year. These details will ensure you are properly informed of the cause and impact of any failure—so you can fail over, reboot in place, or take other appropriate mitigations to prevent end-user impact.

Navigate to Azure Resource Graph Explorer on the Azure portal to get started with any of the KQL queries published for the Health Resources table.
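As an illustration, the following Azure CLI sketch runs one such query against the Health Resources table. It assumes the resource-graph CLI extension is installed; the columns and property names shown are illustrative, so prefer the published sample queries for the authoritative schema.

# Install the extension once: az extension add --name resource-graph
az graph query --first 100 --output table -q "
healthresources
| where type =~ 'microsoft.resourcehealth/availabilitystatuses'
| extend availabilityState = tostring(properties.availabilityState)
| project subscriptionId, resourceGroup, id, availabilityState
| order by availabilityState asc
"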

Introducing VM availability metric in Azure Monitor

We’re also pleased to announce the private preview of an out-of-box VM availability metric in Azure Monitor, for a curated metric alerting and monitoring experience.

Metrics in Azure Monitor are great for monitoring and analyzing time series representations of VM availability for quick and easy debugging, receiving scoped alerts on concerning trends, catching early indicators of degraded availability, correlating with other platform metrics, and more.

The metric allows you to track the pulse of your VMs—during expected behavior, the metric displays a value of 1. In response to any VM availability disruption, the metric dips to 0 for the duration of impact. In the case of an Azure infrastructure outage, we emit nulls, represented as a dotted line in the portal.

Figure 2: Screenshot of VM availability metric as seen on Metrics Explorer in the Azure portal, with occasional dips to reflect VM availability disruptions.

We released the private preview of the metric as phase one of our rollout plan and are currently collecting customer feedback to further improve the offering. We are planning to add failure details, such as metric dimensions and platform logs, next year, to allow you to alert precisely on the failure scenarios that are most impactful.
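For illustration, once the metric is available on a VM you could wire up a metric alert with the Azure CLI along these lines. This is a sketch only: the metric name VmAvailabilityMetric and all resource names below are placeholders, since the preview metric’s exact name may differ in your subscription.

# Placeholder names throughout; the action group my-action-group must already exist.
az monitor metrics alert create \
  --name vm-availability-dip \
  --resource-group my-rg \
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm" \
  --condition "avg VmAvailabilityMetric < 1" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action my-action-group \
  --description "Fire when the VM availability metric dips below 1"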

Coming soon


The two monitoring options introduced above are just the beginning for Project Flash! We will continue to build upon our existing solutions by improving data quality and failure attribution. In parallel, we are designing two new monitoring offerings to meet your latency and mitigation needs, while also investing heavily in the underlying platform to make our fault detection more resilient and comprehensive.

Azure Event Grid for instantaneous notifications

Successfully running business-critical applications requires hyper-awareness of any event that impacts VM availability, so remediation actions can be triggered instantaneously to prevent end-user impact. To support you in your daily operations, we are planning to design a notification mechanism that leverages the low-latency technology of Azure Event Grid. This will allow you to simply subscribe to an Event Grid system topic and instantaneously route scoped events via event handlers to any downstream tooling.

Automate and tailor platform recovery policies

Considering the numerous ongoing investments to improve your VM availability monitoring experience, Project Flash intends to empower you even further by providing you knobs to customize recovery policies triggered by the platform, in response to cases of VM availability disruptions.

One such knob we are designing is the ability to opt out of Service Healing for single-instance VMs in response to a specific set of unanticipated availability disruptions. This knob will be made available via the portal or at the time of VM deployment and can be updated dynamically. Note that leveraging this feature will render the usual Azure Virtual Machine availability SLAs ineffective.

In the future, we will explore introducing knobs to also opt-out of other applicable recovery policies (for example, Live Migration or Tardigrade), to ensure you can easily adapt to your ever-changing mitigation needs.

Ongoing platform quality investments

While the first phase is designed to meet your current observability needs, we remain focused on our long-term goal of delivering a world-class observability experience surrounding VM availability. We are extremely excited for all the data enrichments and technology advancements that will contribute to this experience, so here’s an early look at our roadmap of planned investments:

1. Fault detection and attribution: We are continuously evolving our underlying infrastructure to detect and attribute failures both precisely and instantaneously—so that we can reduce unknown or missing health status reports, emit actionable failure details, and handle platform recovery customizations. This remains our top investment area on which we continue to iterate every cycle.

2. Root cause analysis (RCA) automation: We are planning to implement easy tracking mechanisms for every unique VM downtime, along with automatic construction and emission of detailed downtime RCA statements to reduce manual tracking and churn on your end.

3. AIOps integration: We are looking to leverage the tremendous advancements being made in AIOps across Microsoft, for enabling smart insights and anomaly detection and diagnosis across the multitude of data points on VM Availability.

4. Centralized and cohesive user experience: We acknowledge that a consequence of our near-term approach is that across our different services we have multiple monitoring, alerting, and recovery tools which may lead to a confusing and disparate experience for you. This is a problem we intend to solve with our final phase. Our north star goal is to provide end-users access to distinct and necessary representations of VM availability, consolidated within Azure Monitor, and categorized according to common usage patterns for discoverability, ease of use and intuitive onboarding.

Source: microsoft.com

Sunday, 13 February 2022

New investments to help you accelerate your Azure migration and modernization journey


Last year many organizations turned to the cloud to survive disruptions caused by the pandemic. Businesses are now charting the course for their recovery and further accelerating their cloud momentum to drive faster app innovation, optimize infrastructure costs, and enhance security posture. In a recent survey, 90 percent of enterprises responded that they expect cloud usage to exceed prior plans due to COVID-19.

“The pandemic accelerated our timetables, but we made the right decision to move quickly and are not looking back. Our total cost of ownership (TCO) has gone down 30 percent, the disaster recovery (DR) time has gone down from 30 days to a couple of hours. We now have the scalability we need and are no longer spending money on various upgrades that were previously required. With the Azure Migration and Modernization Program, we were able to move our datacenter to Azure in six months.”—Gurmail Jaswal, Director IT Solutions, Boston Pizza

This week at Microsoft Ignite, we’re announcing expanded program and product investments—to help customers accelerate their migration and modernization journey with Azure.

Azure Migration and Modernization Program (AMMP) is expanding to support new scenarios

Over the last two and a half years, Azure Migration and Modernization Program (AMMP) has helped thousands of customers unlock the benefits of the cloud, with the right mix of expert guidance and best practices at every stage of their journey. Boston Pizza, University of Leicester, British Council, Implenia, and Actavo are just a few examples.

AMMP is designed to serve the needs of our customers for migration and modernization across their entire infrastructure, application, and data estate. This is why we evolved the Azure Migration Program (AMP) to the AMMP a few months ago, expanding it to include support for application and data modernization.

Today, we are announcing support for additional scenarios in AMMP, driven by customer requests:

◉ Infrastructure: We are expanding AMMP to help customers plan and move their SAP environments to Azure, including both SAP lift-and-shift and SAP HANA deployments.

“We were needing to upgrade our mission-critical SAP systems that our largest business entity utilizes. On top of that, we were up against a tight deadline with our datacenter end of service. The Azure Migration and Modernization Program (AMMP) accelerated our transformation to the cloud with skilling, FastTrack for Azure engineering resources, and a specialized partner (Brillio). Together they helped ensure a smooth migration, cost reduction, and risk mitigation. With AMMP, we were able to not only beat the project deadline but quickly realize efficiencies that would have not otherwise been possible on-premises.”—Patty Ward, CIO, Mizkan Americas

◉ Applications: We are deepening our focus on app modernization in two areas:

Azure Red Hat OpenShift: AMMP will now help customers move their Red Hat OpenShift environments to Azure as part of their application modernization initiatives.

“Digital transformation is a key component of the modern enterprise, which can frequently mean adopting the operational and cost efficiencies of cloud services. Enabling this shift is critical for customers and a key priority for Red Hat, so we’re pleased that Azure Red Hat OpenShift will be part of the Azure Migration and Modernization Program, which is intended to help customers and partners gain faster access to the necessary tools and expertise for advancing transformation strategies, from application modernization to workload migration.”—Sathish Balakrishnan, Vice President, Hybrid Cloud Experience, Red Hat

Cloud-native apps: AMMP will help customers innovate and build new cloud-native apps using common app patterns like serverless, containers, and microservices architectures.

◉ Hybrid and security: Customers often bring up hybrid flexibility and enhanced security posture as key requirements during their move to the cloud. AMMP will help customers with deploying Azure Arc to workloads that aren’t planned or ready to move yet, helping them consistently manage their workloads across cloud and on-premises environments. We are deepening our focus on security in AMMP with guidance and deployment assistance for services like Azure Security Center and Azure Defender.

SAP is available in AMMP as of today. Stay tuned for availability details on the others.

With these additions, AMMP is now one comprehensive program for all migration and modernization needs of our customers.

Enhanced migration and modernization tools and product capabilities

Azure Migrate is the central hub to discover, assess, right-size, and move applications, databases, and infrastructure to Azure. Azure Migrate recently announced the preview of two new features: agentless discovery and assessment of ASP.NET web apps, and expansion of the App Containerization tool to include Azure App Service as a deployment target in addition to Azure Kubernetes Service (AKS).

We had previously introduced Azure Automanage, an Azure service that simplifies and optimizes the management of Windows Server and Linux virtual machines. Today we’re announcing the general availability of two key features in Azure Automanage—Extended network for Azure and SMB over QUIC—to simplify migration of Windows Server-based workloads.

Azure SQL is a family of SQL cloud databases providing flexible options to migrate and modernize application databases. Today at Microsoft Ignite, we’re announcing many new capabilities in Azure SQL Managed Instance—premium-series hardware, Windows authentication, and 16 TB storage capacity—which deliver more performance, scale, and flexibility. The new link feature within Azure SQL Managed Instance reimagines SQL Server database replication to enable mission-critical workload migration with minimal downtime. Learn more about these Azure SQL enhancements.

Azure landing zones in the Microsoft Cloud Adoption Framework for Azure are designed to help customers successfully prepare their cloud environments for migration or modernization. To make Azure landing zone creation even easier, we built the Azure landing zone accelerator, which provides an intuitive Azure portal-based deployment experience. To help customers optimize their Azure investments, we have created a new Azure Virtual Machines cost estimator, which includes a Power BI template and an on-premises compute unit list. This estimator is available to all customers as part of our cloud economics and Cloud Adoption Framework guidance.

Source: microsoft.com

Saturday, 12 February 2022

Improve your security defenses for ransomware attacks with Azure Firewall


To ensure customers running on Azure are protected against ransomware attacks, Microsoft has invested heavily in Azure security and has provided customers with the security controls needed to protect their Azure cloud workloads.

A comprehensive overview of best practices and recommendations can be found in the "Azure Defenses for Ransomware Attack" e-book.

Here, we would like to zoom into network security and understand how Azure Firewall can assist you with protecting against ransomware.

Ransomware is a type of malicious software designed to block access to your computer system until a sum of money is paid. The attacker usually exploits an existing vulnerability in your system to penetrate your network and execute the malicious software on the target host.

Ransomware is often spread through phishing emails that contain malicious attachments or through drive-by downloading. Drive-by downloading occurs when a user unknowingly visits an infected website and then malware is downloaded and installed without the user’s knowledge.

This is where Azure Firewall Premium can help. With its intrusion detection and prevention system (IDPS) capability, every packet is inspected thoroughly, including all its headers and payload, to identify malicious activity and prevent it from penetrating your network. IDPS allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.

The IDPS signatures apply to both application- and network-level traffic (Layers 4–7). They are fully managed and contain more than 65,000 signatures in over 50 categories, kept up to date with the dynamic, ever-changing attack landscape:

1. Azure Firewall gets early access to vulnerability information from the Microsoft Active Protections Program (MAPP) and the Microsoft Security Response Center (MSRC).

2. Azure Firewall releases 30 to 50 new signatures each day.

Nowadays, modern encryption, such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS), is used globally to secure internet traffic. Attackers are using encryption to carry their malicious software into the victim network. Therefore, customers must inspect their encrypted traffic just like any other traffic.

Azure Firewall Premium IDPS allows you to detect attacks in all ports and protocols for non-encrypted traffic. However, when HTTPS traffic needs to be inspected, Azure Firewall can use its TLS inspection capability to decrypt the traffic and accurately detect malicious activities.
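As a rough sketch of what turning these protections on can look like with the Azure CLI—assuming an existing Azure Firewall Premium policy, a user-assigned managed identity already attached to it with access to your Key Vault, and placeholder resource names throughout:

# Requires the azure-firewall Azure CLI extension: az extension add --name azure-firewall
# Enable IDPS in Deny mode and point TLS inspection at an intermediate CA certificate in Key Vault.
az network firewall policy update \
  --name my-premium-policy \
  --resource-group my-rg \
  --idps-mode Deny \
  --cert-name my-interca-cert \
  --key-vault-secret-id "https://my-keyvault.vault.azure.net/secrets/my-interca-cert"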

After the ransomware is installed on the target machine, it may try to encrypt the machine’s data. To do this it needs an encryption key, which it may fetch over a command-and-control (C&C) channel from a C&C server hosted by the attacker. CryptoLocker, WannaCry, TeslaCrypt, Cerber, and Locky are some of the ransomware families that use C&C to fetch the required encryption keys.

Azure Firewall Premium has hundreds of signatures that are designed to detect C&C connectivity and block it to prevent the attacker from encrypting customers’ data.

Figure 1: Firewall protection against ransomware attack using command and control channel

Taking a comprehensive approach to fend off ransomware attacks


Taking a holistic approach to fend off ransomware attacks is recommended. Azure Firewall operates in a default-deny mode and blocks access unless it is explicitly allowed by the administrator. Enabling the Threat Intelligence (TI) feature in alert or deny mode blocks access to known malicious IPs and domains. The Microsoft threat intelligence feed is updated continuously based on new and emerging threats.

Firewall policy can be used for the centralized configuration of firewalls. This helps with responding to threats rapidly. Customers can enable Threat Intel and IDPS across multiple firewalls with just a few clicks. Web categories let administrators allow or deny user access to web categories such as gambling websites, social media websites, and others. URL filtering provides scoped access to external sites and can cut down risk even further. In other words, Azure Firewall has everything necessary for companies to defend comprehensively against malware and ransomware.
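A brief Azure CLI sketch of the same ideas, assuming an existing firewall policy and rule collection group with placeholder names:

# Requires the azure-firewall Azure CLI extension.
# Block traffic to and from known malicious IPs and domains via threat intelligence.
az network firewall policy update \
  --name my-firewall-policy \
  --resource-group my-rg \
  --threat-intel-mode Deny

# Deny an entire web category (here, gambling sites) with an application rule.
az network firewall policy rule-collection-group collection add-filter-collection \
  --policy-name my-firewall-policy \
  --resource-group my-rg \
  --rule-collection-group-name my-rule-collection-group \
  --name deny-web-categories \
  --collection-priority 200 \
  --action Deny \
  --rule-name deny-gambling \
  --rule-type ApplicationRule \
  --protocols Https=443 Http=80 \
  --source-addresses "*" \
  --web-categories Gambling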

Detection is equally important as prevention. Azure Firewall solution for Microsoft Sentinel gets you both detection and prevention in the form of an easy-to-deploy solution. Combining prevention and detection allows you to ensure that you both prevent sophisticated threats when you can, while also maintaining an “assume breach mentality” to detect and quickly respond to cyberattacks.

Source: microsoft.com

Thursday, 10 February 2022

Genomic analysis on Galaxy using Azure CycleCloud


Cloud computing and digital transformation have been powerful enablers for genomics. Genomics is expected to be an exabase-scale big data domain by 2025, posing data acquisition and storage challenges on par with other major generators of big data. Embracing digital transformation offers a practically limitless ability to meet the genomic science demands in both research and medical institutions. The emergence of cloud-based computing platforms such as Microsoft Azure has paved the path for online, scalable, cost-effective, secure, and shareable big data persistence and analysis with a growing number of researchers and laboratories hosting (publicly and privately) their genomic big data on cloud-based services.

At Microsoft, we recognize the challenges faced by the genomics community and are striving to build an ecosystem (backed by OSS and Microsoft products and services) that can facilitate genomics work for all. We’ve focused our efforts on three core areas—research and discovery in genomic data, building out a platform to enable rapid automation and analysis at scale, and optimized and secure pipelines at a clinical level. One of the core Azure services that has enabled us to leverage a high-performance computing environment for genomic analysis is Azure CycleCloud.

Galaxy and Azure CycleCloud

Galaxy is a scientific workflow, data integration, and data analysis persistence and publishing platform that aims to make computational biology accessible to research scientists who do not have computer programming or systems administration experience. Although it was initially developed for genomic research, it is largely domain agnostic and is now used as a general bioinformatics workflow management system. The Galaxy system is used for accessible, reproducible, and transparent computational research.

◉ Accessible: Programming experience is not required to easily upload data, run complex tools and workflows, and visualize results.

◉ Reproducible: Galaxy captures information so that you don't have to; any user can repeat and understand a complete computational analysis, from tool parameters to the dependency tree.

◉ Transparent: Users share and publish their histories, workflows, and visualizations via the web.

◉ Community-centered: Inclusive and diverse users (developers, educators, researchers, clinicians, and more) are empowered to share their findings.

Azure CycleCloud is an enterprise-friendly tool for orchestrating and managing high-performance computing (HPC) environments on Azure. With Azure CycleCloud, users can provision infrastructure for HPC systems, deploy familiar HPC schedulers, and automatically scale the infrastructure to run jobs efficiently at any scale. Through Azure CycleCloud, users can create different types of file systems and mount them to the compute cluster nodes to support HPC workloads. With dynamic scaling of clusters, the business gets the resources it needs at the right time and the right price. Azure CycleCloud’s automated configuration enables IT to focus on providing service to business users.

Deploying Galaxy on Azure using Azure CycleCloud

Galaxy is used by most academic institutions that conduct genomic research. Most institutions that already use Galaxy want to stick to it because it provides multiple tools for genomic analysis as a SaaS platform. Users can also deploy custom tools onto Galaxy.

Galaxy users generally use the SaaS version of Galaxy as part of UseGalaxy resources. UseGalaxy servers implement a common core set of tools and reference genomes and are open to anyone to use. All information on its usage is available on the Galaxy Platform Directory.

However, there are some research institutions that intend to deploy Galaxy in-house as an on-premises solution or a cloud-based solution. The remainder of this article describes how to deploy and run Galaxy on Microsoft Azure using Azure CycleCloud and grid engine cluster. The solution was built during the Microsoft hackathon (October 12 to 14, 2021) with code implementation assistance from Azure HPC Specialist, Jerry Morey. The architectural pattern described below can help organizations to deploy Galaxy in an Azure environment using CycleCloud and a scheduler of choice.


As a prerequisite, genomic data should be available in a storage location, either in the cloud or on-premises. Azure CycleCloud should be deployed using the steps described in the “Install CycleCloud using the Marketplace image” documentation.

The cluster deployment method that Galaxy fully supports in the cloud is called the unified method: the copy of Galaxy on the application server is the same copy as the one on the cluster nodes. The most common way to achieve this is to place Galaxy on a network file system (NFS) that is accessible by both the application server and the cluster nodes; this is the most common deployment method for Galaxy.

An admin user can SSH into the Azure CycleCloud virtual machine or the Galaxy server virtual machine to perform admin-related activities. It is recommended to close the SSH port in production. Once the Galaxy server is running on a node, end users (researchers) can load the portal on their own devices to perform analysis tasks, which include loading data, installing and uploading tools, and more.

Access to functionality (such as installing and deleting tools versus using tools for analysis) is controlled by parameters defined in galaxy.yml, which resides on the Galaxy server. When a user invokes a functionality, it is converted into a job that is submitted to the grid engine cluster for execution.

Deployment scripts are available to ease deployment. These scripts can be used to deploy the latest version of Galaxy on Azure CycleCloud.

Following are the steps to use the deployment scripts:

◉ Git clone this project (The project is in active development, so cloning the latest release is recommended).


◉ Upload the project to the CC locker. Modify files if needed, then list your lockers and upload the project:

cd galaxy-gridengine

cyclecloud locker list

Example output: Azure cycle Locker (az://mystorageaccount/cyclecloud)

cyclecloud project upload "Azure cycle Locker"

◉ Import cluster template to CC.

cyclecloud import_cluster <cluster-name> -c <galaxy-folder-name> -f templates/gridengine-galaxy2.txt

NOTE: Substitute <cluster-name> with a name for your cluster—all lower case, no spaces.

◉ Navigate to CC Portal to configure and start the cluster.

Wait for 30 to 45 minutes for the Galaxy server to be installed.

To check if the server is installed correctly, SSH into the Galaxy server node and check galaxy.log in the /shared/home/<galaxy-folder-name> directory.
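For example, from an admin workstation this check might look like the following (the host, user, and folder names are placeholders taken from the steps above):

# SSH to the Galaxy server node, then follow the install log until Galaxy reports it is serving requests.
ssh <admin-user>@<galaxy-server-node>
tail -f /shared/home/<galaxy-folder-name>/galaxy.log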

This deployment was adopted by a leading United States-based academic medical center. The Microsoft Industry Solutions team helped deploy this solution on the customer’s Azure tenant. Researchers at the center tested it to assess its parity with their existing Galaxy deployment in their on-premises HPC environment, and they were able to successfully test the deployed Galaxy server, which used Azure CycleCloud for job orchestration. Several common bioinformatics tools such as bedtools, fastqc, bcftools, picard, and snpeff were installed and tested. Galaxy supports local users by default; as part of this engagement, a solution to integrate the center’s corporate Active Directory was tested and deployed. The solution was found to be on par with their on-premises deployment, and with more and larger execute nodes, jobs completed in less time.

Source: microsoft.com

Tuesday, 8 February 2022

IoT adoption remains strong in the Asia-Pacific region as organizations broaden usage


The Asia-Pacific region has long been a strong manufacturing base and the sector continues to be a strong adopter of the Internet of Things (IoT). But as the latest Microsoft IoT Signals report shows, IoT is now much more widely adopted across verticals, and across the globe, with smart spaces—a key focus for many markets in the Asia-Pacific region—becoming one of the leading application areas.

The newest edition of this report provides encouraging reading for organizations in the Asia-Pacific region. The global study of over 3,000 business decision-makers (BDMs), developers, and internet technology decision-makers (ITDMs) across ten countries—including Australia, China, and Japan—shows that IoT continues to be widely adopted for a range of uses and is seen as critical to business success by a large majority. Further, rather than slowing growth, as some might have feared, the COVID-19 pandemic is driving even greater investment across different industries as IoT becomes more tightly integrated with other technologies.


Across the Asia-Pacific region, the research shows that organizations in Australia report the highest rate of IoT adoption at 96 percent—beating both Italy (95 percent) and the United States (94 percent)—and that organizations in China are adopting IoT for more innovative use cases and have the highest rates of implementation against emerging technology strategies. In Japan, it found that companies are using IoT more often to improve productivity and optimize operations. Below we dive into three key trends that emerge for organizations in this region.

1. A greater focus on planning IoT projects pays off


Whilst IoT projects in the region take slightly longer to reach fruition, it seems that this reflects a more thoughtful and diligent approach which appears to be paying off. By thinking through and taking time upfront to determine the primary business objectives for success, organizations in the Asia-Pacific region report high levels of IoT adoption (96 percent in Australia), importance (99 percent of companies in China say IoT is critical to business success), and overall satisfaction (99 percent and 97 percent in China and Australia respectively). These objectives are broadly in line with global findings, with quality assurance and cloud security consistently mentioned across all three countries in this region. Organizations in Australia and Japan adopt IoT to help with optimization and operational efficiencies: in Australia, the focus is on energy optimization (generation, distribution, and usage); and in Japan, it is on manufacturing optimization (agile factory, production optimization, and front-line worker). Those in Australia and China also tend to do more device monitoring as part of IoT-enabled condition-based maintenance practices.

Companies in the region report that these varied use cases are delivering significant benefits in terms of more operational efficiency and staff productivity, improved quality by reducing the chance of human error, and greater yield by increasing production capacity.

2. Emerging technologies accelerate IoT adoption


Of the organizations surveyed, the 88 percent that are set to either increase or maintain their IoT investment in the next year are more likely to incorporate emerging technologies such as AI, edge computing, and digital twins into their IoT solutions. And in the Asia-Pacific region, awareness of these technologies tends to be higher than in other markets.

Organizations in China are far more likely than their counterparts elsewhere to have strategies that address these three areas. They lead all other countries when it comes to implementing against AI and edge computing strategies, and a staggering 98 percent of companies in Australia that are aware of digital twins say they have a specific strategy for that technology. More significantly, their experience with these technologies is driving greater adoption of IoT across the region, with around eight in ten organizations working to incorporate them into their IoT solutions.

3. Industry-specific IoT solutions drive a broader range of benefits


The IoT Signals report analyzed several industries in-depth, all well represented in the Asia-Pacific region. Organizations in Australia, for instance, should note that energy, power, and utility companies use IoT to help with grid automation (44 percent) and maintenance (43 percent), while oil and gas companies tend to apply it more to workplace and employee safety (45 percent and 43 percent respectively). Energy companies are also much more likely to use AI in their IoT solutions than other industries (89 percent of organizations versus 79 percent for all verticals). The benefits of IoT being seen by organizations in these sectors include increases in operational efficiency, increases in production capacity, and increases in customer satisfaction.

In Japan, where manufacturing makes up an important part of the market, we find that there are more IoT projects in the usage stage (26 percent) than in other sectors, mainly focused on bolstering automation. Manufacturing organizations are using these IoT solutions to ensure quality, facilitate industrial automation, and monitor production flow. In doing so, they benefit from improved operational efficiency and greater production capacity, driving competitive advantage. In this industry, it’s not technology that poses a challenge but the huge business transformation that takes extra time and thought, often due to legacy systems and processes.

China, of course, has always been an innovator when it comes to devices, so its manufacturing sector will see the same impacts. But smart spaces—as in other countries in the Asia Pacific region—are getting a lot of attention, and this is where we see the highest levels of IoT adoption (94 percent) and overall satisfaction (98 percent). It also has the strongest indications of future growth with 69 percent planning to use IoT more in the next two years. It’s also the industry sector where the highest proportion of organizations are implementing IoT against AI strategies. The top applications of IoT in smart spaces are around productivity and building safety, where organizations can benefit from improved operational efficiency and personal safety.

Source: microsoft.com

Thursday, 3 February 2022

Announcing the public preview of Microsoft Azure Payment HSM service


The growing trend for running payment workloads in the cloud

Momentum is building as financial institutions move some or all their payment applications to the cloud. This entails a migration from the legacy on-premises applications and hardware security modules (HSM) to a cloud-based infrastructure that is not generally under their direct control. Often it means a subscription service rather than perpetual ownership of physical equipment and software. Corporate initiatives for efficiency and a scaled-down physical presence are the drivers for this. Conversely, with cloud-native organizations, the adoption of cloud-first without any on-premises presence is their fundamental business model. End-users of a cloud-based payment infrastructure expect reduced IT complexity, streamlined security compliance, and flexibility to scale their solution seamlessly as their business grows.

Potential challenges

Cloud offers significant benefits. Yet there are challenges that must be addressed when migrating a legacy on-premises payment application (involving a payment HSM) to the cloud. Some of these are:

◉ Shared responsibility and trust—what potential loss of control in some areas is acceptable?

◉ Latency—how can an efficient, high-performance link between the application and HSM be achieved?

◉ Performing everything remotely—what existing processes and procedures may need to be adapted?

◉ Security certifications and audit compliance—how will current stringent requirements be fulfilled?

The Azure Payment HSM service addresses these challenges and delivers a compelling value proposition to the users of the service.

Introducing the Microsoft Azure Payment HSM

Today, we are excited to announce that Azure Payment HSM is in preview in East US and North Europe.

The Azure Payment HSM is a “BareMetal” service delivered using Thales payShield 10K payment HSMs to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help service providers and individual financial institutions accelerate their payment system’s digital transformation strategy and adopt the public cloud. It meets the stringent security, audit compliance, low-latency, and high-performance requirements set by the Payment Card Industry (PCI).

HSMs are provisioned and connected directly to users’ virtual networks, and the HSMs are under users’ sole administrative control. HSMs can be easily provisioned as a pair of devices and configured for high availability. Users of the service utilize Thales payShield Manager for secure remote access to the HSMs as part of their Azure subscription. Multiple subscription options are available to satisfy a broad range of performance and multiple-application requirements and can be upgraded quickly in line with end-user business growth. Azure Payment HSM offers a top performance level of 2,500 CPS.

Enhanced security and compliance

End-users of the service can leverage Microsoft security and compliance investments to increase their security posture. Microsoft maintains PCI DSS and PCI 3DS compliant Azure data centers, including those which house Azure Payment HSM solutions. The Azure Payment HSM can be deployed as part of a validated PCI P2PE and PCI PIN component or solution, helping to simplify ongoing security audit compliance. Thales payShield 10K HSMs deployed in the security infrastructure are certified to FIPS 140-2 Level 3 and PCI HSM v3.

*The Azure Payment HSM service is currently undergoing PCI DSS and PCI 3DS audit assessment.

Manage your Payment HSM in Azure

The Azure Payment HSM service offers complete administrative control of the HSMs to the customer. This includes exclusive access to the HSMs. The customer could be a payment service provider acting on behalf of multiple financial institutions or a financial institution that wishes to directly access the Azure Payment HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released to Microsoft, to maintain complete privacy and security. The customer is responsible for deploying and configuring HSMs for high availability and backup and disaster recovery requirements, and for achieving the same performance available on their on-premises HSMs.

Accelerate digital transformation and innovation in cloud

The Azure Payment HSM solution offers native access to a payment HSM in Azure for ‘lift and shift’ with low latency. The solution offers high-performance transactions for mission-critical payment applications. Thales payShield customers can utilize their existing remote management solutions (payShield Manager and payShield TMD together) to work with the Azure Payment HSM service. Customers new to payShield can source the hardware accessories from Thales or one of its partners before deploying their Payment HSM.

Typical use cases

With benefits including low latency and the ability to quickly add more HSM capacity as required, the cloud service is a perfect fit for a broad range of use cases which include:

Payment processing:

◉ Card and mobile payment authorization

◉ PIN and EMV cryptogram validation

◉ 3D-Secure authentication

Payment credential issuing:

◉ Cards

◉ Mobile secure elements

◉ Wearables

◉ Connected devices

◉ Host card emulation (HCE) applications

Securing keys and authentication data:

◉ POS, mPOS, and SPOC key management

◉ Remote key loading (for ATM, POS, and mPOS devices)

◉ PIN generation and printing

◉ PIN routing

Sensitive data protection:

◉ Point to point encryption (P2PE)

◉ Security tokenization (for PCI DSS compliance)

◉ EMV payment tokenization

Suitable for both existing and new payment HSM users

The solution provides clear benefits both for payment HSM users with a legacy on-premises HSM footprint and for new payment ecosystem entrants with no legacy infrastructure to support, who may choose a cloud-native approach from the outset.

Benefits for existing on-premises HSM users:

◉ Requires no modifications to payment applications or HSM software to migrate existing applications to the Azure solution.

◉ Enables more flexibility and efficiency in HSM utilization.

◉ Simplifies HSM sharing between multiple geographically dispersed teams.

◉ Reduces physical HSM footprint in their legacy data centers.

◉ Improves cash flow for new projects.

Benefits for new payment participants:

◉ Avoids introduction of on-premises HSM infrastructure.

◉ Lowers upfront investment via the Azure subscription model.

◉ Offers access to the latest certified hardware and software on-demand.

Source: microsoft.com

Tuesday, 1 February 2022

New performance and logging capabilities in Azure Firewall


Organizations are speeding up workload migration to Azure to take advantage of the growing set of innovative cloud services, scale, and economic benefits of the public cloud. Application migration to the cloud consequently increases the demand for network traffic throughput. This puts pressure on network elements, and more specifically on Azure Firewall, which is in the critical path of most network traffic. Currently, Azure Firewall supports 30 Gbps, which is sufficient to meet current throughput demands for many of our customers. However, we are seeing some organizations require even more throughput, and towards this we are announcing new Azure Firewall capabilities as well as updates for January 2022:

◉ Azure Firewall network rule name logging.

◉ Azure Firewall premium performance boost.

◉ Performance whitepaper.

Azure Firewall network rule name logging

We have heard your feedback and are happy to announce that rule names are now available in the network rule logs, just as they are for application rules.

Previously, a network rule hit event would show the source, destination IP/port, and the action (allow or deny). With the new functionality, the event logs for network rules also contain the policy name, rule collection group, rule collection, and the name of the rule that was hit.

After enabling the feature, the following information will be provided for a network rule hit event in the logs:

Figure 1: Network rule event in the logs after enabling the “network rule name logging” feature.

Note: For Classic Firewalls (those not managed by an Azure Firewall policy), only the rule name will be visible.

To enable the network rule name logging feature, follow the instructions.
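Whichever way you enable the preview feature, the rule details only show up in your queries if the firewall’s diagnostic logs are being collected. A minimal Azure CLI sketch for routing them to a Log Analytics workspace (the resource IDs below are placeholders):

# Send network and application rule logs to Log Analytics so rule names appear in your log queries.
az monitor diagnostic-settings create \
  --name fw-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/azureFirewalls/my-firewall" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace" \
  --logs '[{"category":"AzureFirewallNetworkRule","enabled":true},{"category":"AzureFirewallApplicationRule","enabled":true}]'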

Azure Firewall Premium performance boost


As more applications move to the cloud, the performance of network elements can become a bottleneck. The firewall, as the central piece of any network design, needs to be able to support all those workloads. Hence, we are happy to announce that the Azure Firewall Premium performance boost functionality is moving to preview to allow more scalability for those deployments.

This feature increases the maximum throughput of Azure Firewall Premium by more than 300 percent (to 100 Gbps). See the performance whitepaper section below for more details.

To enable the Azure Firewall Premium performance boost feature, follow the instructions.

*Make sure to also check out the comprehensive testing done by Andrew Myers for a detailed analysis and as a reference to build your own test environment.

Azure Firewall Performance whitepaper


Reliable firewall performance is essential to operate and protect your virtual networks in Azure. Not only should Azure Firewall handle the current traffic on a network, but it should also be ready for potential traffic growth. To provide customers with better visibility into the expected performance of Azure Firewall, we are releasing the Azure Firewall Performance documentation.

As we are always working to improve the Azure Firewall service, the metrics highlighted in the document will be updated to reflect the latest performance results you can expect from Azure Firewall. So, make sure to bookmark the page to stay up to date with the latest information.

Source: microsoft.com