Saturday, 28 November 2020

Achieving 100 percent renewable energy with 24/7 monitoring in Microsoft Sweden

Earlier this year, we made a commitment to shift to 100 percent renewable energy supply in our buildings and datacenters by 2025. On this journey, we recognize that how we track our progress is just as important as how we get there.

Today, we are announcing that Microsoft will be the first hyperscale cloud provider to track hourly energy consumption and renewable energy matching in a commercial product using the Vattenfall 24/7 Matching solution for our new datacenter regions in Sweden, which will be available in 2021.

Vattenfall and Microsoft are also announcing that the 24/7 hourly matching solution—the first commercial product of its kind—is now generally available. Vattenfall is a leading European energy company with a strong commitment to make fossil-free living possible within one generation. The solution is built using Microsoft’s Azure services, including Azure IoT Central and Microsoft Power BI.

Today’s announcement builds on last year’s partnership announcement with Vattenfall, when the 24/7 Matching solution was first introduced. Since then, the solution has been piloted at Vattenfall’s headquarters in Solna and Microsoft’s new headquarters in Stockholm, where 94 percent of the total office building energy consumption has been matched with Swedish wind power and 6 percent with Swedish hydro power.

We continually invest in new ways to make our buildings and datacenters more energy efficient and sustainable. As part of today’s announcement, Microsoft is signing a power purchase agreement (PPA) covering 100 percent of Microsoft’s energy consumption in Sweden, ensuring that the company’s operations in the country run on renewable energy.

The Vattenfall 24/7 Matching solution enables us to have a more accurate picture of energy used to match with Guarantees of Origin (GOs). This marks another important step in our commitment to be carbon negative by 2030 and use 100 percent renewable energy by 2025.


Increasing transparency and accuracy of renewable energy matching


Fulfilling our 100 percent renewable energy commitment requires a better way of tracking renewable electricity. Today, the industry is using Energy Attribute Certificates, called Guarantees of Origin (GOs) in Europe and Renewable Energy Certificates (RECs) in the US. These ensure that the amount of electricity sold corresponds to the amount produced. GOs allow end consumers to choose electricity from a specific source; this enables them to choose electricity exclusively from renewable sources such as wind, solar, or hydropower.

While we have seen remarkable progress toward renewable sourcing and commitments, there is a fundamental flaw in monitoring the source and quantity of energy consumed. For any given hour, a business does not know the source of the energy it is consuming. That energy may come from renewable sources, or it may be produced from fossil fuels. The current system has no way of matching the supply of renewable energy with demand for that energy on an hourly basis. And without transparency of supply and demand, market forces cannot work to ensure that renewable energy demand is supplied from renewable sources.

Through this solution, Microsoft Sweden’s new home is powered by renewable energy through the procurement of GOs, which trace electricity from renewable sources and provide electricity customers with information on the source of their energy—not just on a monthly or yearly basis, but on an hourly basis.
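To give a rough feel for what hourly matching means in practice, the sketch below pairs hourly consumption readings with hourly renewable production and reports the matched percentage. This is only an illustration of the concept, not Vattenfall's implementation; the timestamps and kilowatt-hour figures are made up.

```python
# Illustrative sketch of 24/7 hourly matching (not Vattenfall's actual implementation).
# For each hour, consumption is matched against available renewable production,
# and the overall matched share is reported.

from typing import Dict

def hourly_match(consumption_kwh: Dict[str, float],
                 renewable_kwh: Dict[str, float]) -> float:
    """Return the share of consumption covered by renewables, hour by hour."""
    total = 0.0
    matched = 0.0
    for hour, used in consumption_kwh.items():
        produced = renewable_kwh.get(hour, 0.0)
        total += used
        matched += min(used, produced)   # can only match what was actually produced that hour
    return 100.0 * matched / total if total else 0.0

# Hypothetical meter readings for three hours of a day.
consumption = {"2020-11-28T10:00": 120.0, "2020-11-28T11:00": 150.0, "2020-11-28T12:00": 90.0}
wind_and_hydro = {"2020-11-28T10:00": 130.0, "2020-11-28T11:00": 100.0, "2020-11-28T12:00": 95.0}

print(f"Matched: {hourly_match(consumption, wind_and_hydro):.1f}% of consumption")
```

A yearly certificate scheme would call this fully matched as long as annual totals line up; the hourly view exposes the 11:00 shortfall.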

The 24/7 matching of GOs and Renewable Energy Certificates (RECs) offers the following benefits:

◉ Businesses can see whether their commitment to 100 percent renewable energy covers each hour of consumption and translate their sourcing of renewable energy into climate impact.

◉ Energy providers can more easily understand demand for renewable energy hour by hour and take action to help production meet demand.

◉ 24/7 matching of consumption to production drives true market demand for renewable energy. As 24/7 hourly renewable products are rolled out across the world, they will incentivize investment in energy storage so that energy companies can store renewable energy when it is being generated and continue to supply their customers with renewable energy when it is not. Over time, this storage will allow electricity grids to supply 100 percent decarbonized power.

◉ The system can inspire regulatory change in how GOs and RECs are created, acquired and retired.

IoT for more accurate energy monitoring


IoT enables companies to gain near real-time insight into the physical world, connecting objects to reveal the health of a system or process, predict failures before they happen, and improve overall operational efficiency.

The Vattenfall 24/7 hourly monitoring solution leverages Azure IoT Central to manage the full picture of energy consumption in a given building. Azure IoT Central helps solution builders move beyond proof of concept to building business-critical applications they can brand and sell directly or through Microsoft AppSource. Today, Microsoft offers two IoT Central energy app templates for solar panel and smart meter monitoring to help energy solution builders accelerate development.
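For a sense of the device side of such a solution, here is a minimal telemetry sketch using the azure-iot-device Python package. The connection string, device name, and payload shape are placeholders; an IoT Central device would normally be provisioned through the Device Provisioning Service before it obtains its hub connection, which is omitted here.

```python
# Minimal telemetry sketch with the azure-iot-device package (pip install azure-iot-device).
# Connection string and payload are hypothetical; provisioning via DPS is not shown.
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<meter-01>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# A hypothetical smart-meter reading sent as JSON telemetry.
reading = {"meterId": "meter-01", "kwh": 1.42, "timestamp": "2020-11-28T10:00:00Z"}
message = Message(json.dumps(reading))
message.content_type = "application/json"
message.content_encoding = "utf-8"

client.send_message(message)   # forwarded to the hub / IoT Central app for downstream analysis
client.disconnect()
```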

Commitment to building world-class, sustainable datacenters


We believe that our datacenters should be positive contributors to the grid, and we continue to innovate in energy technology and monitoring resources to support our corporate commitment to be carbon negative by 2030.

Source: microsoft.com

Thursday, 26 November 2020

Hybrid and multicloud strategies for financial services organizations

A need for hybrid and multicloud strategies for financial services

The financial services industry is a dynamic space that is constantly testing and pushing novel use cases of information technology. Many of its members must balance immense demands—from the pressure to unlock continuous innovation in a landscape with cloud-native entrants, to responding to unexpected surges in demand and extending services to new regions—all while managing risk and combatting financial crime.

At the same time, financial regulations are constantly evolving. In the face of the current pandemic, we have seen our customers accelerate their adoption of new technologies, including public cloud services, to keep up with evolving regulations and industry demands. Hand in hand with growing cloud adoption, we’ve also seen growing regulatory concerns over concentration risk (check out our recent whitepaper on this), which have resulted in new recommendations for customers to increase their overall operational resiliency, address vendor lock-in risks, and maintain effective exit plans.

Further complicating matters, many financial services firms oversee portfolios of services that include legacy apps that have been in use for many years. These apps often cannot support the implementation of newer capabilities such as mobile application support, business intelligence, and other new services, and they suffer from shortcomings that adversely affect their resiliency, such as outdated and manual processes for governance, updates, and security. These legacy applications also carry high vendor lock-in because they lack modern interoperability and portability. Furthermore, the blunt-force approach of leveraging legacy technology to protect against financial crime is an unsustainable strategy with diminishing returns—with big banks spending over $1 billion per year maintaining legacy infrastructure and seeing a rise in false positive rates as financial crime grows in sophistication.

As a means to address the demands of modernization, competition, and compliance, financial services organizations have turned to public cloud, hybrid cloud, and multicloud strategies. A hybrid model enables existing applications—which originate on-premises—to be extended by connecting to the public cloud. This infrastructure framework unleashes the benefits of the public cloud—such as scale, speed, and elastic compute—without requiring organizations to rearchitect entire applications. This approach gives organizations the flexibility to decide which parts of an application should reside in an existing datacenter versus the public cloud, providing them with a consistent and flexible approach to developing a modernization strategy.

Additional benefits of successful hybrid cloud strategies include:

◉ A unified, consistent approach for infrastructure management: Consistently manage, secure, and govern IT resources across on-premises, multicloud, and edge environments, delivering a uniform experience across locations.

◉ Extending geographic reach and opening new markets: Meet growing global demand and enter new markets by extending the capabilities of datacenters to new locations, while also meeting data localization requirements from local markets.

◉ Managing security and increasing regulatory compliance: Hybrid and multicloud are strong alternatives to strictly on-premises strategies because of cloud benefits around service security, availability, resiliency, data protection, and data portability. These strategies are often referenced as a preferred way of reducing risk and addressing regulatory compliance challenges.

◉ Increasing elasticity: Customers can respond with agility to surges in demand or transaction volume by provisioning and de-provisioning capacity as needed. A hybrid strategy allows organizations to seamlessly scale their capacity beyond their datacenter during high-compute scenarios, such as risk computations and complex risk modeling, without exhausting servers or slowing down customer interactions.

◉ Reducing capital expenditure (CapEx): The cloud removes the need for large capital outlays to manage on-premises infrastructure. Through the benefits of elastic capacity in hybrid scenarios, companies can avoid the costs of unused digital capacity, paying only for the resources that are consumed.

◉ Accelerating time to market: A hybrid strategy provides a bridge that connects on-premises data to new cloud-based capabilities across AI and advanced analytics, allowing customers to modernize their services and unlock innovation. With virtualized environments, they can accelerate testing and evaluation cycles and deploy seamlessly across different locations.

A multicloud strategy enables customers to leverage services that span different cloud platforms, enabling them to select the services best suited to the workloads or apps they are managing.

Commonly cited benefits of a multicloud strategy include:

◉ Flexibility: Customers want the flexibility to optimize their architectures by leveraging the cloud services best suited to their specific needs, including the flexibility to select services based on features or cost.

◉ Avoiding vendor lock-in: A commonly stated requirement; customers often design multicloud deployments to achieve short-term flexibility and long-term agility by spreading systems across multiple clouds.

Microsoft hybrid and multicloud edge for financial services organizations

Azure hybrid capabilities uniquely address some of the main barriers customers face around hybrid and multicloud strategies. Managing multiple environments introduces inherent complexity and risk for firms faced with an expanding data estate that spans diverse on-premises, public cloud, and edge environments. Optimizing for productivity without sacrificing security and compliance can be daunting. Azure provides a seamless environment for developing, deploying, and managing data and applications across all distributed locations.

For one, Azure uniquely supports the full range of hybrid capabilities across DevOps, identity, security, management, and data. Given that customer IT estates involve much more than containers, many of our cloud benefits are also available to server-based workloads. Azure enables customers to manage both Windows and Linux servers across their data estate, and customers can also manage access and user authentication with hybrid identity services. The Azure Stack portfolio extends Azure services and capabilities to your environment of choice—from the datacenter to edge locations, remote offices, and disconnected environments. Customers can run machine learning models on the edge to get quick results before data is sent to the cloud. Furthermore, with capabilities such as Azure Stack Hub, our portfolio enables organizations to operate in offline environments that block data from being sent to the public cloud, especially where required for regulatory compliance.

Second, Azure simplifies the experience of managing a complex data estate by providing a unified, consistent approach for managing and monitoring hybrid and multicloud environments. With capabilities such as Azure Arc, customers can manage their data estate with a single management plane—including the capability to monitor non-Microsoft clouds. Customers can also take a similarly simplified approach to managing security across their estate with services such as Azure Sentinel, which provides a consistent threat detection and security analytics view across on-premises, cloud, and edge devices. In combination with services such as Azure Security Center, Azure Policy, and Azure Advisor, customers can also design, deploy, and oversee security and compliance of their deployments across their hybrid and multicloud environments.

Azure leadership in hybrid and multicloud offerings is also rooted in our extensive collaborations with hardware partners (OEMs), with whom we have co-engineered solutions to deliver a well-defined variety of supporting devices. Partner solutions have been designed to increase resiliency and expand the reach of virtual datacenters. With the new rugged series of Azure Stack Edge, for instance, we provide cloud capabilities in the harshest environmental conditions, supporting scenarios such as tactical edge, humanitarian, and emergency response efforts.

The Azure commitment to financial services customers stems from Microsoft’s industry-leading work with regulators around the world. Our customers require their cloud partners to support transparency, regulatory right to audit, and self-reporting. To enable this, we have a dedicated and comprehensive FSI compliance program available to customers, and we help customers manage their compliance by enabling choices around data location, providing transparency and notification of subcontractors, making commitments on exit planning (see our recent blog here), and offering tools to aid in risk assessments.


Customer spotlights


We’ve seen many of our financial services customers already begin to realize the benefits of hybrid and multicloud strategies. In a recent Total Economic Impact study commissioned from Forrester on the impact of shifting from on-premises to Azure IaaS (including hybrid environments), organizations avoided 90 percent of on-premises infrastructure costs over a three-year period (valued at over $7 million), as well as associated employee costs. Organizations were able to reallocate their IT staff to higher-level business initiatives, including expansion into new markets, which resulted in altogether new streams of income for the companies.

One example of a company that took a hybrid approach is Banco de Crédito e Inversiones (BCI). Its portfolio supported 20 million transactions a month and required a hybrid approach to keep apps and resources on-premises for regulatory and performance reasons. With Azure Stack Hub, BCI was able to improve the performance and reliability of its systems and roll out new products quickly, and it was able to switch from outsourced IT management to in-house management.

“We’ve found the whole Azure platform to be very reliable and stable, and it gets better with each release. In fact, we have statistics showing that when we enabled Azure Stack Hub, customer satisfaction went up. It’s very clear. We’re delivering a better experience for our customers through the reliability and performance of Azure Stack Hub and the new functionality our team is building on top of it.”—German Matosas, Head of Architecture and Cloud Platform, BCI

Another example is Volkswagen Financial Services, a branch of Volkswagen that manages approximately 80 web apps across ten countries—a complex IT estate by any measure. It needed to modernize its apps and development approach and leveraged Azure Stack Hub to bring cloud speed and scale to its DevOps practices. This strategy also allowed the company to keep components of its highly customized apps on-premises (such as core databases and SAP systems) due to privacy and compliance requirements, and to add new services without reworking its existing applications.

What about full or single cloud?


While the focus of this blog post has been hybrid and multicloud strategies, it is also worth briefly touching on the value of partnering with a single cloud provider for end-to-end solutions. This is referred to as a “full cloud” or “single cloud” strategy and serves the long-term objective of shutting down all on-premises datacenters and moving all workloads to a single cloud provider. This strategy also has its merits and in fact may offer benefits over both hybrid and multicloud solutions, such as simplified management, less complexity, and lower total cost of ownership (TCO). Partnering with a highly resilient cloud service provider, such as Microsoft, for a full cloud strategy has been the solution of choice for several financial institutions. The unique benefits of a full cloud strategy need to be weighed against potential downsides, but in principle, this approach is allowed by regulators in most global jurisdictions.

Deciding on a hybrid or multicloud strategy


Many organizations commence their journey from a fully on-premises baseline. We’ve seen that as they start to consume public cloud services, questions arise around what the most appropriate deployment strategy could be—whether they should take a full cloud, hybrid cloud, or multicloud approach.

If you respond positively to one or more of the questions below, you are likely in a good position to adopt a hybrid or multicloud strategy:


1. Does your organization’s digital strategy enable you to easily adopt new and emerging technologies and deploy them to on-premises or legacy apps?

2. Does your organization have a digital strategy that welcomes innovation but is not ready to fully commit to a public cloud?

3. Do you find it challenging to meet capacity demands in your IT infrastructure and meet unexpected surges in demand or maintain performance levels?

4. Does your IT department struggle to manage different technologies from different providers and keep oversight across multiple environments?

5. Does your organization face pressure from regulators or risk departments to maintain certain processes on-premises, or within specific geographic regions (data residency)?

6. Is your organization considering expanding into new geographies or new markets?

Source: microsoft.com

Tuesday, 24 November 2020

Guest health feature in Azure Monitor for virtual machines

It is imperative to monitor the health of your virtual machines. But how much time do you spend reviewing each metric and alert to determine whether a virtual machine is healthy?

We are announcing the preview of the Azure Monitor for virtual machines guest health feature, which monitors the health of your virtual machines and fires an alert when any monitored parameter is outside the acceptable range. This feature provides:

◉ A simple experience to monitor the overall health of your virtual machine.

◉ Out-of-the-box health monitors based on key VM metrics to track the health of your virtual machine.

◉ Out-of-the-box alerts to notify if the virtual machine is unhealthy.

The virtual machine guest health feature uses a parent-child hierarchical model. It monitors the health state of CPU, disks, and memory for a virtual machine and notifies the customer about changes. The three states—healthy, warning, and critical—are defined based on the thresholds set by the customer for each child monitor. Each monitor measures the health of a particular component, and the overall health of the virtual machine is determined by the health of its individual monitors. The top-level monitor on the VM aggregates the health states of all the child monitors and reports a single health state for the virtual machine, matching the state of the least healthy child monitor.
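The rollup rule (the parent reports the least healthy child state) can be expressed compactly. The monitor names, sampled values, and thresholds below are illustrative only, not the feature's actual defaults:

```python
# Illustrative rollup of child monitor states into a single VM health state.
# Monitor names and thresholds are examples, not the feature's actual defaults.

SEVERITY = {"healthy": 0, "warning": 1, "critical": 2}

def evaluate(percent_used: float, warning_at: float, critical_at: float) -> str:
    if percent_used >= critical_at:
        return "critical"
    if percent_used >= warning_at:
        return "warning"
    return "healthy"

child_monitors = {
    "cpu_utilization":    evaluate(93.0, warning_at=80, critical_at=90),
    "memory_utilization": evaluate(35.0, warning_at=80, critical_at=90),
    "disk_utilization":   evaluate(82.0, warning_at=80, critical_at=95),
}

# The top-level monitor reports the state of the least healthy child monitor.
vm_state = max(child_monitors.values(), key=lambda state: SEVERITY[state])
print(child_monitors, "->", vm_state)   # -> critical (driven by the CPU monitor)
```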


Get started


You can view the health of each VM in your subscription and resource group in the Guest VM health column from the get started page of Azure Monitor for virtual machines.


Health tree


You can view the detailed health status of the VM by clicking the health status on the get started page. In the side pane, the overview tab provides a description of the monitor, the last time it was evaluated, and the values sampled to determine the current health state. The history tab lists the history of state changes for the monitor. You can view and modify the thresholds for critical and warning states for each monitor from the configuration tab. From this tab, you can also enable alerts if you wish to be notified when the monitor’s state changes.


Pricing


There is no direct cost for the guest health feature, but there is a cost for ingestion and storage of health state data in the Log Analytics workspace. All data is stored in the HealthStateChangeEvent table.

Supported OS and regions


For the preview, only Azure Virtual Machines are supported. Virtual machine scale sets and Azure Arc for servers are not currently supported.

◉ Virtual machine must run one of the following operating systems:
     ◉ Ubuntu 16.04 LTS, Ubuntu 18.04 LTS
     ◉ Windows Server 2012 or later
◉ Virtual machine and Log Analytics workspace must be located in one of the regions as listed here.

Source: azure.microsoft.com

Monday, 23 November 2020

Get Ready To Attempt Microsoft MS-500 Exam


Responding to threats and enforcing data management policies are skills professionals must acquire in this era of frequent threats and attacks. Organizations understand what it takes to keep their systems and data secure, and they also recognize how expensive it becomes when gaps are left unaddressed. Meeting this demand requires security measures put in place by certified security professionals, which has opened up many employment opportunities. If you aspire to become a security administrator, Microsoft offers you an opportunity through one of its most sought-after certification exams.
Microsoft MS-500 is the exam you need to pass. In this post, we will guide you through the exam's details and share some preparation tips. Keep reading to learn more.

Thursday, 19 November 2020

Azure Backup for Azure PostgreSQL long-term retention in preview

If you have opted for Azure Database for PostgreSQL server, you are probably looking for a fully managed, intelligent, and flexible cloud database service that enables you to focus on building applications while offloading critical management tasks such as availability, scalability, and data protection to the service provider. However, some of these tasks—backup being a case in point—may have additional requirements pertaining to your organization’s compliance and business needs that call for a specialized, end-to-end solution.

Azure Backup and Azure Databases have come together to build an enterprise-scale backup solution for Azure Database for PostgreSQL that facilitates flexible and granular backups and restores while supporting retention for up to 10 years. It is an elastic-scale, zero-infrastructure solution that does not require you to deploy or manage backup infrastructure, agents, or storage accounts while providing a simple and consistent experience to centrally manage and monitor the backups.


Enhanced capabilities from Azure Backup and Azure Databases


Long-term retention in standard or archive tier

Retain backups for up to 10 years in the standard or archive tier according to your compliance and audit needs. Recovery points beyond the specified retention duration are pruned automatically by the built-in lifecycle management capability.

Customer-controlled, granular backup and restore across subscriptions

Define the backup policy with your choice of backup schedule and retention rules, with the flexibility to trigger an on-demand backup outside the regular schedule for patching scenarios. Both backups and restores can be triggered for an individual database or a group of databases across subscriptions.

Restore anywhere

Trigger point-in-time restores to the source server or any other Azure Database for PostgreSQL server, even on higher database versions, making restores backward-compatible. Alternatively, restore the backup dump to a blob storage account and restore later to any PostgreSQL deployment on or off Azure.
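As a rough sketch of the second path, under stated assumptions, a dump staged in a Blob Storage account could be pulled down and applied to any PostgreSQL server. The storage connection string, container, blob, and target server names below are placeholders, and pg_restore is assumed to be installed locally; this is not the service's own restore workflow, just the general shape of restoring a staged dump.

```python
# Sketch: download a backup dump from Blob Storage and restore it to a PostgreSQL server.
# Requires azure-storage-blob and a local pg_restore; all names are placeholders.
import subprocess
from azure.storage.blob import BlobServiceClient

conn_str = "<storage-account-connection-string>"
service = BlobServiceClient.from_connection_string(conn_str)
blob = service.get_blob_client(container="pg-backups", blob="mydb-2020-11-19.dump")

# Stage the dump locally.
with open("mydb.dump", "wb") as f:
    f.write(blob.download_blob().readall())

# Restore into a target server on or off Azure (credentials supplied via environment or .pgpass).
subprocess.run(
    ["pg_restore", "--no-owner",
     "--host", "target-server.postgres.database.azure.com",
     "--username", "adminuser@target-server",
     "--dbname", "mydb",
     "mydb.dump"],
    check=True,
)
```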

Central management and monitoring with Backup Center

Manage and monitor all the backup-related operations and jobs across servers, resource groups, locations, subscriptions, and tenants from a single pane of glass called the Backup Center.

Never lose your backups, even if you lose the source server

Backup data is encrypted and stored in a separate security and fault domain, so even if the source server were to become compromised, the backups would remain intact in the Azure Backup managed storage accounts, which reside in the Microsoft tenant rather than the customer’s tenant. The geo-redundant storage for backups also maintains a copy of the backup data in the paired secondary region.

RBAC-based access to the database using Azure Active Directory (Azure AD) authentication

The service doesn't assume access to the PostgreSQL server, nor does it ask for your credentials to connect to the database it needs to back up. In line with Azure security principles, the user is expected to grant the vault's managed service identity (MSI, a feature of Azure AD) the necessary permissions on the resource.

Get started


Watch the demo below to learn more about Azure Backup for Azure Database for PostgreSQL.


You may use this solution independently or in addition to the native backup solution offered by Azure Database for PostgreSQL that offers retention for up to 35 days. The native solution is suited for operational recoveries when the database admin wants to recover from the latest backups. The Azure Backup solution on the other hand helps the IT admin with their organization’s compliance needs as well as more granular and flexible backup and restore.

Upcoming enhancements

◉ Azure CLI support for automating all operations.

◉ Extending the solution to other Azure Database services such as Azure Database for MySQL and Azure Database for MariaDB.

Source: microsoft.com

Tuesday, 17 November 2020

Advancing global network reliability through intelligent software — Part 1

Microsoft’s global network is analogous to a highway system. Highways connect major cities and roadways connect small cities to major cities. In this analogy, data centers grouped into Azure regions are the major cities, and the edge PoPs are the small cities. Networking assets in data centers, at the edge, and across the global network are one pool shared by all. Think of them as a common highway with multiple lanes that all vehicles can use. This allows us to share resources, adapt to changes, and develop features that benefit all customers. Internet service providers, cloud providers, and content delivery networks have similar networks—some global and some local, so we link up with their networks at edge PoPs. Traffic moving between our data centers, and traffic between any one data center and the edge, all stays on Microsoft’s highway system.


Network system reliability is a top priority for us, and we are continuously making improvements to our systems. Although network incidents at our hyperscale are inevitable, we take these opportunities to learn from the various contributing factors and re-think our availability design principles to make our services even more reliable. In early October 2020, we experienced a rare service incident that caused network congestion on some routes for a short duration, which impacted other Microsoft cloud services. There are great learnings from this particular issue, and we have incorporated those learnings in our software and processes—and called them out below.

Key design principles


Like a highway system that offers fast and reliable transportation to passengers, we have designed our global network to provide maximum reliability and performance with the following guidelines.

◉ Well-provisioned network with redundant topology: Design hardware and software failure domains with redundancy, to handle multiple simultaneous failures and eliminate single points of failure. Design the network with enough capacity to meet service level objectives during times of high demand or crisis.

◉ De-risk changes: Change is constant in a large-scale network. However, small changes can lead to major, unforeseen repercussions. Follow safe deployment practices to reduce blast radii, have automatic rollbacks, allow sufficient bake time between deployment phases, and eliminate manual touches.

◉ Intelligent software systems to manage, monitor, and operate the network: The physical infrastructure is always prone to risks such as fiber cuts, hardware failure, demand surges, destructive weather events, and sabotage. Build software systems that maintain real-time views of topology, demands, traffic routing, failures, performance, and automatically recover from network events.

◉ Proximity: Customer traffic should flow on the global network for the bulk of its journey, so that we can control the network performance. All inter-regional traffic stays entirely within our network.

◉ Feedback loop and using AI to improve: Continuous learning is critical to our reliability mission. Every incident is thoroughly analyzed to determine root cause(s) and any contributing factors; understanding what went wrong is an important learning opportunity. Use self-learning with telemetry data to perform quicker triage and deeper analysis, and to engage human operators.

De-risking changes


Change is both inevitable and beneficial given the need to deploy service updates and improvements, and to react quickly to vulnerabilities or incidents. Changes roll out gradually, beginning with integration environments and canary regions and building up to mass rollout. The change process is declarative through our Zero Touch Operations systems. The validation cycle uses the Open Network Emulator for a complete emulation of the global network, and our Real Time Operation Checker validates the risk of each change to the network.

Open Network Emulator (ONE)—software emulation of the network


When astronauts prepare for space missions, they practice each move and test every procedure on high-fidelity emulators. Similarly, when we make a change to our network, we validate it on high-fidelity network emulators. The entire global network can be instantiated in a software emulation environment called the Open Network Emulator (ONE). Network devices are emulated with images of their software running in containers and virtual machines in Azure. Multiple network devices are emulated on a single host, which enables us to instantiate the entire network for large-scale testing. Network links are emulated with Virtual Extensible LAN (VXLAN) tunnels. Network topology and device configurations are downloaded from production configuration systems. Software controllers interact with ONE, oblivious that they are interacting with emulated devices. ONE allows us to run large-scale tests before entering production. Every team has its own independent ONE environment for continuous integration. ONE can also interoperate with real devices and be used to test the actual hardware.


Imagine that a team wants to upgrade network device software, deploy a patch, make a new configuration change, or deploy new controller software. The entire method of procedure to perform the change and roll back the change is tested in ONE. ONE has surfaced bugs in configuration, device software, inter-operability between devices, and scaling issues. Validation in ONE is integrated in our safe deployment framework.

In reference to the recent service incident (10/7 Issues accessing Microsoft and Azure services) mentioned in the introduction, the incident was caused by a code defect in a version update of a component that controls network traffic routing between Azure regions, a defect that surfaced only at production scale and scope. To catch issues like this that happen rarely, we use a pre-production validation process that forces similar issues to happen frequently in a virtualized environment by performing chaos testing. In chaos testing, bursts of random failures are introduced to see whether the network recovers back to a healthy state. Since this incident, we have increased test coverage and bake time in ONE to improve the resiliency of the traffic engineering controllers.

Real Time Operation Checker (ROC)—network simulator


Real Time Operation Checker (ROC) validates the impact of any planned change or unplanned downtime to the real-time network. For example, ROC answers the question “is it safe to offline a node or link for the next two hours?” The answer depends on the current condition of the network, other changes that have been allowed to proceed, network demands for the next two hours, and the risk of new failures causing isolation or congestion. While ONE emulates, ROC simulates.

ROC uses live topology and demand feeds and simulates device forwarding, queuing, and software controller behavior under failure and load scenarios. These simulations are parallelized using Azure Batch, and results are available in minutes. ROC tokens are granted for human-initiated and service-initiated changes, and no network change can proceed without a ROC token. Maintenance by our fiber provider partners is also simulated in ROC to measure risk and take corrective actions in daily operations.
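Conceptually, the "is it safe to offline this link?" question reduces to re-placing demands on the remaining topology and checking for isolation or congestion. The toy sketch below (made-up three-node topology, single demand, naive routing) illustrates the shape of that check; the real ROC simulates forwarding, queuing, and controller behavior at far larger scale.

```python
# Toy "safe to offline?" check: remove a link, re-place the demand on the remaining
# capacity, and flag isolation or congestion. Topology, demand, and routing are
# deliberately simplistic and purely illustrative.

links = {                          # link -> capacity in Gbps (illustrative numbers)
    ("A", "B"): 100, ("A", "C"): 100, ("C", "B"): 100,
}
demands = {("A", "B"): 120}        # Gbps that must flow A -> B over the next two hours

def safe_to_offline(link, links, demands):
    remaining = {l: c for l, c in links.items() if l != link}
    load = {l: 0.0 for l in remaining}
    for (src, dst), gbps in demands.items():
        # naive routing: prefer the direct link, else the two-hop path via C
        if (src, dst) in remaining:
            load[(src, dst)] += gbps
        elif (src, "C") in remaining and ("C", dst) in remaining:
            load[(src, "C")] += gbps
            load[("C", dst)] += gbps
        else:
            return False                                   # demand would be isolated
    return all(load[l] <= remaining[l] for l in remaining) # no link is congested

print(safe_to_offline(("A", "B"), links, demands))  # False: the A->C->B bypass cannot carry 120 Gbps
```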

Zero Touch Operations


When operating a network of our size, manual configuration changes must be avoided. The network is too large for a human to watch and react to all health signals and determine whether a configuration change had the intended outcome. Automated changes are delivered to the network by a software system, wrapped in pre- and post-checks to analyze safety and health before and after the change. Changes are rolled back immediately or blocked if health signals fail.

Automated changes are applied using three services—internally named Fuse, Clockwerk, and WARP. Fuse's strength is performing repeatable tasks like upgrading network device software. In addition to being safer, Fuse is faster and frees up engineers. Clockwerk and WARP specialize in human-centered or ad-hoc changes. Engineers write stateful workflows to execute a sequence of changes wrapped in automatic checks and rollbacks, and users interact with the workflow over a modern web interface to provide human input within the workflow. An example of such a workflow is replacing hardware in a network device: traffic is drained from the device before a technician replaces the hardware, then safely restored to the device afterward. When changes are needed for an interactive repair operation, engineers enter their commands in a virtual console; those commands are again peer reviewed and wrapped inside safety checks before being sent to the devices. These systems all work together to eliminate the need for manual changes.

All automated changes leverage the ROC framework as well as a myriad of canaries, monitors, and other health signals to analyze changes constantly to ensure the intent was delivered without adverse impact to customers—enabling a roll back at the slightest hint of impact.

Capacity planning


Capacity prediction and deployment velocity is vital to providing maximum reliability and performance. In our highway system analogy, capacity planning is about determining where to add lanes or new highways, how many to add, when they should be ready for traffic, and which ones to close.

Our capacity planning process begins by setting the topology of the network. The Topology Analysis system uses availability metrics from our network and external sources, new Azure region buildouts, and outside-in latency measurements, to generate future topologies. A demand prediction platform generates traffic matrices for multiple months using different algorithms, scores them against actuals, and creates a blended forecast. Optimized Repeatable Compiler for Augmentation Support (ORCAS) runs large-scale network simulations for future demands and topology. ORCAS programmatically assesses reliability, utilization, and efficiency and proposes how to evolve the network across more than 120,000 backbone routes and 6,000 shared risk link groups (SRLG).

Thousands of failure scenarios, involving multiple failing entities, are simulated along with the reaction of the traffic management controllers to determine the capacity needed for the worst-case failures. Historical performance data such as mean-time-between-failure (MTBF) and mean-time-to-repair (MTTR) are used to optimize the capacity requirement in both the short and long term. This signal is fed into our capacity workflow automation engine to complete the capacity delivery in a timely manner. This end-to-end system was used extensively to respond to the demand surge from COVID-19 and delivered 110 Tbps of additional backbone capacity in less than two months.
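In simplified terms, the worst-case capacity question asks: across every plausible failure scenario, what is the peak load each surviving link must carry? The miniature sketch below (a tiny made-up network, one demand, first-surviving-path routing) shows that worst-case-over-scenarios calculation in principle; it is not the ORCAS algorithm.

```python
# Miniature worst-case capacity planning: for every failure scenario, re-place the
# demand and record the peak load each surviving link must carry. Network, demand,
# and routing are illustrative only.
import itertools

paths = {                        # candidate paths for the single demand A -> B
    "direct": [("A", "B")],
    "via_C":  [("A", "C"), ("C", "B")],
}
demand_gbps = 80
links = {("A", "B"), ("A", "C"), ("C", "B")}

required = {l: 0.0 for l in links}
# Consider the no-failure case plus every single-link failure.
for failed in itertools.chain([()], itertools.combinations(links, 1)):
    usable = [p for p in paths.values() if not any(l in failed for l in p)]
    if not usable:
        continue                              # demand isolated; a topology fix is needed, not capacity
    chosen = usable[0]                        # take the first surviving path
    for link in chosen:
        required[link] = max(required[link], demand_gbps)

print(required)   # per-link capacity needed to survive any single-link failure
```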

Optical fiber path diversity is extremely important to ensure reliable capacity. Fiber paths are like the pavement that supports the roads; flooded pavement affects all roads using it. Geospatial information on fiber paths and other metadata on fiber cross points, power, and conduits is used to compute shared risks. We actively track the coverage of this data and maintain its freshness in partnership with our fiber partners.

Source: microsoft.com

Saturday, 14 November 2020

How to optimize your Azure workload costs


Managing IT costs is critical during this time of economic uncertainty. The global pandemic is challenging organizations across the globe to reinvent business strategies and make operations more effective and productive. Faster than ever, you’ll need to find ways to increase efficiencies and optimize costs across your IT organizations.

When it comes to cloud cost optimization, organizations typically divide responsibilities between central IT departments and distributed workload teams. Central IT departments manage overall cloud strategy and governance, setting and auditing corporate policies for cost management. In compliance with central IT policy, workload teams across the organization assume end-to-end ownership for cloud applications they’ve built, including cost management.

In this new normal, if you’re a workload owner, it’s doubly challenging for you and your teams, who are taking on new cost responsibilities daily while continuously adapting to working in a cloud environment. We created the Microsoft Azure Well-Architected Framework to help you design, build, deploy, and manage successful cloud workloads across five key pillars: security, reliability, performance efficiency, operational excellence, and cost optimization. While we’re focusing uniquely on cost optimization here, we’ll soon be addressing best practices on how to balance the priorities of your organization against the other four pillars of the framework in order to deploy high-quality, well-architected workloads.

So how can the Azure Well-Architected Framework help you, as a workload owner, optimize your workload costs?

The four stages of cost optimization

The Azure Well-Architected Framework offers comprehensive guidance for cost optimization across four stages: design, provision, monitor, and optimize.

The design stage covers the initial planning of your workload’s architecture and cost model, including:

◉ Capturing clear requirements.

◉ Estimating initial costs.

◉ Understanding your organization’s policies and constraints.

Once your design stage is complete, you move into the provision stage, where you choose and deploy the resources that will make up your workload. There are many considerations and tradeoffs impacting cost at this stage, like which services to choose and which SKUs and regions to select.

After you provision your resources, the next stage, monitor, is about keeping a close watch on your deployed workload, how it’s being used, and your overall spend. This includes activities like:

◉ Building spending reports based on tags.

◉ Conducting regular cost reviews with your team.

◉ Responding to alerts, for example, when you’re approaching a certain spending limit or have detected an anomaly in spending (a minimal sketch of this threshold check follows the list).
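The alerting activity above boils down to comparing month-to-date spend against budget thresholds. The budget figures and thresholds below are hypothetical, and in practice these notifications would typically be configured as Azure Cost Management budget alerts rather than hand-rolled:

```python
# Minimal sketch of budget-threshold alerting; amounts are hypothetical. Real notifications
# would normally be configured as Azure Cost Management budget alerts.

def check_budget(month_to_date_spend: float, monthly_budget: float,
                 thresholds=(0.5, 0.8, 1.0)):
    """Return the threshold fractions that current spend has crossed."""
    ratio = month_to_date_spend / monthly_budget
    return [t for t in thresholds if ratio >= t]

crossed = check_budget(month_to_date_spend=8_400.0, monthly_budget=10_000.0)
for t in crossed:
    print(f"ALERT: spend has crossed {int(t * 100)}% of the monthly budget")
```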

Finally, you move to the last stage, optimize, where you’ll make your workload more efficient through techniques like:

◉ Resizing underutilized resources.

◉ Using reserved instances for long-running, consistent workloads.

◉ Taking advantage of licensing offers for cost benefits.

◉ Re-evaluating your initial design choices, for example, your storage tier and data store.

The initial design and provision stages generally apply to new workloads you’re planning to develop. The last two, monitor and optimize, contain guidance primarily geared towards workloads you’ve already deployed and are running in the cloud.

If you’re like most workload owners at our customers right now, you’re probably wondering: what can I do that will have the biggest cost impact on existing workloads?

High-impact techniques to cost optimize your existing workloads

While each cost optimization stage contributes significantly to overall cost efficiency, as a workload owner undertaking a cost optimization initiative, you’ll probably first want to look into the latter two stages of the Azure Well-Architected Framework for your existing workloads: monitor and optimize. We provide several tools to help you with these stages. First, there’s the Microsoft Azure Well-Architected Review, which you can use to assess your workloads across the five pillars of the Well-Architected Framework, including cost optimization. The Well-Architected Review provides a holistic view into cost optimization for your deployed workloads and actionable recommendations to help you optimize your workloads. Additionally, tools like Azure Advisor and Azure Cost Management and Billing provide you with cost analysis and optimization guidance to help you achieve the cost efficiency you need.

By using these tools and referencing the framework itself, you’ll find many monitoring and optimization opportunities for your existing workloads. They broadly tend to fall into four high impact categories: increasing cost awareness, eliminating cloud waste, taking advantage of licensing offers, and modernizing your workload architecture.

Increasing cost awareness

As a customer, this means improving your cost visibility and accountability. Increasing cost awareness starts with implementing workload budgets and operational practices to enforce those budgets, such as:

◉ Tags to break down costs by tag values, so you can pull reports easily.

◉ Alerts to notify you (the budget owner) when you’re approaching certain spending thresholds.

◉ Regular reviews to strengthen your team’s culture of cost management.

While it might not immediately reduce your cloud bill, increasing cost awareness is a necessary foundation that will provide you with critical insights that will be useful to you later.

Eliminating cloud waste

Here is where you start to really see cost efficiencies and cost avoidance. This can include cost optimization techniques like:

◉ Shutting down your unused resources.

◉ Right-sizing under-utilized resources.

◉ Using autoscaling and auto-shutdown for resource flexibility and scalability.

For many customers, the most immediate financial impact will come from eliminating cloud waste.
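As a concrete, if simplified, illustration of that triage, the sketch below classifies VMs from hypothetical utilization figures into shut-down, right-size, or leave-alone buckets. The thresholds and VM data are invented; in practice these signals come from Azure Advisor recommendations and Azure Monitor metrics.

```python
# Simplified waste triage from average CPU utilization; thresholds and VM data are
# hypothetical. In practice, Azure Advisor and Azure Monitor metrics drive these decisions.

vms = {                       # vm name -> average CPU % over the last 30 days
    "web-prod-01": 62.0,
    "batch-dev-03": 1.5,      # essentially idle
    "report-srv-02": 9.0,     # running, but far larger than needed
}

def triage(avg_cpu_pct: float) -> str:
    if avg_cpu_pct < 3:
        return "shut down or deallocate (unused)"
    if avg_cpu_pct < 20:
        return "right-size to a smaller SKU or enable auto-shutdown"
    return "leave as-is"

for name, cpu in vms.items():
    print(f"{name}: {triage(cpu)}")
```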

Taking advantage of licensing offers

Azure has several licensing offers that can provide significant cost benefits for your workloads:

◉ The Azure Hybrid Benefit allows you to bring your on-premises Windows Server and SQL Server licenses with active Software Assurance—and your Linux subscriptions—to Azure and save.

◉ Azure Reservations enable you to pay less than the pay-as-you-go rate when you commit to using Azure resources over a longer period.

Licensing offers can contribute substantially to reducing cloud costs, and often represent a great opportunity for collaborations between central IT and workload teams.

Modernizing your workload architecture

While you might be hesitant to revisit your workload’s architecture, modernizing your applications using the latest services and cloud-native design can drastically increase your cost efficiency. For example, you might want to:

◉ Revisit your initial architectural and design decisions. Search for a more cost-efficient way to accomplish your objectives, such as your choice of data store or storage tier.

◉ Assess the types of Azure services you are using. Explore whether other SKUs or other types of services, such as PaaS or serverless, might offer cost benefits while still meeting your workload needs.

Source: microsoft.com

Thursday, 12 November 2020

New enhanced DNS features in Azure Firewall—now generally available

Custom DNS, DNS proxy, and FQDN filtering in network rules (for non-HTTP/S and non-MSSQL protocols) in Azure Firewall are now generally available. In this blog, we also share an example use-case on using DNS proxy with Private Link.

Azure Firewall is a cloud-native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports application-, NAT-, and network-level filtering and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in autoscaling.

Custom DNS support is now generally available

Since its launch in September 2018, Azure Firewall has been hardcoded to use Azure DNS to ensure the service can reliably resolve its outbound dependencies. Custom DNS allows you to configure Azure Firewall to use your own DNS server, while ensuring the firewall outbound dependencies are still resolved with Azure DNS. You may configure a single DNS server or multiple servers in Azure Firewall and Firewall Policy DNS settings.

Azure Firewall can also resolve names using Azure Private DNS. The virtual network within which the Azure Firewall resides must be linked to the Azure Private DNS zone.

DNS proxy is now generally available

With DNS proxy enabled, Azure Firewall can process and forward DNS queries from one or more virtual networks to your desired DNS server. This functionality is crucial for reliable FQDN filtering in network rules. You can enable DNS proxy in Azure Firewall and Firewall Policy settings.

DNS proxy configuration requires three steps:

1. Enable DNS proxy in Azure Firewall DNS settings.

2. Optionally configure your custom DNS server or use the provided default.

3. Finally, you must configure the Azure Firewall’s private IP address as a custom DNS server in your virtual network DNS server settings. This ensures DNS traffic is directed to Azure Firewall.

DNS proxy listens for requests on port 53 and forwards them to Azure DNS or the custom DNS server you specified.
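One way to sanity-check the setup from a VM inside the virtual network is to send a query directly to the firewall's private IP and confirm it resolves. The firewall IP and test hostname below are placeholders for your environment, and the sketch assumes the dnspython package is installed:

```python
# Verify DNS proxying by querying the firewall's private IP directly (pip install dnspython).
# The firewall IP and the test hostname are placeholders for your environment.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.0.1.4"]        # Azure Firewall private IP acting as DNS proxy

answer = resolver.resolve("mystorageaccount.blob.core.windows.net", "A")
for record in answer:
    print(record.to_text())                 # prints the IP address(es) the proxy resolved
```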


Figure 1. Custom DNS and DNS proxy settings on Azure Firewall.

FQDN filtering in network rules now generally available


You can now use fully qualified domain names (FQDNs) in network rules based on DNS resolution in Azure Firewall and Firewall Policy. The specified FQDNs in your rule collections are translated to IP addresses based on your firewall DNS settings. This capability allows you to filter outbound traffic using FQDNs with any TCP/UDP protocol (including NTP, SSH, RDP, and more). As this capability is based on DNS resolution, it is highly recommended you enable the DNS proxy to ensure name resolution is consistent for your protected virtual machines and firewall.

What’s the difference between FQDN filtering in application rules versus network rules?

FQDN filtering in application rules for HTTP/S and MSSQL is based on an application level transparent proxy. As such, it can discern between two FQDNs that are resolved to the same IP address. This is not the case with FQDN filtering in network rules, so it is always recommended you use application rules when possible.


Figure 2. FQDN filtering in network rules.

Using Azure Firewall as a DNS proxy to enable private endpoints access from on-premises


Azure Private Link provides the ability to connect to Microsoft PaaS services, including storage accounts, app services, and more, over a private connection, using private endpoints. A private endpoint is a network interface that connects you privately and securely to a PaaS service powered by Azure Private Link. Private endpoints use a private IP address from your Virtual Network (VNet), effectively bringing the service inside your private network in the cloud. This approach provides additional security benefits as it removes the exposure and accessibility of public IP addresses of the PaaS service.

One of the big benefits of Azure Private Link is the ability to consume Microsoft PaaS services over privately addressed hybrid connections (for example, Azure ExpressRoute Private Peering or Site-to-Site VPN). However, this benefit also comes with the challenge of forwarding DNS requests from on-premises to Azure Private DNS to benefit from the automated lifecycle management of DNS records that map to your private endpoints.

Each Azure PaaS service that utilizes Private Link is given an FQDN that is mapped and stored in an Azure Private DNS zone. Requests sent to Azure Private DNS zones go to the platform address of 168.63.129.16, which is only reachable from inside Azure. Therefore, if the DNS request originates from on-premises (outside of Azure), the request must be proxied via a service inside a virtual network.

With this general availability announcement, Azure Firewall DNS proxy is an option to meet this DNS forwarding requirement, applicable with a hub-and-spoke model. To do this, configure your on-premises DNS server to conditionally forward requests to Azure Firewall for the required zone name. Ensure that your private DNS zone is linked to the Virtual Network within which the Azure Firewall resides. Configure Azure Firewall to use the default Azure DNS for lookups, and enable DNS proxy in Azure Firewall DNS settings.

Source: microsoft.com

Tuesday, 10 November 2020

Meeting developer needs with powerful new features in Azure Cache for Redis


We announced a groundbreaking partnership with Redis Labs to bring their Redis Enterprise software to Azure as a part of Azure Cache for Redis. We were humbled by the level of excitement and interest we received. We are announcing that you can now use Redis to tackle new challenges while making your caches larger and more resilient than ever before.

There has never been a more critical time for a technology like Redis. With billions of people working from home globally, web-based applications must be more responsive than ever, and enterprises both large and small need to be able to scale rapidly to meet unexpected demand. Solutions like Redis empower developers to optimize their data architectures and solve these problems. We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on Open Source Redis, as Azure customers have used Redis as a distributed cache, session store, and message broker. We’re excited to incorporate Redis Enterprise technology and make this solution even more powerful and available while also unlocking important new use cases for developers like search, deduplication, and time series analysis.

What is Redis Enterprise on Azure?

Microsoft and Redis Labs have partnered closely to create the first native integration between Redis Labs technology and a major cloud platform. The result is a solution jointly engineered by Microsoft and Redis Labs with operations handled by Microsoft. Support is streamlined, with Microsoft handling the first call and pulling in dedicated Redis Labs expertise as needed. The solution consists of two new tiers—Enterprise and Enterprise Flash—and adds many features now in preview that are useful to enterprise customers such as:

◉ Zone redundant replication, with improved availability of 99.99 percent.

◉ Redis on Flash, enabling ten times larger cache sizes on NVMe flash storage.

◉ Redis Enterprise Modules, including:

   ◉ RediSearch

   ◉ RedisBloom

   ◉ RedisTimeSeries

◉ The latest version of Open Source Redis 6.0.

◉ Private link network isolation and transport layer security (TLS) encrypted connectivity.

As a native offering, this solution offers unique benefits beyond running Redis Enterprise as a marketplace offering, including:

◉ Access to set-up and configuration through the Azure Portal.

◉ Seamless integration into your favorite Azure security and monitoring tools.

◉ Integrated billing and the ability to use Azure spend commitment.

Furthermore, the product is backed all day, every day by the Microsoft support team, with a direct line to the experts at Redis Labs for additional support on Enterprise features.

When to use Redis Enterprise on Azure

While the current three tiers of Azure Cache for Redis provide advanced and highly scalable capabilities for all types of customers, the new offering brings even more capabilities to enterprise customers. Three of the key scenarios for you to consider are:

1. New use cases

Azure Cache for Redis has always been a great option for developers looking for a lightweight implementation of a distributed cache, session store, or message broker. With the addition of Redis Modules, there are several useful new ways to use Redis; a short usage sketch follows the list below. The new modules supported are:

◉ RediSearch—a real-time secondary index with a full-text search engine that runs on top of your data in Redis. This module enables you to build extremely fast search functionality into your application while supporting advanced functionalities like full-text indexing and auto-complete.

◉ RedisBloom—adds a series of probabilistic data structures such as TopK and Count-Min Sketch, Bloom, and Cuckoo filters. These probabilistic data structures allow you to trade minimal accuracy for a significant reduction of memory and increase of performance.

◉ RedisTimeSeries—adds a time series data structure to ingest and query millions of samples in real-time. It also includes a toolbox specifically for time-series with features such as downsampling, configurable retention periods, and aggregated queries. Key use-cases include IoT, streaming, and application monitoring.
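To give a rough feel for two of these modules, the sketch below calls them through the generic command interface of the redis-py client. The cache hostname, port, key names, and values are placeholders, and the module commands only succeed on a cache with the corresponding module enabled:

```python
# Sketch of calling module commands via redis-py's generic command interface.
# Hostname, port, key names, and values are placeholders; RedisTimeSeries and
# RedisBloom must be enabled on the target cache for these commands to succeed.
import redis

r = redis.Redis(host="<your-cache-hostname>",
                port=10000,                  # port varies by tier; adjust for your cache
                password="<access-key>",
                ssl=True)

# RedisTimeSeries: ingest a sensor sample ('*' = server-assigned timestamp) and read it back.
r.execute_command("TS.ADD", "temperature:room1", "*", 21.7)
print(r.execute_command("TS.RANGE", "temperature:room1", "-", "+"))

# RedisBloom: probabilistic "have we seen this before?" check for deduplication.
r.execute_command("BF.ADD", "seen:events", "event-12345")
print(r.execute_command("BF.EXISTS", "seen:events", "event-12345"))   # 1 = probably seen
```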

2. Using Redis at enterprise scale

With the Enterprise Flash option, you can extend DRAM capacity with NVMe flash storage and persistent memory to store significantly more data in a cost-effective manner. As a result, you can create caches that are ten times larger (up to 13 TB), limiting the need to split data across multiple caches. This configuration also significantly reduces your cost on a per-GB basis.

3. The highest resiliency

Azure Cache for Redis already offers 99.9 percent availability. The new offering expands on this through support for zone redundant replica nodes, creating cache copies in separate availability zones that can be failed over to in case of an outage. Seamless zone-failover allows you to increase availability up to 99.99 percent—an essential update for enterprises that can’t afford downtime.

Source: microsoft.com

Monday, 9 November 2020

What Is the Simplest Way to Pass the Microsoft 70-462 Exam?


Microsoft is one of the world's most significant IT vendors, supplying software and hardware products for private and commercial use and establishing itself as one of the world's top technology corporations.

Apart from building leading software and hardware products, Microsoft has created a wide range of certification programs that validate knowledge of modern technologies. These certifications play a significant role in preparing candidates for meaningful IT sector jobs at the fundamental, associate, and expert levels.

What Should Candidates Know About 70-462 Exam?

The 70-462 certification opens employment opportunities in the software developer, database developer, and database analyst fields. The MCSA SQL Server 2012/2014 credential is a mark of excellence that helps candidates demonstrate the professional skills and insight needed to build and support the next generation of mission-critical environments.

Earning the 70-462 certification demonstrates a candidate’s ability to work as a software developer, database developer, or database analyst. It is suggested that candidates have at least two years of prior experience working in the IT field before attempting this certification. For those without that experience, the Microsoft Technology Associate (MTA) exams are a better starting point.

Benefits and Importance of Microsoft 70-462 Certification

Candidates who earn the MCSA SQL Server 2012/2014 certification can go on to the expert-level MCSE: Data Management and Analytics certification, which is often required for senior positions such as database analyst, database designer, and business intelligence analyst.

Microsoft offers Exam 70-462: Administering Microsoft SQL Server 2012/2014 Databases for IT professionals who are well versed in database installation, maintenance, and configuration. These specialists are responsible for building database systems, keeping them running efficiently, backing up data, and protecting that data against unauthorized access.

70-462 Exam Study Plans from Microsoft

The official Microsoft website is your go-to source for study materials geared toward exam success. When pursuing your MCSA SQL Server 2012/2014 credential, be sure to check out the exam preparation videos for accurate, verified exam details from Microsoft training experts.

If this option does not fit your schedule, other options worth examining include the Microsoft 70-462 training kit available from the Microsoft Press Store and practice test questions verified by SQL Server experts. Many other preparation materials exist as well, so you can find the one that works best for you.

Microsoft 70-462 Exam Pass Tips

  • Study widely so that you have complete knowledge of all the topics tested in each of the exam's objective areas.
  • Read several study materials and take part in practical exercises related to this exam.
  • Set a target score for the 70-462 exam; a concrete target gives you something to work hard toward.
  • Pace yourself when answering questions so that you complete the exam within the allotted time.
  • While waiting to sit the exam, adjust your daily schedule and set up a temporary study timetable so that you have enough time for your studies.
  • Study in a quiet environment; this improves your understanding of the concepts tested in the exam.
  • Read the instructions provided in the exam carefully so that you know exactly what is required before answering the questions.
  • Relax and try not to worry before and during the 70-462 exam; worrying affects your thinking and can lead to poor performance.

70-462 Certification Practice Exams

Microsoft 70-462 practice exams prepare candidates well because they usually contain questions similar to those in the real exam. Through these exams, candidates learn how to approach different question types and answer them appropriately. Anyone preparing for the Microsoft 70-462 exam is advised to take as many practice tests as possible to become familiar with the kinds of questions likely to appear in the main exam.

Final Thoughts

Passing Microsoft 70-462 is an essential step in your plan to become a database professional. It also serves as a stepping stone toward the MCSA SQL Server 2012/2014 certification and, after that, the MCSE credential, which bring substantial benefits such as improved earning potential and career prospects. So register for your exam and get one step closer to the coveted MCSA badge.

Saturday, 7 November 2020

Enabling resilient DevOps practices with code to cloud automation

As digital transformation has evolved, so have customer expectations. Enterprises are expected to deliver continuous value by releasing frequent, bug-free updates with little to no disruption to their user base. This is no small feat, especially as nations and governments roll out stringent new data and security regulations.

So how are enterprises able to ship faster while maintaining governance, security, and compliance standards amidst a pandemic that has forced work to be remote?

According to the Enterprise DevOps 2020-2021 report, 80 percent of top-performing enterprises surveyed say they have begun “shifting left,” or building governance, security, and compliance automation into the early stages of their software development lifecycle (SDLC). These enterprises are not only automating their build and release workflows, but they are also adopting an “everything-as-code” mindset. This shift means that not just infrastructure configurations and release pipelines, but also compliance and security policies, are written “as code,” enabling continuous improvement while promoting better reuse, resilience, and greater transparency.

It’s with all this in mind that we continue to refine, update, and add to our GitHub Actions portfolio. At Microsoft Ignite we announced new GitHub Actions for Azure, which we added to our growing catalogue of Azure and GitHub integrations, with more to come soon. GitHub Actions for Azure enables deployments to multiple Azure services, from web applications and serverless functions to Kubernetes, Azure SQL, and MySQL databases, ultimately helping enterprises stay resilient while giving you the flexibility to build automated workflows that manage your SDLC.
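
As a concrete example, the sketch below shows what a minimal GitHub Actions for Azure workflow can look like for an App Service deployment, using the azure/login and azure/webapps-deploy actions; the app name, package path, and the AZURE_CREDENTIALS secret are placeholder assumptions.

```yaml
# Hypothetical workflow: deploy a packaged web app to Azure App Service on every push to main.
# AZURE_CREDENTIALS is a repository secret holding service principal credentials.
name: deploy-web-app
on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Log in to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Deploy to App Service
        uses: azure/webapps-deploy@v2
        with:
          app-name: my-web-app   # placeholder app name
          package: ./app         # placeholder path to the packaged application
```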

Manage Azure Policy as Code in GitHub

As you progress on your cloud governance journey, there is an increasing need to shift from manually managing each policy in the Azure portal to something more manageable, collaborative, and repeatable at enterprise scale. To help you on this journey, we have made the integration between Azure Policy and GitHub even stronger. You can now export Azure policies to a GitHub repository in just a few clicks; all exported policies are stored as files in GitHub. You can then collaborate and track changes using version control and push policy file changes back to Azure Policy using the Manage Azure Policy action.
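
A minimal sketch of this flow is shown below, assuming the exported policy files live under a policies/ folder in the repository; the workflow trigger, the folder layout, and the paths input of the preview azure/manage-azure-policy action are assumptions for illustration.

```yaml
# Hypothetical workflow: push changed policy definition files back to Azure Policy.
name: sync-azure-policy
on:
  push:
    branches: [ main ]
    paths:
      - 'policies/**'

jobs:
  apply-policies:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Apply policy changes
        uses: azure/manage-azure-policy@v0
        with:
          paths: |
            policies/**
```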


Deploy ARM infrastructure as code


With the move to the cloud, many teams need to deploy their solutions repeatedly and ensure their infrastructure is in a reliable state. To meet these challenges, you can automate deployments by defining the infrastructure to be deployed as code with Azure Resource Manager (ARM) templates.

To address this, we are releasing a Deploy Azure Resource Manager Template action. With this action, you can automate your workflow to deploy ARM templates and manage Azure resources. The action can deploy ARM templates at any deployment scope: resource group, subscription, or management group. The outputs of the deployment can also be used by subsequent actions in the workflow for further processing.
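
The steps below sketch how the action can be used inside a job, assuming a resource group scope and a template checked into the repository; the resource group name, template paths, and the siteUrl output are placeholder assumptions.

```yaml
# Hypothetical job steps: deploy an ARM template to a resource group and reuse its outputs.
- uses: actions/checkout@v2

- uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}

- name: Deploy ARM template
  id: deploy
  uses: azure/arm-deploy@v1
  with:
    scope: resourcegroup                    # subscription and managementgroup scopes are also supported
    subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
    resourceGroupName: my-resource-group    # placeholder
    template: ./azuredeploy.json
    parameters: ./azuredeploy.parameters.json

# Outputs declared in the template become action outputs for later steps.
- run: echo "Deployed to ${{ steps.deploy.outputs.siteUrl }}"   # siteUrl is a hypothetical template output
```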


Build Azure Virtual Machine Images for immutable infrastructure


With the newly rolled out Build Azure Virtual Machine Image action, customizing, creating, and distributing virtual machine (VM) images just got easier. You can now use this action to create custom VM images that hold artifacts produced in your continuous delivery workflows, distribute them as a Shared Image Gallery version, a managed image, or a virtual hard disk (VHD), and get complete traceability between GitHub and the Azure portal.
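
A heavily hedged sketch follows; the input names reflect the preview azure/build-vm-image action as best understood, and they, along with the resource names and the gallery image version ID, should be treated as assumptions rather than a definitive reference.

```yaml
# Hypothetical job step: bake build artifacts into a custom image and publish it
# as a Shared Image Gallery version. All values are placeholders.
- name: Build custom VM image
  uses: azure/build-vm-image@v0
  with:
    resource-group-name: my-image-rg
    managed-identity: my-image-builder-identity
    location: westeurope
    source-image-type: platformImage
    source-image: Canonical:UbuntuServer:18.04-LTS:latest
    customizer-source: ${{ github.workspace }}/drop      # artifacts produced earlier in the workflow
    dist-type: SharedImageGallery
    dist-resource-id: ${{ secrets.IMAGE_VERSION_RESOURCE_ID }}
```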


Trace Kubernetes changes from Azure portal to GitHub commits


The Deploy to Kubernetes cluster action has been enhanced to enable a changelog view in the Azure portal, so you can trace any deployment made to an Azure Kubernetes Service (AKS) cluster, from the exact GitHub commits and issues that were deployed all the way to the specific GitHub workflow that deployed the changes.
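
The steps below sketch a typical AKS deployment with this action, combined with azure/aks-set-context for cluster credentials; the resource group, cluster name, namespace, manifest paths, and image name are placeholder assumptions.

```yaml
# Hypothetical job steps: deploy manifests to an AKS cluster; deployments made this way
# can be traced back to the originating commits and workflow run.
- uses: actions/checkout@v2

- name: Set AKS context
  uses: azure/aks-set-context@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}
    resource-group: my-aks-rg        # placeholder
    cluster-name: my-aks-cluster     # placeholder

- name: Deploy to AKS
  uses: azure/k8s-deploy@v1
  with:
    namespace: production
    manifests: |
      manifests/deployment.yaml
      manifests/service.yaml
    images: myregistry.azurecr.io/myapp:${{ github.sha }}
```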


Scan container images as part of Pull Request workflows


You can now add the container scanning action to your workflows to add extra checks that secure the Docker images created as part of continuous integration (CI) or pull request (PR) workflows. This helps developers scan for common vulnerabilities in their Docker images and gain confidence before pushing to a container registry or deploying to a containerized web app or a Kubernetes cluster.
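
As a sketch, the PR workflow steps below build an image and scan it before it is pushed anywhere; the image name and Dockerfile location are placeholder assumptions.

```yaml
# Hypothetical job steps: build a Docker image in a pull request workflow and scan it
# for known vulnerabilities before pushing it to a registry.
- uses: actions/checkout@v2

- name: Build image
  run: docker build -t myapp:${{ github.sha }} .

- name: Scan image
  uses: azure/container-scan@v0
  with:
    image-name: myapp:${{ github.sha }}
```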


Source: microsoft.com