Thursday 31 October 2019

Disaster recovery for SAP HANA Systems on Azure

This blog covers the design, technology, and recommendations for setting up disaster recovery (DR) for an enterprise customer, to achieve a best-in-class recovery point objective (RPO) and recovery time objective (RTO) with an SAP S/4HANA landscape.

Microsoft Azure provides a trusted path to enterprise-ready innovation with SAP solutions in the cloud. Mission-critical applications such as SAP run reliably on Azure, which is an enterprise-proven platform offering hyperscale, agility, and cost savings for running a customer's SAP landscape.

System availability and disaster recovery are crucial for customers who run mission-critical SAP applications on Azure.

RTO and RPO are two key metrics that organizations consider when developing a disaster recovery plan that can maintain business continuity after an unexpected event. Recovery point objective refers to the amount of data at risk, expressed as a period of time, whereas recovery time objective refers to the maximum tolerable time that a system can be down after a disaster occurs.

The diagram below shows RPO and RTO on a timeline in a business-as-usual (BAU) scenario.


Orica is the world's largest provider of commercial explosives and innovative blasting systems to the mining, quarrying, oil and gas, and construction markets. They are also a leading supplier of sodium cyanide for gold extraction and a specialist provider of ground support services in mining and tunneling.

As part of Orica's digital transformation journey, Cognizant has been chosen as a trusted technology advisor and managed cloud platform provider to build highly available, scalable, disaster-proof IT platforms for SAP S/4HANA and other SAP applications on Microsoft Azure.

This blog describes how Cognizant took up the challenge of building a disaster recovery solution for Orica as part of a digital transformation program with SAP S/4HANA as the digital core. It captures the SAP on Azure architectural design decisions made by Cognizant and Orica over the last two years, which reduced RTO to 4 hours by deploying the latest technology features available on Azure, coupled with automation. RPO was also reduced to less than 5 minutes by using database-specific technologies such as SAP HANA system replication together with Azure Site Recovery.

Design principles for disaster recovery systems


◈ Selection of DR region based on SAP-certified VMs for SAP HANA – It is important to verify the availability of SAP-certified VM types in the DR region.

◈ RPO and RTO values – Businesses need to lay out clear expectations for RPO and RTO values, which greatly affect the disaster recovery architecture as well as the tools and automation required to implement it.

◈ Cost of Implementing DR, Maintenance and DR Drills
     ◈ Criticality of systems – A trade-off can be made between the cost of DR implementation and business requirements. While the most critical systems can use state-of-the-art DR architecture, medium and less critical systems may tolerate higher RPO/RTO values.
     ◈ On-demand resizing of DR instances – It is preferable to use small VM sizes for DR instances and upsize them during an active DR scenario. It is also possible to reserve the required VM capacity in the DR region so that there is no waiting time to upscale the VMs: Microsoft offers Reserved Instances, with which one can reserve virtual machines in advance and save up to 80 percent. Depending on the required RTO value, a trade-off needs to be worked out between running smaller VMs and buying Azure Reserved Instances (see the CLI sketch after this list).
     ◈ Additional cloud infrastructure costs and effort are needed to set up an environment for non-disruptive DR tests. Non-disruptive DR testing refers to executing DR tests without failing over the actual productive systems to the DR systems, thereby avoiding any business downtime. It involves additional costs for temporary infrastructure that sits in a completely isolated VNet during the DR tests.
     ◈ Certain components in the SAP system architecture, such as a clustered network file system (NFS), are not recommended for replication using Azure Site Recovery; hence additional tools with license costs, such as SUSE geo cluster or SIOS DataKeeper, are needed for NFS-layer DR.

◈ Selection of specific technology and tools – While Azure offers Azure Site Recovery (ASR), which replicates virtual machines across regions, this technology is used for the non-database components or layers of the system, while database-specific methods such as SAP HANA system replication (HSR) are used at the database layer to ensure database consistency.
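As an illustration of the points above, the Azure CLI can be used both to confirm which SAP-certified VM sizes are offered in a candidate DR region and to upsize a small "pilot light" DR VM during an actual DR event. This is only a minimal sketch; the resource group, VM name, region, and target size are placeholders, and the certified size for your workload should be confirmed against the SAP notes for Azure.

# Check which M-series VM sizes are offered in the candidate DR region
az vm list-skus --location australiasoutheast \
  --query "[?resourceType=='virtualMachines' && contains(name, 'Standard_M')].name" \
  --output tsv

# During a DR event, upsize the small DR database VM to production capacity
az vm deallocate --resource-group sap-dr-rg --name sap-hana-dr-vm
az vm resize --resource-group sap-dr-rg --name sap-hana-dr-vm --size Standard_M128s
az vm start --resource-group sap-dr-rg --name sap-hana-dr-vm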

Disaster recovery architecture for SAP systems running on SAP HANA Database


At a very high level, the diagram below depicts the architecture of SAP systems based on SAP HANA and shows which systems remain available in case of local or regional failures.


The diagram below gives the next level of detail on the components of SAP HANA systems and the corresponding technology used to achieve disaster recovery.


Database layer


At the database layer, a database-specific replication method such as SAP HANA system replication (HSR) is used. Using a database-specific replication method allows better control over RPO values by configuring various replication-specific parameters and guarantees database consistency at the DR site. Alternative methods of achieving disaster recovery at the database (DB) layer, such as backup and restore/recovery or storage-based replication, are available; however, they result in higher RTO values.

RPO values for the SAP HANA database depend on factors including the replication methodology (synchronous for high availability or asynchronous for DR replication), backup frequency, backup data retention policies, and savepoint and replication configuration parameters.

SAP Solution Manager can be used to monitor the replication status, such that an e-mail alert is triggered if the replication is impacted.


Even though multitarget replication is available as of SAP HANA 2.0 SPS 03, revision 33, at the time of writing this article the scenario has not been tested in conjunction with a high-availability cluster. With a successful implementation of multitarget replication, the DR maintenance process becomes simpler and does not need manual intervention after failover scenarios at the primary site.

Application layer – (A)SCS, APP, iSCSI


Azure Site Recovery is used for replication of the non-database components of the SAP system architecture, including the (A)SCS, application servers, and Linux cluster fencing agents such as iSCSI target servers (with the exception of the NFS layer, which is discussed below). Azure Site Recovery replicates workloads running on virtual machines (VMs) from a primary site to a secondary location at the storage layer; it does not require the VMs to be in a running state, and the VMs can be started during actual disaster scenarios or DR drills.

There are two options to set up a Pacemaker cluster in Azure. You can either use a fencing agent, which takes care of restarting a failed node via the Azure APIs, or you can use a storage-based death (SBD) device. The SBD device requires at least one additional virtual machine that acts as an iSCSI target server and provides an SBD device. These iSCSI target servers can, however, be shared with other Pacemaker clusters. The advantage of using an SBD device is a faster failover time.

The diagram below describes disaster recovery at the application layer; the (A)SCS, application servers, and iSCSI servers use the same architecture to replicate data to the DR region using Azure Site Recovery.


NFS layer – The NFS layer at the primary site uses a cluster with distributed replicated block device (DRBD) for high-availability replication. We evaluated multiple technologies for implementing DR at the NFS layer. Because DRBD provides high availability through disk replication, its configuration is not compatible with Azure Site Recovery replication; solutions such as SUSE geo cluster, SIOS DataKeeper, or simple VM snapshot backup and restore are available instead for NFS-layer DR. Where DRBD is enabled, the most cost-effective way to achieve DR for the NFS layer is simple backup/restore using VM snapshot backups.

Steps for invoking DR or a DR drill


Microsoft Azure Site Recovery helps replicate data to the DR region quickly. In a DR implementation where Site Recovery is not used, it would take more than 24 hours to recover about five systems, resulting in an RTO of 24 hours or more. However, when Site Recovery is used at the application layer and a database-specific replication method is leveraged at the DB layer, it is possible to reduce the RTO to well below four hours for the same number of systems. The diagram below shows a timeline view of the steps to activate disaster recovery within a four-hour RTO.

Steps for invoking DR or a DR drill (a CLI sketch of a few of these steps follows the list):

◈ Update DNS so that VMs use the new IP addresses
◈ Bring up iSCSI (single VM) from ASR-replicated data
◈ Recover databases and resize the VMs to the required capacity
◈ Manually provision NFS (single VM) using snapshot backups
◈ Build application-layer VMs from ASR-replicated data
◈ Perform cluster changes
◈ Bring up applications
◈ Validate applications
◈ Release systems
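A rough Azure CLI sketch of a few of these steps is shown below. The DNS zone, resource groups, VM names, and sizes are hypothetical, and it assumes name resolution is handled by an Azure DNS zone; in practice these steps are typically wrapped in automation scripts or runbooks.

# Point the SAP database hostname at its new IP address in the DR region
az network dns record-set a add-record --resource-group dns-rg \
  --zone-name sap.contoso.com --record-set-name sapdbprd --ipv4-address 10.20.0.10

# Resize the replicated HANA VM to its production capacity
az vm resize --resource-group sap-dr-rg --name sapdbprd-dr --size Standard_M128s

# Start the application-layer VMs created from ASR-replicated data
az vm start --ids $(az vm list --resource-group sap-dr-rg --query "[].id" --output tsv)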


Recommendations on non-disruptive DR drills


Some businesses cannot afford downtime during DR drills. Non-disruptive DR drills are suggested where it is not possible to arrange downtime to perform DR. A non-disruptive DR procedure can be achieved by creating an additional DR VNet, isolating it from the network, and carrying out the DR drill with the steps below.

As a prerequisite, build SAP HANA database servers in the isolated VNet and configure SAP HANA system replication.

1. Disconnect the ExpressRoute circuit to the DR region; as ExpressRoute gets disconnected, it simulates abrupt unavailability of the systems in the primary region

2. As a prerequisite, the backup domain controller is required to be active and in replication mode with the primary domain controller until the time of the ExpressRoute disconnection

3. A DNS server needs to be configured in the isolated DR VNet (the additional DR VNet created for the non-disruptive DR drill) and kept in standby mode until the time of the ExpressRoute disconnection

4. Establish a point-to-site VPN tunnel for administrators and key users for the DR test

5. Manually update the NSGs so that the DR VNet is isolated from the entire network (see the sketch after these steps)

6. Bring up applications using the DR enablement procedure in the DR region

7. Once the test is concluded, reconfigure the NSGs, ExpressRoute, and DR replication
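The NSG isolation in step 5 can be scripted, for example with the Azure CLI. The sketch below assumes a hypothetical NSG named drtest-nsg attached to the isolated DR VNet's subnets and a point-to-site address pool of 172.16.201.0/24; adjust names, priorities, and ranges to your environment.

# Allow traffic within the isolated DR test VNet
az network nsg rule create --resource-group sap-drtest-rg --nsg-name drtest-nsg \
  --name allow-vnet-internal --priority 100 --direction Inbound --access Allow \
  --protocol '*' --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes '*' --destination-port-ranges '*'

# Allow the point-to-site VPN pool used by administrators and key users
az network nsg rule create --resource-group sap-drtest-rg --nsg-name drtest-nsg \
  --name allow-p2s-admins --priority 110 --direction Inbound --access Allow \
  --protocol '*' --source-address-prefixes 172.16.201.0/24 \
  --destination-address-prefixes '*' --destination-port-ranges '*'

# Deny everything else so the DR VNet is isolated from the rest of the network
az network nsg rule create --resource-group sap-drtest-rg --nsg-name drtest-nsg \
  --name deny-all-other-inbound --priority 4000 --direction Inbound --access Deny \
  --protocol '*' --source-address-prefixes '*' \
  --destination-address-prefixes '*' --destination-port-ranges '*'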

Involvement of relevant infrastructure and SAP subject matter experts is highly recommended during DR tests.

Note that the non-disruptive DR procedure needs to be executed with extreme caution, with prior validation and testing on non-production systems. Database VM capacity at the DR region should be decided with a trade-off between reserving full capacity and Microsoft's timeline to allocate the capacity required to resize the database VMs.

Thursday 24 October 2019

Updates to geospatial features in Azure Stream Analytics – Cloud and IoT edge

Azure Stream Analytics is a fully managed PaaS service that helps you run real-time analytics and complex event processing logic on telemetry from devices and applications. Numerous built-in functions available in Stream Analytics help users build real-time applications using a simple SQL language with ease. By using these capabilities, customers can quickly realize powerful applications for scenarios such as fleet monitoring, connected cars, mobile asset tracking, geofence monitoring, ridesharing, and more.


Today, we are excited to announce several enhancements to geospatial features. These features will help customers manage a much larger set of mobile assets and vehicle fleet easily, accurately, and more contextually than previously possible. These capabilities are available both in the cloud and on Azure IoT edge.

Here is a quick run-down of the new capabilities:

Geospatial indexing


Previously, tracking ‘n’ assets in streaming data across ‘m’ geofence reference data points translated, in the geospatial context, into a cross join of every reference data entry with every streaming event, resulting in an O(n*m) operation. This presented scale issues in scenarios where customers need to manage thousands of assets across hundreds of sites.

To address this limitation, Stream Analytics now supports indexing geospatial data in relevant queries. When indexed, geospatial data is joined with streaming events efficiently: instead of generating a cross join of every streaming event with reference data, an index is created over the reference data of geospatial objects and every lookup is optimized using the index. This reduces reference data lookup to O(n log m), thereby offering support for scale that is orders of magnitude higher than what was previously possible.

Support for WKT format


GeoJSON is an open standard format, based on JavaScript Object Notation, designed for representing simple geographical features along with their non-spatial attributes. Previously, Azure Stream Analytics did not support all the types defined in the GeoJSON specification. As a result, users could not successfully export some of their geospatial objects and process them in Stream Analytics.

To remedy this gap, we are adding full support for WKT geospatial format in Stream Analytics. This format is natively supported by Microsoft SQL Server and hence can be readily used in reference data to represent specific geospatial entities or attributes. This will enable users to easily export their data into WKT and add each entry as nvarchar(max).

Geometry based calculations


Previously, Stream Analytics implemented geographical calculations without the possibility of geometric projections. Users would ingest projected coordinates and expect calculations to follow geometric projections; unfortunately, in many cases the output would not match their expectation because calculations were based on geography and ignored projections.

To help users overcome this limitation and to allow full-fidelity projected calculations, we are moving away from geography-based computation and towards geometric calculations. This means that developers can now input their projected geo coordinates using the same functions as before, but the output will preserve their projection properties. That said, the ST_DISTANCE function will continue to be the only function that operates over geography.

Tuesday 22 October 2019

Azure Search – New Storage Optimized service tiers available in preview


Azure Search is an AI-powered cloud search service for modern mobile and web app development. Azure Search is the only cloud search service with built-in artificial intelligence (AI) capabilities that enrich all types of information to easily identify and explore relevant content at scale. It uses the same integrated Microsoft natural language stack as Bing and Office, plus prebuilt AI APIs across vision, language, and speech. With Azure Search, you spend more time innovating on your websites and applications, and less time maintaining a complex search solution.

Today we are announcing the preview of two new service tiers for Storage Optimized workloads in Azure Search. These L-Series tiers offer significantly more storage at a reduced cost per terabyte when compared to the Standard tiers, ideal for solutions with a large amount of index data and lower query volume throughout the day, such as internal applications searching over large file repositories, archival scenarios when you have business data going back many years, or e-discovery applications. 


Searching over all your content


From finding a product on a retail site to looking up an account within a business application, search services power a wide range of solutions with differing needs. While some scenarios, like product catalogs, need to search over a relatively small amount of information (100 MB to 1 GB) quickly, for others it's a priority to search over large amounts of information in order to properly research, perform business processes, and make decisions. With information growing at the rate of 2.5 quintillion bytes of new data per day, this is becoming a much more common, and costly, scenario, especially for businesses.

What’s new with the L-series tier


The new L-Series service tiers support the same programmatic API, command-line interfaces, and portal experience as the Basic and Standard tiers of Azure Search. Internally, Azure Search provisions compute and storage resources for you based on how you've scaled your service. Compared to the S-Series, each L-Series search unit has significantly more storage I/O bandwidth and memory, allowing each unit's corresponding compute resources to address more data. The L-Series is designed to support much larger indexes overall (up to 24 TB total on a fully scaled-out L2). The tier comparison below summarizes the limits, and a CLI example of provisioning a Storage Optimized service follows it.

◈ Standard S1 – Storage: 25 GB/partition (max 300 GB documents per service); max indexes per service: 50; scale-out limits: up to 36 units per service (max 12 partitions; max 12 replicas)
◈ Standard S2 – Storage: 100 GB/partition (max 1.2 TB documents per service); max indexes per service: 200; scale-out limits: up to 36 units per service (max 12 partitions; max 12 replicas)
◈ Standard S3 – Storage: 200 GB/partition (max 2.4 TB documents per service); max indexes per service: 200, or 1,000 per partition in high-density mode; scale-out limits: up to 36 units per service (max 12 partitions; max 12 replicas), up to 12 replicas in high-density mode
◈ Storage Optimized L1 – Storage: 1 TB/partition (max 12 TB documents per service); max indexes per service: 10; scale-out limits: up to 36 units per service (max 12 partitions; max 12 replicas)
◈ Storage Optimized L2 – Storage: 2 TB/partition (max 24 TB documents per service); max indexes per service: 10; scale-out limits: up to 36 units per service (max 12 partitions; max 12 replicas)
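For reference, a Storage Optimized service can be provisioned with the Azure CLI in the same way as the Standard tiers. The sketch below is an assumption-based example; the service name and resource group are placeholders, and the SKU identifier for the L1 tier is assumed to be storage_optimized_l1.

az search service create \
  --name my-large-search-service \
  --resource-group my-resource-group \
  --location westus2 \
  --sku storage_optimized_l1 \
  --partition-count 1 \
  --replica-count 1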

Customer success and common scenarios


We have been working closely with Capax Global LLC, A Hitachi Group Company to create a service tier that works for one of their customers. Capax Global combines well-established patterns and practices with emerging technologies while leveraging a wide range of industry and commercial software development experience. In our discussions with them, we found that a storage optimized tier would be a good fit for their application since it offers the same search functionality at a significantly lower price than the standard tier. 

“The new Azure Search Storage Optimized SKU provides a cost-effective solution for customers with a tremendous amount of content. With it, we’re now able to enrich the custom solutions we build for our customers with a cloud hosted document-based search that meets the search demands of millions of documents while continuing to lead with Azure. This new SKU has further strengthened the array of services we have to utilize to help our customers solve their business problems through technology.”

– Mitch Prince, VP Cloud Productivity + Enablement at Capax Global LLC, A Hitachi Group Company

The Storage Optimized service tiers are also a great fit for applications that incorporate the new cognitive search capabilities in Azure Search, where you can leverage AI-powered components to analyze and annotate large volumes of content, such as PDFs, office documents, and rows of structured data. These data stores can result in many terabytes of indexable data, which becomes very costly to store in a query latency-optimized service tier like the S3. Cognitive search combined with the L-Series tiers of Azure Search provide a full-text query solution capable of storing terabytes of data and returning results in seconds.

Regional availability


For the initial public preview, the Storage Optimized service tiers will be available in the following regions:

◈ West US 2
◈ South Central US
◈ North Central US
◈ West Europe
◈ UK South
◈ Australia East

Sunday 20 October 2019

Introducing cost-effective incremental snapshots of Azure managed disks in preview

The preview of incremental snapshots of Azure managed disks is now available. Incremental snapshots are a cost-effective point-in-time backup of managed disks. Unlike current snapshots, which are billed for the full size, incremental snapshots are billed only for the delta changes to disks since the last snapshot. They are always stored on the most cost-effective storage, i.e., standard HDD storage, irrespective of the storage type of the parent disks. Additionally, for increased reliability, they are stored on zone-redundant storage (ZRS) by default in regions that support ZRS. They cannot be stored on premium storage. If you are using current snapshots on premium storage to scale up virtual machine deployments, we recommend using custom images on standard storage in the Shared Image Gallery instead; it will help you achieve greater scale at lower cost.

Incremental snapshots provide a differential capability that is unique to Azure managed disks. It enables customers and independent software vendors (ISVs) to build backup and disaster recovery solutions for managed disks. It allows you to get the changes between two snapshots of the same disk, copying only the changed data between two snapshots across regions and reducing the time and cost of backup and disaster recovery. For example, you can download the first incremental snapshot as a base blob in another region. For the subsequent incremental snapshots, you can copy only the changes since the last snapshot to the base blob. After copying the changes, you can take snapshots of the base blob that represent your point-in-time backup of the disk in another region. You can restore your disk either from the base blob or from a snapshot of the base blob in another region.
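As a rough sketch of the first step, the CLI below grants temporary read access (SAS) to an incremental snapshot and copies its contents to a blob (the base blob) in a storage account in another region. The account, container, and blob names are placeholders. Copying only the changed ranges for subsequent snapshots requires the Storage SDK differential APIs mentioned at the end of this post and is not shown here.

# Grant temporary read access to the incremental snapshot and capture the SAS URI
sas=$(az snapshot grant-access --resource-group yourResourceGroupName \
  --name yourSnapshotName --duration-in-seconds 3600 \
  --query accessSas --output tsv)

# Copy the snapshot contents to a blob in a storage account in the target region
az storage blob copy start --account-name yourDrStorageAccount \
  --destination-container disk-backups \
  --destination-blob yourDiskName-base.vhd \
  --source-uri "$sas"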


Incremental snapshots inherit all the compelling capabilities of current snapshots. They have a lifetime independent of their parent managed disks, making them available even when the parent managed disk is deleted. Moreover, they are accessible instantaneously, meaning you can read the underlying VHD of incremental snapshots or restore disks from them as soon as they are created.

You can create incremental snapshots by setting the new incremental property to true.

az snapshot create \
-g yourResourceGroupName \
-n yourSnapshotName \
-l westcentralus \
--source subscriptions/yourSubscriptionId/resourceGroups/yourResourceGroupName/providers/Microsoft.Compute/disks/yourDiskName \
--incremental


You can identify incremental snapshots of the same disk by using the SourceResourceId and SourceUniqueId properties of snapshots. SourceResourceId is the Azure Resource Manager (ARM) resource ID of the parent disk. SourceUniqueId is the value inherited from the UniqueId property of the disk. If you delete a disk and then create a new disk with the same name, the value of the UniqueId property changes.

az snapshot show \
-g yourResourceGroupName \
-n yourSnapshotName \
--query [creationData.sourceResourceId] -o tsv

az snapshot show \
-g yourResourceGroupName \
-n yourSnapshotName \
--query [creationData.sourceUniqueId] -o tsv

Availability and pricing


You can now create incremental snapshots and generate a SAS URI for reading the underlying data in the West Central US region via Azure Compute REST API version 2019-03-01. You can also use the latest Azure PowerShell SDK, .NET SDK, and CLI to perform these operations. The differential capability is supported via the prerelease versions of the .NET, Python, and C++ Storage SDKs only.

Saturday 19 October 2019

Announcing the general availability of larger, more powerful standard file shares for Azure Files

Better scale and more power for IT professionals and developers!

We're excited to announce the general availability of larger, more powerful standard file shares for Azure Files. Azure Files is a secure, fully managed public cloud file storage service with a full range of data redundancy options and hybrid capabilities using Azure File Sync.

Here is a quick look at some of the improvements in the Azure Files standard file shares' capacity and performance.


With the release of large file shares, a single standard file share in a general purpose account can now support up to 100 TiB capacity, 10K IOPS, and 300 MiB/s throughput. All premium file shares in Azure FileStorage accounts already support large file shares by default. If your workload is latency-sensitive and requires a higher level of performance, you should consider the Azure Files premium tier.

What’s new?


Since the preview of large file shares, we have been working on making the Azure Files experience even better. Large file shares now have:

◈ The ability to upgrade existing general purpose storage accounts and existing file shares.

◈ The ability to opt in for larger file shares at the storage account level instead of the subscription level.

◈ Expanded regional coverage.

◈ Support for both locally redundant and zone-redundant storage.

◈ Improvements in the performance and scale of Azure File Sync to work better with larger file shares. Visit Azure File Sync scalability targets to keep informed of the latest scale.

Pricing and availability


The increased capacity and scale of standard file shares on your general purpose accounts come at zero additional cost.

Currently, standard large file shares support is available for locally redundant and zone-redundant storage in 13 regions worldwide. We are quickly expanding coverage to all Azure regions. Stay up to date on region availability by visiting the Azure Files documentation.

Getting started


You no longer need to register your subscription for the large file shares feature.

New storage account

Create a new general purpose storage account in one of the supported regions with a supported redundancy option. While creating the storage account, go to the Advanced tab and enable the Large file shares feature. See the detailed steps on how to enable large file shares support on a new storage account. All new shares created under this new account will, by default, have 100 TiB capacity with increased scale.
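The same can be done with the Azure CLI. This is a minimal sketch under the assumption that your CLI version includes the large file shares flag; the account, resource group, and share names are placeholders.

# Create a general purpose v2 account with large file shares enabled
az storage account create --name mylargefilesaccount \
  --resource-group myresourcegroup --location westus2 \
  --kind StorageV2 --sku Standard_LRS --enable-large-file-share

# Create a file share with a 100 TiB quota (the quota is specified in GiB)
az storage share-rm create --storage-account mylargefilesaccount \
  --name myshare --quota 102400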


Existing storage account


On an existing general purpose storage account that resides in one of the supported regions, go to Configuration, enable the Large file shares feature, and hit Save. You can then update the quota of existing shares under this upgraded account to more than 5 TiB. All new shares created under this upgraded account will, by default, have 100 TiB capacity with increased scale.
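A CLI sketch of the same upgrade follows, assuming the large file shares flag is available on az storage account update in your CLI version; the account and share names are placeholders.

# Enable large file shares on an existing general purpose account
az storage account update --name myexistingaccount \
  --resource-group myresourcegroup --enable-large-file-share

# Raise the quota of an existing share beyond 5 TiB (value in GiB)
az storage share-rm update --storage-account myexistingaccount \
  --name myshare --quota 10240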


Opting your storage accounts into the large file shares feature does not cause any disruption to your existing workloads, including Azure File Sync. Once opted in, you cannot disable the large file shares feature on your account.

Friday 18 October 2019

Microsoft 70-462 Certification: Launch Your Career as Database Experts

Cloud computing is prevalent nowadays because it makes many tasks more effective and less time-consuming. Cloud computing is a service that makes it possible to store, access, and share data over the Internet.
Microsoft SQL is one such platform offered by Microsoft. Since many companies now use Microsoft SQL Server, there is increased demand for certified professionals who can work with it. Microsoft offers SQL-related certifications to anyone interested in validating their skills.

To earn any Microsoft certification, you need to pass the corresponding exam. If you work with SQL and understand how to use it, then this certification exam is for you. Passing this exam and earning the associated credential can change your career in many ways.

70-462: Administering Microsoft SQL Server 2012/2014 Databases Exam Overview

  • Full Name: Administering Microsoft SQL Server 2012/2014 Databases
  • Code: 70-462
  • Duration: 120 minutes
  • Exam Questions: 40-60
  • Passing score: 700 out of 1000
  • Exam Price: $165 (USD)

This exam will test you on the following topics:

  • Install and configure (20–25%)
  • Maintain instances and databases (15–20%)
  • Optimize and troubleshoot (15–20%)
  • Manage data (20–25%)
  • Implement security (15–20%)
  • Implement high availability (5–10%)

Preparing for 70-462 Certification Exam

When it comes to any exam, preparation is mandatory. And to become well-prepared, you need to use the best resources you can get.

Consider the following tips to prepare for the 70-462 exam:

  • The Microsoft 70-462 exam has a lot of benefits, but to enjoy them, you need to pass it first. Passing this exam is not simple, but preparing for the questions correctly will take you far. Just like any other exam, you need to dedicate a reasonable amount of time to it; otherwise, things will become quite tricky.
  • There are many methods you can use when preparing for the exam.
  • One option is AnalyticsExam.com. This online platform gives you access to plenty of study materials and offers study bundles for the Microsoft 70-462 exam, including practice questions. Whenever you are preparing for such a test, you need to have practice questions at hand, because without them you will not be ready to pass. Microsoft does not reuse exam questions, so there is no point in memorizing them; answer them for practice only, and that will prepare you well.
  • The Microsoft 70-462 exam covers several different topics, which makes preparation easier: with just six topics to cover, you can easily create a schedule and divide your time accordingly so that you learn all of the topics properly. Visit the official Microsoft website and review all the required information; this should be your first step in understanding what the exam is and how to prepare for it.

Career Opportunities with Microsoft 70-462 Certification

  • Passing the Microsoft 70-462 exam has some distinct career-oriented advantages. With the help of this credential, you can do a better job as a database expert. However, before you study and enroll for the exam, you need to know that Microsoft 70-462 is not designed for everyone. It is targeted at particular individuals, so you should only take this exam if you believe it will make a difference.
  • If you are a person who enjoys SQL and databases, then this exam is for you. However, make sure that you have some hands-on experience with the technologies before you attempt the 70-462 exam. If you pass, it can make a massive difference to your career, as certified individuals have better luck landing jobs.
  • You can also command better salaries; in a role such as database administrator, you can earn as much as $109K per year.

Benefits of Having Microsoft SQL Certification

Validates Your Skills:

  • The primary purpose of a certification is to provide proof of the candidate’s abilities. If you go to an interview and tell the interviewer that you are an expert in SQL, there is a high chance they will not believe you if you have no credentials to prove your skills. The Microsoft SQL certification can help you avoid such problems.

Makes You Stand Out Among Other Interviewees:

  • Most people nowadays try to acquire basic qualifications before applying for a job. When many candidates qualify for a position, interviewers look for something more, and that something more is usually an extra qualification or credential. Having a Microsoft SQL certification is a good way to capture your interviewer’s attention.

Way to More Jobs:

  • There are many jobs and senior positions that list the Microsoft SQL certification as a requirement, which means you can only apply for them if you are Microsoft SQL certified.

Eligible for a Higher Pay:

  • Since getting certified allows you to work in higher positions, you also get the higher salary that comes with the position. And earning an extra qualification can justify a pay increase.

Updates Your Knowledge:

  • IT is a fast-moving field, with new developments every day. Certifications, too, are updated to keep up with these changes. Hence, by pursuing this credential, you will be updating your knowledge and gaining the new skills required to work in a changing environment.

Dedication:

  • Microsoft certifications are known to be tough and hard to crack, so the fact that you hold such a qualification shows how dedicated and hard-working you are.

Conclusion

The Microsoft 70-462 exam is one of the best exams that Microsoft has to offer; passing it can act as a launchpad for your career, and that is why it is recommended that you take it. Of course, it is not going to be a walk in the park, and you will have to work very hard to cross the line. However, if you are among the individuals who can make that jump, you will get to enjoy a lot of benefits.

Earning a certification is not at all a fearful task if you work diligently and use the best resources to prepare for it. All that hard work will be worth it when you begin to experience the benefits of being certified. But you must make sure that you are working toward a credential that is relevant to your goals, because if not, your new skills will be of little use to you.
Companies are always looking for candidates who can build something innovative and raise income, and with this exam you can give them that confidence. The world of tech is changing continuously, and that is why organizations need specialists who keep working on their skills and keep growing.

If you want to keep your skills up to date, you should go for the Microsoft certification exams, as they offer the quickest way to do so.

Thursday 17 October 2019

Microsoft Azure AI hackathon’s winning projects


We are excited to share the winners of the first Microsoft Azure AI Hackathon, hosted on Devpost. Developers of all backgrounds and skill levels were welcome to join and submit any form of AI project, whether using Azure AI to enhance existing apps with pre-trained machine learning (ML) models or by building ML models from scratch. Over 900 participants joined in, and 69 projects were submitted. A big thank you to all who participated and many congratulations to the winners.

First place—Trashé



Submitted by Nathan Glover and Stephen Mott, Trashé is a SmartBin that aims to help people make more informed recycling decisions. What I enjoyed most was watching the full demo of Trashé in action! It’s powerful when you see not just the intelligence, but the end-to-end scenario of how it can be applied in a real-world environment.

This team used many Azure services to connect the hardware, intelligence, and presentation layers; you can see this is a well-researched architecture that is reusable in multiple scenarios. Azure Custom Vision was a great choice in this case, enabling the team to create a well-performing model with very little training data. The more we recycle, the better the model will get. It was great to see setup instructions included for building unique versions of Trashé, so users can contribute to helping the environment by recycling correctly within their local communities; this community approach is incredibly scalable.

Second place—AfriFarm



Niza Siwale’s app recognizes crop diseases from images using the Azure Machine Learning service and publicly publishes the findings so anyone can track disease outbreaks. This also provides real-time updates for government agencies to act quickly and provide support to affected communities. As Niza notes, this project has an incredible potential reach of up to 200 million farmers in Africa whose livelihoods depend on farming.

By creating a simple Android application where users can take photos of crops for analysis, each farmer gets information when they need it; users can also contribute their own findings back to the community around them, keeping everyone more informed and connected. Using the popular Keras framework along with the Azure Machine Learning service, this project built and deployed a good plant disease recognition model that can be called from the application. Any future work or improved versions of the model can be monitored and deployed in a development cycle, so the progression of the model can be tracked over time.

Third place—Water Level Anomaly Detector



Roy Kincaid’s project identifies drastic changes in water levels using an ultrasonic sensor, which could be useful for detecting potential floods and natural disasters. This information can then be used to provide adequate warning for people to prepare for major changes in their environment. The Water Level Anomaly Detector could also be beneficial for long-term analysis of the effects of climate change. This is another great example of an end-to-end intelligent solution.

Roy is well skilled in the hardware and connectivity parts of this project, so it was brilliant to see the easy integration of the Anomaly Detector API from Azure Cognitive Services and to hear how quickly Roy could start using the service. Many IoT scenarios have a similar need for detecting rates and levels, and I hope to see Roy's hinted-at coffee level detector in the future (sign me up for one of those!). In a world where we all want to do our part to help the environment, it’s a great example of how monitoring means we can measure changes over time and be alerted when issues arise.

Tuesday 15 October 2019

Azure Analysis Services web designer adds new DAX query viewer

We released the Azure Analysis Services web designer. This new browser-based experience allows developers to start creating and managing Azure Analysis Services (AAS) semantic models quickly and easily. While SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) are still the primary tools for development, this new experience is intended to make modeling fast and easy. It is great for getting started on a new model or to do things such as adding a new measure to an existing model.

Today we are announcing new functionality that allows you to generate, view, and edit your DAX queries. This provides a great way to learn DAX while testing the data in your models. DAX, or Data Analysis Expressions, is a formula language used to create custom calculations in Analysis Services. DAX formulas include functions, operators, and values to perform advanced calculations on data in tables and columns.

To get started, open the web designer from the Azure portal.


Once inside the designer, select the model that you wish to query.


This opens up the query designer where you can drag and drop fields from the right to graphically generate and then run a query against your model.


Now switch the view from designer to query.


This will bring up the new query editor with the DAX query that was generated from the query that was graphically created in the previous steps.


The query text can be edited and rerun to see new results.

Sunday 13 October 2019

Gartner names Microsoft a leader in 2019 Gartner Magic Quadrant for Enterprise iPaaS

Microsoft accelerates application development with Azure Integration Services


Personal computers revolutionized the way work was done. New software unlocked unprecedented levels of productivity, and for a time, business flourished. As the personal computer exploded in popularity, more and more software was created. For the individual, this was a golden age. For the enterprise, this was also a golden age ... with an asterisk.


Just as it is when you add more people to an organization, so it is with software. Making software work cooperatively with other, unrelated software turned out to be a very tricky problem to solve. The more software that was added, the more overhead was introduced. This was an unfortunate consequence: the cost of doing business increased, meaningful results decreased, and organizational productivity plummeted.

Large businesses and enterprises were locked in this pattern until a new category of software was created: integration software. For many years, on-premises integration tools, such as Microsoft BizTalk Server, helped mitigate the issues created by the rapid proliferation and adoption of new software.

And then one day, everything changed. The cloud was born, and with it, the need for new ways to connect everything together.

The adoption of cloud-native integration platforms to support business workflows


As before, a new category of software has come into existence to help solve the challenges organizations are struggling with. iPaaS, or enterprise integration platform as a service, tools are key to a successful integration strategy and, in turn, a successful application development strategy.

Microsoft is once again named a Leader in the 2019 Gartner Magic Quadrant for Enterprise Integration Platform as a Service (iPaaS).


Microsoft is powering enterprises across industry verticals in adopting comprehensive app innovation and modernization strategies, with integration as the backbone of these efforts. In fact, most modern application design makes use of integration capabilities without being cognizant of doing so. Application development and application integration are becoming more and more intertwined, making it almost impossible to figure out where one starts and the other one ends.

We are continuously investing in our integration offerings, including how APIs play a role in the modern enterprise, how business units increasingly need more and more flexible rules and logic to accommodate changing market demands, and more.

Integration is the surface upon which strong application infrastructure stands


Microsoft goes way beyond just integration, and instead focuses on helping you make better applications. Companies like Finastra, Evoqua, and Vipps are using a wide variety of Azure services, such as Azure Kubernetes Service, Azure API Management, Azure Logic Apps, Azure Functions, and more to create applications faster, easier, and better connected with the rest of their application ecosystem.

“Our platform intersects a great deal of data and technology,” says Félix Grévy, Global Head of Product Management at FusionFabric.cloud, Finastra, “yet our complete integration with Azure streamlines our infrastructure, simplifies our processes and makes our lives infinitely easier.”

Saturday 12 October 2019

Leveraging Cognitive Services to simplify inventory tracking


Who spends their summer at the Microsoft Garage New England Research & Development Center (or “NERD”)? The Microsoft Garage internship seeks out students who are hungry to learn, not afraid to try new things, and able to step out of their comfort zones when faced with ambiguous situations. The program brought together Grace Hsu from Massachusetts Institute of Technology, Christopher Bunn from Northeastern University, Joseph Lai from Boston University, and Ashley Hong from Carnegie Mellon University. They chose the Garage internship because of the product focus—getting to see the whole development cycle from ideation to shipping—and learning how to be customer obsessed.

Microsoft Garage interns take on experimental projects in order to build their creativity and product development skills through hacking new technology. Typically, these projects are proposals that come from our internal product groups at Microsoft, but when Stanley Black & Decker asked if Microsoft could apply image recognition for asset management on construction sites, this team of four interns accepted the challenge of creating a working prototype in twelve weeks.

Starting with a simple request for leveraging image recognition, the team conducted market analysis and user research to ensure the product would stand out and prove useful. They spent the summer gaining experience in mobile app development and AI to create an app that recognizes tools at least as accurately as humans can.

The problem


In the construction industry, it’s not unusual for contractors to spend over 50 hours every month tracking inventory, which can lead to unnecessary delays, overstocking, and missing tools. All together, large construction sites can lose more than $200,000 worth of equipment over the course of a long project. Today this problem is addressed with an unstandardized mix that typically involves barcodes, Bluetooth, RFID tags, and QR codes. The team at Stanley Black & Decker asked, “wouldn’t it be easier to just take a photo and have the tool automatically recognized?”

Because there are many tool models with minute differences, recognizing a specific drill, for example, requires you to read a model number like DCD996. Tools can also be assembled with multiple configurations, such as with or without a bit or battery pack attached, and can be viewed from different angles. You also need to take into consideration the number of lighting conditions and possible backgrounds you’d come across on a typical construction site. It quickly becomes a very interesting problem to solve using computer vision.


How they hacked it


Classification algorithms can be easily trained to reach strong accuracy when identifying distinct objects, like differentiating between a drill, a saw, and a tape measure. Instead, they wanted to know if a classifier could accurately distinguish between very similar tools like the four drills shown above. In the first iteration of the project, the team explored PyTorch and Microsoft’s Custom Vision service. Custom Vision appeals to users by not requiring a high level of data science knowledge to get a working model off the ground, and with enough images (roughly 400 for each tool), Custom Vision proved to be an adequate solution. However, it immediately became apparent that manually gathering this many images would be challenging to scale for a product line with thousands of tools. The focus quickly shifted to find ways of synthetically generating the training images.

For their initial approach, the team did both three-dimensional scans and green screen renderings of the tools. These images were then overlaid with random backgrounds to mimic a real photograph. While this approach seemed promising, the quality of the images produced proved challenging.

In the next iteration, in collaboration with Stanley Black & Decker’s engineering team, the team explored a new approach using photo-realistic renders from computer-aided design (CAD) models. They were able to use relatively simple Python scripts to resize, rotate, and randomly overlay these images on a large set of backgrounds. With this technique, the team could generate thousands of training images within minutes.
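The team's scripts were written in Python; as a rough equivalent, the shell sketch below uses ImageMagick to do the same resize, rotate, and overlay work. The file names, model number, and ranges are illustrative only, and it assumes the CAD renders are PNGs with transparent backgrounds.

# Overlay a transparent CAD render onto random backgrounds at random sizes,
# rotations, and positions using ImageMagick
render=renders/DCD996.png
mkdir -p training_images
for i in $(seq 1 100); do
  bg=$(ls backgrounds/*.jpg | shuf -n 1)        # pick a random background
  size=$(( (RANDOM % 40) + 30 ))                # scale the render to 30-70 percent
  angle=$(( RANDOM % 360 ))                     # random rotation
  x=$(( RANDOM % 200 )); y=$(( RANDOM % 200 ))  # random placement offset
  convert "$render" -resize "${size}%" -background none -rotate "$angle" /tmp/tool.png
  composite -geometry "+${x}+${y}" /tmp/tool.png "$bg" "training_images/DCD996_${i}.jpg"
done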


On the left is an image generated in front of a green screen versus an extract from CAD on the right.

Benchmarking the iterations


The Custom Vision service offers reports on the accuracy of the model as shown below.


For a classification model that targets visually similar products, a confusion matrix like the one below is very helpful. A confusion matrix visualizes the performance of a prediction model by comparing the true label of a class in the rows with the label outputted by the model in the columns. The higher the scores on the diagonal, the more accurate the model is. When high values are off the diagonal it helps the data scientists understand which two classes are being confused with each other by the trained model.

Existing Python libraries can be used to quickly generate a confusion matrix with a set of test images.


The result


The team developed a React Native application that runs on both iOS and Android and serves as a lightweight asset management tool with a clean and intuitive UI. The app adapts to various degrees of Wi-Fi availability and when a reliable connection is present, the images taken are sent to the APIs of the trained Custom Vision model on Azure Cloud. In the absence of an internet connection, the images are sent to a local computer vision model.

These local models can be obtained using Custom Vision, which exports models to Core ML for iOS, TensorFlow for Android, or as a Docker container that can run on a Linux App Service in Azure. An easy framework for the addition of new products to the machine learning model can be implemented by exporting rendered images from CAD and generating synthetic images.
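As a minimal sketch, assuming the model has been exported from Custom Vision as a Docker container and built locally with a hypothetical tag, the container can be run and scored like this (the /image route is the scoring endpoint exposed by the exported container):

# Build and run the exported Custom Vision container, then score a test photo
docker build -t tool-classifier .
docker run -d -p 8080:80 --name tool-classifier tool-classifier

curl -X POST http://localhost:8080/image \
  -H "Content-Type: application/octet-stream" \
  --data-binary @drill_photo.jpg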


Images in order from left to right: inventory checklist screen, camera functionality to send a picture to Custom Vision service, display of machine learning model results, and a manual form to add a tool to the checklist.


What’s next


Looking for an opportunity for your team to hack on a computer vision project? Search for an OpenHack near you.

Microsoft OpenHack is a developer focused event where a wide variety of participants (Open) learn through hands-on experimentation (Hack) using challenges based on real world customer engagements designed to mimic the developer journey. OpenHack is a premium Microsoft event that provides a unique upskilling experience for customers and partners. Rather than traditional presentation-based conferences, OpenHack offers a unique hands-on coding experience for developers.

Thursday 10 October 2019

SAP on Azure–Designing for availability and recoverability

Robust SAP on Azure Architectures are built on the pillars of security, performance and scalability, availability and recoverability, efficiency and operations.

We covered designing for performance and scalability previously; in this blog we will focus on availability and recoverability.

Designing for availability


Designing for availability ensures that your mission-critical SAP applications, such as SAP ERP or S/4HANA, have high-availability (HA) provisions applied. These HA provisions ensure the application is resilient to both hardware and software failures and that SAP application uptime is secured to meet your service-level agreements (SLAs).

In the links below, you will find a comprehensive overview of Azure virtual machine maintenance versus downtime, where unplanned hardware maintenance events, unexpected downtime, and planned maintenance events are covered in detail.

◈ Manage the availability of Linux Virtual Machines documentation
◈ Manage the availability of Windows virtual machines in Azure


From an availability perspective the options you have for deploying SAP on Azure are as follows:

1. 99.9 percent SLA for single-instance VMs with Azure premium storage. In this case, the SAP database (DB), central services ((A)SCS), and application servers are either running on separate VMs or consolidated on one or more VMs. A 99.9 percent SLA is also offered on our single-node, bare metal HANA Large Instances.

2. 99.95 percent SLA for VMs within the same Azure availability set. The availability set enforces that the VMs within the set are deployed in separate fault and update domains, which in turn ensures the VMs are safeguarded against unplanned hardware maintenance events, unexpected downtime, and planned maintenance events. To ensure HA of the SAP application, availability sets are used in conjunction with Azure Load Balancers and guest operating system clustering technologies, such as Windows Server Failover Clustering or Linux Pacemaker, to facilitate short failover times, together with synchronous database replication technologies (SQL Server Always On, HANA system replication, etc.) to guarantee no loss of data. Additionally, configuring the SAP Enqueue Replication Server can mitigate against loss of the SAP lock table during a failover of the (A)SCS.

3. 99.99 percent SLA for VMs deployed across Azure availability zones. An availability zone in an Azure region is a combination of a fault domain and an update domain. The Azure platform recognizes this distribution across update domains to ensure that VMs in different zones are not updated at the same time during Azure planned maintenance events. Additionally, availability zones are physically separate locations within an Azure region, where each zone has its own power source, network, and cooling and is logically separated from the other zones within the region. This construct hedges against unexpected downtime due to a hardware or infrastructure failure within a given zone. By architecting the SAP deployment to leverage replication across zones, i.e., DBMS replication (HANA system replication, SQL Server Always On) and SAP Enqueue Replication Server, and by distributing the SAP application servers (for redundancy) across zones, you can protect the SAP system from the loss of a complete datacenter. If one zone is compromised, the SAP system will be available in another zone (a CLI sketch of the availability set and zone options follows this list).

4. HANA Large Instances are offered at an SLA of 99.99 percent when they are configured as an HA pair; this applies to both single-datacenter and availability zone deployments.
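As a minimal Azure CLI sketch of options 2 and 3, the commands below place an SAP HANA HA pair either in one availability set or across two availability zones. The resource group, VM names, image, and size are placeholders; in practice these VMs also need the load balancer, Pacemaker, and HSR configuration described above.

# Option 2: place a HANA node in an availability set (repeat for the second node)
az vm availability-set create --resource-group sap-prd-rg --name hana-avset \
  --platform-fault-domain-count 2 --platform-update-domain-count 5
az vm create --resource-group sap-prd-rg --name hanadb1 --image SLES \
  --size Standard_M128s --availability-set hana-avset \
  --admin-username azureuser --generate-ssh-keys

# Option 3: alternatively, pin each node of the HA pair to a different availability zone
az vm create --resource-group sap-prd-rg --name hanadb1 --image SLES \
  --size Standard_M128s --zone 1 \
  --admin-username azureuser --generate-ssh-keys
az vm create --resource-group sap-prd-rg --name hanadb2 --image SLES \
  --size Standard_M128s --zone 2 \
  --admin-username azureuser --generate-ssh-keys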

In the case of availability sets and availability zones, guest OS clustering is necessary for HA. We would like to use this opportunity to clarify the Linux Pacemaker fencing options on Azure to avoid split brain of your SAP application. These are:

◈ Azure Fencing Agent
◈ Storage Based Death (SBD)

The Azure Fencing Agent is available on both Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES), while SBD is supported by SLES but not RHEL. For the shortest cluster failover times for SAP on Azure with Pacemaker, we recommend:

◈ The Azure Fencing Agent for SAP clusters built on RHEL.
◈ SBD for SAP clusters built on SLES.

In the case of productive SAP applications, we strongly recommend availability sets or availability zones. Availability zones are an alternative to availability sets that provide HA with added resiliency to datacenter failures within an Azure region. However, be mindful that there is no guarantee of a certain distance between the facilities hosting different availability zones, and different Azure regions can have different setups in terms of the distance between the physical buildings. Therefore, for deterministic application performance and the lowest network round-trip time (RTT), availability sets could be the better option.

Single-instance VMs can be a good fit for non-production systems (project, sandbox, and test SAP systems) which don't have availability SLAs on the same level as production; this option also helps minimize run costs.

Designing for recoverability


Designing for recoverability means being able to recover from data loss, such as a logical error on the SAP database, from large-scale disasters, or from the loss of a complete Azure region. When designing for recoverability, it is necessary to understand the recovery point objective (RPO) and recovery time objective (RTO) of your SAP application. Azure regional pairs are recommended for disaster recovery, as they offer isolation and availability to hedge against the risks of natural or human disasters impacting a single region.

On the DBMS layer, asynchronous replication can be used to replicate your production data from your primary region to your disaster recovery (DR) region. On the SAP application layer, Azure-to-Azure Site Recovery can be used as part of an efficient, cost-conscious DR solution. You could also choose to architect a dual-purpose scenario on the DR side, such as running a combined QA/DR system for a better return on your investment, as shown below.

In addition to HA and DR provisions an enterprise data protection solution for backup and recovery of your SAP data is essential.

Our first-party Azure Backup offering is certified for SAP HANA. The solution is currently in public preview (as of September 2019) and supports SAP HANA scale-up (data and log backup), with further scenarios such as data snapshots and SAP HANA scale-out to be supported in the future.

Additionally, the Azure platform supports a broad range of ISVs which offer enterprise data protection and management for your SAP applications. One such ISV is Commvault, with whom Microsoft has recently partnered to produce a whitepaper. A key advantage of Commvault is the IntelliSnap (data snapshot) capability, which offers near-instantaneous, application-consistent data snapshots of your SAP database; this is hugely beneficial for large databases with low RTO requirements. Commvault also facilitates highly performant, multi-streamed (Backint) data backup directly to Azure Blob storage for SAP HANA scale-up, SAP HANA scale-out, and anyDB workloads. Your enterprise data protection strategy can include a combination of data snapshots and data backups, i.e., running daily snapshots and a data backup (Backint) on the weekend. Below is a data snapshot executed via IntelliSnap against an SAP HANA database on an M128s (2 TB) VM; the snapshot duration is 20 seconds.


In this blog we have summarized the options for designing SAP on Azure for availability and recoverability. When architecting and deploying your production SAP applications on Azure, it is essential to include availability sets or availability zones to support your mission-critical SAP SLAs. Furthermore, you should apply DR provisions and enterprise data protection to secure your SAP application against the loss of a complete Azure region or data corruption.

Be sure to execute HA and DR testing throughout the lifecycle of your SAP to Azure project, and re-test these capabilities during maintenance windows once your SAP applications are in productive operation, i.e., with annual DR drill tests.

Availability and Recoverability should be reviewed on an ongoing basis to incorporate the latest technologies and guidance on best practices from Microsoft.