Saturday 31 August 2019

Plan migration of your Hyper-V servers using Azure Migrate Server Assessment

Azure Migrate is focused on streamlining your migration journey to Azure. We recently announced the evolution of Azure Migrate, which provides a streamlined, comprehensive portfolio of Microsoft and partner tools to meet migration needs, all in one place. An important capability included in this release is upgrades to Server Assessment for at-scale assessments of VMware and Hyper-V virtual machines (VMs).

This is the first in a series of blogs about the new capabilities in Azure Migrate. In this post, I will talk about capabilities in Server Assessment that help you plan for migration of Hyper-V servers. This capability is now generally available as part of the Server Assessment feature of Azure Migrate. After assessing your servers for migration, you can migrate your servers using Microsoft’s Server Migration solution available on Azure Migrate. You can get started right away by creating an Azure Migrate project.

Server Assessment previously supported assessment of VMware VMs for migration to Azure. We've now added Azure suitability analysis, migration cost planning, performance-based rightsizing, and application dependency analysis for Hyper-V VMs. You can now plan at scale, assessing up to 35,000 Hyper-V servers in one Azure Migrate project. If you use VMware as well, you can discover and assess both Hyper-V and VMware servers in the same Azure Migrate project. You can create groups of servers, assess by group, and refine the groups further using application dependency information.


Azure suitability analysis


The assessment determines whether a given server can be migrated as-is to Azure. Azure support is checked for each discovered server, and if a server is not ready to be migrated, remediation guidance is provided automatically. You can customize your assessment and regenerate the assessment reports: apply subscription offers and reserved instance pricing to the cost estimates, generate a cost estimate for a VM series of your choice, and specify the uptime of the workloads you will run in Azure.

Cost estimation and sizing


Assessment reports provide detailed cost estimates. You can optimize for cost using performance-based rightsizing assessments: the performance data of your on-premises servers is taken into consideration to recommend an appropriate Azure VM and disk SKU. This helps you right-size and optimize cost as you migrate servers that might be over-provisioned in your on-premises data center.


Dependency analysis


Once you have established cost estimates and migration readiness, you can go ahead and plan your migration phases. Use the dependency analysis feature to understand the dependencies between your applications. This is helpful to understand which workloads are interdependent and need to be migrated together, ensuring you do not leave critical elements behind on-premises. You can visualize the dependencies in a map or extract the dependency data in a tabular format. You can divide your servers into groups and refine the groups for migration using this feature.

Assess your Hyper-V servers in four simple steps:


◈ Create an Azure Migrate project and add the Server Assessment solution to the project.

◈ Set up the Azure Migrate appliance and start discovery of your Hyper-V virtual machines. To set up discovery, the Hyper-V host or cluster names are required. Each appliance supports discovery of 5,000 VMs from up to 300 Hyper-V hosts. You can set up more than one appliance if required.

◈ Once you have successfully set up discovery, create assessments and review the assessment reports.

◈ Use the application dependency analysis features to create and refine server groups to phase your migration.

Note that the inventory metadata gathered is persisted in the geography you select while creating the project. You can select a geography of your choice. Server Assessment is available today in Asia Pacific, Australia, Azure Government, Canada, Europe, India, Japan, United Kingdom, and United States geographies.

When you are ready to migrate the servers to Azure, you can use Server Migration to carry out the migration. You will be able to automatically carry over the assessment recommendations from Server Assessment into Server Migration. You can read more in our documentation, “Migrate Hyper-V VMs to Azure.”

In the coming months, we will add assessment capabilities for physical servers. You will also be able to run a quick assessment by adding inventory information using a CSV file. Stay tuned!

In the upcoming blogs, we will talk about tools for scale assessments, scale migrations, and the partner integrations available in Azure Migrate.

Friday 30 August 2019

Track the health of your disaster recovery with Log Analytics

Once you adopt Azure Site Recovery, monitoring your setup can become a very involved exercise. You need to ensure that replication for all protected instances continues and that virtual machines are always ready for failover. While Azure Site Recovery addresses this need by providing point-in-time health status, active health alerts, and the latest 72-hour trends, it still takes many hours of manual effort to track and analyze these signals. The problem is aggravated as the number of protected instances grows; it often takes a team of disaster recovery operators to do this for hundreds of virtual machines.

We have heard through multiple feedback forums that customers receive too many alerts. Even with these alerts, long-term corrective actions were difficult to identify because there was no single pane for viewing historical data. Customers have asked to track metrics such as recovery point objective (RPO) health over time, data change rate (churn) of machine disks over time, the current state of each virtual machine, and test failover status as some of the basic requirements. It is also important for customers to be notified of alerts in line with their enterprise's business continuity and disaster recovery compliance needs.

The integrated solution with logs in Azure Monitor and Log Analytics


Azure Site Recovery brings you an integrated solution for monitoring and advanced alerting powered by logs in Azure Monitor. You can now send diagnostic logs from the Site Recovery vault to a workspace in Log Analytics. The logs, also known as Azure Monitor logs, are visible in the Create diagnostic setting blade today.

The logs are generated for Azure Virtual Machines, as well as any VMware or physical machines protected by Azure Site Recovery.


Once the data starts feeding into the workspace, the logs can be queried using the Kusto Query Language to produce historical trends, point-in-time snapshots, and both disaster recovery admin-level and executive-level dashboards for a consolidated view. The data can be fed into a workspace from multiple Site Recovery vaults. Below are a few example use cases that can currently be addressed with this integration:

◈ Snapshot of replication health of all protected instances in a pie chart

◈ Trend of RPO of a protected instance over time

◈ Trend of data change rate of all disks of a protected instance over time

◈ Snapshot of test failover status of all protected instances in a pie chart

◈ Summarized view as shown in the Replicated Items blade

◈ Alert if status of more than 50 protected instances turns critical

◈ Alert if RPO exceeds 30 minutes for more than 50 protected instances

◈ Alert if the last disaster recovery drill was conducted more than 90 days ago

◈ Alert if a particular type of Site Recovery job fails

Sample use cases



These are just some examples to begin with. Dig deeper into the capability with many more such examples captured in the documentation “Monitor Site Recovery with Azure Monitor Logs.” Dashboard solutions can also be built on this data to fully customize the way you monitor your disaster recovery setup. Below is a sample dashboard:

[Sample dashboard built on Azure Site Recovery data in Log Analytics]

Azure natively provides high availability and reliability for your mission-critical workloads, and you can choose to improve your protection and meet compliance requirements using the disaster recovery capabilities provided by Azure Site Recovery.

Thursday 29 August 2019

Latency is the new currency of the Cloud: Announcing 31 new Azure edge sites

Providing users fast and reliable access to their cloud services, apps, and content is pivotal to a business’ success.

Latency when accessing cloud-based services can be an inhibitor to cloud adoption or migration. In most cases, this is caused by commercial internet connections that aren't tailored to today's global cloud needs. Through deployment and operation of globally and strategically placed edge sites, Microsoft dramatically accelerates the performance and experience of accessing apps, content, and services such as Azure and Office 365 on the Microsoft global network.

Edges optimize network performance through local access points to and from the vast Microsoft global network, in many cases providing 10x the acceleration to access and consume cloud-based content and services from Microsoft.

What is the network edge?


Providing faster network access alone isn't enough; applications also need intelligent services to expedite and simplify how a global audience accesses and experiences their offerings. Edge sites give application development teams increased visibility and higher availability for the services that improve how they deliver global applications.

Edge sites benefit infrastructure and development teams in multiple key areas


◈ Improved optimization for application delivery through Azure Front Door (AFD). Microsoft recently announced AFD, which allows customers to define, manage, accelerate, and monitor global routing for web traffic, with customizations for the best performance and instant global failover for application accessibility.

◈ An enhanced customer experience via high-bandwidth access to Azure Blob storage, web applications, and live and on-demand video streams. Azure Content Delivery Network delivers high-bandwidth content by caching objects at the point of presence closest to the consumer.

◈ Private connectivity and dedicated performance through Azure ExpressRoute. ExpressRoute provides up to 100 gigabits per second of fully redundant bandwidth directly to the Microsoft global network at select peering locations across the globe, making connecting to and through Azure a seamless and integrated experience for customers.


New edge sites


Today, we’re announcing the addition of 31 new edge sites, bringing the total to over 150 across more than 50 countries. We’re also adding 14 new meet-me sites to Azure ExpressRoute to further enable and expand access to dedicated private connections between customers’ on-premises environments and Azure.


More than two decades of building global network infrastructure have given us a keen awareness of globally distributed edge sites and their critical role in a business’ success.

By utilizing the expanding network of edge sites, Microsoft provides more than 80 percent of global GDP with sub-30 millisecond latency. We are adding new edges every week, and our ambition is to provide this level of performance to our entire global audience.

This expansion proves its value further when workloads move to the cloud or when Microsoft cloud services such as Azure, Microsoft 365, and Xbox are used. By operating over a dedicated, premium wide-area network, our customers avoid transferring their data over the public internet, which ensures security, optimizes traffic, and increases performance.

New edge sites


Country City
Colombia Bogota
Germany Frankfurt
Munich
India Hyderabad
Indonesia Jakarta
Kenya Nairobi
Netherlands Amsterdam
New Zealand Auckland
Nigeria Lagos
Norway Stavanger
United Kingdom London
United States Boston
Portland
Vietnam Ho Chi Minh City

Upcoming edge sites


Country City 
Argentina Buenos Aires
Egypt Cairo
Germany Dusseldorf
Israel Tel Aviv
Italy Rome
Japan Tokyo
Norway Oslo
Switzerland Geneva
Turkey Istanbul
United States Detroit
Jacksonville
Las Vegas
Minneapolis
Nashville
Phoenix
Quincy (WA)
San Diego

Upcoming ExpressRoute meet-me sites


Country City 
Canada Vancouver
Colombia Bogota
Germany Berlin
Munich
Indonesia Jakarta
Italy Milan
Mexico Queretaro (Mexico City)
Norway Oslo
Stavanger
Switzerland Geneva
Thailand Bangkok
United States Minneapolis
Phoenix
Quincy (WA)

With this latest announcement, Microsoft continues to offer cloud customers the fastest and most accessible global network, driving a competitive advantage for organizations accessing the global market and increased satisfaction for consumers.

Tuesday 27 August 2019

IRAP protected compliance from infra to SAP application layer on Azure

Australian government organizations are looking for cloud managed services providers capable of providing deployment of a platform as a service (PaaS) environment suitable for the processing, storage, and transmission of AU-PROTECTED government data that is compliant with the objectives of the Australian Government Information Security Manual (ISM) produced by the Australian Signals Directorate (ASD).

One of Australia’s largest federal agencies, responsible for improving and maintaining the state's finances, was looking to implement the Information Security Registered Assessors Program (IRAP), which is critical to safeguarding sensitive information and ensuring security controls around transmission, storage, and retrieval.

The Information Security Registered Assessors Program is an Australian Signals Directorate initiative to provide high-quality information and communications technology (ICT) security assessment services to the government.

The Australian Signals Directorate endorses suitably-qualified information and communications technology professionals to provide relevant security services that aim to secure broader industry and Australian government information and associated systems.

Cloud4C took up this challenge to enable this federal client on the cloud delivery platforms. Cloud4C analyzed and assessed the stringent compliance requirements within the Information Security Registered Assessors Program guidelines.

Following internal baselining, Cloud4C divided the whole assessment into three distinct categories – physical, infrastructure, and managed services. The Information Security Registered Assessors Program has stringent security controls around these three specific areas.

Cloud4C realized that the best way to meet this challenge was to partner and share responsibilities to complete a demanding but worthwhile assessment together. In April 2018, the Australian Cyber Security Centre (ACSC) announced the certification of Azure and Office 365 at the PROTECTED classification. Microsoft became the first and only public cloud provider to achieve this level of certification. Cloud4C partnered with Microsoft to deploy the SAP applications and SAP HANA database on Azure and utilized all the Information Security Registered Assessors Program compliant infrastructure benefits to enable seamless integration of native and marketplace tools and technologies on Azure.

Cloud4C identified the right Azure data centers in Australia, Australia Central and Australia Central 2, which had undergone a very stringent Information Security Registered Assessors Program assessment for physical security and information and communications equipment placements.

This compliance by Azure for infrastructure and disaster recovery gave Cloud4C a tremendous head start as a managed service provider, letting it focus its energies on the majority of remaining controls that applied solely to the cloud service provider.

The Information Security Registered Assessors Program assessment for Cloud4C involved addressing 412 high-risk items and 19 of the most critical security aspects distributed across 22 major categories, after taking out the controls that were already addressed by Azure.

Solution overview


The scope of the engagement was to configure and manage the SAP landscape on Azure with managed services up to the SAP Basis layer while maintaining the Information Security Registered Assessors Program protected classification standards for the processing, storage, and retrieval of classified information. As the engagement model is PaaS, the responsibility matrix extended up to the SAP Basis layer, and application managed services were outside the purview of this engagement.

Platform as a service with a single service level agreement and Information Security Registered Assessors Program protected classification

The proposed solution included various SAP solutions including SAP ERP, SAP BW, SAP CRM, SAP GRC, SAP IDM, SAP Portal, SAP Solution Manager, Web Dispatcher, and Cloud Connector, with a mix of databases including SAP HANA, SAP MaxDB, and former Sybase databases. Azure Australia Central, as the primary region, and Australia Central 2, as the secondary disaster recovery region, were identified as the physical locations for building the Information Security Registered Assessors Program protected compliant environment. The proposed architecture encompassed certified virtual machine stock keeping units (SKUs) for SAP workloads, optimized storage and disk configuration, the right network SKUs with adequate protection, mechanisms for high availability, disaster recovery, backup, and monitoring, an adequate mix of native and external security tools, and most importantly, processes and guidelines around service delivery.

The following Azure services were considered as part of the proposed architecture:

◈ Azure Availability Sets
◈ Azure Active Directory
◈ Azure Privileged Identity Management
◈ Azure Multi-Factor Authentication
◈ Azure ExpressRoute gateway
◈ Azure application gateway with web application firewall
◈ Azure Load Balancer
◈ Azure Monitor
◈ Azure Resource Manager
◈ Azure Security Center
◈ Azure storage and disk encryption
◈ Azure DDoS Protection
◈ Azure Virtual Machines (Certified virtual machines for SAP applications and SAP HANA database)
◈ Azure Virtual Network
◈ Azure Network Watcher
◈ Network security groups

Information Security Registered Assessors Program compliance and assessment process


Cloud4C navigated through the accreditation framework with the help of the Information Security Registered Assessors Program assessor, who helped Cloud4C understand and implement Australian government security requirements and establish the technical feasibility of porting SAP applications and the SAP HANA database to the Information Security Registered Assessors Program protected setup on the Azure protected cloud.

The Information Security Registered Assessors Program assessor assessed the implementation, appropriateness, and effectiveness of the system's security controls. This was achieved through two security assessment stages, as dictated in the Australian Government Information Security Manual (ISM):

◈ Stage 1: Security assessment identifies security deficiencies that the system owner rectifies or mitigates
◈ Stage 2: Security assessment assesses residual compliance

Cloud4C has achieved a successful assessment under all applicable Information Security Manual controls, ensuring a zero-risk environment and protection of the critical information systems with support from Microsoft.

The Microsoft team provided guidance around best practices on how to leverage Azure native tools to achieve compliance. The Microsoft solution architect and engineering team participated in the design discussions and brought an existing knowledge base around Azure native security tools, integration scenarios for third party security tools, and possible optimizations in the architecture.

During the assessment, Cloud4C and the Information Security Registered Assessors Program assessor performed the following activities:

◈ Designed the system architecture incorporating all components and stakeholders involved in the overall communication

◈ Mapped security compliance against the Australian government security policy

◈ Identified physical facilities, the Azure Data centers Australia Central and Australia Central 2, that are certified by the Information Security Registered Assessors Program

◈ Implemented Information Security Manual security controls

◈ Defined mitigation strategies for any non-compliance

◈ Identified risks to the system and defined the mitigation strategy


Steps to ensure automation and process improvement


◈ Quick deployment using Azure Resource Manager (ARM) templates combined with tools. This helped in the deployment of large landscapes comprising more than 100 virtual machines and 10 SAP solutions in less than a month.

◈ Process automation using Robotic Process Automation (RPA) tools. This helped identify the business-as-usual state within the SAP ecosystem deployed for the Information Security Registered Assessors Program environment and enhanced processes to ensure minimum disruption to actual business processes, on top of infrastructure-level automation that ensures application availability.

Learnings and respective solutions that were implemented during the process


◈ The Australia Central and Australia Central 2 regions were connected to each other over fibre links offering sub-millisecond latency; with SAP application and SAP HANA database replication in synchronous mode, a zero recovery point objective (RPO) was achieved.

◈ Azure Active Directory Domain Services was not available in the Australia Central region, so the Australia Southeast region was leveraged to ensure seamless delivery.

◈ Azure Site Recovery was successfully used for replication of the SAP MaxDB database.

◈ Traffic flowing over Azure ExpressRoute is not encrypted by default, so it was encrypted using a network virtual appliance from a Microsoft security partner.

Complying with the Information Security Registered Assessors Program requires fulfilling qualifications defined by the Australian Signals Directorate and passing through the assessment phases. Cloud4C offered the following benefits:

◈ Reduced time to market - Cloud4C completed the assessment process in 9 months, compared to the typical industry timeline of one to two years.

◈ Cloud4C’s experience and knowledge of delivering multiple regions and industry specific compliances for customers on Azure helped in mapping the right controls with Azure native and external security tools.

The partnership with Microsoft helped Cloud4C reach another milestone and take advantage of all the security features that the Azure hyperscale cloud has to offer to meet stringent regulatory and geographic compliance requirements.

Cloud4C has matured in its use of many of the security solutions readily available natively from Azure, as well as from Azure Marketplace, to reduce time to market. Cloud4C utilized the Azure portfolio to its fullest in securing the customer's infrastructure and encouraging a secure culture in supporting its clients as an Azure Expert Managed Service Provider (MSP). The Azure security portfolio has been growing, and so has Cloud4C's use of its solution offerings.

Cloud4C and Microsoft plan to take this partnership to even greater heights in terms of providing an unmatched cloud experience to customers in the marketplace across various geographies and industry verticals.

Friday 23 August 2019

IoT Plug and Play is now available in preview

Today we are announcing that IoT Plug and Play is now available in preview! At Microsoft Build in May 2019, we announced IoT Plug and Play and described how it will work seamlessly with IoT Central. We demonstrated how IoT Plug and Play simplifies device integration by enabling solution developers to connect and interact with IoT devices using device capability models defined with the Digital Twin definition language. We also announced a set of partners who have launched devices and solutions that are IoT Plug and Play enabled. You can find their IoT Plug and Play certified devices at the Azure Certified for IoT device catalog.

With today’s announcement, solution developers can start using Azure IoT Central or Azure IoT Hub to build solutions that integrate seamlessly with IoT devices enabled with IoT Plug and Play. We have also launched a new Azure Certified for IoT portal for device partners interested in streamlining the device certification submission process and getting devices into the Azure IoT device catalog quickly.

This article outlines how solution developers can use IoT Plug and Play devices in their IoT solutions, and how device partners can build and certify their products to be listed in the catalog.

Faster device integration for solution developers


Azure IoT Central is a fully managed IoT Software as a Service (SaaS) offering that makes it easy to connect, monitor, and manage your IoT devices and products. Azure IoT Central simplifies the initial setup of your IoT solution and cuts the management burden, operational costs, and overhead of a typical IoT project. Azure IoT Central integration with IoT Plug and Play takes this one step further by allowing solution developers to integrate devices without writing any embedded code. IoT solution developers can choose devices from a large set of IoT Plug and Play certified devices to quickly build and customize their IoT solutions end-to-end.

Solution developers can start with a certified device from the device catalog and customize the experience for the device, such as editing display names or units. Solution developers can also add dashboards for solution operators to visualize the data; as part of this new release, developers have a broader set of visualizations to choose from. There is also the option to auto generate dashboards and visualizations to get up and running quickly. Once the dashboard and visualizations are created, solution developers can run simulations based on real models from the device catalog. Developers can also integrate with the commands and properties exposed by IoT Plug and Play capability models to enable operators to effectively manage their device fleets. IoT Central will automatically load the capability model of any certified device, enabling a true Plug and Play experience!

Another option, for developers who'd like more customization, is to build IoT solutions with Azure IoT Hub and IoT Plug and Play devices. With today's release, Azure IoT Hub now supports RESTful digital twin APIs that expose the capabilities of IoT Plug and Play device capability models and interfaces. Developers can set properties to configure settings like alarm thresholds, send commands for operations such as resetting a device, route telemetry, and query which devices support a specific interface. The most convenient way to use these APIs is with the Azure IoT SDK for Node.js (other languages are coming soon). All devices enabled for IoT Plug and Play in the Azure Certified for IoT device catalog will work with IoT Hub just as they work with IoT Central.


Streamlined certification process for device partners


The Azure Certified for IoT device catalog lets customers quickly find the right certified device to start building IoT solutions. To help our device partners certify their products as IoT Plug and Play compatible, we have revamped and streamlined the Azure Certified for IoT program by launching a new portal and submission process. With the Azure Certified for IoT portal, device partners can define new products to be listed in the Azure Certified for IoT device catalog and specify product details such as physical dimensions, description, and geographic availability. Device partners can manage their IoT Plug and Play models in their company model repository, which limits access to their own employees and select partners, as well as in the public model repository. The portal also allows device partners to certify their products by submitting them to an automated validation process that verifies correct implementation of the Digital Twin definition language and the required interfaces.


Device partners will also benefit from investments in developer tooling to support IoT Plug and Play. The Azure IoT Device Workbench extension for VS Code adds IntelliSense for easy authoring of IoT Plug and Play device models. It also enables code generation to create C device code that implements the IoT Plug and Play model and provides the logic to connect to IoT Central, without customers having to worry about provisioning or integration with IoT device SDKs.

The new tooling capabilities also integrate with the model repository service for seamless publishing of device models. In addition to the Azure IoT Device Workbench, device developers can use tools like the Azure IoT explorer and the Azure IoT extension for the Azure Command-Line Interface. Device code can be developed with the Azure IoT SDK for C and for Node.js.


Connect sensors on Windows and Linux gateways to Azure


If you are using a Windows or Linux gateway device and you have sensors that are already connected to the gateway, then you can make these sensors available to Azure by simply editing a JSON configuration. We call this technology the IoT Plug and Play bridge. The bridge allows sensors on Windows and Linux to just work with Azure by bridging these sensors from the IoT gateway to IoT Central or IoT Hub. On the IoT gateway device, the sensor bridge leverages OS APIs and OS plug and play capabilities to connect to downstream sensors and uses the IoT Plug and Play APIs to communicate with IoT Central and IoT Hub on Azure. A solution builder can easily select from sensors enumerated on the IoT device and register them in IoT Central or IoT Hub. Once available in Azure, the sensors can be remotely accessed and managed. We have native support for Modbus and a simple serial protocol for managing and obtaining sensor data from MCUs or embedded devices and we are continuing to add native support for other protocols like MQTT. On Windows, we also support cameras, and general device health monitoring for any device the OS can recognize (such as USB peripherals). You can extend the bridge with your own adapters to talk to other types of devices (such as I2C/SPI), and we are working on adding support for more sensors and protocols (such as HID).

Thursday 22 August 2019

Pave Your Way Towards a Successful Score in Microsoft 98-364 Exam

The Microsoft Technology Associate (MTA) is a certification track administered by Microsoft that helps IT professionals begin developing a career based on Microsoft technologies. The 98-364 exam covers database administration fundamentals. This certification helps demonstrate a candidate's understanding of the core concepts of database administration and management. Many IT professionals who want to build a career in IT management and administration choose to earn this certification.

The Microsoft 98-364 exam has 40-60 questions that should be answered within 45 minutes. The questions are multiple choice, and the minimum passing score is 700 out of 1000. The exam is offered in English, German, French, Chinese, Russian, Portuguese, Korean, Spanish, and Japanese. It covers the following topics:
  • Administering Database Objects
  • Manipulating Data
  • Understanding Data Storage and Major Database Concepts

Top Resources to Prepare for Microsoft 98-364 Exam


Microsoft does a great job of presenting study material for their certification exams, and you can always check the official website for the latest study materials and practice tests. Here are the best resources for the MTA test:
  • The Ultimate Guide from Beginner to Expert – Learn and Master SQL in No Time!
  • Microsoft 98-364 Exam: Database Fundamentals Kindle
  • Exam 98-364 MTA Database Administration Fundamentals

Recommended Books for Microsoft 98-364 Exam


Even if you are not convinced about reading books, they remain the most traditional and reliable resource. Here are the recommended books for this Microsoft exam:

Tips for Taking Microsoft 98-364 Exam


The different concepts covered in this certification exam should not make you feel like the test is difficult. Here are some tips that can help you score well:

Master the Skillset Needed


The information you get through the training process is significant, as it allows you to understand how different technologies are applied, but you should not stop there. Make sure you build the skills you need for using the technology and equipment. For this exam, candidates must consolidate the knowledge they obtain with lab experience; getting hands-on experience will make taking the exam easier. You should also develop your equipment-handling skills, preferably using old computers, to prepare well for the exam.

Understand the Objectives of the Microsoft 98-364 Exam


Understanding the objectives of this MTA certification exam and emphasizing them during your learning will help you anticipate the questions that are likely to be asked, which will make your preparation easier. The objectives also tell you how much weight each topic carries in the exam, so you can spend more time on the topics that carry more weight.

Perform Practice Tests


To prepare efficiently for this exam, you must take practice tests. These tests mimic the actual exam format, and you can assess your level of understanding by evaluating the number of questions you get right. This will also allow you to pay more attention to the areas you find challenging. Choose demanding practice tests: the more difficult the practice questions, the easier the exam will be for you, while easy practice questions will give you a false sense of confidence.

Attend the Labs for Microsoft 98-364


Microsoft prioritizes hands-on experience together with the knowledge the applicant obtained from training. You should take advantage of the labs available to practice your skills. These labs allow you to learn how to develop quick approaches to problems that you may face in a professional setting. The best part about lab lessons is that you can download them and carry on with the practice wherever you are. This is also a good way to prepare for the exam during the final moments before you take it.

Use all Study Materials


When you are preparing for the Microsoft 98-364 exam, do not ignore any study resource. The secret to passing the exam is in-depth learning. The Internet has countless resources, and you should take advantage of all of them. You cannot be sure about passing the exam if you do not have the study materials you need to work through all the questions. From video training, study guides, labs, and books to practice questions, you have to strengthen your fundamentals before sitting for the exam. There is no shortcut to passing the exam: either you go through all of this, or you will not stand a chance of passing.

Self-Study


The fact that you have a job or other commitments should not stop you from taking this MTA 98-364 certification exam seriously. Create a schedule that works for you and allows you to carry out self-study. Self-study has proven to work best for people who already have hands-on experience before sitting for the certification exam. If you are comfortable understanding the concepts with the help of the many study resources available, you can opt for self-study and keep a flexible schedule.

Keep a Sharp Eye


Even with all the preparation in the world, you cannot ensure success in this exam without staying sharp during the test itself. Make sure you read the questions carefully before answering them. Do not be in a hurry to complete the exam; use your time to read the questions and understand what is expected of you before settling on an answer. This will give you a better chance of answering most of the questions correctly and getting a passing score.

Conclusion


To conclude, although MTA certification is not required for MCSA certification, it is worth earning if you want to sharpen your skills and knowledge. The Microsoft 98-364 exam is a significant exam for the MTA certification and will help accelerate your career. Employers are always looking for certified professionals because their skills have been proven by a globally recognized IT vendor, Microsoft. Not all applicants pass Microsoft exams, which is why organizations hold on to the certified professionals they have found.
With the study tips mentioned above, you can achieve a great score and pass the Microsoft Database Fundamentals 98-364 exam. Focus all your hard work on exam preparation, and you will be sure of success.
Good luck!

Tuesday 20 August 2019

Announcing the general availability of Python support in Azure Functions

Python support for Azure Functions is now generally available and ready to host your production workloads across data science and machine learning, automated resource management, and more. You can now develop Python 3.6 apps to run on the cross-platform, open-source Functions 2.0 runtime. These can be published as code or Docker containers to a Linux-based serverless hosting platform in Azure. This stack powers the solution innovations of our early adopters, with customers such as General Electric Aviation and TCF Bank already using Azure Functions written in Python for their serverless production workloads. Our thanks to them for their continued partnership!

In the words of David Havera, blockchain Chief Technology Officer of the GE Aviation Digital Group, "GE Aviation Digital Group's hope is to have a common language that can be used for backend Data Engineering to front end Analytics and Machine Learning. Microsoft have been instrumental in supporting this vision by bringing Python support in Azure Functions from preview to life, enabling a real world data science and Blockchain implementation in our TRUEngine project."

Throughout the Python preview for Azure Functions we gathered feedback from the community to build easier authoring experiences, introduce an idiomatic programming model, and create a more performant and robust hosting platform on Linux. This post is a one-stop summary for everything you need to know about Python support in Azure Functions and includes resources to help you get started using the tools of your choice.

Bring your Python workloads to Azure Functions


Many Python workloads align very nicely with the serverless model, allowing you to focus on your unique business logic while letting Azure take care of how your code is run. We’ve been delighted by the interest from the Python community and by the productive solutions built using Python on Functions.

Workloads and design patterns

While this is by no means an exhaustive list, here are some examples of workloads and design patterns that translate well to Azure Functions written in Python.

Simplified data science pipelines

Python is a great language for data science and machine learning (ML). You can leverage the Python support in Azure Functions to provide serverless hosting for your intelligent applications. Consider a few ideas:

◉ Use Azure Functions to deploy a trained ML model along with a scoring script to create an inferencing application.


◉ Leverage triggers and data bindings to ingest, move, prepare, transform, and process data using Functions.

◉ Use Functions to introduce event-driven triggers to re-training and model update pipelines when new datasets become available.

Automated resource management

As an increasing number of assets and workloads move to the cloud, there's a clear need to provide more powerful ways to manage, govern, and automate the corresponding cloud resources. Such automation scenarios require custom logic that can be easily expressed using Python. Here are some common scenarios:

◉ Process Azure Monitor alerts generated by Azure services.

◉ React to Azure events captured by Azure Event Grid and apply operational requirements on resources.


◉ Leverage Azure Logic Apps to connect to external systems like IT service management, DevOps, or monitoring systems while processing the payload with a Python function.

◉ Perform scheduled operational tasks on virtual machines, SQL Server, web apps, and other Azure resources.

Powerful programming model

To power accelerated Python development, Azure Functions provides a productive programming model based on event triggers and data bindings. The programming model is supported by a world class end-to-end developer experience that spans from building and debugging locally to deploying and monitoring in the cloud.

The programming model is designed to provide a seamless experience for Python developers so you can quickly start writing functions using code constructs that you're already familiar with, or import existing .py scripts and modules to build the function. For example, you can implement your functions as asynchronous coroutines using the async def qualifier or send monitoring traces to the host using the standard logging module. Additional dependencies to pip install can be configured using the requirements.txt file.


With the event-driven programming model in Functions, based on triggers and bindings, you can easily configure the events that will trigger the function execution and any data sources the function needs to orchestrate with. This model helps increase productivity when developing apps that interact with multiple data sources by reducing the amount of boilerplate code, SDKs, and dependencies that you need to manage and support. Once configured, you can quickly retrieve data from the bindings or write back using the method attributes of your entry-point function. The Python SDK for Azure Functions provides a rich API layer for binding to HTTP requests, timer events, and other Azure services, such as Azure Storage, Azure Cosmos DB, Service Bus, Event Hubs, or Event Grid, so you can use productivity enhancements like autocomplete and Intellisense when writing your code. By leveraging the Azure Functions extensibility model, you can also bring your own bindings to use with your function, so you can also connect to other streams of data like Kafka or SignalR.


Easier development

As a Python developer, you can use your preferred tools to develop your functions. The Azure Functions Core Tools will enable you to get started using trigger-based templates, run locally to test against real-time events coming from the actual cloud sources, and publish directly to Azure, while automatically invoking a server-side dependency build on deployment. The Core Tools can be used in conjunction with the IDE or text editor of your choice for an enhanced authoring experience.

You can also choose to take advantage of the Azure Functions extension for Visual Studio Code for a tightly integrated editing experience to help you create a new app, add functions, and deploy, all within a matter of minutes. The one-click debugging experience enables you to test your functions locally, set breakpoints in your code, and evaluate the call stack, simply with the press of F5. Combine this with the Python extension for Visual Studio Code, and you have an enhanced Python development experience with auto-complete, Intellisense, linting, and debugging.


For a complete continuous delivery experience, you can now leverage the integration with Azure Pipelines, one of the services in Azure DevOps, via an Azure Functions-optimized task to build the dependencies for your app and publish them to the cloud. The pipeline can be configured using an Azure DevOps template or through the Azure CLI.

Advanced observability and monitoring through Azure Application Insights is also available for functions written in Python, so you can monitor your apps using the live metrics stream, collect data, query execution logs, and view distributed traces across a variety of services in Azure.

Host your Python apps with Azure Functions


Host your Python apps with the Azure Functions Consumption plan or the Azure Functions Premium plan on Linux.

The Consumption plan is now generally available for Linux-based hosting and ready for production workloads. This serverless plan provides event-driven dynamic scale and you are charged for compute resources only when your functions are running. Our Linux plan also now has support for managed identities, allowing your app to seamlessly work with Azure resources such as Azure Key Vault, without requiring additional secrets.


The Consumption plan for Linux hosting also includes a preview of integrated remote builds to simplify dependency management. This new capability is available as an option when publishing via the Azure Functions Core Tools and enables you to build in the cloud on the same environment used to host your apps as opposed to configuring your local build environment in alignment with Azure Functions hosting.


Workloads that require advanced features such as more powerful hardware, the ability to keep instances warm indefinitely, and virtual network connectivity can benefit from the Premium plan with Linux-based hosting now available in preview.


With the Premium plan for Linux hosting you can choose between bringing only your app code or bringing a custom Docker image to encapsulate all your dependencies, including the Azure Functions runtime as described in the documentation “Create a function on Linux using a custom image.” Both options benefit from avoiding cold start and from scaling dynamically based on events.


Saturday 17 August 2019

Azure Ultra Disk Storage: Microsoft's service for your most I/O demanding workloads

Microsoft Azure announced the general availability of Azure Ultra Disk Storage, an Azure Managed Disks offering that provides massive throughput with sub-millisecond latency for your most I/O demanding workloads. With the introduction of Ultra Disk Storage, Azure includes four types of persistent disks: Ultra Disk Storage, Premium SSD, Standard SSD, and Standard HDD. This portfolio gives you price and performance options tailored to meet the requirements of every workload. Ultra Disk Storage delivers consistent performance and low latency for I/O-intensive workloads like SAP HANA, OLTP databases, NoSQL, and other transaction-heavy workloads. Further, you can reach maximum virtual machine (VM) I/O limits with a single Ultra disk, without having to stripe multiple disks.

Durability of data is essential to business-critical enterprise workloads. To ensure we keep our durability promise, we built Ultra Disk Storage on our existing locally redundant storage (LRS) technology, which stores three copies of data within the same availability zone. Any application that writes to storage will receive an acknowledgement only after it has been durably replicated to our LRS system.

Below is a clip from a presentation I delivered at Microsoft Ignite demonstrating the leading performance of Ultra Disk Storage:

[Video clip from Microsoft Ignite demonstrating Ultra Disk Storage performance]

Inside Ultra Disk Storage


Ultra Disk Storage is our next generation distributed block storage service that provides disk semantics for Azure IaaS VMs and containers. We designed Ultra Disk Storage with the goal of providing consistent performance at high IOPS without compromising our durability promise. Hence, every write operation replicates to the storage in three different racks (fault domains) before being acknowledged to the client. Compared to Azure Premium Storage, Ultra Disk Storage provides its extreme performance without relying on Azure Blob storage cache, our on-server SSD-based cache, and hence it only supports un-cached reads and writes. We also introduced a new simplified client on the compute host that we call virtual disk client (VDC). VDC has full knowledge of virtual disk metadata mappings to disks in the Ultra Disk Storage cluster backing them. That enables the client to talk directly to storage servers, bypassing load balancers and front-end servers used for initial disk connections. This simplified approach minimizes the layers that a read or write operation traverses, reducing latency and delivering performance comparable to enterprise flash disk arrays.

Below is a figure comparing the different layers an operation traverses when issued on an Ultra disk compared to a Premium SSD disk. The operation flows from the client to Hyper-V to the corresponding driver. For an operation done on a Premium SSD disk, the operation will flow from the Azure Blob storage cache driver to the load balancers, front end servers, partition servers then down to the stream layer servers as documented in this paper. For an operation done on an Ultra disk, the operation will flow directly from the virtual disk client to the corresponding storage servers.

[Figure: Comparison between the IO flow for Ultra Disk Storage versus Premium SSD Storage]

One key benefit of Ultra Disk Storage is that you can dynamically tune disk performance without detaching your disk or restarting your virtual machines. Thus, you can scale performance along with your workload. When you adjust either IOPS or throughput, the new performance settings take effect in less than an hour.

Azure implements two levels of throttles that can cap disk performance. The first is a “leaky bucket” VM-level throttle that is specific to each VM size. A key benefit of Ultra Disk Storage is a new time-based throttle that is applied at the disk level. This new throttle system provides more realistic behavior of a disk for a given IOPS and throughput: hitting a leaky bucket throttle can cause erratic performance, while the new time-based throttle provides consistent performance even at the throttle limit. To take advantage of this smoother performance, set your disk throttles slightly below your VM throttle. We will publish another blog post in the future describing our new throttle system in more detail.

Available regions

Currently, Ultra Disk Storage is available in the following regions:

◈ East US 2
◈ North Europe
◈ Southeast Asia

We will expand the service to more regions soon. 

Virtual machine sizes

Ultra Disk Storage is supported on DSv3 and ESv3 virtual machine types. Additional virtual machine types will be supported soon.

Friday 16 August 2019

New for developers: Azure Cosmos DB .NET SDK v3 now available

The Azure Cosmos DB team is announcing the general availability of version 3 of the Azure Cosmos DB .NET SDK, released in July. Thank you to all who gave feedback during our preview.

In this post, we’ll walk through the latest improvements that we’ve made to enhance the developer experience in .NET SDK v3.


You can get the latest version of the SDK through NuGet and contribute on GitHub.

//Using .NET CLI
dotnet add package Microsoft.Azure.Cosmos

//Using NuGet
Install-Package Microsoft.Azure.Cosmos

What is Azure Cosmos DB?


Azure Cosmos DB is a globally distributed, multi-model database service that enables you to read and write data from any Azure region. It offers turnkey global distribution, guarantees single-digit millisecond latencies at the 99th percentile, 99.999 percent high availability, and elastic scaling of throughput and storage.

What is new in Azure Cosmos DB .NET SDK version 3?


Version 3 of the SDK contains numerous usability and performance improvements, including a new intuitive programming model, support for stream APIs, built-in support for change feed processor APIs, the ability to scale non-partitioned containers, and more. The SDK targets .NET Standard 2.0 and is open sourced on GitHub.

For new workloads, we recommend starting with the latest version 3.x SDK for the best experience. We have no immediate plans to retire version 2.x of the .NET SDK.

Targets .NET Standard 2.0

We’ve unified the existing Azure Cosmos DB .NET Framework and .NET Core SDKs into a single SDK, which targets .NET Standard 2.0. You can now use the .NET SDK in any platform that implements .NET Standard 2.0, including your .NET Framework 4.6.1+ and .NET Core 2.0+ applications.

Open source on GitHub

The Azure Cosmos DB .NET v3 SDK is open source, and our team is planning to do development in the open. To that end, we welcome any pull requests and will be logging issues and tracking feedback on GitHub.

New programming model with fluent API surface

Since the preview, we’ve continued to improve the object model for a more intuitive developer experience. We’ve created a new top level CosmosClient class to replace DocumentClient and split its methods into modular database and container classes. From our usability studies, we’ve seen that this hierarchy makes it easier for developers to learn and discover the API surface.

using Microsoft.Azure.Cosmos;
using System;
using System.Threading.Tasks;

//Getting started with .NET SDK V3
namespace CosmosDotnetSDKGetStarted
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // Create new CosmosClient to communicate with Azure Cosmos DB
            using (var cosmosClient = new CosmosClient("endpoint", "key"))
            {
                // Create new database
                Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync("databaseId");

                // Create new container
                Container container = await database.CreateContainerIfNotExistsAsync("containerId", "/PartitionKey");

                // Add item to container
                var todoItem = new TodoItem()
                {
                    id = Guid.NewGuid().ToString(),
                    PartitionKey = Guid.NewGuid().ToString(),
                    Task = "Get started with Azure Cosmos DB!"
                };
                var todoItemResponse = await container.CreateItemAsync<TodoItem>(todoItem, new PartitionKey(todoItem.PartitionKey));
            }
        }
    }
}

We’ve also added in fluent builder APIs, which make it easier to create CosmosClient, Container, and ChangeFeedProcessor classes with custom options.

using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Fluent; // Use the new fluent namespace
using System.Threading.Tasks;

// Creating a new CosmosClient and Container using the fluent builder pattern
namespace CosmosDotnetSDKGetStarted
{
    class Program
    {
        static async Task Main(string[] args)
        {
            //Create new instance of CosmosClient using builder pattern
            CosmosClient cosmosClient = new CosmosClientBuilder("endpoint", "key")
                .WithApplicationRegion("West US 2")
                .WithApplicationName("ProductReviewsApp")
                .Build();

            //Define and create a new container using builder pattern
            Container container = await cosmosClient.GetDatabase("databaseId").DefineContainer("containerId", "/PartitionKey")
                // Define indexing policy with included and excluded paths
                .WithIndexingPolicy()
                    .WithIncludedPaths()
                        .Path("/username/*")
                        .Path("/productName/*")
                        .Path("/rating/*")
                        .Attach()
                    .WithExcludedPaths()
                        .Path("/*")
                        .Attach()
                    .Attach()
                // Define time to live (TTL) in seconds on container
                .WithDefaultTimeToLive(30)
                .CreateAsync();
        }
    }
}

View all samples on GitHub.

Stream APIs for high performance


The previous versions of the Azure Cosmos DB .NET SDKs always serialized and deserialized the data to and from the network. In the context of an ASP.NET Web API, this can lead to performance overhead. Now, with the new stream API, when you read an item or query, you can get the stream and pass it to the response without deserialization overhead, using the new GetItemQueryStreamIterator and ReadItemStreamAsync methods.
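
As a minimal sketch (the container reference and the query text here are assumptions for illustration), the stream APIs can be used to relay raw payloads without deserializing them:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Minimal sketch: relay item and query payloads as raw streams, skipping SDK deserialization
public static class StreamApiSketch
{
    // Read a single item as a raw stream and copy it to a destination stream
    public static async Task CopyItemToAsync(Container container, string id, string partitionKey, Stream destination)
    {
        using (ResponseMessage response = await container.ReadItemStreamAsync(id, new PartitionKey(partitionKey)))
        {
            if (response.IsSuccessStatusCode)
            {
                await response.Content.CopyToAsync(destination);
            }
        }
    }

    // Stream query results page by page and forward the raw JSON
    public static async Task CopyQueryResultsToAsync(Container container, Stream destination)
    {
        QueryDefinition query = new QueryDefinition("SELECT * FROM c");
        FeedIterator iterator = container.GetItemQueryStreamIterator(query);
        while (iterator.HasMoreResults)
        {
            using (ResponseMessage response = await iterator.ReadNextAsync())
            {
                await response.Content.CopyToAsync(destination);
            }
        }
    }
}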

Easier to test and more extensible


In .NET SDK version 3, all APIs are mockable, making for easier unit testing.

We also introduced an extensible request pipeline, so you can pass in custom handlers that will run when sending requests to the service. For example, you can use these handlers to log request information in Azure Application Insights, define custom retry polices, and more. You can also now pass in a custom serializer, another commonly requested developer feature.
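
For illustration only, here is a sketch of a hypothetical custom handler that logs the status code and request charge of every response and is registered through the fluent builder; the handler name and log format are assumptions, not part of the SDK:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Fluent;

// Hypothetical handler that runs in the request pipeline for every call the client makes
public class LoggingRequestHandler : RequestHandler
{
    public override async Task<ResponseMessage> SendAsync(RequestMessage request, CancellationToken cancellationToken)
    {
        // Forward the request down the pipeline, then log details of the response
        ResponseMessage response = await base.SendAsync(request, cancellationToken);
        Console.WriteLine($"{request.Method} {request.RequestUri} -> {response.StatusCode} ({response.Headers.RequestCharge} RU)");
        return response;
    }
}

public static class PipelineSketch
{
    public static CosmosClient CreateClient(string endpoint, string key)
    {
        // Register the custom handler when building the client
        return new CosmosClientBuilder(endpoint, key)
            .AddCustomHandlers(new LoggingRequestHandler())
            .Build();
    }
}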

Use the Change Feed Processor APIs directly from the SDK


One of the most popular features of Azure Cosmos DB is the change feed, which is commonly used in event-sourcing architectures, stream processing, data movement scenarios, and to build materialized views. The change feed enables you to listen to changes on a container and get an incremental feed of its records as they are created or updated.

The new SDK has built-in support for the Change Feed Processor APIs, which means you can use the same SDK for building your application and change feed processor implementation. Previously, you had to use the separate change feed processor library.
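
As a sketch of what that looks like (the database, container, lease container, processor, and instance names below are assumptions, and a minimal TodoItem class matching the shape used in the getting-started snippet is included for completeness), a processor can be built directly from the monitored container:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Matches the item shape used in the getting-started snippet above
public class TodoItem
{
    public string id { get; set; }
    public string PartitionKey { get; set; }
}

public static class ChangeFeedSketch
{
    public static async Task<ChangeFeedProcessor> StartProcessorAsync(CosmosClient cosmosClient)
    {
        // The monitored container holds the data; the lease container tracks the processor's progress
        Container monitored = cosmosClient.GetContainer("databaseId", "containerId");
        Container leases = cosmosClient.GetContainer("databaseId", "leases");

        ChangeFeedProcessor processor = monitored
            .GetChangeFeedProcessorBuilder<TodoItem>("todoItemProcessor", HandleChangesAsync)
            .WithInstanceName("consoleHost")
            .WithLeaseContainer(leases)
            .Build();

        await processor.StartAsync();
        return processor;
    }

    // Delegate invoked with each batch of created or updated items from the change feed
    private static Task HandleChangesAsync(IReadOnlyCollection<TodoItem> changes, CancellationToken cancellationToken)
    {
        foreach (TodoItem item in changes)
        {
            Console.WriteLine($"Change detected for item {item.id}");
        }
        return Task.CompletedTask;
    }
}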

Ability to scale non-partitioned containers

We’ve heard from many customers who have non-partitioned or “fixed” containers that they wanted to scale them beyond their 10GB storage and 10,000 RU/s provisioned throughput limit. With version 3 of the SDK, you can now do so, without having to create a new container and move your data.

All non-partitioned containers now have a system partition key “_partitionKey” that you can set to a value when writing new items. Once you begin using the _partitionKey value, Azure Cosmos DB will scale your container as its storage volume increases beyond 10GB. If you want to keep your container as is, you can use the PartitionKey.None value to read and write existing data without a partition key.

// Scale fixed container using partition key in .NET V3 SDK
public static async Task ScaleNonPartitionedContainer()
{   
    using (var cosmosClient = new CosmosClient(endpoint, key))
    {
        //Get reference to existing fixed container
        var container = cosmosClient.GetDatabase(databaseId).GetContainer(containerId);             

        // Add item to container without partition key
        var user = new User()
        {
            Id = "bob",
            Status = "Learning Azure Cosmos DB!"
        };
        await container.CreateItemAsync<User>(user, PartitionKey.None);

        // Now start taking advantage of partitioning! Create and add a new item with partition key value of user Id
        var userWithPartitionKey = new User()
        {
            Id = "alice",
            PartitionKey = "alice",
            Status = "Partitioning all the things"
        };

        await container.CreateItemAsync<User>(userWithPartitionKey, new PartitionKey(userWithPartitionKey.PartitionKey));

        // Scale throughput beyond the 10,000 RU/s limit of fixed containers
        var throughputResponse = await container.ReplaceThroughputAsync(15000);
    }

}

// Example of the User class
public class User
{
    public User()
    {
    }

    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    [JsonProperty(PropertyName = "_partitionKey", NullValueHandling = NullValueHandling.Ignore)] // Enables optional PartitionKey value
    public string PartitionKey { get; set; }

    [JsonProperty(PropertyName = "status")]
    public string Status { get; set; }

}

Easier APIs for scaling throughput


We’ve redesigned the APIs for scaling provisioned throughput (RU/s) up and down. You can now use the ReadThroughputAsync method to get the current throughput and ReplaceThroughputAsync to change it. View sample.
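
For example, a minimal sketch (the container reference and the 1,000 RU/s target are assumptions for illustration):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ThroughputSketch
{
    public static async Task ScaleContainerAsync(Container container)
    {
        // Read the current provisioned throughput (null if throughput is provisioned at the database level)
        int? currentThroughput = await container.ReadThroughputAsync();
        Console.WriteLine($"Current throughput: {currentThroughput} RU/s");

        // Replace the provisioned throughput with a new value
        ThroughputResponse response = await container.ReplaceThroughputAsync(1000);
        Console.WriteLine($"Request charge for the update: {response.RequestCharge} RU");
    }
}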