Tuesday, 31 December 2019

New in Stream Analytics: Machine Learning, online scaling, custom code, and more

Azure Stream Analytics is a fully managed Platform as a Service (PaaS) offering that supports thousands of mission-critical customer applications powered by real-time insights. Out-of-the-box integration with numerous other Azure services enables developers and data engineers to build high-performance, hot-path data pipelines within minutes. The key tenets of Stream Analytics are ease of use, developer productivity, and enterprise readiness. Today, we're announcing several new features that further strengthen these tenets. Let's take a closer look at these features:

Preview Features


Rollout of these preview features begins November 4th, 2019. Worldwide availability to follow in the weeks after.


Online scaling

In the past, changing the Streaming Units (SUs) allocated to a Stream Analytics job required users to stop the job and restart it. This added overhead and latency, even though it was done without any data loss.

With online scaling capability, users will no longer be required to stop their job if they need to change the SU allocation. Users can increase or decrease the SU capacity of a running job without having to stop it. This builds on the customer promise of long-running mission-critical pipelines that Stream Analytics offers today.


Change SUs on a Stream Analytics job while it is running.

C# custom de-serializers

Azure Stream Analytics has always supported input events in the JSON, CSV, or Avro data formats out of the box. However, many IoT devices are programmed to generate data in other formats that encode structured data more efficiently while remaining extensible.

With this release, developers can leverage the power of Azure Stream Analytics to process data in Protobuf, XML, or any custom format. You can now implement custom de-serializers in C#, which are then used to de-serialize events received by Azure Stream Analytics.

Extensibility with C# custom code

Azure Stream Analytics traditionally offered SQL language for performing transformations and computations over streams of events. Though there are many powerful built-in functions in the currently supported SQL language, there are instances where a SQL-like language doesn't provide enough flexibility or tooling to tackle complex scenarios.

Developers creating Stream Analytics modules in the cloud or on IoT Edge can now write or reuse custom C# functions and invoke them right in the query through User Defined Functions. This enables scenarios such as complex math calculations, importing custom ML models using ML.NET, and programming custom data imputation logic. Full-fidelity authoring experience is made available in Visual Studio for these functions.
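As a sketch of what this looks like in a query, a hypothetical C# UDF (the function and stream names below are illustrative, not part of the announcement) is invoked through the udf prefix like any other user-defined function:

```sql
-- Illustrative only: "ComputeChecksum" stands for a custom C# function
-- added to the job project; Input and Output are placeholder names.
SELECT
    DeviceId,
    udf.ComputeChecksum(Payload) AS PayloadChecksum
INTO
    Output
FROM
    Input
```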

Managed Identity authentication with Power BI

Dynamic dashboarding experience with Power BI is one of the key scenarios that Stream Analytics helps operationalize for thousands of customers worldwide.

Azure Stream Analytics now offers full support for Managed Identity-based authentication with Power BI for the dynamic dashboarding experience. This helps customers align better with their organizational security goals, lets them deploy their hot-path pipelines using Visual Studio CI/CD tooling, and enables long-running jobs, as users will no longer be required to change passwords every 90 days.

While this new feature will be available immediately, customers will continue to have the option of using the Azure Active Directory user-based authentication model.

Stream Analytics on Azure Stack

Azure Stream Analytics is supported on Azure Stack via the IoT Edge runtime. This enables scenarios where customers are prevented by compliance or other constraints from moving data to the cloud, but at the same time wish to leverage Azure technologies to deliver a hybrid data analytics solution at the edge.

Rolling out as a preview option beginning January 2020, this will offer customers the ability to analyze ingress data from Event Hubs or IoT Hub on Azure Stack, and egress the results to blob storage or a SQL database on the same Azure Stack.

Debug query steps in Visual Studio

We've heard a lot of user feedback about how challenging it is to debug the intermediate row sets defined in a WITH statement of an Azure Stream Analytics query. Users can now easily preview an intermediate row set on a data diagram when doing local testing in Azure Stream Analytics tools for Visual Studio. This feature greatly helps users break down their query and inspect the results step by step when fixing the code.
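For instance, in a query like the following (all names here are illustrative), the intermediate row set FilteredReadings can now be previewed on the diagram during local testing:

```sql
-- The WITH step defines an intermediate row set that can now be
-- previewed step by step during local testing in Visual Studio.
WITH FilteredReadings AS (
    SELECT DeviceId, Temperature
    FROM Input
    WHERE Temperature > 75
)
SELECT DeviceId, AVG(Temperature) AS AvgTemperature
INTO Output
FROM FilteredReadings
GROUP BY DeviceId, TumblingWindow(minute, 5)
```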

Local testing with live data in Visual Studio Code

When developing an Azure Stream Analytics job, developers have expressed a need to connect to live input to visualize the results. This is now available in Azure Stream Analytics tools for Visual Studio Code, a lightweight, free, and cross-platform editor. Developers can test their query against live data on their local machine before submitting the job to Azure. Each testing iteration takes two to three seconds on average, resulting in a very efficient development process.


Live Data Testing feature in Visual Studio Code

Private preview for Azure Machine Learning


Real-time scoring with custom Machine Learning models

Azure Stream Analytics now supports high-performance, real-time scoring by leveraging custom pre-trained machine learning models managed by the Azure Machine Learning service and hosted in Azure Kubernetes Service (AKS) or Azure Container Instances (ACI). The workflow requires users to write no code at all.

Users can build custom models using popular Python libraries such as scikit-learn, PyTorch, and TensorFlow, and train them anywhere, including Azure Databricks, Azure Machine Learning Compute, and HDInsight. Once the models are deployed to Azure Kubernetes Service or Azure Container Instances clusters, users can use Azure Stream Analytics to surface all endpoints within the job itself. Users simply navigate to the Functions blade within an Azure Stream Analytics job, pick the Azure Machine Learning function option, and tie it to one of the deployments in the Azure Machine Learning workspace.
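Once tied to a deployment, the Azure Machine Learning function is called from the query like any user-defined function; the function and field names below are illustrative assumptions, not from the announcement:

```sql
-- Illustrative only: "ScoreSentiment" stands for an Azure ML function
-- bound to a model deployment; Input and Output are placeholder names.
SELECT
    Text,
    udf.ScoreSentiment(Text) AS Sentiment
INTO
    Output
FROM
    Input
```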

Advanced configurations, such as the number of parallel requests sent to the Azure Machine Learning endpoint, will be offered to maximize performance.

Sunday, 29 December 2019

Azure Stack HCI now running on HPE Edgeline EL8000


Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure's hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance?

Well, Microsoft and HPE have teamed up to validate the HPE Edgeline EL8000 Converged Edge system for Microsoft's Azure Stack HCI program. Designed specifically for space-constrained environments, the HPE Edgeline EL8000 Converged Edge system has a unique 17-inch-depth form factor that fits into limited infrastructures too small for other x86 systems. The chassis has an 8.7-inch width, which brings additional flexibility for deploying at the deep edge, whether in a telco environment, a mobile vehicle, or a manufacturing floor. This Network Equipment-Building System (NEBS) compliant system delivers secure scalability.

The HPE Edgeline EL8000 Converged Edge system provides:


◉ Traditional x86 compute optimized for edge deployments, far from the traditional data center, without sacrificing compute performance.

◉ Edge-optimized remote system management with wireless capabilities based on Redfish industry standard.

◉ Compact form factor, with short-depth and half-width options.

◉ Rugged, modular form factor for secure scalability and serviceability in edge and hostile environments, including NEBS Level 3 and American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Level 3/4 compliance.

◉ Broad accelerator support for emerging edge artificial intelligence (AI) use cases, for field programmable gate arrays or graphics processing units.

◉ Up to four independent compute nodes, which are cluster-ready with embedded networks.

Modular design providing broad configuration possibilities

The HPE Edgeline EL8000 Converged Edge system offers flexibility of choice for compute density or for input/output expansion. These compact, ruggedized systems offer high-performance capacity to support the use cases that matter most, including media streaming, IoT, AI, and video analytics. The HPE Edgeline EL8000 is a versatile platform that enables edge compute transformation, so that as use case requirements change, the system's flexible and modular architecture can scale to meet them.

Seamless management and security features with HPE Edgeline Chassis Manager

The HPE Edgeline EL8000 Converged Edge system features the HPE Edgeline Chassis Manager, which limits downtime by providing system-level health monitoring and alerts. It increases efficiency and reliability by managing the chassis fan speeds for each installed server blade and by monitoring the health and status of the power supply, and it simplifies firmware upgrade management and implementation.

Microsoft Azure Stack HCI


Azure Stack HCI solutions bring together highly virtualized compute, storage, and networking on industry-standard x86 servers and components. Combining resources in the same cluster makes it easier for you to deploy, manage, and scale. Manage with your choice of command-line automation or Windows Admin Center.

Achieve industry-leading virtual machine performance for your server applications with Hyper-V, the foundational hypervisor technology of the Microsoft cloud, and Storage Spaces Direct technology with built-in support for non-volatile memory express (NVMe), persistent memory, and remote-direct memory access (RDMA) networking.

Help keep apps and data secure with shielded virtual machines, network microsegmentation, and native encryption.

You can take advantage of cloud and on-premises working together by connecting your on-premises hyperconverged infrastructure platform to the public cloud. Your team can start building cloud skills with built-in integration to Azure infrastructure management services, including:

◉ Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).

◉ Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure – with advanced analytics powered by AI.

◉ Cloud Witness, to use Azure as the lightweight tie breaker for cluster quorum.

◉ Azure Backup for offsite data protection and to protect against ransomware.

◉ Azure Update Management for update assessment and update deployments for Windows virtual machines (VMs) running in Azure and on-premises.

◉ Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site virtual private network (VPN.)

◉ Azure File Sync to sync your file server with the cloud.

◉ Azure Arc for Servers to manage role-based access control, governance, and compliance policy from Azure Portal.

By deploying the Microsoft and HPE HCI solution, you can quickly meet your branch office and edge needs with high performance and resiliency while protecting your business assets by enabling the Azure hybrid services built into the Azure Stack HCI branch office and edge solution.

Saturday, 28 December 2019

Advancing Azure Active Directory availability


Our customers trust Azure AD to manage secure access to all their applications and services. For us, this means that every authentication request is a mission critical operation. Given the critical nature and the scale of the service, our identity team’s top priority is the reliability and security of the service. Azure AD is engineered for availability and security using a truly cloud-native, hyper-scale, multi-tenant architecture and our team has a continual program of raising the bar on reliability and security.

Azure AD: Core availability principles


Engineering a service of this scale, complexity, and mission criticality to be highly available in a world where everything we build on can and does fail is a complex task.

Our resilience investments are organized around the set of reliability principles below:


Our availability work adopts a layered defense approach: reduce the possibility of customer-visible failure as much as possible; if a failure does occur, scope down its impact as much as possible; and finally, reduce the time it takes to recover and mitigate the failure as much as possible.

Over the coming weeks and months, we'll dive deeper into how each of these principles is designed and verified in practice, and provide examples of how they work for our customers.

Highly redundant


Azure AD is a global service with multiple levels of internal redundancy and automatic recoverability. Azure AD is deployed in over 30 datacenters around the world leveraging Azure Availability Zones where present. This number is growing rapidly as additional Azure Regions are deployed.

For durability, any piece of data written to Azure AD is replicated to at least 4 and up to 13 datacenters depending on your tenant configuration. Within each datacenter, data is again replicated at least 9 times, both for durability and to scale out capacity to serve authentication load. To illustrate, this means that at any point in time there are at least 36 copies of your directory data available within our service in our smallest region. For durability, writes to Azure AD are not completed until a successful commit to an out-of-region datacenter.

This approach gives us both durability of the data and massive redundancy—multiple network paths and datacenters can serve any given authorization request, and the system automatically and intelligently retries and routes around failures both inside a datacenter and across datacenters.

To validate this, we regularly exercise fault injection and validate the system’s resiliency to failure of the system components Azure AD is built on. This extends all the way to taking out entire datacenters on a regular basis to confirm the system can tolerate the loss of a datacenter with zero customer impact.

No single points of failure (SPOF)


As mentioned, Azure AD itself is architected with multiple levels of internal resilience, but our principle extends even further to have resilience in all our external dependencies. This is expressed in our no single point of failure (SPOF) principle.

Given the criticality of our services, we don't accept SPOFs in critical external systems like the Domain Name System (DNS), content delivery networks (CDNs), or the telco providers that transport our multi-factor authentication (MFA) messages, including SMS and voice. For each of these systems, we use multiple redundant systems configured in a full active-active configuration.

Much of the work on this principle has come to completion over the last calendar year. To illustrate: when a large DNS provider recently had an outage, Azure AD was entirely unaffected because we had an active-active path to an alternate provider.

Elastically scales


Azure AD is already a massive system running on over 300,000 CPU cores, able to rely on the massive scalability of the Azure cloud to dynamically and rapidly scale up to meet any demand. This includes both natural increases in traffic, such as a 9 AM peak in authentications in a given region, and huge surges in new traffic served by Azure AD B2C, which powers some of the world's largest events and frequently sees rushes of millions of new users.

As an added level of resilience, Azure AD over-provisions its capacity and a design point is that the failover of an entire datacenter does not require any additional provisioning of capacity to handle the redistributed load. This gives us the flexibility to know that in an emergency we already have all the capacity we need on hand.

Safe deployment


Safe deployment ensures that changes (code or configuration) progress gradually, from internal automation, to Microsoft-internal self-hosting rings, to production. Within production, we adopt a very gradual ramp-up of the percentage of users exposed to a change, with automated health checks gating progression from one deployment ring to the next. This entire process takes over a week to fully roll out a change across production, and we can at any time rapidly roll back to the last known healthy state.

This system regularly catches potential failures in what we call our ‘early rings’ that are entirely internal to Microsoft and prevents their rollout to rings that would impact customer/production traffic.

Modern verification


To support the health checks that gate safe deployment and give our engineering team insight into the health of the systems, Azure AD emits a massive amount of internal telemetry, metrics, and signals used to monitor the health of our systems. At our scale, this is over 11 petabytes a week of signals feeding our automated health monitoring systems. Those systems in turn trigger alerts to automation as well as to our team of 24/7/365 engineers, who respond to any potential degradation in availability or quality of service (QoS).

Our journey here is to expand that telemetry to provide visibility into not just the health of the services, but also metrics that truly represent the end-to-end health of a given scenario for a given tenant. Our team is already alerting on these metrics internally, and we're evaluating how to expose this per-tenant health data directly to customers in the Azure portal.

Partitioning and fine-grained fault domains


A good analogy for understanding Azure AD is the set of compartments in a submarine, designed so that any one can flood without affecting either the other compartments or the integrity of the entire vessel.

The equivalent for Azure AD is a fault domain: the scale units that serve a set of tenants in a fault domain are architected to be completely isolated from the scale units of other fault domains. These fault domains provide hard isolation of many classes of failures, such that the "blast radius" of a fault is contained within a given fault domain.

Azure AD has up to now consisted of five separate fault domains. Over the last year we have been increasing this number; by next summer it will reach 50 fault domains, and many services, including Azure Multi-Factor Authentication (MFA), are moving to become fully isolated in those same fault domains.

This hard-partitioning work is designed to be a final catch-all that scopes any outage or failure to no more than 1/50, or about 2 percent, of our users. Our objective is to increase this further to hundreds of fault domains in the following year.

A preview of what’s to come


The principles above aim to harden the core Azure AD service. Given the critical nature of Azure AD, we’re not stopping there—future posts will cover new investments we’re making including rolling out in production a second and completely fault-decorrelated identity service that can provide seamless fallback authentication support in the event of a failure in the primary Azure AD service.

Think of this as the equivalent to a backup generator or uninterruptible power supply (UPS) system that can provide coverage and protection in the event the primary power grid is impacted. This system is completely transparent and seamless to end users and is now in production protecting a portion of our critical authentication flows for a set of M365 workloads. We’ll be rapidly expanding its applicability to cover more scenarios and workloads.

Thursday, 26 December 2019

New enhancements for Azure IoT Edge automatic deployments

Since releasing Microsoft Azure IoT Edge, we have seen many customers using IoT Edge automatic deployments to deploy workloads to the edge at scale. IoT Edge automatic deployments handle the heavy lifting of deploying modules to the relevant Azure IoT Edge devices and allow operators to keep a close eye on status to quickly address any problems. Customers love the benefits and have given us feedback on how to make automatic deployments even better through greater flexibility and seamless experiences. Today, we are sharing a set of enhancements to IoT Edge automatic deployments that are a direct result of this feedback. These enhancements include layered deployments, deploying marketplace modules from the Azure portal and other UI updates, and module support for automatic device configurations.

Layered deployments


Layered deployments are a new type of IoT Edge automatic deployment that allows developers and operators to independently deploy subsets of modules. This avoids the need to create an automatic deployment for every combination of modules that may exist across your device fleet. Microsoft Azure IoT Hub evaluates all applicable layered deployments to determine the final set of modules for a given IoT Edge device. Layered deployments have the same basic components as any automatic deployment: they target devices based on tags in the device twins and provide the same functionality around labels, metrics, and status reporting. Layered deployments also have priorities assigned to them, but instead of using the priority to determine which single deployment is applied to a device, the priority determines how multiple deployments are ranked on a device. For example, if two layered deployments have a module or a route with the same name, the one from the higher-priority layered deployment is applied and the lower-priority one is overwritten.
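With the Azure CLI (IoT extension), creating a layered deployment looks much like creating a regular one; the hub name, deployment ID, content file, and target condition below are illustrative assumptions, not values from the announcement:

```shell
# Illustrative sketch: create a layered deployment that adds or overrides
# modules for devices tagged environment='prod'. Requires the azure-iot
# CLI extension; all names here are hypothetical.
az iot edge deployment create \
  --layered \
  --deployment-id add-telemetry-module \
  --hub-name my-iot-hub \
  --content ./telemetry-module.json \
  --target-condition "tags.environment='prod'" \
  --priority 10
```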


This first illustration shows how all modules need to be included in each regular deployment, requiring a separate deployment for each target group.


This second illustration shows how layered deployments allow modules to be deployed independently to each target group, with a lower overall number of deployments.

Revamped UI for IoT Edge automatic deployments


There are updates throughout the IoT Edge automatic deployments UI in the Azure portal. For example, you can now select modules from Microsoft Azure Marketplace from directly within the create deployment experience. The Azure Marketplace features many Azure IoT Edge modules built by Microsoft and partners.


Automatic configuration for module twins


Automatic device management in Azure IoT Hub automates many of the repetitive and complex tasks of managing large device fleets by using automatic device configurations to update and report status on device twin properties. We have heard from many of you that you would like equivalent functionality for configuring module twins, and we are happy to share that this functionality is now available.
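Module twin configurations follow the same pattern as device twin configurations. As a hedged sketch with the Azure CLI (the hub name, file name, and target condition are illustrative; the content JSON is assumed to carry the desired module twin properties):

```shell
# Illustrative sketch: apply desired properties to module twins at scale.
# The configuration JSON is assumed to contain the desired module twin
# properties; all names here are hypothetical.
az iot hub configuration create \
  --config-id module-logging-config \
  --hub-name my-iot-hub \
  --content ./module-twin-content.json \
  --target-condition "from devices.modules where tags.environment='prod'" \
  --priority 5
```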

Tuesday, 24 December 2019

How MTA 98-361 Certification Can Make You More Successful

Software development, a basic element of IT, has undergone significant changes over the years, driven by the ever-growing demand for new apps that make business systems more powerful.


Microsoft introduced a certification and an exam that equip new learners and IT professionals with the skills needed to meet this demand. Passing the Microsoft Software Development Fundamentals 98-361 exam leads to a Microsoft Technology Associate (MTA) certification.

This entry-level exam can be taken by anyone who wants to measure their skills in creating new software and earn the MTA certification by passing it. The exam measures your understanding of software testing, computer storage, webpage development, database management systems, web hosting, data structures, and Windows Services, among others.

MTA 98-361 Exam Objectives

This Microsoft exam is intended to measure the learners’ preparedness and skills in the following areas:
  • Understanding Core Programming (15-20%)
  • Understanding Object-oriented Programming (20-25%)
  • Understanding General Software Development (15-25%)
  • Understanding Web Applications (15-20%)
  • Understanding Desktop Applications 
  • Understanding Databases (15-20%)

Target Audience

  • Anyone who wants to build a career in IT and has no hands-on experience in Windows networking
  • IT professionals who want to build a career in software development

Prerequisites

  • Fundamental knowledge of and experience in C# or Microsoft Visual Basic .NET

Here Are Reasons How MTA 98-361 Certification Can Make You More Successful

In-Demand Skill

Small and large business owners are seeking ways to make their systems run properly and effectively while saving on expenses at the same time. Microsoft's methods of examining its learners are extensive and thorough, which makes the MTA 98-361 a much-needed credential. Software developers are the leading candidates when it comes to taking care of the IT requirements of any business enterprise.

Systems Productiveness

Hiring skilled professionals in any business environment translates into better-performing systems. Professional software developers ensure that the apps and systems they deliver contribute to the success of the business. As an app or web developer, you are responsible for offering these solutions to businesses. When systems are powerful, running costs are lowered, which means more profit for the company.

Great Salary Package

Professional software developers command good rates in the market today. Because the skill is fundamental to the IT requirements of any business, hiring managers choose MTA-certified individuals. When demand for a skill is high, the salary packages offered are also high. You become a professional who not only has a skill that helps businesses expand, but also has a high probability of being hired and well paid.

The MTA 98-361 exam ensures that candidates are well prepared to manage the increasing need for new software that provides lasting solutions to businesses. Individuals who are MTA certified have a greater prospect of being hired compared to counterparts certified by other professional bodies or not certified at all. The exam is available online, putting it within reach of anyone who wishes to achieve this certification regardless of location.

How to Pass Your MTA 98-361 Exam 

The best way to pass your exam is to study thoroughly and work through the exam syllabus step by step. Keep in mind that theory alone is not sufficient; you must also have hands-on experience. Learn from study guides and 98-361 practice tests to prepare yourself thoroughly for the exam.

MTA 98-361 Exam Details

  • Exam Name: Microsoft Technology Associate (MTA) - Software Development Fundamentals
  • Exam Code: 98-361
  • Exam Price: $127 (USD)
  • Duration: 45 mins
  • Number of Questions: 40-60
  • Passing score: 700 / 1000
  • Exam Registration: Pearson VUE

Here Are Proven and Tested Tips to Help You Pass MTA 98-361 Certification Exam

Plan Your Time Well

Plan your time properly, and ensure you allocate sufficient time to study and prepare for your MTA 98-361 exam.

Take Advantage of Study Guides and Practice Tests

You can obtain study materials from the internet. One of them is the book Exam 98-361 MTA Software Development Fundamentals, published by John Wiley & Sons. This book provides the guidance you need to study for and pass your exam. There are also practice tests available online that you can use to measure your preparedness for the official MTA 98-361 exam. Other resources include the MTA Student Study Guide for Exam 98-361 and the Microsoft Virtual Academy. These study guides, practice tests, and resources will give you a good idea of what to expect in the exam.

Participate in an Online Community

There are many online communities created to help candidates prepare for exams. In these communities, candidates and professionals share how they prepared for and passed their MTA Microsoft Software Development Fundamentals 98-361 exam. These shared experiences will help you know what to expect and how to prepare so you can pass your exam.

Conclusion

Certifications look great on your resume. They give you a higher possibility of being selected by hiring managers compared to anyone without any certification. If you are considering moving forward to your career in IT, then the MTA Software Development Fundamentals 98-361 exam is the best place to begin.

It is a certification exam meant for anyone new to the IT industry who is looking to become a software developer, web developer, database administrator, or Windows developer.

Microsoft is a leader in The Forrester Wave™: Streaming Analytics, Q3 2019

Processing Big data in real-time is an operational necessity for many businesses. Azure Stream Analytics is Microsoft’s serverless real-time analytics offering for complex event processing.

We are excited and humbled to announce that Microsoft has been named a Leader in The Forrester Wave™: Streaming Analytics, Q3 2019. Microsoft believes this report truly reflects the market momentum of Azure Stream Analytics, its satisfied customers, a growing partner ecosystem, and the overall strength of the Azure cloud platform.

The Forrester Wave™: Streaming Analytics, Q3 2019

The Forrester Wave™: Streaming Analytics, Q3 2019 report evaluated streaming analytics offerings from 11 different solution providers, and we are honored to share that Forrester has recognized Microsoft as a Leader in this category. Azure Stream Analytics received the highest possible score in 12 different categories, including Ability to execute, Administration, Deployment, Solution Roadmap, Customer adoption, and many more.

The report states, “Microsoft Azure Stream Analytics has strengths in scalability, high availability, deployment, and applications. Azure Stream Analytics is an easy on-ramp for developers who already know SQL. Zero-code integration with over 15 other Azure services makes it easy to try and therefore adopt, making the product the real-time backbone for enterprises needing real-time streaming applications on the Azure cloud. Additionally, through integration with IoT Hub and Azure Functions, it offers seamless interoperability with thousands of devices and business applications.”

Key Differentiators for Azure Stream Analytics


Fully integrated with Azure ecosystem: Build powerful pipelines with few clicks

Whether you have millions of IoT devices streaming data to Azure IoT Hub or have apps sending critical telemetry events to Azure Event Hubs, it only takes a few clicks to connect multiple sources and sinks to create an end-to-end pipeline.


Developer productivity

One of the biggest advantages of Stream Analytics is the simple SQL-based query language with powerful temporal constructs to analyze data in motion. Familiarity with SQL is enough to author powerful queries. Additionally, Azure Stream Analytics supports language extensibility via C# and JavaScript user-defined functions (UDFs) or user-defined aggregates to perform complex calculations as part of a Stream Analytics query.
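For example, a windowed aggregate over a stream takes only a few lines of SQL (the stream and column names below are illustrative):

```sql
-- Count events per device over 30-second tumbling windows,
-- using the event's own timestamp for temporal ordering.
SELECT
    DeviceId,
    COUNT(*) AS EventCount
INTO
    Output
FROM
    Input TIMESTAMP BY EventTime
GROUP BY
    DeviceId,
    TumblingWindow(second, 30)
```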

Analytics prowess

Stream Analytics contains a wide array of analytic capabilities, such as native support for geospatial functions, built-in callouts to custom machine learning (ML) models for real-time scoring, and built-in ML models for anomaly detection and pattern matching, helping developers easily tackle complex scenarios while staying in a familiar context.
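For example, the built-in anomaly detection can be invoked directly from the query language; a sketch (the input name SensorInput and the temperature column are hypothetical, and the function returns a record containing an anomaly score and flag):

```sql
SELECT
    System.Timestamp AS eventTime,
    CAST(temperature AS float) AS temp,
    AnomalyDetection_SpikeAndDip(CAST(temperature AS float), 95, 120, 'spikesanddips')
        OVER (LIMIT DURATION(second, 120)) AS spikeAndDipResult
FROM SensorInput
```

Here 95 is the confidence level, and the model considers the last 120 events within a 120-second sliding window.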

Intelligent edge

Azure Stream Analytics helps bring real-time insights and analytics capabilities closer to where your data originates. Customers can easily enable new scenarios with true hybrid architectures for stream processing and run the same query in the cloud or on the IoT edge.

Best-in-class financially backed SLA by the minute

We understand it is critical for businesses to prevent data loss and ensure business continuity. Stream Analytics guarantees event processing with a 99.9 percent availability service-level agreement (SLA) at the minute level, which is unparalleled in the industry.

Scale instantly

Stream Analytics is a fully managed serverless (PaaS) offering on Azure. There is no infrastructure to worry about, and no servers, virtual machines, or clusters to manage. We do all the heavy lifting for you in the background. You can instantly scale up or out the processing power from one to hundreds of streaming units for any job.

Mission critical

Stream Analytics guarantees exactly-once event processing and at-least-once delivery of events. It has built-in recovery capabilities in case the delivery of an event fails, so you never have to worry about your events getting dropped.

Sunday, 22 December 2019

Azure Container Registry: preview of repository-scoped permissions

The Azure Container Registry (ACR) team is rolling out the preview of repository scoped role-based access control (RBAC) permissions, our top-voted item on UserVoice. In this release, we have a command-line interface (CLI) experience for you to try and provide feedback.

ACR already supports several authentication options using identities that have role-based access to an entire registry. However, for multi-team scenarios, you might want to consolidate multiple teams into a single registry, limiting each team’s access to their specific repositories. Repository scoped RBAC now enables this functionality.

Here are some of the scenarios where repository scoped permissions might come in handy:

◉ Limit repository access to specific user groups within your organization. For example, provide write access to developers who build images that target specific repositories, and read access to teams that deploy from those repositories.

◉ Provide millions of IoT devices with individual access to pull images from specific repositories.

◉ Provide an external organization with permissions to specific repositories.

In this release, we have introduced tokens as a mechanism to implement repository-scoped RBAC permissions. A token is a credential used to authenticate with the registry. It can be backed by a username and password or by Azure Active Directory (AAD) objects such as users, service principals, and managed identities. For this release, we have provided tokens backed by username and password; future releases will support tokens backed by AAD objects. See Figure 1.

*Support for Azure Active Directory (AAD) backed token will be available in a future release.
Figure 1

Figure 2 below describes the relationship between tokens and scope-maps.

◉ A token is a credential used to authenticate with the registry. It has a permitted set of actions scoped to one or more repositories. Once you have generated a token, you can use it to authenticate with your registry. For example, you can perform a docker login with the following command, piping the token's password to stdin (here the token is named mytoken and its password is stored in the TOKEN_PWD environment variable):

echo $TOKEN_PWD | docker login --username mytoken --password-stdin myregistry.azurecr.io

◉ A scope map is a registry object that groups repository permissions you apply to a token. It provides a graph of access to one or more repositories. You can apply scoped repository permissions to a token or reapply them to other tokens. If you don't apply a scope map when creating a token, a scope map is automatically created for you, to save the permission settings.

A scope map helps you configure multiple users with identical access to a set of repositories.

Figure 2
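The flow can be sketched with the preview CLI; the registry, repository, and object names below are hypothetical, and the exact command surface may evolve during the preview:

```shell
# Create a scope map granting read and write actions on one repository.
az acr scope-map create --name MyScopeMap --registry myregistry \
  --repository samples/hello-world content/read content/write \
  --description "Read/write on samples/hello-world"

# Create a token bound to that scope map; the command outputs
# generated passwords to use with docker login.
az acr token create --name MyToken --registry myregistry \
  --scope-map MyScopeMap
```

Because the scope map is a separate object, the same permission set can be reused across many tokens, one per user or device.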

As customers use containers and other artifacts for their IoT deployments, the number of devices can grow into the millions. To support the scale of IoT, Azure Container Registry has implemented repository-based RBAC using tokens (Figure 3). Tokens are not a replacement for service principals or managed identities; they are an additional option that provides scalability for IoT deployment scenarios.

This article shows how to create a token with permissions restricted to a specific repository within a registry. With the introduction of token-based repository permissions, you can now provide users or services with scoped and time-limited access to repositories without requiring an Azure Active Directory identity. In the future, we will support tokens backed by Azure Active Directory objects. 

Figure 3

Saturday, 21 December 2019

Azure Lighthouse: The managed service provider perspective

Azure Lighthouse became generally available in July this year, and we have seen a tremendous response from Azure managed service provider communities who are excited about the scale and precision of management that the Azure platform now enables with cross-tenant management. Similarly, customers are empowered to architect precise, just-enough access levels for service providers to their Azure environments. Both customers and partners can decide on the precise scope of the projection.

Azure Lighthouse enables partners to manage multiple customer tenants from within a single control plane: their own environment. This enables consistent application of management and automation across hundreds of customers, along with monitoring and analytics to a degree that was unavailable before. The capability works across Azure services (those that are Azure Resource Manager enabled) and across licensing models. Context switching is a thing of the past.

In this article, we will answer some of the most commonly asked questions:

◉ How can MSPs perform daily administration tasks across different customers in their Azure tenant from a single control plane?

◉ How can MSPs secure their intellectual property in the form of code?

Let us deep dive into a few scenarios from the perspective of a managed service provider.

Azure Automation


Your intellectual property is only yours. Using Azure delegated resource management, service providers are no longer required to create Microsoft Azure Automation runbooks under customers' subscriptions or keep their IP in the form of runbooks in someone else's subscription. Automation runbooks can now be stored in a service provider's subscription while their effects are reflected in the customer's subscription. All you need to do is ensure the Automation account's service principal has the required delegated built-in role-based access control (RBAC) role to perform the Automation tasks. Service providers can create Azure Monitor action groups in customers' subscriptions that trigger Azure Automation runbooks residing in the service provider's subscription.


Azure Monitor alerts


Azure Lighthouse allows you to monitor alerts across different tenants under the same roof, without the hassle of storing the logs ingested by different customers' resources in a centralized Log Analytics workspace. This helps your customers stay compliant by allowing them to keep their application logs under their own subscriptions, while empowering you to have a helicopter view across all customers.


Azure Resource Graph Explorer


With Azure delegated resource management, you can query Azure resources from Azure Resource Graph Explorer across tenants. Imagine a scenario where your boss has asked you for a CSV file that would list the existing Azure Virtual Machines across all the customers’ tenants. The results of the Azure Resource Graph Explorer query now include the tenant ID, which makes it easier for you to identify which Virtual Machine belongs to which customer.
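A Resource Graph query for that scenario might look like the following Kusto sketch (the projected columns are one reasonable choice, not the only one):

```
Resources
| where type =~ "microsoft.compute/virtualmachines"
| project name, resourceGroup, subscriptionId, tenantId
```

Exporting the results to CSV from the portal, or via the CLI's resource-graph extension, then yields the per-customer inventory.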


Azure Security Center


Azure Lighthouse provides you with cross-tenant visibility into your current security state. You can now monitor compliance with security policies, act on security recommendations, monitor the secure score, detect threats, execute file integrity monitoring (FIM), and more, across tenants.


Azure Virtual Machines


Service providers can perform post-deployment tasks on Azure Virtual Machines across different customers' tenants using Virtual Machine extensions, the Virtual Machine Serial Console, PowerShell commands via the Run Command option, and more in the Azure portal. Most administrative tasks on Azure Virtual Machines across tenants can now be performed quickly, since there is less need to take remote desktop protocol (RDP) access to the Virtual Machines. This also solves a big challenge: admins no longer need to sign in to different Azure subscriptions in multiple browser tabs just to get to a Virtual Machine's resource menu.


Managing user access


Using Azure delegated resource management, MSPs no longer need to create administrator accounts (including contributor, security administrator, backup administrator, and more) in their customers' tenants. This allows them to manage the lifecycle of delegated administrators right within their own Microsoft Azure Active Directory (AD) tenant. Moreover, MSPs can add user accounts to groups in their Azure AD tenant, while customers make sure those groups have the required access to manage their resources. To revoke access when an employee leaves the MSP's organization, the account can simply be removed from the specific group to which the access has been delegated.

Added advantages for Cloud Solution Providers


Cloud Solution Providers (CSPs) can now save on administration time. Once you've set up Azure delegated resource management for your users, there is no longer any need for them to log in to the Partner Center (navigating through Customers > Contoso > All Resources) to administer customers' Azure resources.

Also, Azure delegated resource management happens outside the boundaries of the Partner Center portal; the delegated user access is managed directly in Azure Active Directory. This means subscription and resource administrators at Cloud Solution Providers are no longer required to have the 'admin agent' role in the Partner Center. Therefore, Cloud Solution Providers can now decide which users in their Azure Active Directory tenant will have access to which customer, and to what extent.

Friday, 20 December 2019

Untold Reasons to Achieve MCSA 70-483 Programming in C# Certification


Microsoft is one of the most prominent organizations in the IT industry. It is globally esteemed for excellent inventions such as Microsoft Azure, Outlook, OneDrive, SharePoint, and Skype for Business, to name just a few of its widely used products. Most of Microsoft's high-tech applications and innovations are intended to streamline the complexities of business and communication.

To support work efficiency, Microsoft confirms its professionals' expertise through a definite certification process. Some of the certifications that interested professionals can earn include the following:
  • MCSA Windows Server 2016
  • MCSA Universal Windows Platform
  • MCSA SQL 2016 Database Development
  • MCSD App Builder
  • MCSA Web Applications

Important Details about Microsoft 70-483 Exam

Keep in mind that to earn the valid MCSA: Web Applications certification, applicants will have to take and pass either the 70-480 or the 70-483 exam. The second part of the Microsoft Certified Solutions Associate certification path is passing the 70-486 exam.

Still, this article is devoted to the 70-483 exam, which is the core of the analysis below.
MCSA 70-483 is a certification exam that focuses on programming with C#. It is intended for applicants who already have at least one year of experience programming business applications for both software and hardware platforms using the C# programming language. The exam comprises 40-60 questions that must be completed within two hours. Professionals taking the Microsoft 70-483 exam are expected to pay a registration fee of $165.

The exam covers the following topics:
  • Manage program flow 25-30%
  • Create and use types 25-30%
  • Debug applications and implement security 25-30%
  • Implement data access 25-30%

Reasons to Achieve MCSA 70-483 Programming in C# Certification


The 70-483 Programming in C# certification exam comes with a lot of benefits for one's professional advancement. Most of the opportunities in the world of technology demand that aspiring professionals have evidence of their industry skills. This way, Microsoft 70-483 becomes one of the exams that is going to take your C# programmer career to new heights. Here are some of the benefits that come with passing this promising MCSA exam.

Sharpen Your Programming Skills with Programming in C# Certification

Software engineering and web application development are all about the expertise to implement solutions to real-world problems. The 70-483 exam covers C# programming, an object-oriented programming (OOP) language that models real objects to deliver useful solutions. Therefore, the exam's concepts equip you with the skills to successfully address advanced problems at work.

Win You Advanced Jobs or Projects

Once you have the skills to build the best applications using C#, you will be recognized for developing stunning applications. Consider a case study: suppose a C# professional creates a website for a business organization. It is common to win many more projects afterward, as the site introduces their work to the whole world. Furthermore, by being certified in Web Applications, it is easier to get selected for premium projects, demonstrating your mastery of C# programming.

Top-End Opportunities

Since C# is a popular and broadly used programming language, many projects are executed using it. Employers will be keen to give you the opportunity to spearhead such projects, enabling you to earn extra income. The Microsoft 70-483 exam thus qualifies you to lead a team of developers.

Getting Promotion

In every business setup, employers tend to promote professionals who show interest in their careers. Passing the Microsoft 70-483 exam shows your interest in delivering highly scalable business solutions using the skills gained. You will also get opportunities to represent your employer at occasions and events meant to broaden the business's use of technology, and as a result, a chance to expand your knowledge of web application development.

Chance to Attend Microsoft Events and Programs

Microsoft often arranges events and seminars focused on enhancing the performance of existing technologies or introducing new techniques to the industry. Being Microsoft certified is a significant opportunity, enabling you to join a group of certified elites with the ultimate skills to satisfy the highest business demands.

Preparation Options for Microsoft 70-483 Certification

To ace Microsoft exams, you will need precise guidelines for preparation.
First of all, go through all the vendor options and study their content, as Microsoft specialists have thoroughly developed them for you. These include instructor-led learning sessions, community preparation backed by Microsoft Press Store books, training videos, and practice tests.
Related Article: 7 Golden Keys to Open Career Door of C# Developer with Programming in C# Certification

With this in mind, another secret of effective preparation is finding the best practice tests from renowned websites. Practice tests give aspiring programmers constantly updated and accurate practice questions.


Summary
Now, the Microsoft 70-483 Programming in C# exam should become your first step toward establishing a career in the web applications field. Make sure to use all the appropriate vendor-offered preparation options to advance your career by taking the extremely beneficial 70-483 exam.

In this regard, the best choice would be taking the practice tests, learning every tip and trick, and ultimately delivering effective solutions in a business environment by gaining the MCSA: Web Applications badge!

Thursday, 19 December 2019

New features in Azure Monitor Metrics Explorer based on your feedback

A few months ago, we posted a survey to gather feedback on your experience with metrics in the Azure portal. Thank you for participating and for providing valuable suggestions!

We want to share some of the insights we gained from the survey and highlight some of the features that we delivered based on your feedback. These features include:

◉ Resource picker that supports multi-resource scoping.

◉ Splitting by dimension now allows limiting the number of time series and specifying the sort order.

◉ Charts can show a large number of datapoints.

◉ Improved chart legends.

Resource picker with multi-resource scoping


One of the key pieces of feedback we heard was about the resource picker panel. You said that being able to select only one resource at a time when choosing a scope is too limiting. Now you can select multiple resources across resource groups in a subscription.


Ability to limit the number of timeseries and change sort order when splitting by dimension


Many of you asked for the ability to configure the sort order based on dimension values, and for control over the maximum number of time series shown on the chart. Those who asked explained that for some metrics, such as available memory and remaining disk space, they want to see the time series with the smallest values, while for other metrics, such as CPU utilization or count of failures, showing the time series with the highest values makes more sense. To address your feedback, we expanded the dimension splitter selector with Sort order and Limit count inputs.


Charts that show a large number of datapoints


Charts with multiple time series over a long period, especially with a short time grain, are based on queries that return lots of datapoints. Unfortunately, processing too many datapoints may slow down chart interactions. To ensure the best performance, we used to apply a hard limit on the number of datapoints per chart, prompting users to lower the time range or increase the time grain when the query returned too much data.

Some of you found the old experience frustrating. You said that occasionally you might want to plot charts with lots of datapoints, regardless of performance. Based on your suggestions, we changed the way we handle the limit. Instead of blocking chart rendering, we now display a message warning that the metrics query will return a lot of data, but let you proceed anyway (with a friendly reminder that you might need to wait longer for the chart to display).


High-density charts from lots of datapoints can be useful to visualize the outliers, as shown in this example:


Improved chart legend


A small but useful improvement was made based on your feedback that the chart legends often wouldn’t fit on the chart, making it hard to interpret the data. This was almost always happening with the charts pinned to dashboards and rendered in the tight space of dashboard tiles, or on screens that have a smaller resolution. To solve the problem, we now let you scroll the legend until you find the data you need:


Feedback


Let us know how we're doing and what more you'd like to see. Please stay tuned for more information on these and other new features in the coming months. We are continuously addressing pain points and making improvements based on your input.

If you have any questions or comments before our next survey, please use the feedback button on the Metrics blade. Don’t feel shy about giving us a shout out if you like a new feature or are excited about the direction we’re headed. Smiles are just as important in influencing our plans as frowns.
