Saturday 30 March 2019

Umanis lifts the hood on their AI implementation methodology

Given the ever-increasing speed of change in technology, along with the variety of sectors and industries Umanis works in, they focused on building a methodology that could be standardized across AI implementations from project to project. This methodology follows an iterative cycle: assimilate, learn, and act, with the goal of adding value with each iteration.

The Azure platform acts as an enabler of this methodology as seen in the image below.

[Figure: the Azure platform mapped to the assimilate, learn, and act phases]

In most data and artificial intelligence (AI) projects implemented at Umanis, several trends are gaining momentum and are likely to intensify in 2019:

◈ More unstructured, big, and real-time data.
◈ An increased need for fast, reliable AI solutions that can scale.
◈ Increasing expectations from customers.

In this blog post, we will explain how you can address these kinds of projects, and how Umanis maps their approach to the Azure offering to deliver solutions that are easy to use, operationalize, and maintain.

The 3 phases of the AI implementation methodology


1. Assimilate


In this initial phase, you can be hit by anything, from the good to the big, bad, and ugly: databases, text, logs, telemetry, images, videos, social networks, and more are flowing in. The challenge is to make sense of everything so that you can serve the next phase (Learn) successfully. By assimilating, we mean:

◈ Ingest: The performance of an algorithm depends on the quality of the data. We consider “ingesting” to be checking the quality of the data, the quality of the transmission, and building the pipelines to feed the subsequent parts.

◈ Store: Since the data will be used by highly demanding algorithms (I/O, processing power) that will mix data from various sources, you need to store the data in the most efficient way for future access by algorithms or data visualizations.

◈ Structure: Finally, you’ll need to prepare the data for the algorithms’ consumption and perform as many transformation, preprocessing, and cleaning tasks as you can up front to speed up the data scientists’ activities and the algorithms.
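
To make these three steps concrete, here is a minimal sketch in PySpark, the kind of code that could run on Azure Databricks. The storage paths, column names, and quality rules are hypothetical placeholders for illustration, not Umanis’ actual implementation.

# Hypothetical sketch: ingest raw telemetry, apply basic quality checks,
# and store a cleaned, partitioned copy for downstream algorithms.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("assimilate-sketch").getOrCreate()

# Ingest: read raw JSON events (path is a placeholder).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/telemetry/")

# Quality checks: drop rows missing key fields and remove duplicates.
cleaned = (
    raw.filter(F.col("device_id").isNotNull() & F.col("event_time").isNotNull())
       .dropDuplicates(["device_id", "event_time"])
)

# Structure: normalize types and derive a partition column.
structured = (
    cleaned.withColumn("event_time", F.to_timestamp("event_time"))
           .withColumn("event_date", F.to_date("event_time"))
)

# Store: write in a columnar format, partitioned for efficient access.
structured.write.mode("overwrite").partitionBy("event_date").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/telemetry/"
)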

2. Learn


This is the heart of any AI project: Creating, deploying, and managing models.

◈ Create: Data scientists use available data to design algorithms, train their models, and compare the results. There are two key points to this:

1. Don’t make them wait for results! Data scientists are rare resources and their time is precious.
2. Allow any language or combination of languages. From that perspective, Azure Databricks is a great solution, as it addresses this natively by letting different languages be mixed within a single notebook.

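To illustrate the Create step, here is a minimal sketch of training and comparing two candidate models with scikit-learn. The synthetic dataset and the choice of models are assumptions made for the example; in practice, the curated data produced in the Assimilate phase would be loaded instead.

# Hypothetical sketch: train two candidate models and compare them with
# cross-validation so data scientists get comparable results quickly.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data stands in for the curated dataset from the Assimilate phase.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(),
}

# Compare candidates on the same metric before choosing one to deploy.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")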

◈ Use: Once algorithms are deployed as APIs and consumed, the need for parallelization goes up. Meeting SLAs and testing the performance of the sending, processing, and receiving pipeline become crucial.
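
Below is a minimal sketch of the kind of load test that helps validate such an SLA: it fires concurrent requests at a scoring API and reports a latency percentile. The endpoint URL, payload, and concurrency level are hypothetical placeholders, not a real service.

# Hypothetical sketch: send concurrent requests to a scoring endpoint
# and measure latency against an SLA target.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

SCORING_URL = "https://example-scoring-endpoint.azurewebsites.net/score"  # placeholder
PAYLOAD = {"features": [0.1, 0.2, 0.3]}  # placeholder input

def time_one_call(_):
    start = time.perf_counter()
    requests.post(SCORING_URL, json=PAYLOAD, timeout=5)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(time_one_call, range(200)))

p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
print(f"p95 latency: {p95 * 1000:.0f} ms")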

◈ Refine: Refining the quality of algorithms ensures reliable results over time. The easy part of this activity is automatic re-training on a regular basis. The less obvious part is what we call the “human in the loop” activity: in short, a Power BI report shows the results of predictions, a human quickly re-classifies them as needed, and the machine uses this human expertise to get better at its task.
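
A minimal sketch of that feedback loop is shown below: human corrections (for example, exported from the review report) override the model’s labels before the next re-training run. The file names and column names are assumptions made for the example.

# Hypothetical sketch: fold human corrections back into the labeled data
# before the next scheduled re-training run.
import pandas as pd

# Model predictions and reviewer corrections (file and column names are placeholders).
predictions = pd.read_csv("predictions.csv")        # columns: id, features..., predicted_label
corrections = pd.read_csv("human_corrections.csv")  # columns: id, corrected_label

# Wherever a human provided a label, it takes precedence over the prediction.
merged = predictions.merge(corrections, on="id", how="left")
merged["label"] = merged["corrected_label"].fillna(merged["predicted_label"])

# The resulting labeled set feeds the next re-training job.
merged.drop(columns=["predicted_label", "corrected_label"]).to_csv(
    "training_data_next_run.csv", index=False
)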

3. Act


All of the above phases are useless unless you actually make good use of the algorithm’s added value.

◈ Inform: Any mistake in code, misunderstanding of requirements, or bug can be devastating, as first user impressions are crucial. Therefore, instead of a “big bang” of visualizations, start very small, iterate very quickly, and bring a few key users on board to secure adoption before widening the audience.

◈ Connect: Systems that use the information produced by algorithms need to be plugged in. This is called RPA, IPA, or automation in general, and the architecture can vary greatly from project to project. Don’t overlook the need for human monitoring of this activity: consider the impact of the worst answer an algorithm could give, and you will get a good feel for the need for human supervision (a minimal routing sketch follows this list).

◈ Dialog: When dealing with human interaction, so much comes into play that, to be successful, the scope of the interaction needs to be narrowed down to the actions that really add value and are not trivial, that is, actions that are not easily possible via classic interfaces.
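
To ground the human-supervision point from the Connect step, here is a minimal sketch of a routing rule that lets high-confidence predictions trigger automation while sending low-confidence ones to a human review queue. The threshold and record format are assumptions for the example, not a prescribed design.

# Hypothetical sketch: only automate actions for high-confidence predictions;
# everything else goes to a human review queue.
REVIEW_THRESHOLD = 0.90  # placeholder value, tuned per project

predictions = [
    {"id": 1, "action": "approve_refund", "confidence": 0.97},
    {"id": 2, "action": "approve_refund", "confidence": 0.62},
]

auto_queue = [p for p in predictions if p["confidence"] >= REVIEW_THRESHOLD]
review_queue = [p for p in predictions if p["confidence"] < REVIEW_THRESHOLD]

for item in auto_queue:
    print(f"Automating {item['action']} for record {item['id']}")
for item in review_queue:
    print(f"Sending record {item['id']} to human review")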
