Creating an advanced conversational system is now a simple task with the powerful tools integrated into Microsoft’s Language Understanding Service (LUIS) and Bot Framework. LUIS brings together cutting-edge speech, machine translation, and text analytics on the most enterprise-ready platform for creating conversational systems. In addition to these features, LUIS is currently GDPR, HIPAA, and ISO compliant, enabling it to deliver exceptional service across global markets.
Talk or text?
Bots and conversational AI systems are quickly becoming ubiquitous, enabling natural interactions with users. Speech is one of the most natural input forms for a conversational system, which makes integrating speech recognition with Language Understanding essential. Individually, speech recognition and language understanding are among the most difficult problems in cognitive computing. Introducing the context of Language Understanding improves the quality of speech recognition: through intent-based speech priming, the context of an utterance is interpreted using the language model, improving the performance of both speech recognition and language understanding. Intent-based speech priming uses the utterances and entity tags in your LUIS models to improve accuracy and relevance when converting audio to text. Incorrectly recognized spoken phrases or entities can be rectified by adding the associated utterance to the LUIS model or by correctly labeling the entity.
In this release, we have simplified the process of integrating speech priming into LUIS. You no longer need to use multiple keys or interact through other middleware, and this more streamlined integration also reduces the latency your users experience when using speech as an input to your conversational system. All you need to do is enable speech priming in the publish settings of your LUIS application; speech priming is invoked with the same subscription key used in LUIS and handed to the speech APIs seamlessly.
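For reference, here is a minimal TypeScript sketch of querying a published LUIS app over the documented v2.0 REST endpoint; the region, app ID, and key are placeholders you would fill in. With speech priming enabled in the publish settings, audio input is handled under this same subscription key, so no additional key is required.

```typescript
// Minimal sketch: querying a published LUIS app (v2.0 endpoint).
// Region, app ID, and subscription key below are placeholders.
const region = "westus";
const appId = "<your-app-id>";
const subscriptionKey = "<your-subscription-key>";

async function queryLuis(utterance: string): Promise<any> {
  const url =
    `https://${region}.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}` +
    `?subscription-key=${subscriptionKey}&q=${encodeURIComponent(utterance)}`;
  const response = await fetch(url); // global fetch (Node 18+)
  return response.json();
}
```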
Text Analytics: Understand your text?
LUIS continues to bring together different technologies to help you understand your users. It already includes the power of Bing Spell Check, and now we are adding functionality from the Text Analytics Cognitive Service. Enabling sentiment detection on utterances by integrating text analytics into your LUIS model is a simple configuration. Through this integration, your bot can tell you whether your customer is happy or sad. Text analytics also enables the detection of key phrases within utterances without requiring labeling or training. These advanced natural language processing tools enable better, more personalized interactions with your customers.
The JSON object returns the sentiment of the utterance as a value from 0 to 1, where values closer to 1 are more positive and values closer to 0 are more negative. Additionally, adding the keyPhrase prebuilt entity enables identifying key phrases in the returned object.
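For illustration, the shape of such a response might look like the sketch below. The query, intent name, and scores are invented for the example; sentimentAnalysis appears when sentiment detection is enabled, and key phrases surface as builtin.keyPhrase entities.

```typescript
// Illustrative sketch of a LUIS response with sentiment analysis enabled and
// the keyPhrase prebuilt entity added. Values and intent name are invented.
const exampleResponse = {
  query: "I loved the fast delivery of my new phone",
  topScoringIntent: { intent: "Shopping.TrackOrder", score: 0.92 },
  entities: [
    {
      entity: "fast delivery",
      type: "builtin.keyPhrase", // from the keyPhrase prebuilt entity
      startIndex: 12,
      endIndex: 24
    }
  ],
  // Closer to 1 is more positive; closer to 0 is more negative.
  sentimentAnalysis: { label: "positive", score: 0.97 }
};
```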
Create a Global Bot, without being "lost in translation"
Are you worried about the effort it would take to design a bot that speaks multiple languages? With the integration of Machine Translation middleware into the Bot Framework, you don't need to worry any more. Using a bot created in one language across 60+ languages takes only a few lines of code, making it much simpler to build and improve models. With personalization and customization included in the middleware, enabling a global bot is a simple task. Combining the translation middleware with LUIS and QnA Maker is simple: the utterance is passed on after translation, and the middleware also specifies whether the response should be translated back to the user's language or returned in the bot's native language.
The translation middleware also includes the ability to identify patterns that shouldn’t be translated to the target language (such as names of locations or entities that are meaningful in their own terms). For example, if the user says “My name is …” in a language that isn’t the bot’s native one, you want to avoid translating the name, which a pattern in every language makes possible. The sketch below illustrates this with a pattern for French.
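Since the middleware API itself is in preview, this is only a minimal TypeScript sketch of the no-translate idea rather than the actual middleware API: a per-language list of regexes masks the protected span before translation and restores it afterwards. The translate callback stands in for whatever translation call you use.

```typescript
// Sketch of the no-translate pattern idea (not the actual preview middleware API).
// The French pattern protects the name in "mon nom est ..." ("my name is ...").
const noTranslatePatterns: { [lang: string]: RegExp[] } = {
  fr: [/mon nom est (.+)/i]
};

async function translatePreservingPatterns(
  text: string,
  fromLang: string,
  translate: (t: string) => Promise<string> // your translation call
): Promise<string> {
  const protectedSpans: string[] = [];
  let masked = text;
  for (const re of noTranslatePatterns[fromLang] || []) {
    masked = masked.replace(re, (match, captured: string) => {
      protectedSpans.push(captured);
      // Swap the captured span for a placeholder the translator leaves alone.
      return match.replace(captured, `NOTRANSLATE${protectedSpans.length - 1}`);
    });
  }
  let translated = await translate(masked);
  // Restore the protected spans in the translated output.
  protectedSpans.forEach((span, i) => {
    translated = translated.replace(`NOTRANSLATE${i}`, span);
  });
  return translated;
}
```

With this in place, “mon nom est Jean Dupont” translates the surrounding sentence while leaving the hypothetical name “Jean Dupont” intact.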
The middleware also includes a localization converter for currencies and dates across distinct cultures, added via LocaleConverterMiddleware. The Machine Translation middleware is currently released as a preview with Bot Framework SDK V4.
Generalizing the Model using Regex and Pattern features
In this release, LUIS is introducing two features that enable an improved language understanding of entities and intents. Regex entities allow the identification of an entity in an utterance based on a regular expression. For example, a flight number consists of two or three characters followed by four digits, so a Delta Airlines flight number could be expressed by the regular expression DL[0-9]{4}. Defining this regular expression entity allows LUIS to extract matching entities from an utterance.
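To make the behavior concrete, here is a standalone TypeScript sketch of the kind of match such a regex entity performs; LUIS applies the expression server-side once the entity is defined, and the utterance here is invented.

```typescript
// The Delta flight-number expression from the text, applied to a made-up utterance.
const deltaFlight = /DL[0-9]{4}/;
const utterance = "Is flight DL1234 on time?";
const match = utterance.match(deltaFlight);
console.log(match && match[0]); // -> "DL1234"
```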
Patterns, on the other hand, allow developers to define intents that can be represented effectively without the need to provide extensive utterances. This is especially effective for forms that capture a wide variety of common ways of expressing an utterance. Consider, for example, a shopping application: the pattern “add the {Shopping.Item} to my {Shopping.Cart}” is a common way of expressing the intent Shopping.AddToCart.
Patterns are especially useful when there are similarities between utterances that reflect different intents. Take, for example, the utterances in a human resources domain “Who does Mike Jones report to?” and “Who reports to Mike Jones?”. The two statements contain the same tokens yet reflect different intents. This could be captured either by introducing many utterances or by simply expressing these common utterances through their respective patterns “Who does {Employee} report to?” and “Who reports to {Employee}?”. Introducing these patterns alongside the utterances allows LUIS to identify the most suitable intent for an utterance. Moreover, patterns can also capture the distinct roles of entities: the pattern “Book a ticket from {Location:origin} to {Location:destination}” in a flight booking application, for example, captures the distinct roles of the two locations included in the utterance.
Additionally, patterns can encompass entities of variable length, represented as Pattern.any entities. Entities are detected first, prior to the matching of the pattern; in turn, the pattern “Where is the {FormName} and who needs to sign it after I read it?” will match form names that span multiple tokens.
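As a rough illustration of why such templates disambiguate intents, the sketch below converts a pattern into a regular expression in which each {Entity} placeholder matches one or more tokens, in the spirit of Pattern.any. LUIS compiles and applies patterns server-side (and also supports role syntax like {Location:origin}, not handled here), so this only models the idea.

```typescript
// Turn a pattern template into a regex; {Entity} placeholders become named
// capture groups that can span multiple tokens (Pattern.any style).
// This only models the idea; LUIS applies patterns server-side.
function patternToRegex(pattern: string): RegExp {
  // Escape regex metacharacters (braces stay literal so placeholders survive).
  const escaped = pattern.replace(/[.*+?^$()[\]\\|]/g, "\\$&");
  const body = escaped.replace(/\{(\w+)\}/g, (_m, name) => `(?<${name}>.+?)`);
  return new RegExp(`^${body}$`, "i");
}

const reportsTo = patternToRegex("Who does {Employee} report to?");
const m = "Who does Mike Jones report to?".match(reportsTo);
console.log(m && m.groups && m.groups.Employee); // -> "Mike Jones"
```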
Involve your personal data scientist to help improve your model
The tools provided in Cognitive Services enable developers without a machine learning (ML) background to develop conversational systems. Once the bot is built, developers are faced with multiple options to further improve their models. Some of these options are rooted in ML, so developers with limited ML experience might not fully explore them.
In this release of LUIS and Bot Framework, we have taken our goal of democratizing ML further by incorporating tools that provide personalized data science guidance to developers on their existing applications. This includes identifying the areas where the current model falls short and providing suggestions to help improve the trained model. It also allows for automatically generating different architectures of multiple conversational models, including LUIS and QnA Maker, through the dispatch tool: this might mean creating a hierarchical architecture with a dispatching model, or consolidating multiple models into one LUIS model. Collectively, these tools use LUIS to realize the different architectures and dissect the models to provide the most suitable guidance to developers. The dispatch tool is currently released in preview with Bot Framework SDK V4.
These are some of the highlights of the features that have been introduced in LUIS and Bot Framework. By integrating these different tools, and through compliance with GDPR, HIPAA, and ISO, LUIS and Bot Framework distinguish themselves as the most enterprise-ready platform. These new additions put understanding customers and reaching new markets and users just a few lines of code away, and they make bot interactions more natural for your users.