
Neural Networks for Spoken Language Understanding (SLU)

February 28, 2018

Yesterday, our CTO, Andrey Ryabov, shared our learnings from developing a Neural Network for Spoken Language Understanding (SLU) with students at Stanford University, as part of the AI course on Neural Networks and Deep Learning. Here is a brief overview of the topics covered.

Spoken language is quite different from written language in the following ways:

  1. When speaking, people don’t always follow grammar rules or use punctuation, and they often split their sentences.
  2. Automatic Speech Recognition (ASR) introduces errors.
  3. Users tend to use more anaphora (e.g., pronouns that refer back to something mentioned earlier).
  4. A writer can go back and edit a sentence, but a speaker cannot; corrections are appended to the end of the utterance.

These characteristics of spoken language have to be considered when developing Natural Language systems: many classical NLP models trained on written-language datasets don’t perform well on spoken language.

How does one develop a Voice AI Platform that converts speech to meaning and offers human-like conversations? Here are some of the techniques we shared on Spoken Language Understanding.

  1. Develop Word Vectors for sentences using a classical NLP training set.
  2. Augment the LSTM to use word positions and context information.
  3. Use Attention so that the important words contribute more (a combined sketch of points 1 through 3 follows this list).
  4. Use an Augmented Dataset and Transfer Learning to better train the Neural Network (see the second sketch below).
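
To make points 1 through 3 concrete, here is a minimal sketch in PyTorch. This is not Alan’s actual implementation, and every layer name and size below is an illustrative assumption: pretrained word vectors and word-position features feed a bidirectional LSTM, and an attention layer weights the tokens that matter most before intent classification.

```python
# Illustrative sketch only -- not Alan's production architecture.
import torch
import torch.nn as nn

class SLUModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, pos_dim=16,
                 hidden_dim=128, max_len=64, num_intents=10):
        super().__init__()
        # Point 1: word vectors, typically initialized from a classical
        # NLP training set (pretrained embeddings) and fine-tuned.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # Point 2: augment the LSTM input with word-position information.
        self.pos_embed = nn.Embedding(max_len, pos_dim)
        self.lstm = nn.LSTM(embed_dim + pos_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Point 3: attention scores decide how much each word contributes.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_intents)

    def forward(self, token_ids):                 # (batch, seq_len)
        batch, seq_len = token_ids.shape
        positions = torch.arange(seq_len, device=token_ids.device)
        positions = positions.unsqueeze(0).expand(batch, seq_len)
        x = torch.cat([self.word_embed(token_ids),
                       self.pos_embed(positions)], dim=-1)
        states, _ = self.lstm(x)                  # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(states), dim=1)  # per-word weights
        context = (weights * states).sum(dim=1)   # weighted sentence vector
        return self.classifier(context)           # intent logits
```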

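And one way to realize point 4, again as a hedged sketch: the talk doesn’t spell out the exact augmentation recipe, so this illustrates one common approach of perturbing written-language sentences so they look more like ASR output, then fine-tuning a model pretrained on the clean text. The function name, confusion table, and checkpoint path below are all hypothetical.

```python
# Illustrative data augmentation for spoken-language robustness.
import random

def asr_style_augment(tokens, drop_p=0.05, swap_p=0.05,
                      confusions=None, rng=None):
    """Return a noisy copy of a token list that mimics spoken input:
    punctuation removed, occasional dropped words, and substitutions
    drawn from a (hypothetical) table of common ASR confusions."""
    rng = rng or random.Random()
    confusions = confusions or {"two": ["to", "too"], "there": ["their"]}
    noisy = []
    for tok in tokens:
        if not tok.isalnum():        # spoken input carries no punctuation
            continue
        if rng.random() < drop_p:    # simulate a dropped word
            continue
        if tok in confusions and rng.random() < swap_p:
            tok = rng.choice(confusions[tok])  # simulate an ASR confusion
        noisy.append(tok.lower())
    return noisy

# Transfer learning: start from weights trained on the clean written
# corpus, then continue training on the augmented, speech-like data.
# model.load_state_dict(torch.load("written_language_pretrained.pt"))
# train(model, [asr_style_augment(s) for s in written_corpus])
```
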
These are some of the Neural Network enhancements we shared, and they will soon be released in our Alan Platform. If the spoken language understanding problem appeals to you and you are an engineer, email jobs@alan.app to learn more. If you are an enterprise that wants to deploy voice interfaces, please contact us at Alan.
