Building a voice interface from scratch for an enterprise app, and making it responsive to the app’s UI, requires significant effort and many dependencies.
With Alan’s advanced AI capabilities, developers can quickly deploy a new voice AI model in one click for their app without having to make a new app build. The Alan platform takes care of spoken language modeling, spoken language understanding (real-time speech recognition, intent, and entity classification), and ML-ops.
Overlaying a Voice Interface in 3 Steps
Add Alan SDK to the Client App
Add the Alan SDK to the client app. The SDK handles assistant wake-word detection, keeps the voice layer and the app UI in sync, and captures and sends voice input to the Alan Cloud.
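For a web client, the integration can look roughly like this. This is a minimal sketch using the Alan Web SDK (`@alan-ai/alan-sdk-web`); the project key and the `navigateTo` command name are placeholders that would come from your own Alan Studio project and voice script.

```javascript
// Minimal sketch: wire the Alan voice button into a web app.
// "YOUR_ALAN_STUDIO_PROJECT_KEY" and "navigateTo" are placeholders.
import alanBtn from "@alan-ai/alan-sdk-web";

alanBtn({
  key: "YOUR_ALAN_STUDIO_PROJECT_KEY",
  // Called when the voice script sends a command back to the app,
  // which is how the voice layer stays in sync with the app UI.
  onCommand: (commandData) => {
    if (commandData.command === "navigateTo") {
      window.location.hash = commandData.screen; // e.g. "#settings"
    }
  },
});
```

The `onCommand` handler is where app-side UI updates live: the voice script decides *what* should happen, and the client decides *how* to render it.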
Create Voice Script in the Alan Studio
Create the voice script in Alan Studio to map voice commands to the app’s interface and business logic. Voice scripts are written in JavaScript, and you can find the supported voice action methods here.
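A voice script maps what the user says to a spoken response plus a command for the app. The sketch below uses Alan script methods (`intent`, `play`); the phrases and the `navigateTo` command are illustrative placeholders, not part of any specific app.

```javascript
// Hedged sketch of an Alan Studio voice script.
// The intent phrase and command name are placeholders.
intent("Open (the|) settings", (p) => {
  // Spoken response played back to the user.
  p.play("Opening settings");
  // Command object sent to the client app; the Alan SDK's
  // onCommand handler receives it and updates the UI.
  p.play({ command: "navigateTo", screen: "settings" });
});
```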
Deploy and Iterate the Voice Model
Once you’re done with the business logic for your assistant and the voice interface mapping, click Save. Alan’s backend quickly creates and trains the Voice AI model for your app.
You can later iterate on your voice model by analyzing, inside Alan Studio, how users interact with your voice assistant.
Iterating on the voice scripts does not require a new app build: just edit the voice script and save. Alan’s backend trains the new model, and the changes are reflected in your enterprise app. The Alan platform handles cloud deployment, so users instantly get access to the new voice dialogs, and you get real-time user behavior analytics.
Highlights of the Alan Platform
The Alan Platform takes care of language modeling, voice processing, intent and entity classification, ML-ops, and all the other complexities:
- The Alan Platform’s advanced NER (Named-Entity Recognition) engine handles dialog and utterance detection and sequence segmentation with high accuracy.
- Alan’s SLU (Spoken Language Understanding) engine determines the intent and domain, and classifies entities, based on the user’s input and the current context.
- No language datasets or training data are needed for your app’s AI model; Alan AI takes care of that. Deploy your Voice AI model just by writing a dialog script in Alan Studio.
- Alan’s serverless architecture makes voice interaction much smoother, with no lag or dropped voice responses.
- Get real-time user analytics from Alan Studio and analyze how users interact with your voice interface. Easily iterate on your model without changing the app in production.
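Entity classification surfaces in voice scripts as slots. In the sketch below, `$(ITEM coffee|tea|juice)` defines a user slot whose matched value is available as `p.ITEM.value`; the phrases, the slot values, and the `addToOrder` command are illustrative placeholders.

```javascript
// Illustrative sketch of entity (slot) classification in a voice script.
// $(ITEM ...) declares a slot; p.ITEM.value holds the matched entity.
intent("(Order|Get me) $(ITEM coffee|tea|juice)", (p) => {
  // Confirm the recognized entity back to the user.
  p.play(`Adding ${p.ITEM.value} to your order`);
  // Pass the classified entity to the client app as a command.
  p.play({ command: "addToOrder", item: p.ITEM.value });
});
```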
With push-button performance, the Alan Platform enables effortless ML deployment and fast iteration of voice interfaces: developers can roll out new voice commands to user cohorts and deploy instantly, without redeploying the application.
Initially focused on mission-critical business applications with hands-free user experiences, Alan AI is led by technologists and business veterans and backed by top investors, all committed to helping customers bring voice interfaces to business apps in days.