API Reference¶
Commands and responses¶
Intent¶
Runtime version: v3.x or later
intent()
is a predefined function to define a voice or text command. In the function, you specify expected user inputs (patterns), conditions defining when the command must be available to the user (visual filters), and actions that must occur when the user's input matches one of the patterns. For details, see Intent.
Syntax
Function syntax
intent([filter,] [noctx,] pattern1 [, pattern2, …, patternN], action)
Parameters
Name | Type | Description
---|---|---
filter | function | Defines the conditions on when the intent can be invoked.
noctx | function | Signals that the intent must not switch the current context. For details, see noContext intents.
pattern | string | Comma-separated strings, each representing a pattern that invokes the intent.
action | function | Defines what actions must be taken when one of the intent patterns is matched. Either an anonymous arrow function or the reply() function.
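Example

A minimal sketch of an intent (the patterns and response text here are illustrative, not part of the Alan AI API):

```javascript
// Matches user inputs such as "hello" or "hi there"
intent('(hello|hi) (there|)', p => {
  p.play('Hello! How can I help you?');
});
```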
Play¶
Runtime version: v3.x or later
play()
is a predefined function used to provide voice or text responses or to send JSON
commands to the web/mobile client app. If more than one response is passed to the play()
function, only one response will be played at random. For details, see play().
Syntax
Function syntax
play([voice(lang),] response1 [, response2, …, responseN] [, opts(options)])
Or
Function syntax
play(command [, opts(options)])
Parameters
Name | Type | Description
---|---|---
voice | function | Settings defining how the voice of the assistant must be customized. For details, see Voice settings.
response | string/number | Comma-separated strings or numbers, each representing a pattern of a voice response from Alan AI.
command | object | An arbitrary JSON object used to send commands to the client app.
opts | function | Options defining how the response must be played. For details, see Specifying play options.
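Example

A sketch combining a voice response with a JSON command (the command name and payload are illustrative):

```javascript
intent('What is on the menu?', p => {
  // One of the responses below is played at random
  p.play('We have pizza and pasta today.', 'Today you can order pizza or pasta.');
  // An arbitrary JSON command sent to the client app
  p.play({command: 'showMenu', items: ['pizza', 'pasta']});
});
```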
Reply¶
Runtime version: v3.x or later
reply()
is a predefined action function providing the specified voice or text response to the user. If more than one response is passed to the reply()
function, only one response will be played at random. For details, see reply().
Syntax
Function syntax
reply(response1 [, response2, …, responseN])
Parameters
Name | Type | Description
---|---|---
response | string | Comma-separated strings, each representing a pattern of a response from Alan AI.
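Example

A sketch with illustrative phrases; one of the responses is chosen at random:

```javascript
intent('How are you?', reply('I am doing great, thanks!', 'All good, thank you!'));
```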
Contexts¶
Then¶
Runtime version: v3.x or later
then()
is a predefined function used to activate the context. If you need to share data between the current context and the activated one, you can pass this data with the state object. For details, see Activating contexts.
Syntax
Function syntax
then(context[, state])
Parameters
Name | Type | Description
---|---|---
context | function | Represents the variable name with which the context is defined.
state | object | Predefined object that exists in every context; use it to pass data to the activated context.
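Example

A sketch of context activation (the context, patterns and shared data are illustrative; it assumes the data passed as the state argument appears in p.state of the activated context):

```javascript
let askAddress = context(() => {
  intent('My address is $(ADDR* .+)', p => {
    // Data shared through the state object
    p.play(`Delivering your ${p.state.item} to ${p.ADDR.value}`);
  });
});

intent('I want to order a pizza', p => {
  p.play('Sure, what is your address?');
  // Activate the child context and share data with it
  p.then(askAddress, {item: 'pizza'});
});
```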
Resolve¶
Runtime version: v3.x or later
resolve()
is a predefined function used to manually deactivate the current context and return to its parent context. You can pass any data to it; this data will be available in the parent context. For details, see Exiting contexts.
Syntax
Function syntax
resolve([returnValue])
Parameters
Name | Type | Description
---|---|---
returnValue | object | Represents the object returned to the parent context.
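Example

A sketch (the context and values are illustrative):

```javascript
let askQuantity = context(() => {
  intent('$(NUMBER 1|2|3|4|5)', p => {
    // Deactivate this context; the value becomes available in the parent context
    p.resolve(p.NUMBER.value);
  });
});

intent('Add apples to my cart', p => {
  p.play('How many would you like?');
  p.then(askQuantity);
});
```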
Fallback¶
Runtime version: v3.x or later
For each context, you can define a fallback response which will be activated if this context is active and no intent from this context has been matched. For details, see Error handling and re-prompts.
Syntax
Function syntax
fallback(pattern1 [, pattern2, …, patternN])
Parameters
Name | Type | Description
---|---|---
pattern | string | Comma-separated strings, each representing a pattern of a response from Alan AI.
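Example

A sketch of a context with a fallback (the patterns and phrases are illustrative):

```javascript
let orderContext = context(() => {
  intent('Add $(ITEM pizza|salad) to my order', p => {
    p.play(`Adding ${p.ITEM.value}`);
  });
  // Played when this context is active and no intent above matches
  fallback('Sorry, you can only add pizza or salad here.');
});
```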
noContext¶
Runtime version: v3.x or later
The noContext
function wraps all intents that must not switch the current context, for example, general questions in the dialog. For details, see noContext intents.
Syntax
Function syntax
noContext(() => {intent1 [, intent2, …, intentN]});
Parameters
Name | Type | Description
---|---|---
intent | function | An intent that must not switch the current context when invoked.
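Example

A sketch wrapping general questions (the phrases are illustrative):

```javascript
noContext(() => {
  // General questions that should not switch the active context
  intent('What can you do?', reply('You can order food with me.'));
  intent('(help|what are the commands)', reply('Try saying: add pizza to my order.'));
});
```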
State¶
Runtime version: v3.x or later
Each context has a special predefined object — state
. You can access it via p.state
. This object should be treated as the knowledge base that is available to Alan AI in the current conversational context. You can store any data in it. For details, see state.
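Example

A sketch of storing and reading data in the state object (the slot and phrases are illustrative):

```javascript
intent('My name is $(NAME* .+)', p => {
  // Store data in the state of the current context
  p.state.userName = p.NAME.value;
  p.play('Nice to meet you!');
});

intent('What is my name?', p => {
  p.play(p.state.userName ? `Your name is ${p.state.userName}` : 'You have not told me yet.');
});
```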
onEnter¶
Runtime version: v3.x or later
onEnter()
is a special predefined callback activated each time the script enters a context. For details, see Using onEnter() function.
Syntax
Function syntax
onEnter(action)
Parameters
Name | Type | Description
---|---|---
action | function | Defines what actions must be taken when the context is activated.
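Example

A sketch of a context with an onEnter() callback (the context and phrases are illustrative):

```javascript
let checkout = context(() => {
  // Played each time this context is activated
  onEnter(p => {
    p.play('Let us complete your order.');
  });
  intent('Confirm', p => p.play('Your order is confirmed.'));
});
```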
Title¶
Runtime version: v3.x or later
title()
is a special predefined function used to label a context. For details, see Labeling contexts.
Syntax
Function syntax
title(contextName)
Parameters
Name | Type | Description
---|---|---
contextName | string | Represents the name of the context that will be shown in logs.
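Example

A sketch labeling a context (the context and label are illustrative):

```javascript
let checkout = context(() => {
  // The label shown for this context in logs
  title('Checkout flow');
  intent('Confirm the order', p => p.play('Your order is confirmed.'));
});
```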
resetContext¶
Runtime version: v3.x or later
resetContext()
is a special predefined function used to deactivate all current contexts at once. For details, see Resetting contexts.
Syntax
Function syntax
resetContext()
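Example

A sketch assuming resetContext() is called on the p object inside an intent (the patterns are illustrative):

```javascript
intent('(cancel|start over)', p => {
  // Deactivate all currently active contexts
  p.resetContext();
  p.play('OK, let us start from the beginning.');
});
```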
Session-specific objects and methods¶
Session-specific objects and methods are available via the predefined p
object. These methods and objects persist during a user session, until the session is terminated. The user session is terminated after 30 minutes of inactivity or if the user quits the app.
userData¶
Runtime version: v3.x or later
p.userData
is a runtime object that can be used to store any data. You can access it at any time from any script of your project regardless of the context. Note that the data stored in p.userData
is available only during the given user session. For details, see userData.
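Example

A sketch storing and reading session data (the slot and phrases are illustrative):

```javascript
intent('My favorite color is $(COLOR red|green|blue)', p => {
  // Kept only for the duration of the current user session
  p.userData.favoriteColor = p.COLOR.value;
  p.play('Got it!');
});

intent('What is my favorite color?', p => {
  p.play(p.userData.favoriteColor ? `It is ${p.userData.favoriteColor}` : 'I do not know yet.');
});
```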
authData¶
Runtime version: v3.x or later
p.authData
is a runtime object that can be used to provide static device- or user-specific data, such as the user’s credentials, to Alan AI. If you need to receive dynamic data from the app, use the visual state instead.
For details, see authData.
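Example

A sketch assuming the client app sends a userName field in authData (the key is illustrative; the actual keys depend on what the app passes at connection):

```javascript
intent('Who am I?', p => {
  let name = p.authData && p.authData.userName;
  p.play(name ? `You are signed in as ${name}.` : 'I do not know you yet.');
});
```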
visual¶
Runtime version: v3.x or later
p.visual
is a runtime object that can contain an arbitrary JSON object. It should be used to provide any dynamic data of the app state to Alan’s script or project. For details, see Visual state.
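Example

A sketch assuming the app reports its current screen with setVisualState({screen: 'cart'}) (the key and values are illustrative):

```javascript
intent('What can I do next?', p => {
  if (p.visual.screen === 'cart') {
    p.play('You can proceed to checkout.');
  } else {
    p.play('You can browse the menu.');
  }
});
```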
Global objects and methods¶
project¶
Runtime version: v3.x or later
project
is a global object that can be used to store any data you may need in separate dialog scripts of your project. When Alan AI builds the dialog model for your project, it loads scripts in the order defined in the scripts panel, from top to bottom. The project
object will be available in any script that is below the script where it is defined.
// Script 1
project.config = {initValue: 1};
// Script 2
console.log(`Init value is ${project.config.initValue}`);
project API¶
Runtime version: v3.x or later
The project API can be used if you want to send data from the client app to the dialog script or perform some script logic without a voice or text command from the user. This can be done by setting up the logic for the project API and then calling it with the Alan AI SDK method callProjectApi(). For details, see Project API.
Syntax
Function syntax
projectAPI.functionName = function(p, data, callback) {}
Parameters
Name | Type | Description
---|---|---
p | object | Predefined object containing the user session data and exposing Alan AI's methods.
data | object | An object containing the data you want to pass to your script.
callback | function | A callback function used to return data back to the app.
Example
projectAPI.setToken = function(p, param, callback) {
  if (!param || !param.token) {
    callback("error: token is undefined");
    return;
  }
  p.userData.token = param.token;
  callback();
};
Generative AI¶
You can use the following functions and APIs to let your AI assistant query the data corpus and provide a response to the user:
corpus¶
Runtime version: v3.x or later
corpus
is a predefined function that allows you to specify the websites and resources from which content is retrieved to conduct a conversation between the AI assistant and users. For details, see Q&A virtual assistant.
Syntax
Function syntax
corpus(resource1 [, resource2, …, resourceN])
Parameters
Name | Type | Description
---|---|---
resource | object or string | Defines the content source for the corpus: either an object with crawling settings, for example {url, depth, maxPages}, or a plain string with the text content itself.
Example
corpus(
  {url: "https://alan.app/", depth: 2, maxPages: 50},
  {url: "https://alan.app/docs/", depth: 2, maxPages: 100},
);
corpus(`
Alan AI is a complete Actionable AI platform to build, deploy and manage AI Assistants in a few days.
With Alan AI, a conversational experience for your app can be built by a single developer, rather than a team of Machine Learning and DevOps experts.
`);
generator API¶
Runtime version: v4.1 or later
The generator API
allows generating a structured response (a report, a product description and so on) based on the JSON-formatted data provided to it. For details, see Generator API.
Syntax
Function syntax
api.generator_v1({format, formatting});
generator.generate({data, query});
Parameters
Name | Type | Description
---|---|---
format | string | Type of the response format, for example: markdown.
formatting | string | Prompt on how to format the response, for example: highlight headers and bold key entities.
data | object | Object with data used for response generation.
query | string | Prompt on what to include in the response.
Output
The generator API
returns an output data stream with the response to the query.
Methods
The gather()
method is used to collect (gather) the final response.
Example
In the example below, when the user says: Tell me about iPhone 13 Pro, include only its characteristics and image
, Alan AI generates a structured response in the form of a product card containing the product characteristics and image provided in the productList
array.
let productList = [
  {
    "name": "iPhone 13 Pro",
    "min_price": "$999",
    "vendor": "Apple",
    "spec": "A15 Bionic chip, Super Retina XDR display, ProMotion technology, Triple 12MP camera system, 5G capability, Face ID, Ceramic Shield front cover, and Water and dust resistance.",
    "image": "https://www.apple.com/newsroom/images/product/iphone/standard/Apple_iPhone-13-Pro_iPhone-13-Pro-Max_09142021_inline.jpg"
  },
  {
    "name": "Samsung Galaxy S21 Ultra",
    "min_price": "$1,199",
    "vendor": "Samsung",
    "spec": "Exynos 2100 or Snapdragon 888 processor, Dynamic AMOLED 2X display, Quad HD+ resolution, 108MP quad camera system, 5G capability, In-display fingerprint sensor, S Pen support, and Water and dust resistance.",
    "image": "https://image-us.samsung.com/us/smartphones/galaxy-s21/galaxy-s21-5g/v6/models/images/galaxy-s21-plus-5g_models_colors_phantom-violet.png"
  }
];

let generator = api.generator_v1({
  format: 'markdown',
  formatting: 'highlight headers and bold key entities such as names, dates, addresses etc.'
});
intent(
  `Tell me about $(MODEL iPhone 13 Pro|Samsung Galaxy S21 Ultra), (include|) $(R* .+)`,
  async p => {
    let productInfo = productList.find(el => el.name === p.MODEL.value);
    let instruct = 'product details: ' + p.R.value;
    let stream = generator.generate({data: productInfo, instruct});
    p.play(`Retrieving product data...`);
    p.play(stream);
    let answer = await stream.gather();
    console.log(answer.answer);
  });
query API¶
Runtime version: v4.1 or later
The query API
allows querying corpus data programmatically to retrieve a response that can be further used in the dialog flow. For details, see Query API.
Syntax
Function syntax
api.query_v1({corpus, query})
Parameters
Name | Type | Description
---|---|---
corpus | text corpus | Corpus data to be queried, for example, a corpus created with the corpus() function.
query | string | Query to be run against the corpus, for example, the user's question.
Output
The query API
returns a response object with the following data:
Name | Description
---|---
summary | Summary of the answer generated by the query API.
answer | Detailed answer generated by the query API.
Methods
The gather()
method is used to collect (gather) the final response.
Example
In the example below, when the user asks a question: Can you tell me why my smartphone battery is draining quickly?
, Alan AI queries the FAQ
corpus and collects the final response with the gather()
method. The final response is played to the user, the response summary and detailed answer are written to Alan Studio logs.
let FAQ = corpus(`
Why is my smartphone battery draining quickly? Possible causes include excessive app usage, background processes, high screen brightness, push notifications, and weak cellular signals. To address this, close unnecessary apps, reduce screen brightness, disable push notifications for non-essential apps, and keep your device updated.
How do I fix a frozen or unresponsive smartphone? Try a soft reset by holding the power button for 10 seconds. If that fails, perform a force restart by holding the power and volume down buttons for 10-15 seconds.
`);
intent("Can you tell me $(QUERY* .+)", async p => {
  p.play("Just a second...");
  let answer = await api.query_v1({corpus: FAQ, question: p.QUERY.value});
  if (answer) {
    p.play(answer);
    let final = await answer.gather();
    console.log(final.summary);
    console.log(final.answer);
  } else {
    p.play(`Sorry, I cannot help you with that`);
  }
});
Semantic search¶
You can use the following API for semantic search: table API.
table API¶
Runtime version: v4.1 or later
The table API
allows creating a table with data passed to it and enables semantic search across this data. For details, see Table API.
Syntax
Function syntax
api.table_v1({tableName, columns});
table.update({columnList});
table.commit();
table.search({description: {q, score, top_k}});
Parameters
Name | Type | Description
---|---|---
tableName | string | Table name.
columns | object | List of columns and value types, for example: {id: 'str,unique', name: 'str', description: 'str,semantic'}.
columnList | object | List of table columns mapped to data keys of the passed object from which data is retrieved.
q | string | Query to be run against the table with data.
score | decimal | Degree of similarity in the range between 0 (no similarity) and 1 (the query is identical to the column value).
top_k | decimal | Maximum number of top results matching the query to be returned.
Note
If you need to add table columns after the table has been created, update the tableName
as well.
Output
The table API
returns an array of records in the JSON format that match the query run against the table.
Example
In the example below, when the user asks: Find a model with an improved camera
, Alan AI understands the conceptual meaning behind the search query and delivers relevant results that are displayed as product cards in the chat. Even though the words from the query – improved camera
– may not be directly mentioned in the product description, it retrieves the list of products whose description contains information about camera enhancements.
let productList = [
  {
    "id": 1,
    "name": "iPhone 13 Pro",
    "min_price": "$999",
    "vendor": "Apple",
    "spec": "A15 Bionic chip, Super Retina XDR display, ProMotion technology, Triple 12MP camera system, 5G capability, Face ID, Ceramic Shield front cover, and Water and dust resistance.",
    "productDetails": "Major upgrades over its predecessor include improved battery life, improved cameras and computational photography, rack focus for video in a new 'Cinematic Mode' at 1080p 30 fps, Apple ProRes video recording, a new A15 Bionic system on a chip, and a variable 10–120 Hz display, marketed as ProMotion.",
  },
  {
    "id": 2,
    "name": "Samsung Galaxy S21 Ultra",
    "min_price": "$1,199",
    "vendor": "Samsung",
    "spec": "Exynos 2100 or Snapdragon 888 processor, Dynamic AMOLED 2X display, Quad HD+ resolution, 108MP quad camera system, 5G capability, In-display fingerprint sensor, S Pen support, and Water and dust resistance.",
    "productDetails": "The Galaxy S21 Ultra comes in a sleeker design and offers faster performance from Qualcomm's Snapdragon 888 chip. And, unlike the regular Galaxy S21, you don't have to make nearly as many trade-offs. You get a better main 108MP camera, a glass back (instead of plastic), more RAM and a higher-res display.",
  },
  {
    "id": 3,
    "name": "Google Pixel 6 Pro",
    "min_price": "$899",
    "vendor": "Google",
    "spec": "Google Tensor chip, 6.7-inch QHD+ OLED display, 50MP main camera, 12MP ultra-wide camera, 5G capability, Titan M2 security chip, In-display fingerprint sensor, and Android 12 operating system.",
    "productDetails": "With its unique design and high-quality build, impressive display and good battery life, and its consistently good cameras and feature-rich software, the Google Pixel 6 Pro stands toe-to-toe with the very best from Apple and Samsung. This is the Google flagship we've been waiting for.",
  },
];
// Create a table
let table = null;
async function reloadTable() {
  table = api.table_v1({
    tableName: 'products',
    columns: {id: 'str,unique', name: 'str', price: 'str', description: 'str,semantic', specification: 'str,semantic'}
  });
  // Populate the table
  let products = productList;
  for (let p of products) {
    table.update({
      id: p.id,
      name: p.name,
      price: p.min_price,
      description: p.productDetails,
      specification: p.spec
    });
  }
  // Commit changes to the table
  await table.commit();
  setTimeout(reloadTable, 5000);
}
reloadTable();
intent(`Find a model with $(QUERY* .+)`, async p => {
  p.play("Please give me a moment...");
  let query = p.QUERY.value;
  let products = await table.search({description: {q: query, score: 0.3, top_k: 3}});
  if (products.length) {
    p.play(`Here is a list of models you may like:`);
    for (let product of products) {
      p.play(`**Name**: ${product.name} \n\n **Price**: ${product.price} \n\n **Details**: ${product.description}`, opts({markdown: true, audio: false}));
    }
  } else {
    p.play(`Nothing found`);
  }
});
Data transformation¶
You can use the following function for data preprocessing and transformation: transforms.
transforms¶
Runtime version: v4.2 or later
The transforms()
function can be used to pre-process and format the input data to the required format using the defined template. For details, see Data transformation.
Syntax
Function syntax
transforms.transformName({input, query})
Parameters
Name | Type | Description
---|---|---
transformName | string | Name of the transform created in the AI assistant project.
input | object | An object containing the data input passed to the transform.
query | string | A string containing the transform query.
Example
In the example below, the text in the input is formatted by the rules specified in the format
transform:
Common query:
The input contains user data in JSON, the query contains fields description, the result contains formatted text
Input: JSON,
{"name": "Jerry Welsh", "age": "16", "address": "3456 Oak Street"}
Query:
Name is the user's name, age is the user's age, address is the user's address
Result: Markdown
## User: user's name
- **Name**: user's name
- **Age**: user's age
- **Address**: user's address
intent(`Show user data`, async (p) => {
  let u = await transforms.format({
    input: {"name": "John Smith", "age": "56", "address": "1234 Main Street"},
    query: "Name is the user's name, age is the user's age, address is the user's address"
  });
  p.play(u);
});
Speech recognition biasing¶
Recognition hints¶
Runtime version: v3.x or later
recognitionHints
allows you to provide a list of hints for the AI assistant to help it recognize specific terms, words or phrase patterns that are not used in everyday life but are expected in the user's input. Use it to avoid frequent speech recognition errors.
Syntax
Function syntax
recognitionHints(recognitionHint1 [, recognitionHint2, …, recognitionHintN])
Parameters
Name | Type | Description
---|---|---
recognitionHint | string | Comma-separated strings (hints), each representing a pattern of the user's input.
Example
recognitionHints("Open ICIK");

intent('Open $(PARAM~ ICIK~ICIK)', p => {
  p.play(`Opening ${p.PARAM.label}`);
});
Homophone¶
Runtime version: v3.x or later
homophone()
helps eliminate ambiguity and improve matching of intents containing terminology or domain-specific words. In the homophone
function, you must specify a term expected in the user’s input and a list of its homophones — words that are pronounced in a similar way but are different in meaning. This will prevent the AI assistant from failing to match the intent when the user mispronounces or mistypes this term.
Note
To create a list of homophones for a term, you can check unrecognized phrases in Alan AI analytics.
Syntax
Function syntax
homophone(term, homophone1 [, homophone2, …, homophoneN] )
Parameters
Name | Type | Description
---|---|---
term | string | The expected term to be matched.
homophone | string | List of homophones for the term.
Example
homophone('flour', 'flower', 'floor');
homophone('cereal', 'serial');

intent('Add $(ITEM flour|cereal) to my shopping list', p => {
  p.play(`Sure, how much ${p.ITEM.value} would you like to add?`);
});
Predefined callbacks¶
After you create an AI assistant with Alan AI, the dialog goes through several states: the project is created, the user connects to the app and so on. When the dialog state changes, you may want to perform activities that are appropriate to this state. For example, when a new user connects to the dialog session, it may be necessary to set user-specific data.
To perform actions at different stages of the dialog lifecycle, you can use the following predefined callback functions:
onCreateProject¶
Runtime version: v3.x or later
onCreateProject
is invoked when the dialog model for the dialog script is created. You can use this function to perform activities required immediately after the dialog model is built, for example, for any data initialization.
Syntax
Function syntax
onCreateProject(()=> {action})
Parameters
Name | Type | Description
---|---|---
action | function | Defines what actions must be taken when the dialog model is created on the server in Alan AI Cloud.
Example
In the example below, onCreateProject
is used to define values for project.drinks
.
onCreateProject(() => {
  project.drinks = "green tea|black tea|oolong";
});
onCreateUser¶
Runtime version: v3.x or later
onCreateUser
is invoked when a new user starts a dialog session. You can use this function to set user-specific data.
Syntax
Function syntax
onCreateUser(p => {action})
Parameters
Name | Type | Description
---|---|---
p | object | Predefined object containing the user session data and exposing Alan AI's methods.
action | function | Defines what actions must be taken when a new user starts a dialog session.
Example
In the example below, the onCreateUser
function is used to assign the value to p.userData.favorites
:
onCreateUser(p => {
  p.userData.favorites = "nature|landscape|cartoons";
});
onCleanupUser¶
Runtime version: v3.x or later
onCleanupUser
is invoked when the user session ends. You can use this function for any cleanup activities.
Syntax
Function syntax
onCleanupUser(p => {action})
Parameters
Name | Type | Description
---|---|---
p | object | Predefined object containing the user session data and exposing Alan AI's methods.
action | function | Defines what actions must be taken when the user session ends.
Example
In the example below, the onCleanupUser
function is used to reset p.userData.favorites
value:
onCleanupUser(p => {
  p.userData.favorites = "";
});
onVisualState¶
Runtime version: v3.x or later
onVisualState
is invoked when the visual state object is set. You can use this function, for example, to process data passed in the visual state.
Syntax
Function syntax
onVisualState((p, s) => {action})
Parameters
Name | Type | Description
---|---|---
p | object | Predefined object containing the user session data and exposing Alan AI's methods.
s | object | JSON object passed with the visual state.
action | function | Defines what actions must be taken when the visual state is sent.
Example
In the example below, the onVisualState
function is used to define values for dynamic slots created with the help of the visual state.
Setting the visual state in the app:
<script>
  function myFunction() {
    let values = ["pasta", "burger", "quesadilla"];
    alanBtnInstance.setVisualState({values});
  }
</script>
Retrieving the values and building the items list in the dialog script:
onVisualState((p, s) => {
  if (s.values) {
    p.visual.menu = {en: ""};
    p.visual.menu.en = s.values.join('|');
  }
});

intent('Give me a $(ITEM~ v:menu)', p => {
  p.play(`Adding one ${p.ITEM.value} to your order`);
});
onUserEvent¶
Runtime version: v3.x or later
onUserEvent
is invoked when Alan AI emits a specific event driven by users’ interactions with the AI assistant. For the events list, see User events.
Syntax
Function syntax
onUserEvent((p, e) => {action})
Parameters
Name | Type | Description
---|---|---
p | object | Predefined object containing the user session data and exposing Alan AI's methods.
e | object | Event fired by Alan AI.
action | function | Defines what actions must be taken when the event is fired.
Example
In the example below, the AI assistant listens to the firstActivate
event and, if the user activates the assistant for the first time, plays a greeting to the user.
onUserEvent((p, e) => {
  console.log(e);
  if (e.event === "firstActivate") {
    p.play('Hi, this is Alan, your AI assistant! You can talk to me, ask questions and perform tasks with voice or text commands.');
  }
});
Debugging¶
console.log¶
Runtime version: v3.x or later
The console.log()
function is used to write all sorts of info messages to Alan AI Studio logs. With it, you can check objects, variables and slot values, capture events and so on.
Syntax
Function syntax
console.log(message)
Parameters
Name | Type | Description
---|---|---
message | string | Message to be logged.
Example
intent('Save this location as my $(L home|office|work) (address|)', p => {
  console.log(p.L.value);
  p.play('The location is saved');
});
console.error¶
Runtime version: v3.x or later
The console.error()
function is used to write error messages to Alan AI Studio logs. With it, you can troubleshoot and debug your dialog script.
Syntax
Function syntax
console.error(message)
Parameters
Name | Type | Description
---|---|---
message | string | Message to be logged.
Example
try {
  // your code
}
catch (e) {
  console.error(e);
}