Building a voice assistant for a React Native app

With the Alan AI SDK for React Native, you can create a voice assistant or chatbot and embed it in your React Native app. The Alan AI Platform provides you with all the necessary tools and leverages the industry’s best Automatic Speech Recognition (ASR), Spoken Language Understanding (SLU) and Speech Synthesis technologies so that you can quickly build an AI assistant from scratch.

In this tutorial, we will create a simple React Native app with an Alan AI voice assistant and test drive it in the simulator. The app users will be able to tap the voice assistant button and give custom voice commands, and the AI assistant will reply to them.

What you will learn

  • How to add a voice interface to a React Native app

  • How to write simple voice commands for a React Native app

What you will need

To go through this tutorial, make sure the following prerequisites are met:

  • You have signed up to Alan AI Studio.

  • You have created a project in Alan AI Studio.

  • You have set up the React Native environment and it is functioning properly; a quick way to verify this is shown after this list. For details, see the React Native documentation.

  • You have installed CocoaPods to manage dependencies for an Xcode project:

    Terminal
     sudo gem install cocoapods
    
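To check that the environment is configured correctly, you can run the React Native doctor command (available in recent versions of the React Native CLI):

Terminal
npx react-native doctor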

Note

When you sign up to Alan AI Studio, Alan AI adds free interactions to your balance to let you get started. To add more interactions to your balance, link your Alan AI account with your GitHub account and give stars to Alan AI repositories. For details, see Adding free interactions.

Step 1: Create a React Native app

For this tutorial, we will be using a simple React Native app. Let’s create it.

  1. On your machine, navigate to the folder in which the app will reside and run the following command:

    Terminal
    npx react-native init myApp
    
  2. Run the app:

    Terminal
    cd myApp
    npx react-native run-ios
    
[Screenshot: the newly created React Native app running in the iOS simulator]
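
The app opens in the default iOS simulator. To target a specific device model, you can pass the --simulator flag to the run-ios command; the device name below is just an example and must match a simulator available in your Xcode installation:

Terminal
npx react-native run-ios --simulator="iPhone 13"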

Step 2: Integrate the app with Alan AI

Now we will add the Alan AI button to our app. First, we will install the Alan AI React Native plugin:

  1. In the app folder, run the following command:

    Terminal
    npm i @alan-ai/alan-sdk-react-native --save
    
  2. Open the App.js file. At the top of the file, add the import statement:

    App.js
    import { AlanView } from '@alan-ai/alan-sdk-react-native';
    
  3. To the function component, add the Alan AI button:

    App.js
    const App: () => Node = () => {
      return (
        <SafeAreaView style={backgroundStyle}>
          <StatusBar barStyle={isDarkMode ? 'light-content' : 'dark-content'} />
          <ScrollView
            contentInsetAdjustmentBehavior="automatic"
            style={backgroundStyle}>
            <Header />
            <View
              style={{
                backgroundColor: isDarkMode ? Colors.black : Colors.white,
              }}>
            </View>
          </ScrollView>
          {/* Adding the Alan AI button */}
          <View><AlanView projectid={''}/></View>
        </SafeAreaView>
      );
    };
    
  4. In projectid, specify the Alan AI SDK key for your Alan AI Studio project. To get the key, in Alan AI Studio, at the top of the code editor, click Integrations and copy the value from the Alan SDK Key field.

    App.js
    const App: () => Node = () => {
      return (
        <SafeAreaView style={backgroundStyle}>
          <StatusBar barStyle={isDarkMode ? 'light-content' : 'dark-content'} />
          <ScrollView
            contentInsetAdjustmentBehavior="automatic"
            style={backgroundStyle}>
            <Header />
            <View
              style={{
                backgroundColor: isDarkMode ? Colors.black : Colors.white,
              }}>
            </View>
          </ScrollView>
          {/* Providing the Alan AI SDK key */}
          <View><AlanView projectid={'f19e47b478189e4e7d43485dbc3cb44a2e956eca572e1d8b807a3e2338fdd0dc/stage'}/></View>
        </SafeAreaView>
      );
    };
    
  5. In the Terminal, navigate to the ios folder. Then open the Podfile and change the minimum iOS version to 11:

    Podfile
    platform :ios, '11.0'
    
  6. Run the following command to install dependencies for the project:

    Terminal
    pod install
    
  7. In iOS, the user must explicitly grant an app permission to access the microphone and camera. Alan AI uses the microphone for voice interactions and the camera for testing Alan AI projects on mobile. We need to add special keys that explain to the user why the app requires access to these resources; an equivalent Info.plist snippet is shown after this procedure.

    1. In the ios folder, open the generated Xcode workspace file: myApp.xcworkspace.

    2. In Xcode, go to the Info tab.

    3. In the Custom iOS Target Properties section, hover over any key in the list and click the plus icon to the right.

    4. From the list, select Privacy - Microphone Usage Description.

    5. In the Value field to the right, provide a description for the added key. This description will be displayed to the user when the app is launched.

    6. Repeat the steps above to add the Privacy - Camera Usage Description key.

    [Screenshot: the privacy keys added in the Custom iOS Target Properties section in Xcode]
  8. Run the app:

    Terminal
    npx react-native run-ios
    

The app will be built and launched. When the app first requests access to the device microphone, iOS will display the message we specified for the Privacy - Microphone Usage Description key.
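
If you prefer to edit the configuration file directly instead of going through the Xcode UI, the two privacy keys map to the raw keys below in ios/myApp/Info.plist. The description strings here are only examples; use your own wording:

Info.plist
<key>NSMicrophoneUsageDescription</key>
<string>The app uses the microphone for voice interactions with the AI assistant.</string>
<key>NSCameraUsageDescription</key>
<string>The app uses the camera for testing Alan AI projects on mobile.</string>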

[Screenshot: the app with the Alan AI button in the bottom left corner]

In the bottom left corner, tap the Alan AI button and say: Hello world.

However, if you try to ask: How are you doing?, the AI assistant will not give an appropriate response. This is because the dialog script in Alan AI Studio does not yet contain the necessary voice commands.

Step 3: Add voice commands

Let’s add some voice commands so that we can interact with Alan AI. In Alan AI Studio, open the project and in the code editor, add the following intents:

Dialog script
intent('What is your name?', p => {
    p.play('It is Alan, and yours?');
});

intent('How are you doing?', p => {
    p.play('Good, thank you. What about you?');
});

Now tap the Alan AI button in the app and ask: What is your name? and How are you doing? The AI assistant will give the responses we provided in the added intents.
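
To make the voice commands more flexible, you can list alternatives in the intent pattern and pass several responses to the play function. The sketch below assumes the standard pattern syntax of Alan AI dialog scripts; the wording is arbitrary:

Dialog script
intent('(What is|What\'s) your name?', p => {
    // Alan AI plays one of the provided responses at random
    p.play('It is Alan, and yours?', 'My name is Alan.');
});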

Note

After integration, you may get a warning related to the onButton state. You can safely ignore it: the warning will be removed as soon as you add the onCommand handler to your app. For details, see the next tutorial: Sending commands to the app.
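
For reference, such a handler is wired up through React Native’s event emitter mechanism. The snippet below is only a sketch: it assumes the plugin exposes an AlanEventEmitter native module emitting onCommand events, which the next tutorial covers in detail:

App.js
import { useEffect } from 'react';
import { NativeEventEmitter, NativeModules } from 'react-native';

const { AlanEventEmitter } = NativeModules;
const alanEventEmitter = new NativeEventEmitter(AlanEventEmitter);

// Inside the App component: subscribe to commands sent from the dialog script
useEffect(() => {
  const subscription = alanEventEmitter.addListener('onCommand', (data) => {
    console.log(`onCommand event received: ${JSON.stringify(data)}`);
  });
  // Unsubscribe when the component unmounts
  return () => subscription.remove();
}, []);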

What’s next?

You can now proceed to building a voice interface with Alan AI. Here are some helpful resources: