Building a voice assistant for a React Native app

You can create a voice assistant or chatbot and embed it in your React Native app with the Alan voice assistant SDK for React Native. In this tutorial, we will create a simple React Native app with Alan voice and test drive it on the iOS simulator. App users will be able to tap the Alan button and give custom voice commands, and Alan will reply to them.

What you will learn

  • How to add a voice interface to a React Native app
  • How to write simple voice commands for a React Native app

What you will need

To go through this tutorial, make sure the following prerequisites are met:

When you sign up for Alan Studio, Alan adds free interactions to your balance to let you get started. To get additional interactions, link your Alan account with your GitHub account and give stars to Alan repositories. For details, see Adding free interactions.

Step 1: Create a React Native app

For this tutorial, we will be using a simple React Native app. Let's create it.

  1. On your machine, navigate to the folder in which the app will reside and run the following command:

    npx react-native init myApp
  2. Run the app:

    cd myApp
    npx react-native run-ios

Step 2: Integrate the app with Alan

Now we will add the Alan button to our app. First, we will install the Alan React Native plugin:

  1. In the app folder, run the following command:

    npm i @alan-ai/alan-sdk-react-native --save
  2. Open the App.js file. At the top of the file, add the import statement:

    import { AlanView } from './AlanSDK.js';
  3. Add the Alan button to the function component:

    const App: () => Node = () => {
      ...
      return (
    	<SafeAreaView style={backgroundStyle}>
    	  <StatusBar barStyle={isDarkMode ? 'light-content' : 'dark-content'} />
    	  <ScrollView
    		contentInsetAdjustmentBehavior="automatic"
    		style={backgroundStyle}>
    		<Header />
    		<View
    		  style={{
    			backgroundColor: isDarkMode ? Colors.black : Colors.white,
    		  }}>
    		  ...
    		  {/* Adding the Alan button */}
    		  <View><AlanView projectid={''}/></View>
    		</View>
    	  </ScrollView>
    	</SafeAreaView>
      );
    };
  4. In projectid, specify the Alan SDK key for your Alan Studio project. To get the key, in Alan Studio, at the top of the code editor, click Integrations and copy the value from the Alan SDK Key field.

    const App: () => Node = () => {
      ...
      return (
    	<SafeAreaView style={backgroundStyle}>
    	  <StatusBar barStyle={isDarkMode ? 'light-content' : 'dark-content'} />
    	  <ScrollView
    		contentInsetAdjustmentBehavior="automatic"
    		style={backgroundStyle}>
    		<Header />
    		<View
    		  style={{
    			backgroundColor: isDarkMode ? Colors.black : Colors.white,
    		  }}>
    		  ...
    		  {/* Providing the Alan SDK key */}
    		  <View><AlanView projectid={'f19e47b478189e4e7d43485dbc3cb44a2e956eca572e1d8b807a3e2338fdd0dc/stage'}/></View>
    		</View>
    	  </ScrollView>
    	</SafeAreaView>
      );
    };
  5. In the Terminal, navigate to the ios folder, open the Podfile and change the minimum iOS version to 11:

    platform :ios, '11.0'
  6. Run the following command to install dependencies for the project:

    pod install
  7. In iOS, the user must explicitly grant permission for an app to access the microphone and camera. Alan uses the microphone for voice interactions and the camera for testing Alan projects on mobile. We need to add special keys with descriptions explaining to the user why the app requires access to these resources.

    a. In the ios folder, open the generated Xcode workspace file: myApp.xcworkspace.

    b. In Xcode, go to the Info tab.

    c. In the Custom iOS Target Properties section, hover over any key in the list and click the plus icon to the right.

    d. From the list, select Privacy - Microphone Usage Description.

    e. In the Value field to the right, provide a description for the added key. This description will be displayed to the user when the app is launched.

    f. Repeat the steps above to add the Privacy - Camera Usage Description key.

  8. Run the app:

    npx react-native run-ios

The app will be built and launched. When accessing the device microphone, Alan will display the message we specified for the Privacy - Microphone Usage Description key.
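Under the hood, the two privacy keys added in the previous step correspond to the NSMicrophoneUsageDescription and NSCameraUsageDescription entries in the app's Info.plist file. As a sketch of the result (the description strings below are examples, adjust them to your app):

```xml
<!-- ios/myApp/Info.plist (fragment) -->
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone for voice interactions with the Alan assistant.</string>
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to test Alan projects on this device.</string>
```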

In the bottom left corner, tap the Alan button and say: Hello world.

However, if you try to ask: How are you doing?, Alan will not give an appropriate response. This is because the voice script in Alan Studio does not yet contain the necessary voice commands.

Step 3: Add voice commands

Let's add some voice commands so that we can interact with Alan. In Alan Studio, open the project and in the code editor, add the following intents:

intent('What is your name?', p => {
    p.play('It is Alan, and yours?');
});

intent('How are you doing?', p => {
    p.play('Good, thank you. What about you?');
});

Now tap the Alan button in the app and ask: What is your name? and How are you doing? Alan will give the responses we provided in the added intents.
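Voice scripts in Alan Studio are written in JavaScript and run in the Alan Studio code editor, not in the app itself. A single intent can match several phrasings with pattern alternatives, and p.play() can take several replies to vary the answer. A small sketch (the wording here is an example, not part of the tutorial script):

```javascript
// Alternatives in parentheses let one intent cover several phrasings
intent('(What is|What\'s) your name?', p => {
    p.play('It is Alan, and yours?');
});

// When p.play() gets several replies, Alan picks one of them
intent('How are you doing?', p => {
    p.play('Good, thank you. What about you?', 'Doing great, thanks for asking!');
});
```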

After integration, you may get a warning related to the onButton state. You can safely ignore it: the warning goes away once you add the onCommand handler to your app. For details, see the next tutorial: Sending commands to the app.
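As a preview, an onCommand handler subscribes to command events that the voice script sends to the app. A minimal sketch based on the Alan AI React Native SDK (the playGreeting command name is a made-up example you would define in your own voice script):

```javascript
import { NativeEventEmitter, NativeModules } from 'react-native';

const { AlanEventEmitter } = NativeModules;
const alanEventEmitter = new NativeEventEmitter(AlanEventEmitter);

// Subscribe to commands sent from the voice script
alanEventEmitter.addListener('onCommand', (data) => {
  if (data.command === 'playGreeting') {
    // React to the hypothetical playGreeting command here
    console.log('Greeting command received');
  }
});
```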

What's next?

You can now proceed to building a voice interface with Alan. Here are some helpful resources: