Navigating between screens with voice (React Native)

If your React Native app has several screens, you can add voice commands to navigate between them. For example, you can let the user open the details screen and then go back to the main screen with voice.

In this tutorial, we will create an app with two screens and add voice commands to navigate forward and back.

What you will learn

  • How to navigate between screens of a React Native app with voice

  • How to send commands to a React Native app

  • How to handle commands on the React Native app side

What you will need

To go through this tutorial, make sure the following prerequisites are met:

  • You have completed all steps from the following tutorial: Building a voice Agentic Interface for a React Native app.

  • You have set up the React Native environment and it is functioning properly. For details, see React Native documentation.

Step 1. Add two screens to the app

First, let’s update our app to add two screens to it.

  1. In the Terminal, navigate to the app folder and install the required navigation components:

    Terminal
    npm install @react-navigation/native @react-navigation/native-stack
    npm install react-native-screens react-native-safe-area-context
    
  2. Install pods to complete the installation:

    Terminal
    cd ios
    pod install
    cd ..
    
  3. Update the App.js file to the following:

    App.js
    import * as React from 'react';
    import { Button, View, Text } from 'react-native';
    import { NavigationContainer } from '@react-navigation/native';
    import { createNativeStackNavigator } from '@react-navigation/native-stack';
    
    import { navigationRef } from './components/RootNavigation';
    import * as RootNavigation from './components/RootNavigation';
    
    function HomeScreen({ navigation: { navigate } }) {
      return (
        <View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
          <Text>This is the home screen of the app</Text>
          <Button title="Go to Profile" onPress={() => RootNavigation.navigate('Profile')} />
        </View>
      );
    }
    
    function ProfileScreen({ navigation: { navigate } }) {
      return (
        <View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
          <Text>Profile details</Text>
          <Button title="Go back" onPress={() => RootNavigation.navigate('Home')} />
        </View>
      );
    }
    
    const Stack = createNativeStackNavigator();
    
    const App = () => {
      return (
        <NavigationContainer ref={navigationRef}>
          <Stack.Navigator initialRouteName="Home">
            <Stack.Screen name="Home" component={HomeScreen} />
            <Stack.Screen name="Profile" component={ProfileScreen} />
          </Stack.Navigator>
        </NavigationContainer>
      );
    }
    
    export default App;
    
  4. In your app, create the components folder and add the RootNavigation.js file to it:

    RootNavigation.js
    import { createNavigationContainerRef } from '@react-navigation/native';
    
    export const navigationRef = createNavigationContainerRef();
    
    export function navigate(name, params) {
      if (navigationRef.isReady()) {
        navigationRef.navigate(name, params);
      }
    }
    
  5. In the App.js file, add the Alan AI agentic interface as described in the Building a voice Agentic Interface for a React Native app tutorial:

    • Import AlanView

    • Add the Alan AI agentic interface to NavigationContainer

You can test it: run the app and try navigating between screens using the buttons.
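As a sketch, the changes from that tutorial look like this (the projectid value is a placeholder; copy the Alan AI SDK key from your own Alan AI Studio project):

```javascript
// App.js (fragment): AlanView rendered inside NavigationContainer.
// Replace 'YOUR_PROJECT_ID' with the Alan AI SDK key of your project.
import { AlanView } from '@alan-ai/alan-sdk-react-native';

const App = () => {
  return (
    <NavigationContainer ref={navigationRef}>
      <Stack.Navigator initialRouteName="Home">
        <Stack.Screen name="Home" component={HomeScreen} />
        <Stack.Screen name="Profile" component={ProfileScreen} />
      </Stack.Navigator>
      <AlanView projectid={'YOUR_PROJECT_ID'} />
    </NavigationContainer>
  );
}
```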

Step 2. Add navigation commands to the script

Let’s add navigation commands to the dialog script. In the code editor in Alan AI Studio, add the following:

Dialog script
intent('Open profile details', p => {
    p.play('Opening the profile page');
    p.play({command:'goForward'});
});

intent('Go back', p => {
    p.play('Going back');
    p.play({command:'goBack'});
});

Here, each intent contains two p.play() calls:

  • One to play a response to the user

  • The other to send a command to the client app. The second p.play() call takes a JSON object with the name of the command to be sent.

You can try the commands in the Debugging Chat. Notice that together with the answer, Alan AI now sends the command we have defined.
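If you later add more screens, Alan AI's slot syntax (see the Slots section of the docs) lets a single intent carry the target screen in the command payload. A hedged sketch, assuming the two screens above; the navigate command name and its screen field are arbitrary choices you also handle on the app side:

```javascript
// Dialog script sketch: the SCREEN slot captures which screen the
// user asked for, and its value is sent along with the command.
intent('(Open|Go to) $(SCREEN home|profile)', p => {
    p.play(`Opening the ${p.SCREEN.value} screen`);
    p.play({command: 'navigate', screen: p.SCREEN.value});
});
```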

Step 3. Handle commands in the app

We need to handle these commands on the app side. To do this, we will add Alan AI's onCommand handler to the app.

  1. In the App.js file, add the import statement for the native modules used to listen for Alan AI events:

    App.js
    import { NativeEventEmitter, NativeModules } from 'react-native';
    
  2. Create a new NativeEventEmitter object:

    App.js
    const App = () => {
      const { AlanEventEmitter } = NativeModules;
      const alanEventEmitter = new NativeEventEmitter(AlanEventEmitter);
    }
    
  3. Add the import statement for the useEffect hook:

    App.js
    import { useEffect } from 'react';
    
  4. Add the useEffect hook to subscribe to dialog script events:

    App.js
    const App = () => {
      useEffect(() => {
        const subscription = alanEventEmitter.addListener('onCommand', (data) => {
          if (data.command === 'goForward') {
            RootNavigation.navigate('Profile');
          } else if (data.command === 'goBack') {
            RootNavigation.navigate('Home');
          }
        });
        // Remove the listener when the component unmounts
        return () => subscription.remove();
      }, []);
    }
    

Now, when the app receives a command, the corresponding screen opens.
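As the number of commands grows, the if/else chain in the listener can be replaced with a small lookup table. A minimal sketch in plain JavaScript (COMMAND_SCREENS and screenForCommand are hypothetical names, not part of the Alan AI SDK):

```javascript
// Maps command names sent by the dialog script to screen names.
// Adding a new voice command then only requires a new entry here.
const COMMAND_SCREENS = {
  goForward: 'Profile',
  goBack: 'Home',
};

// Returns the screen for a command payload, or null if the command
// is unknown (e.g. sent by a newer version of the dialog script).
function screenForCommand(data) {
  return COMMAND_SCREENS[data.command] ?? null;
}
```

Inside the onCommand listener, the handler then collapses to: `const screen = screenForCommand(data); if (screen) RootNavigation.navigate(screen);`.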

You can try it: in the app, tap the Alan AI agentic interface and say: Open profile details and Go back.

©2025 Alan AI, Inc. All rights reserved.