Passing the app state to the voice script

An important feature of a robust voice assistant is context awareness. To provide relevant responses, the voice assistant must know what is happening in the app: which screen is currently open, which options are enabled, and so on.

To get information about the app state, you can use Alan’s visual state functionality. The visual state lets you send an arbitrary JSON object describing the current app context to the voice script. You can access this data in the script through the p.visual runtime variable.
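Conceptually, the visual state is just a JSON document: the client serializes it to a string, and the script sees it as a parsed object on p.visual. Here is a minimal plain-JavaScript sketch of that round trip (outside the Alan runtime; buildVisualState and parseVisualState are hypothetical helpers used only for illustration):

```javascript
// The client side serializes the app state to a JSON string...
function buildVisualState(screen) {
  return JSON.stringify({ screen: screen });
}

// ...and the Alan runtime parses it before exposing it to the script;
// here that step is simulated with a plain JSON.parse call.
function parseVisualState(json) {
  return JSON.parse(json);
}

const sent = buildVisualState("second");
const received = parseVisualState(sent);
console.log(received.screen); // → "second"
```

Any JSON-serializable object can travel this way, so you can extend the state beyond the screen name as your app grows.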

In this tutorial, we will let our voice assistant know what screen is currently open in the app. On the script side, we will use the passed state to do the following:

  • Add a voice command that will let Alan reply differently depending on the screen open in the app

  • Create screen-specific commands that can be matched only if the necessary screen is open

Note

If you are a visual learner, watch this tutorial on the Alan AI YouTube channel.

What you will learn

  • How to pass the app state from a Flutter app to the voice script with the visual state

  • How to access the data passed with the visual state in the voice script

  • How to filter intents by the app state

What you will need

To go through this tutorial, make sure the following prerequisites are met:

  • In this tutorial, we will continue using the Flutter app created in the previous tutorials. You can also use your own app with several screens. Make sure you have completed all steps from the previous tutorials.

  • You have set up the Flutter environment and it is functioning properly. For details, see Flutter documentation.

  • The device on which you plan to test the Flutter app is connected to the Internet. The Internet connection is required for the Flutter app to communicate with the voice script running in the Alan Cloud.

Step 1: Add RouteObserver

First, let’s add RouteObserver to our app to keep track of screen transitions.

  1. Register RouteObserver in the app and in the MaterialApp constructor:

    /// Register RouteObserver
    final RouteObserver<PageRoute> routeObserver = RouteObserver<PageRoute>();
    
    class MyApp extends StatelessWidget {
    
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          title: 'Flutter Demo',
          theme: ThemeData(
            primarySwatch: Colors.blue,
          ),
          home: MyHomePage(title: 'Flutter Demo Home Page'),
          /// Register RouteObserver
          navigatorObservers: [routeObserver],
          initialRoute: '/',
          routes: {
            '/second': (context) => const SecondPage(),
          }
        );
      }
    }
    
  2. Implement RouteAware in the second widget’s state and subscribe it to the RouteObserver:

    /// Implement RouteAware
    class _SecondPageState extends State<SecondPage> with RouteAware {
    
      /// Subscribe to RouteObserver
      @override
      void didChangeDependencies() {
        super.didChangeDependencies();
        routeObserver.subscribe(this, ModalRoute.of(context) as PageRoute);
      }
    
      @override
      void dispose() {
        routeObserver.unsubscribe(this);
        super.dispose();
      }
    
      @override
      void didPush() {}
    
      @override
      void didPop() {}
    
      ...
    }
    

Step 2: Send the app state to the voice script

Now we need to send information about the screen that is currently open in the app to the voice script.

  1. In the second widget’s state, add the setVisuals() function. Here we use the setVisualState() client API method to pass an object containing information about the current screen.

    ...
    void setVisuals(String screen) {
      /// Build the JSON string for the visual state;
      /// jsonEncode from dart:convert is a safer alternative
      var visual = "{\"screen\":\"$screen\"}";
      AlanVoice.setVisualState(visual);
    }
    ...
    
  2. Call setVisuals() in the didPush() and didPop() methods and pass the necessary screen name:

    ...
    @override
    void didPush() {
      setVisuals("second");
    }
    
    @override
    void didPop() {
      setVisuals("first");
    }
    ...
    
  3. In _MyHomePageState, add the setVisuals() function and call it to set the visual state when the first screen is initially opened:

    class _MyHomePageState extends State<MyHomePage> {
      ...
      @override
      void initState() {
        super.initState();
        WidgetsBinding.instance!.addPostFrameCallback((_) => setVisuals("first"));
      }
    
      void setVisuals(String screen) {
        var visual = "{\"screen\":\"$screen\"}";
        AlanVoice.setVisualState(visual);
      }
      ...
    }
    

Step 3: Add a voice command with different answers for screens

Imagine we want to let the user ask What screen is this? in the app, with Alan replying differently depending on the open screen. Let’s go back to Alan Studio and add a new intent:

intent(`What screen is this?`, p => {
    let screen = p.visual.screen;
    switch (screen) {
        case "first":
            p.play('This is the first screen');
            break;
        case "second":
            p.play('This is the second screen');
            break;
        default:
            p.play('This is a voice-enabled Flutter app');
    }
});

Here we use the p.visual.screen variable to access the data passed with the visual state. Depending on its value, Alan plays back different responses.

You can test it: run the app, say Open the second screen and ask: What screen is this? Alan will answer: This is the second screen. Then say Go back and ask again: What screen is this? Alan will answer: This is the first screen.

Step 4: Create screen-specific commands

If an app has several screens, it may be necessary to create voice commands that work for specific screens only. In Alan, you can create screen-specific commands with the help of filters added to intents. The filter in an intent defines the conditions under which the intent can be matched. In our case, we can use information about the open screen as a filter.

Let’s modify our navigation commands to apply filters to them. The filter comes as the first parameter of the intent. In Alan Studio, update the intents as follows:

const firstScreen = visual(state => state.screen === "first");
intent(firstScreen, 'Open the second screen', p => {
    p.play({command: 'forward'});
    p.play('Opening the second screen');
});

const secondScreen = visual(state => state.screen === "second");
intent(secondScreen, 'Go back', p => {
    p.play({command: 'back'});
    p.play('Going back');
});
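Under the hood, a visual-state filter is simply a predicate over the state object: an intent is only considered for matching when its filter returns true for the current visual state. A minimal plain-JavaScript sketch of this idea (outside the Alan runtime; canMatch is a hypothetical helper used only for illustration):

```javascript
// A filter is a predicate over the visual state object.
const firstScreen = state => state.screen === "first";
const secondScreen = state => state.screen === "second";

// A hypothetical matcher only considers an intent whose filter
// accepts the current visual state.
function canMatch(filter, visualState) {
  return filter(visualState);
}

console.log(canMatch(firstScreen, { screen: "first" }));  // → true
console.log(canMatch(secondScreen, { screen: "first" })); // → false
```

This is why Go back cannot be matched on the first screen: its filter rejects every state where screen is not "second".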

You can test it: run the app and, while the first screen is open, say: Go back. Alan will not match this intent; it can only be matched when the second screen is open.

What’s next?

Have a look at the next tutorial: Sending data to the voice script.