Why Marketers Turn to Chatbots and Voice Interfaces

Chatbots and voice assistants weren’t created with marketers first in mind. Both are task-oriented products at their cores, serving users when actual human beings can’t, which is quite often. So it should come as no surprise that both support an estimated 5 billion global users combined.

And now marketers are showing up in droves. 

Rise of Chatbot Marketing

Online stores turned to chatbots to fight cart abandonment. Their around-the-clock automated service provided a safety net late in the user journey. Unlike traditional channels broadcasting one-way messages, chatbots (like Drift and Qualified) fostered two-way interactions between websites and users. Granting consumers a voice boosted engagement. As it turned out, messaging a robot became much more popular than calling customer service.

Successful marketing chatbots rely on three things: 1) Scope, 2) Alignment, and 3) KPIs. 

Conversational A.I. needs to get specific. Defining a chatbot’s scope — how well it solves a finite number of problems — makes or breaks its marketing potential. A chief marketing officer (CMO) at a B2C retail brand probably would not benefit from a B2B chatbot that displays SaaS terminology in its UX. Spotting mutual areas of expertise is quite simple. The hard part is evaluating how well the chatbot aligns with the strategies you and your team deploy. If there is synergy between the conversational A.I. and the marketer, the team must then choose KPIs that best measure the successes and failures to come. Some of the most common include the number of active users, sessions per user, and bot sessions initiated.
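To make those KPIs concrete, here is a minimal sketch of how they might be computed from raw session logs. The log format and field names are assumptions for illustration, not any particular vendor’s schema.

```python
# Hypothetical session log records: (user_id, session_id, initiated_by_bot)
sessions = [
    ("u1", "s1", False),
    ("u1", "s2", True),
    ("u2", "s3", False),
]

def chatbot_kpis(sessions):
    """Compute active users, sessions per user, and bot-initiated sessions."""
    users = {user_id for user_id, _, _ in sessions}
    bot_initiated = sum(1 for _, _, by_bot in sessions if by_bot)
    return {
        "active_users": len(users),
        "avg_sessions_per_user": len(sessions) / len(users),
        "bot_sessions_initiated": bot_initiated,
    }

print(chatbot_kpis(sessions))
# {'active_users': 2, 'avg_sessions_per_user': 1.5, 'bot_sessions_initiated': 1}
```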

Chatbots vs. Voice Assistants

Chatbots and voice assistants have a few things in common. Both are constantly learning more about their respective users, relying on newer data to improve the quality of interactions. And both use automation to communicate instantly and make the user experience (UX) convenient.

Chatbots handle a narrower set of tasks, carried out repetitively. Despite their extensive experience, they require a bit more supervision. Voice assistants, meanwhile, oversee entire user journeys. If needed, they can solve problems independently with a more versatile skill set.

Voice assistants are just getting started when it comes to marketing. After taking the global consumer market by storm for the last decade, voice products still have room to reach new customers. But the B2B space presents an even larger addressable market. Just as voice assistants quickly gained traction with millions of consumers, they may soon pop up across organizations.

Rise of Voice Assistant Marketing

Many marketers considered voice a “low priority” back in 2018. Lately, the tide has changed: 28 percent of marketers in a Voicebot.ai survey now find voice assistants “extremely important.” Why? Because voice assistants give them immediate access to a mass audience. As voice products earn more public trust, marketers can leverage those budding user relationships to introduce products and services.

Voice assistants are stepping beyond their usual duties and tapping into their marketing powers. Amazon Alexa enables brands to develop product skills that alleviate user and customer pains. Retail rival Walmart launched Walmart Stories, an Alexa skill showcasing customer and employee satisfaction initiatives.

Amazon created a dashboard to gauge individual Alexa skill performance in 2017. For example, marketers can see how many unique customers, plays, sessions, and utterances a skill has. Multiple metrics can be further broken down by type of user action, thus indicating which moments are best suited for engagement.

Google Assistant also amplifies brands through an “Actions” feature similar to Alexa’s skills. TD Ameritrade launched an action letting users view their financial portfolios via voice command. 

The Bottom Line

Chatbots aren’t going anywhere. According to GlobeNewswire, they form a $2.9B market predicted to be worth $10.5B in 2026. Automation’s strong tailwinds almost guarantee chatbots won’t face extinction anytime soon. They are likely staying in their lane, building upon their current capabilities instead of adding drastically different ones. 
Meanwhile, voice e-commerce is set to become a $40B business by 2022. Over 4 billion voice assistants are in use worldwide; by 2024, that number will surpass 8 billion. It’s hard to bet against voice assistants dominating this decade’s MarTech landscape. Their future ubiquity, easy access to hands-free search, and the likelihood that A.I. keeps improving leave plenty of room for growth. But to address recurring user pains, future voice products will need to innovate with improved personalization and privacy features.

If you’re looking for a voice platform for your application, get started with the Alan AI platform today.

References

  1. The Future is Now – 37 Fascinating Chatbot Statistics (smallbizgenius)
  2. 2020’s Voice Search Statistics – Is Voice Search Growing? (Review 42)
  3. 10 Ways to Measure Chatbot Program Success (CMSWire)
  4. Virtual assistants vs Chatbots: What’s the Difference & How to Choose the Right One? (FreshDesk) 
  5. Digiday Research: Voice is a low priority for marketers (Digiday)
  6. Marketers Assign Higher Importance to Voice Assistants as a Marketing Channel in 2021 – New Report (Voicebot.ai)
  7. Use Alexa Skill Metrics Dashboard to Improve Smart Home and Flash Briefing Skill Engagement (Amazon.com)
  8. The global Chatbot market size to grow from USD 2.9 billion in 2020 to USD 10.5 billion by 2026, at a Compound Annual Growth Rate (CAGR) of 23.5% (GlobeNewswire)
  9. Chatbot Market by Component, Type, Application, Channel Integration, Business Function, Vertical And Region – Global Forecast to 2026 (Report Linker)
  10. Voice Shopping Set to Jump to $40 Billion By 2022, Rising From $2 Billion Today (Compare Hare)
  11. Number of digital voice assistants in use worldwide 2019-2024 (Statista)
  12. The Future of Voice Technology (OTO Systems, Inc.)
4 Things Your Voice UX Needs to Be Great

After a decade of taking the commercial market by storm, it’s official: voice technology is no longer the loudest secret in Silicon Valley. The speech and voice recognition market is currently worth over $10 billion. By 2025, it is projected to surpass $30 billion. No longer is it monumental or unconventional to simply speak to a robot not named WALL-E and hold basic conversations. Voice products already oversee our personal tech ecosystems at home while organizing our daily lives. And they bring similar skill sets to enterprises in hopes of optimizing project management efforts. 

The future arrived a long time ago. Problems at work and in life are now more complex. Settling for linear solutions will not suffice. So how do we know what separates a high-quality modern voice user experience (UX) from the rest? 

Navigating an ever-changing voice technology landscape does not require a fancy manual or a DeLorean (although we at Alan proudly believe DeLoreans are pretty cool). A basic understanding of which qualities make the biggest difference on a modern voice product’s UX can catch us up to warp speed and help anyone better understand this market. Here are four features each user experience must have to create win-win scenarios for users and developers. 

MULTIPLE CONTEXTS

Pitfalls in communication between people and voice technology exist because, until about a decade ago, designers had limited insight into how users actually speak. Voice assistants were forced to play a guessing game upon finally rolling out to market, predicting user behavior in their first-ever interactions. They lacked experience engaging back and forth with human beings. Even when we communicate verbally, we still rely on subtle cues to shape how we say what we mean. Without any prior context, voice products struggled to grasp the world around us.

By gathering Visual, Dialog, and Workflow contexts, a voice product can better understand user intent, respond to inquiries, and engage in multi-stage conversations. Visual contexts spark nonverbal communication through physical tools like screens; they do not cover scenarios where a voice product collects data from a disengaged user. Dialog contexts process long conversations requiring a more advanced understanding. And Workflow contexts improve the accuracy of predictions made by data models. Overall, voice products understand user dialogue more often.
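As a minimal sketch, the three context types might be modeled like this, with each one narrowing how an ambiguous utterance is read. The class names, fields, and resolution rules are illustrative assumptions, not any production API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VisualContext:
    screen: str                              # what the user currently sees
    highlighted_item: Optional[str] = None   # item in focus, if any

@dataclass
class DialogContext:
    history: List[str] = field(default_factory=list)  # prior turns

@dataclass
class WorkflowContext:
    step: str = "start"                      # position in a multi-step task

def resolve_intent(utterance, visual, dialog, workflow):
    """Toy resolver: each context disambiguates a different kind of phrase."""
    if utterance == "that one" and visual.highlighted_item:
        return ("select", visual.highlighted_item)      # visual context
    if workflow.step == "checkout" and "yes" in utterance:
        return ("confirm_order", None)                  # workflow context
    if utterance == "do it again" and dialog.history:
        return ("repeat", dialog.history[-1])           # dialog context
    return ("fallback", None)

visual = VisualContext(screen="results", highlighted_item="Flight AA 212")
print(resolve_intent("that one", visual, DialogContext(), WorkflowContext()))
# -> ('select', 'Flight AA 212')
```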

When two or more contexts work together, they are more likely to support multiple user interfaces. Multimodal UX unites two or more interfaces within a voice product’s UX. Rather than take a one-track-minded approach and place all bets on a single interface that may fail alone, this strategy aims to maximize user engagement. Together, different interfaces — such as visual and voice — can flex their best qualities while covering each other’s weaknesses. In turn, more human senses are engaged, product accessibility and functionality improve vastly, and higher-quality product-user relationships are produced.

WORKFLOW CAPABILITIES

Developers want to design a convenient and accurate voice UX. This is why using multiple workflows matters — it empowers voice technology to keep up with faster conversations. In turn, optimizing personalization feels less like a chore. The more user scenarios a voice product is prepared to resolve quickly, the better chance it has to cater to a diverse set of user needs across large markets. 

There is no single mix of workflows that works best for every UX. Typically, voice assistants combine two types: task-oriented and knowledge-oriented. Task-oriented workflows complete almost anything a user asks their device to do, such as setting alarms. Knowledge-oriented workflows lean on secondary sources like the internet to complete a task, such as answering a question about Mt. Everest’s height.
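A toy dispatcher makes the split visible: one handler acts on the device directly (task-oriented), while the other consults an outside source (knowledge-oriented). The routing rule and handler names are assumptions for illustration.

```python
def set_alarm(time):
    """Task-oriented: the device can satisfy this request on its own."""
    return f"Alarm set for {time}."

def web_lookup(question):
    """Knowledge-oriented: lean on a secondary source (stubbed here)."""
    facts = {"how tall is mt. everest": "Mt. Everest is 8,849 m tall."}
    return facts.get(question.lower().rstrip("?"), "Searching the web...")

def route(utterance):
    # A real assistant would classify the request; a prefix check stands in.
    if utterance.lower().startswith("set an alarm"):
        return set_alarm(utterance.lower().split("for")[-1].strip())
    return web_lookup(utterance)

print(route("Set an alarm for 7 am"))      # -> Alarm set for 7 am.
print(route("How tall is Mt. Everest?"))   # -> Mt. Everest is 8,849 m tall.
```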

SEAMLESS INTEGRATION

Hard work that goes into product development can be wasted if the curated experience cannot be shared with the world. This applies to developing realistic contexts and refining workflows without ensuring the voice product will integrate seamlessly. While point-to-point app integrations can create system dependencies, having an API connect the dots between a wide variety of systems saves stress, time, and money during development and on future projects. Doing so allows for speedier and more interactive builds that bring cutting-edge voice UX to life.
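As a sketch of that idea, the snippet below forwards every recognized intent to a single API gateway instead of integrating with each back-end system separately. The endpoint URL and payload shape are hypothetical.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical gateway fronting CRM, inventory, and booking systems, so the
# voice layer never has to integrate with each back end point to point.
GATEWAY_URL = "https://api.example.com/v1/voice-events"

def forward_intent(intent, slots, user_id):
    """Send a recognized intent to the gateway and return its fulfillment."""
    response = requests.post(
        GATEWAY_URL,
        json={"intent": intent, "slots": slots, "user": user_id},
        timeout=5,
    )
    response.raise_for_status()  # surface integration failures early
    return response.json()       # e.g. {"speech": "Your order is on its way."}
```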

PRIVACY 

Voice tech has notoriously failed to respect user privacy — especially when products have collected too much data at unnecessary times. One Adobe survey reported 81% of users were concerned about their privacy when relying on voice recognition tools. Since there is little to no trust, an underlying paranoia defines these negative user experiences far too often.

Enterprises often believe their platforms are designed well enough to sidestep these user sentiments. Forward-thinking approaches to user privacy must instead promote transparency on who owns user data, where that data is accessible, whether it is encrypted, and for how long it is retained. A good UX platform will take care of the underlying infrastructure and provide each customer with separate containers and their own customized AI model.
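A minimal sketch of what such per-customer isolation might look like as configuration. Every field here is an assumption about what a privacy-minded platform could expose, not Alan AI’s actual settings schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantConfig:
    """Hypothetical per-customer isolation settings."""
    tenant_id: str
    container: str         # dedicated runtime, never shared across customers
    model_id: str          # AI model customized on this tenant's data only
    encrypt_at_rest: bool  # whether stored transcripts are encrypted
    retention_days: int    # how long voice data is kept before deletion

acme = TenantConfig(
    tenant_id="acme",
    container="acme-voice-prod",
    model_id="acme-slu-v3",
    encrypt_at_rest=True,
    retention_days=30,
)
```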

If you’re looking for a voice platform for your application, get started with the Alan AI platform today.

Spoken Language Understanding (SLU) and Intelligent Voice Interfaces

It’s no secret voice tech performs everyday magic for users. By now, basic voice product capabilities and features are well-known to the public. Common knowledge is enough to tell us what this technology does. Yet we rarely consider the factors and mechanisms behind the scenes that enable these products to work. Multiple frameworks govern the different ways people and products communicate. But as a concept, the lifeblood of the user experience — Spoken Language Understanding (SLU) — is quite concrete.

As the name hints, SLU takes what someone tells a voice product and tries to understand it. Doing so involves detecting signals in speech, drawing the right inferences, and navigating complexities as an intermediary between human voices and written text. Since typed and spoken language form sentences differently, self-corrections and hesitations recur in speech. SLU systems leverage different tools to navigate user messages through this traffic.

The most established is Automatic Speech Recognition (ASR), a technology that transcribes user speech at the system’s front end. By tracking audio signals, it converts spoken words to text. Similar to the first listener in an elementary school game of telephone, ASR is the component most likely to precisely understand what the original caller whispered. Conversely, Natural Language Understanding (NLU) determines user intent at the back end. ASR and NLU are used in tandem since they typically complement each other well. Meanwhile, an End-to-End SLU cuts out the middle step by deciphering utterances without transcripts.
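The difference between the two architectures is easiest to see as code. Both stages below are stubs standing in for real trained models; only the data flow is the point.

```python
def asr(audio_bytes):
    """Front end: transcribe speech to text (stub for a real acoustic model)."""
    return "i want to fly from san francisco to new york"

def nlu(transcript):
    """Back end: extract structured meaning from the transcript (stub)."""
    return {"domain": "flight", "intent": "departure"}

def pipeline_slu(audio_bytes):
    # Classic two-stage design: any ASR transcription error propagates to NLU.
    return nlu(asr(audio_bytes))

def end_to_end_slu(audio_bytes):
    # A single model maps audio straight to meaning, skipping the transcript.
    return {"domain": "flight", "intent": "departure"}

print(pipeline_slu(b"..."))    # -> {'domain': 'flight', 'intent': 'departure'}
print(end_to_end_slu(b"..."))  # same result, no intermediate transcript
```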

This Law & Order: SLU series gets juicier once you know all that Spoken Language Understanding is up against. One big challenge SLU systems face is that ASR has a complicated past. The number of transcription errors ASRs have committed is borderline criminal — not in a court of law — but the product repairs and false starts that resulted left a polarizing effect on users. ASRs usually operate at the speed of sound or slower. And the icing on the cake? The limited scope of domain knowledge across early SLU systems hampered their appeal to targeted audiences. Relating to different niches was difficult because jargon was scarce.

Let’s say you are planning a vacation. After deciding your destination will be New York, you are ready to book a flight. You tell your voice assistant, “I want to fly from San Francisco to New York.” 

That request is sliced and diced into three pieces: Domain, Intent, and Slot Labels.

1. DOMAIN (“Flight”)

Before accurately determining what a user said, SLU systems figure out what subject they talked about. A domain is the predetermined area of expertise that a program specializes in. Since many voice products are designed to appeal broadly, learning algorithms can classify various query subjects by categorizing incoming user data. In the example above, the domain is just “Flight.” No hotels were mentioned. Nor did the user ask to book a cruise. They simply preferred to fly to the Big Apple.    

Domain classification is a double-edged sword SLU must use wisely. Without it, these systems can miss the mark, steering someone in need of one application into another. 

The SLU has to determine whether or not the user referred to flights. Otherwise, the system could produce the wrong list of travel options. Nobody wants to be prompted to accidentally book a rental car for a cross-country road trip they never asked for.

2. INTENT (“Departure”)

Tracking down the subject a speaker talks about matters. However, if a voice product cannot pin down why that person spoke, how could it solve their problem? Any task it carried out would likely miss the point.

Once the domain is selected, the SLU identifies user intent. Doing so goes one step further and traces why that person communicated with the system. In the example above, “Departure” is the intent. When someone asks about flying, the SLU has enough information to believe the user is likely interested in leaving town. 

3. SLOT LABELS (Departure: “San Francisco”, Arrival: “New York”)

Enabling an SLU system to set domains and grasp intent is often not enough. Sure, we already know the user is aiming to leave on a flight. But the system has yet to officially document where they want to go.

Slots capture the specifics of a query once the subject matter and end goal are both determined. Unlike their high-stakes Vegas counterparts, these slots do not rack up casino winnings. Instead, they attach labels to the details the domain and intent leave open. Within the same example, departure and arrival locations must be accounted for. The original query includes both: “San Francisco” (departure) and “New York” (arrival).
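Putting the three pieces together, a toy rule-based parse of the example query might look like the sketch below. Real SLU systems use trained classifiers and taggers rather than string splitting; this only shows the shape of the output.

```python
def parse(utterance):
    """Toy SLU parse: domain, intent, and slot labels for a flight request."""
    result = {"domain": None, "intent": None, "slots": {}}
    text = utterance.lower()
    if "fly" in text or "flight" in text:
        result["domain"] = "flight"           # 1. domain classification
        result["intent"] = "departure"        # 2. intent detection
        if " from " in text and " to " in text:
            origin = text.split(" from ")[1].split(" to ")[0]
            destination = text.split(" to ")[-1]
            result["slots"] = {               # 3. slot labeling
                "departure": origin.title(),
                "arrival": destination.title(),
            }
    return result

print(parse("I want to fly from San Francisco to New York"))
# {'domain': 'flight', 'intent': 'departure',
#  'slots': {'departure': 'San Francisco', 'arrival': 'New York'}}
```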

IMPACT

Spoken Language Understanding (SLU) provides the structure for establishing, storing, and processing the vocabulary that gives voice products their identity. By taking the necessary leap to classify and categorize queries, SLU systems collect better data and personalize voice experiences. Products then become smarter and channel more empathy. They are empowered to anticipate user needs and solve problems quickly. SLU thus facilitates efficient workflow design and raises the ceiling on how well people can share and receive information to accomplish more.

If you’re looking for a voice platform for your application, get started with the Alan AI platform today.
