Business Operations – Alan AI Blog
https://alan.app/blog/
Follow the most recent Generative AI articles

Enterprises Need Business Outcomes from GenAI
Wed, 21 Aug 2024
https://alan.app/blog/the-future-of-enterprise-ai-is-here-are-you-ready-to-unleash-its-full-potential/

In a world where most AI implementations are still stuck in the realm of basic chatbots, true innovation is waiting to be unlocked. At Alan AI, we’ve moved beyond the limitations of traditional AI chatbots, empowering enterprises to achieve 15x ROI with our advanced GenAI Agents.

Imagine AI agents that don’t just answer questions but transform how your organization operates—by automating complex workflows, integrating with diverse data sources, and providing immersive user experiences.

Our latest whitepaper, “Unleashing the Full Potential of Generative AI,” delves deep into how Alan AI is setting a new standard in Enterprise AI, moving far beyond chatbots to deliver real, measurable value: business outcomes.

Curious to know more? Download our whitepaper and discover how we’re helping enterprises harness the true power of AI to drive productivity, streamline operations, and stay ahead of the competition.

Download the Whitepaper Now

Alan AI Announces Full Private AI Capabilities for Enterprise GenAI Adoption
Mon, 20 May 2024
https://alan.app/blog/alan-ai-announces-full-private-ai-capabilities-for-enterprise-genai-adoption/

“With our new private AI functionality, Alan AI empowers enterprises to fully harness generative AI, accelerating productivity and driving transformative insights while maintaining data sovereignty within secure environments. By leveraging rapid advancements in both open-source and closed-source Large Language Models, we ensure our customers benefit from the latest in AI technology.” 

Srikanth Kilaru, Head of Product, Alan AI

Alan AI is excited to announce that its platform is now fully available in a pure private cloud environment or a Virtual Private Cloud (VPC), expanding beyond its previous SaaS offerings. As the only comprehensive platform on the market that runs entirely within a private cloud setting, Alan AI enables customers to build, deploy, and manage universal agents effortlessly. Our platform’s patented technology allows agents to adapt to new use cases, integrate seamlessly with private APIs and proprietary data, and deliver responses that are not only explainable and controllable but also enriched by immersive App GUI integrations, ensuring an unmatched user experience.

Two configurations for fully private AI deployments

The entire Alan AI platform for developing, testing, deploying, and managing universal agents is now available in two configurations for fully private AI deployments:

  • Independent configuration: The AI platform is deployed in an independent private cloud, where the entire life cycle of an agent, from creation to management, as well as all user data, is contained inside that private cloud.
  • Hub & Spoke configuration: The development, testing, deployment, and management of agents are done from a private cloud instance designated as a ‘Hub,’ while all the other private clouds where the agents are deployed and used are designated as ‘Spokes.’

Complete Data Control

With our platform, every bit of enterprise data stays within the private cloud. This control is crucial for enterprises concerned with data sovereignty and privacy, as it allows them to build, deploy, and manage versatile agents without external data exposure.

Future-Proof Technology

The evolving landscape of Large Language Models (LLMs) presents a challenge for enterprises looking to stay current. Recent announcements, from Google’s Gemini, OpenAI’s GPT-4o, Anthropic’s Claude, and Mistral’s models to Meta’s Llama 3, highlight the need for a flexible platform. Alan AI’s platform is uniquely designed to accommodate this dynamic environment by abstracting the LLM layer. This ensures that agents and their reasoning capabilities remain effective and relevant, leveraging updates in the underlying LLM technologies.
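
By way of illustration, here is a minimal TypeScript sketch of what abstracting the LLM layer can look like: agent logic codes against one provider interface while the underlying model can be swapped or upgraded freely. The names and shapes are illustrative assumptions, not Alan AI’s actual API.

```typescript
// Illustrative only: a provider-agnostic interface, so agent logic depends
// on one contract while the underlying model can be swapped or upgraded.
interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

class OpenAIProvider implements LLMProvider {
  name = "gpt-4o";
  async complete(prompt: string): Promise<string> {
    // The vendor API call would go here; stubbed for the sketch.
    return `[gpt-4o completion for: ${prompt}]`;
  }
}

class LlamaProvider implements LLMProvider {
  name = "llama-3";
  async complete(prompt: string): Promise<string> {
    return `[llama-3 completion for: ${prompt}]`;
  }
}

// Agent code never mentions a specific vendor, only the interface.
async function answer(provider: LLMProvider, question: string): Promise<string> {
  return provider.complete(question);
}
```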

Private AI deployments with Alan AI

Alan AI is setting a new standard for GenAI implementation in the enterprise sector by offering a platform that protects data integrity and adapts to technological advancements.

Interested? Download the Private AI white paper on our website.

Exploring the Rise of Generative AI: How Advanced Technology is Reshaping Enterprise Workflows
Thu, 07 Mar 2024
https://alan.app/blog/exploring-the-rise-of-generative-ai-how-advanced-technology-is-reshaping-enterprise-workflows/

By Emilija Nikolic at Beststocks.com

Generative AI technology is rapidly transforming the landscape of enterprise workflows across various industries. This advanced technology, leveraging the power of artificial intelligence (AI) and machine learning (ML), is reshaping how businesses interact with their applications, streamline processes, and enhance user experiences. In this article, we will delve into the rise of generative AI and its profound impact on modern enterprise workflows.

The Evolution of Generative AI

Generative AI represents a significant advancement in AI technology, particularly in its ability to generate human-like text, images, and other content autonomously. Unlike traditional AI models that rely on pre-defined rules and data, generative AI models are trained on vast datasets and learn to generate new content based on the patterns and information they’ve absorbed.

One of the key features of generative AI is its versatility across various applications and industries. From natural language processing to image generation and beyond, generative AI has shown remarkable potential in transforming how businesses interact with their data and applications.

Reshaping Enterprise Workflows

Generative AI is revolutionizing enterprise workflows by streamlining processes, enhancing productivity, and delivering immersive user experiences. In sectors ranging from finance and healthcare to manufacturing and government, businesses are leveraging generative AI to automate repetitive tasks, analyze complex data sets, and make data-driven decisions.

One of the primary ways generative AI is reshaping enterprise workflows is through its ability to provide personalized and context-aware interactions. By understanding user intent and context, generative AI systems can deliver tailored responses and recommendations, significantly improving user engagement and satisfaction.

Future Implications

The future implications of generative AI in enterprise workflows are vast and promising. As the technology continues to evolve, businesses can expect to see increased automation, improved decision-making processes, and enhanced user experiences. Generative AI has the potential to revolutionize how businesses interact with their applications, enabling more intuitive and efficient workflows.

Moreover, generative AI opens up opportunities for innovation and creativity, enabling businesses to explore new ways of engaging with customers, optimizing processes, and driving growth. From personalized recommendations to automated content generation, the possibilities are endless.

Enhancing Enterprise Efficiency with Advanced AI Solutions

Alan AI, a provider of Generative AI technology for enhancing user interactions in enterprise applications, has recently achieved “Awardable” status within the Chief Digital and Artificial Intelligence Office’s Tradewinds Solutions Marketplace, an integral component of the Department of Defense’s suite of tools designed to expedite the adoption of AI/ML, data, and analytics capabilities.

Unlike traditional chat assistants, Alan AI’s solution offers a pragmatic approach by providing immersive user experiences through its integration with existing Graphical User Interfaces (GUIs) and transparent explanations of AI decision-making processes, per a recent press release.

The cornerstone of Alan AI’s offering lies in its ‘Unified Brain’ architecture, which seamlessly connects enterprise applications, Application Programming Interfaces (APIs), and diverse data sources to streamline workflows across a spectrum of industries, ranging from Government and Manufacturing to Energy, Aviation, Higher Education, and Cloud Operations.

Recognized for its innovation, scalability, and potential impact on Department of Defense (DoD) missions, Alan AI’s comprehensive solution revolutionizes natural language interactions within enterprise applications on both mobile and desktop platforms. Through its support for text and voice interfaces, Alan AI empowers businesses to navigate complex workflows with ease, ultimately driving efficiency and productivity gains across various sectors.

Conclusion

In conclusion, the rise of generative AI is reshaping enterprise workflows in profound ways, driving efficiency, innovation, and user engagement across industries. As businesses continue to harness the power of generative AI, it is crucial to navigate the evolving landscape thoughtfully, addressing challenges while maximizing the transformative potential of this advanced technology. With proper governance, training, and ethical considerations, generative AI holds the promise of revolutionizing how businesses operate and interact with their applications in the years to come.

The Strategic Imperative of Generative AI in Government Agencies
Mon, 14 Aug 2023
https://alan.app/blog/the-strategic-imperative-of-generative-ai-in-government-agencies/

The Dawning Age: Generative AI in Government’s Digital Transformation

A New Horizon

In the brave new world of technology, one frontier stands as an epitome of innovation: generative AI. This revolutionary concept isn’t limited to commercial or scientific applications but has firmly planted its roots within government agencies. The integration of generative AI in government is paving the way for improved efficiency and elevated public service.

Enabling Progress

Generative AI models, trained to learn and adapt, offer governmental bodies the power to analyze massive data sets, predict outcomes, and even create content. This fosters a level of agility and creativity previously unattainable, enhancing decision-making processes and operations.

The Architecture of Innovation: Building Generative AI Systems

Creating Foundations

The integration of generative AI in government begins with crafting a robust and scalable architecture. This entails a blend of cutting-edge technology, algorithms, data management, and skilled professionals. Governments are now channeling resources to build these systems to fortify their technological landscapes.

Safety Measures

But innovation does not come without risks. Ensuring data integrity and system security is paramount. Government agencies are investing in cybersecurity and risk management to guard against potential threats, maintaining the confidence and trust of the public.

The Beacon of Efficiency: Streamlining Workflows

Reshaping Bureaucracy

The adoption of generative AI within government agencies promotes a move towards less bureaucratic and more streamlined processes. Gone are the days of slow-moving paperwork; now, automation and predictive analysis shape the way governments operate, providing timely and effective services.

The Environmental Impact

Moreover, the transition towards a paperless system significantly reduces the environmental footprint. Generative AI in government is not only a step towards efficiency but also a stride towards sustainability, reflecting a greater consciousness of global responsibility.

Smart Governance: Policy Formation & Implementation

Strategic Development

Governments are utilizing generative AI to craft better policies and regulations. Through deep analysis and intelligent forecasting, officials can shape policies that are more aligned with public needs and future trends, resulting in more impactful governance.

Implementation Mastery

Implementation is where generative AI truly shines. By offering insights and automating complex processes, government agencies can deploy resources more effectively, ensuring the smooth execution of policies and projects.

Citizen Engagement: A New Era of Interaction

Accessible Services

The application of generative AI in government allows for the development of user-friendly platforms, providing citizens with accessible and transparent services. The barrier between the government and its people is dissolving, paving the way for a more engaged populace.

Real-time Feedback

Generative AI in government also enables real-time feedback and response systems. Public opinions and needs can be monitored and acted upon promptly, reflecting a government that listens and responds.

The Ethical Compass: Navigating Moral Terrain

Responsible Innovation

With the rise of generative AI in government comes the essential duty to uphold ethical principles. Governments are addressing concerns over privacy, bias, and misuse by implementing clear guidelines and regulations.

Transparency and Accountability

Maintaining transparency and accountability within generative AI applications in government ensures that these technologies are used for the greater good, fostering public trust and adherence to ethical standards.

The Global Perspective: International Collaborations

A Unifying Force

Generative AI has become a unifying force in international collaborations. Governments are now working together, sharing insights, and building joint projects that transcend borders. This global perspective enhances worldwide progress and innovation.

Standardization and Harmonization

International cooperation also leads to the standardization and harmonization of practices and regulations, providing a cohesive approach to leveraging generative AI in government.

Future Prospects: What Lies Ahead

The Road to Excellence

The journey towards integrating generative AI in government is filled with potential. Continuous improvement, learning, and adaptation are the paths to excellence. With persistent efforts, the future promises an even more vibrant synergy between technology and governance.

Challenges and Solutions

As with any innovation, challenges are inevitable. However, with focused research, development, and collaboration, these challenges can be overcome, turning obstacles into opportunities for growth.

Conclusion: The Tapestry of Transformation

An Ongoing Endeavor

The integration of generative AI in government is not an end but an ongoing endeavor. It’s a dance of technology and human intellect that will continue to evolve, reflecting the dynamic nature of society and governance. Every government agency wants to improve citizen services without increasing their cost, and a personalized generative AI solution is a way to leverage current innovation without doing so.

Deployments with Security and Governance

In the panorama of technological advancements, generative AI stands as the pinnacle of innovation. Its integration within government agencies demonstrates a commitment to progress, efficiency, and public service. The tapestry of transformation is being woven, and the path towards a more connected and responsive governance is illuminated.

The Game-Changing Potential of Generative AI: Transforming Economies and Industries
Fri, 04 Aug 2023
https://alan.app/blog/the-game-changing-potential-of-generative-ai-transforming-economies-and-industries/

In the realm of artificial intelligence, generative AI has emerged as a groundbreaking technology with the power to revolutionize industries, transform economies, and redefine the nature of work. This blog post delves into the key insights from recent research on generative AI, exploring its vast economic impact, concentration of value in key areas, and its implications for industries and the workforce.

1. Generative AI’s Potential Economic Impact

The potential economic impact of generative AI is staggering. According to research, this transformative technology has the capacity to contribute between $2.6 trillion and $4.4 trillion annually across various use cases. To put this into perspective, this value is comparable to the entire GDP of the United Kingdom in 2021. Moreover, the integration of generative AI could boost the impact of artificial intelligence as a whole by 15% to 40%, with the potential to double this estimate if generative AI is seamlessly embedded into existing software beyond the identified use cases.

2. Value Concentration in Key Areas

The research further reveals that approximately 75% of the value generated by generative AI use cases is concentrated in four key areas:

a) Customer Operations: Generative AI can enhance customer interactions, leading to improved customer satisfaction and retention.

b) Marketing and Sales: The technology can generate creative, personalized content for marketing and sales campaigns, driving better engagement and conversion rates.

c) Software Engineering: Generative AI has the potential to revolutionize software development by generating complex code based on natural-language prompts, expediting the development process and reducing human errors.

d) Research and Development (R&D): In the realm of R&D, generative AI can assist researchers in generating hypotheses, exploring potential solutions, and speeding up the innovation process.

3. Wide-Ranging Impact Across Industries

Generative AI is not confined to a single industry; it is poised to have a significant impact across all sectors. Particularly noteworthy effects are anticipated in the banking, high tech, and life sciences industries:

a) Banking: Fully implemented, generative AI could potentially deliver an additional $200 billion to $340 billion annually to the banking industry.

b) High Tech: The high tech sector can leverage generative AI for innovations, product development, and automating various processes, leading to substantial economic gains.

c) Life Sciences: In the life sciences field, generative AI is expected to streamline drug discovery, optimize clinical trials, and revolutionize personalized medicine, significantly impacting the industry’s growth.

Moreover, the retail and consumer packaged goods industries stand to benefit immensely, with potential impact ranging from $400 billion to $660 billion per year.

4. Augmenting Work Activities

Generative AI has the potential to augment human workers by automating specific activities, fundamentally reshaping the nature of work. The current capabilities of generative AI and related technologies enable the automation of tasks that occupy 60% to 70% of employees’ time. This accelerated automation potential is primarily attributed to the technology’s improved natural language understanding, making it particularly suitable for automating work activities that account for 25% of total work time.

Occupations that involve knowledge-intensive tasks, higher wages, and educational requirements are more susceptible to this transformation. However, it’s essential to recognize that while some tasks will be automated, new opportunities will also emerge, necessitating a focus on reskilling and upskilling the workforce.

5. Accelerated Workforce Transformation

With the increasing potential for technical automation, the pace of workforce transformation is expected to accelerate significantly. According to updated adoption scenarios, as much as 50% of current work activities could be automated between 2030 and 2060, with a midpoint estimate around 2045. This projection is approximately a decade earlier than previous estimates, highlighting the urgency for preparing the workforce for these imminent changes.

6. Impact on Labor Productivity

Generative AI has the power to significantly enhance labor productivity across various sectors. However, realizing its full potential requires investments to support workers in transitioning to new work activities or changing jobs. Depending on the rate of technology adoption and the effective redeployment of worker time, generative AI could enable labor productivity growth of 0.1% to 0.6% annually until 2040. When combined with other technologies, the overall impact of work automation could contribute an additional 0.2% to 3.3% annual growth in productivity.
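
To make these growth rates concrete, here is a small back-of-the-envelope TypeScript calculation compounding the quoted annual rates over roughly the period to 2040. The 16-year horizon and the midpoint choice are our own assumptions for illustration, not figures from the research.

```typescript
// Compound a constant annual productivity growth rate over a horizon.
function compound(rate: number, years: number): number {
  return Math.pow(1 + rate, years) - 1;
}

// Midpoint of the 0.1%–0.6% generative-AI-only range, over ~16 years to 2040.
console.log(compound(0.0035, 16)); // ≈ 0.057 → roughly a 6% cumulative lift

// Upper bound of the 0.2%–3.3% all-automation range over the same horizon.
console.log(compound(0.033, 16)); // ≈ 0.68 → roughly a 68% cumulative lift
```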

To harness this productivity growth effectively, it is crucial to implement strategies to manage worker transitions and mitigate risks, ensuring a smooth and inclusive transformation of economies.

7. Early Stage of Generative AI

While the promise of generative AI is undeniable, it’s essential to recognize that realizing its full benefits will take time and effort. Business leaders and society as a whole face significant challenges, including managing inherent risks, identifying the skills and capabilities required for the workforce, and reimagining core business processes to facilitate retraining and skill development.

As we navigate these challenges, the era of generative AI holds immense promise for future advancements. Embracing this technology responsibly and proactively is essential for harnessing its potential and fostering a sustainable, inclusive, and prosperous world.

Conclusion

Generative AI represents a transformative force that has the potential to reshape economies, revolutionize industries, and redefine work as we know it. The estimated trillions of dollars in annual value addition to the global economy are just the tip of the iceberg, as this technology continues to advance and permeate various sectors. Leaders across industries must embrace generative AI with a vision for responsible and inclusive adoption, ensuring that its benefits are accessible to all and that the workforce is prepared to thrive in the changing landscape of AI-driven economies. As we embark on this exciting journey, the potential rewards are vast, and the possibilities for progress and innovation are limitless.

Fine-tuning language models for the enterprise: What you need to know
Mon, 17 Apr 2023
https://alan.app/blog/fine-tuning-language-models-for-the-enterprise-what-you-need-to-know/

The media is abuzz with news about large language models (LLMs) doing things that were virtually impossible for computers before. From generating text to summarizing articles and answering questions, LLMs are enhancing existing applications and unlocking new ones.

However, when it comes to enterprise applications, LLMs can’t be used as is. In their plain form, LLMs are not very robust and can make errors that will degrade the user experience or possibly cause irreversible mistakes. 

To solve these problems, enterprises need to adjust the LLMs to remain constrained to their business rules and knowledge base. One way to do this is through fine-tuning language models with proprietary data. Here is what you need to know.

The hallucination problem

LLMs are trained for “next token prediction.” Basically, it means that during training, they take a chunk from an existing document (e.g., Wikipedia, news websites, code repositories) and try to predict the next word. Then they compare their prediction with what actually exists in the document and adjust their internal parameters to improve it. By repeating this process over a very large corpus of curated text, the LLM develops a “model” of the language and the knowledge contained in the documents. It can then produce long stretches of high-quality text.
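
As a drastically simplified sketch of the idea, the TypeScript below uses a toy bigram table (standing in for billions of learned parameters) and computes the training loss for a single next-word prediction; training drives this loss down across the whole corpus.

```typescript
// Toy "model": for each context word, a probability for each candidate
// next word. Training nudges these numbers so that the word that actually
// follows in the corpus gets a higher probability.
const model: Record<string, Record<string, number>> = {
  the: { cat: 0.5, dog: 0.3, sky: 0.2 },
  cat: { sat: 0.7, ran: 0.3 },
};

// Cross-entropy loss for one prediction: low when the model assigned a
// high probability to the word that really came next in the document.
function loss(context: string, actualNext: string): number {
  const p = model[context]?.[actualNext] ?? 1e-9;
  return -Math.log(p);
}

console.log(loss("the", "cat")); // ≈ 0.69 (model was fairly confident)
console.log(loss("the", "sky")); // ≈ 1.61 (less confident → higher loss)
```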

However, LLMs don’t have working models of the real world or the context of the conversation. They are missing many of the things that humans possess, such as multi-modal perception, common sense, intuitive physics, and more. This is why they can get into all kinds of trouble, including hallucinating facts, which means they can generate text that is plausible but factually incorrect. And given that they have been trained on a very wide corpus of data, they can start making up very wild facts with high confidence. 

Hallucination can be fun and entertaining when you’re using an LLM chatbot casually or to post memes on the internet. But when used in an enterprise application, hallucination can have very adverse effects. In healthcare, finance, commerce, sales, customer service, and many other areas, there is very little room for making factual mistakes.

Scientists and researchers have made solid progress in addressing the hallucination problem. But it is not gone yet. This is why it is important that app developers take measures to make sure that the LLMs that power their AI Assistants are robust and remain true to the knowledge and rules that the developers set for them.

Fine-tuning large language models

One of the solutions to the hallucination problem is to fine-tune LLMs on application-specific data. The developer must curate a dataset that contains text that is relevant to their application. Then they take a pretrained model and give it a few extra rounds of training on the proprietary data. Fine-tuning improves the model’s performance by limiting its output within the constraints of the knowledge contained in the application-specific documents. This is a very effective method for use cases where the LLM is applied to a very specific application, such as enterprise settings. 

A more advanced fine-tuning technique is “reinforcement learning from human feedback” (RLHF). In RLHF, a group of human annotators provide the LLM with a prompt and let it generate several outputs. They then rank each output and repeat the process with other prompts. The prompts, outputs, and rankings are then used to train a separate “reward model” which is used to rank the LLM’s output. This reward model is then used in a reinforcement learning process to align the model with the user’s intent. RLHF is the training process used in ChatGPT.
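
For illustration, the sketch below shows one data-preparation step in the RLHF pipeline: converting an annotator’s ranking of several outputs into pairwise preference examples, the raw material for reward-model training. This is a hedged sketch of the general technique, not ChatGPT’s actual pipeline.

```typescript
// One output of the model, with the rank a human annotator assigned.
interface RankedOutput {
  text: string;
  rank: number; // 1 = best, higher numbers = worse
}

// A pairwise preference example used to train the reward model.
interface PreferencePair {
  prompt: string;
  chosen: string;   // the output the annotator preferred
  rejected: string; // an output ranked below it
}

function toPreferencePairs(prompt: string, outputs: RankedOutput[]): PreferencePair[] {
  const sorted = [...outputs].sort((a, b) => a.rank - b.rank);
  const pairs: PreferencePair[] = [];
  // Every better-ranked output is "chosen" against every worse-ranked one.
  for (let i = 0; i < sorted.length; i++) {
    for (let j = i + 1; j < sorted.length; j++) {
      pairs.push({ prompt, chosen: sorted[i].text, rejected: sorted[j].text });
    }
  }
  return pairs;
}
```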

Another approach is to use ensembles of LLMs and other types of machine learning models. In this case, several models (hence the name ensemble) process the user input and generate the output. Then the ML system uses a voting mechanism to choose the best decision (e.g., the output that has received the most votes).
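
A minimal TypeScript sketch of such a voting mechanism, assuming each model returns a plain string answer:

```typescript
// Majority voting over the outputs of several models: the answer that
// appears most often wins.
function majorityVote(outputs: string[]): string {
  const counts = new Map<string, number>();
  for (const o of outputs) counts.set(o, (counts.get(o) ?? 0) + 1);
  let best = outputs[0];
  for (const [output, count] of counts) {
    if (count > (counts.get(best) ?? 0)) best = output;
  }
  return best;
}

// Three models answer; two agree, so their answer is chosen.
console.log(majorityVote(["Paris", "Paris", "Lyon"])); // "Paris"
```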

While mixing and fine-tuning language models is very effective, it is not trivial. Based on the type of model or service used, developers must overcome technical barriers. For example, if the company wants to self-host its own model, it must set up servers and GPU clusters, create an entire MLOps pipeline, curate the data from across its entire knowledge base, and format it in a way that can be read by the programming tools that will be retraining the model. The high costs and shortage of machine learning and data engineering talent often make it prohibitive for companies to fine-tune and use LLMs.

API services reduce some of the complexities but still require large efforts and manual labor on the part of the app developers.

Fine-tuning language models with Alan AI Platform

Alan AI is committed to providing a high-quality, easy-to-use actionable AI platform for enterprise applications. From the start, our vision has been to create an AI platform that makes it easy for app developers to deploy AI solutions that create the next-generation user experience.

Our approach ensures that the underlying AI system has the right context and knowledge to avoid the kind of mistakes that current LLMs make. The architecture of the Alan AI Platform is designed to combine the power of LLMs with your existing knowledge base, APIs, databases, or even raw web data. 

To further improve the performance of the language model that powers the Alan AI Platform, we have added fine-tuning tools that are versatile and easy to use. Our general approach to fine-tuning models for the enterprise is to provide “grounding” and “affordance.” Grounding means making sure the model’s responses are based on real facts, not hallucinations. This is done by keeping the model within the boundaries of the enterprise’s knowledge base and training data, as well as the context provided by the user. Affordance means knowing the limits of the model and making sure that it only responds to the prompts and requests that fall within its capabilities.

You can see this in the Q&A Service by Alan AI, which allows you to add an Actionable AI assistant on top of the existing content.

The Q&A service is a useful tool that can provide your website with 24/7 support for your visitors. However, it is important that the AI assistant is truthful to the content and knowledge of your business. Naturally, the solution is to fine-tune the underlying language model with the content of your website.

To simplify the fine-tuning process, we have provided a simple function called corpus, which developers can use to provide the content on which they want to fine-tune their AI model. You can provide the function with a list of plain-text strings that represent your fine-tuning dataset. To further simplify the process, we also support URL-based data. Instead of providing raw text, you can provide the function with a list of URLs that point to the pages where the relevant information is located. These could be links to documentation pages, FAQs, knowledge bases, or any other content that is relevant to your application. Alan AI automatically scrapes the content of those pages and uses them to fine-tune the model, saving you the manual labor to extract the data. This can be very convenient when you already have a large corpus of documentation and want to use it to train your model.
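
As a minimal sketch, a call could look like the following. Alan AI dialog scripts are JavaScript; it is shown here as TypeScript with the platform-provided function declared so the sketch is self-contained. The exact accepted argument shapes are assumptions, so consult the Alan AI documentation for the authoritative signature.

```typescript
// Provided by the Alan AI scripting environment; declared here only to make
// the sketch self-contained. The accepted shapes are an assumption.
declare function corpus(source: string | string[] | { url: string[] }): void;

// Fine-tune on raw text strings...
corpus([
  "Alan AI adds an Actionable AI assistant on top of your existing content.",
  "The Q&A Service answers visitor questions 24/7.",
]);

// ...or point the platform at pages to scrape, e.g. docs and FAQs
// (hypothetical URLs for illustration).
corpus({
  url: [
    "https://example.com/docs/getting-started",
    "https://example.com/faq",
  ],
});
```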

During inference, Alan AI uses the fine-tuned model with the other proprietary features of its Actionable AI platform, which takes into account visuals, user interactions, and other data that provide further context for the assistant.

Building robust language models will be key to success in the coming wave of Actionable AI innovation. Fine-tuning is the first step we are taking to make sure all enterprises have access to the best-in-class AI technologies for their applications.

In the age of LLMs, enterprises need multimodal conversational UX
Wed, 22 Feb 2023
https://alan.app/blog/why-now-is-the-time-to-think-about-multimodal-conversational-ux/

In the past few months, advances in large language models (LLM) have shown what could be the next big computing paradigm. ChatGPT, the latest LLM from OpenAI, has taken the world by storm, reaching 100 million users in a record time.

Developers, web designers, writers, and people of all kinds of professions are using ChatGPT to generate human-readable text that previously required intense human labor. And now, Microsoft, OpenAI’s main backer, is trialing a version of its Bing search engine that is enhanced by ChatGPT, posing the first real threat to Google’s $283-billion monopoly in the online search market.

Other tech giants are not far behind. Google is taking hasty measures to release Bard, its rival to ChatGPT. Amazon and Meta are running their own experiments with LLMs. And a host of tech startups are using new business models with LLM-powered products.

We’re at a critical juncture in the history of computing, which some experts compare to the huge shifts caused by the internet and mobile. Soon, conversational interfaces will become the norm in every application, and users will become comfortable with—and in fact, expect—conversational agents in websites, mobile apps, kiosks, wearables, etc.

The limits of current AI systems

As much as conversational UX is attractive, it is not as simple as adding an LLM API on top of your application. We’ve seen this in the limited success of the first generation of voice assistants such as Siri and Alexa, which tried to build one solution for all needs.

Just like human-human conversations, the space of possible actions in conversational interfaces is unlimited, which opens room for mistakes. Application developers and product managers need to build trust with their users by making sure that they minimize room for mistakes and exert control over the responses the AI gives to users. 

We’re also seeing how uncontrolled use of conversational AI can damage the user’s experience and the developer’s reputation as LLM products go through their growing pains. In Google’s Bard demo, the AI produced a factual error about the James Webb Space Telescope. Microsoft’s ChatGPT-powered Bing has been caught making egregious mistakes. A reputable news website had to retract and correct several articles that were written by an LLM after they were found to be factually wrong. And numerous similar cases are being discussed on social media and tech blogs every day.

The limits of current LLMs can be boiled down to the following:

  • They “hallucinate” and can state false facts with high confidence 
  • They become inconsistent in long conversations
  • They are hard to integrate with existing applications and only take a textual input prompt as context
  • Their knowledge is limited to their training data and updating them is slow and expensive
  • They can’t interact with external data sources
  • They don’t have analytics tools to measure and enhance user experience

Multimodal conversational UX

We believe that multimodal conversational AI is the way to overcome these limits and bring trust and control to everyday applications. As the name implies, multimodal conversational AI brings together voice, text, and touch interactions with several sources of information, including knowledge bases, GUI interactions, user context, and company business rules and workflows. 

This multi-modal approach makes sure the AI system has a more complete user context and can make more precise and explainable decisions.
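
One way to picture the fused context is as a single structure the assistant reasons over, with each answer citing the fields it relied on. The TypeScript sketch below is illustrative only; the field names are assumptions, not Alan AI’s schema.

```typescript
// Illustrative only: one way to model the fused context a multimodal
// assistant could reason over.
interface MultimodalContext {
  utterance?: string;                                   // voice or typed input
  screen: { route: string; visibleElements: string[] }; // current GUI state
  user: { id: string; role: string };                   // who is asking
  knowledgeBaseHits: string[];                          // retrieved enterprise documents
  businessRules: string[];                              // constraints the answer must respect
}

// A decision can then cite exactly which context fields it relied on,
// which is what makes the answer explainable to the user.
interface ExplainedAnswer {
  text: string;
  evidence: Array<keyof MultimodalContext>;
}
```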

Users can trust the AI because they can see exactly how and why the AI made its decision and which data points were involved in the decision-making. For example, in a healthcare application, users can make sure the AI is making inferences based on their health data and not just on its own training corpus. In aviation maintenance and repair, technicians using multimodal conversational AI can trace suggestions and results back to specific parts, workflows, and maintenance rules. 

Developers can control the AI and make sure the underlying LLM (or other machine learning models) remains reliable and factual by integrating the enterprise knowledge corpus and data records into the training and inference processes. The AI can be integrated into the broader business rules to make sure it remains within the boundaries of decision constraints.

Multi-modality means that the AI will surface information to the user not only through text and voice but also through other means such as visual cues.

The most advanced multimodal conversational AI platform

Alan AI was developed from the ground up with the vision of serving the enterprise sector. We have designed our platform to use LLMs as well as the other components necessary to serve applications in all kinds of domains, including industrial, healthcare, transportation, and more. Today, thousands of developers are using the Alan AI Platform to create conversational user experiences ranging from customer support to smart assistants for field operations in oil & gas, aviation maintenance, etc.

Alan AI is platform agnostic and supports deep integration with your application on different operating systems. It can be incorporated into your application’s interface and tie in your business logic and workflows.

Alan AI Platform provides rich analytics tools that can help you better understand the user experience and discover new ways to improve your application and create value for your users. Along with the easy-to-integrate SDK, Alan AI Platform makes sure that you can iterate much faster than the traditional application lifecycle.

As an added advantage, the Alan AI Platform has been designed with enterprise technical and security needs in mind. You have full control of your hosting environment and generated responses to build trust with your users.

Multimodal conversational UX will break the limits of existing paradigms and is the future of mobile, web, kiosks, etc. We want to make sure developers have a robust AI platform to provide this experience to their users with accuracy, trust, and control of the UX. 

Productivity and ROI with in-app Assistants
Mon, 21 Nov 2022
https://alan.app/blog/productivity-and-roi-with-in-app-assistants/

The world economy is clearly headed for “stormy waters”, and companies are bracing for a recession. Downturns always bring change and a great deal of uncertainty. How serious will the pending recession be – mild and short-lived or severe and prolonged? How can the business prepare and adapt?

When getting through hard times, some market players choose to be more cash-conservative and halt all new investment decisions. Others, on the contrary, believe the crisis is the best time to turn to new technology and opportunities.

What’s the right move?

A recession can be tough for a lot of things, but not for the customer experience (CX). Whether the moment is good or bad, CX teams have to keep the focus on internal and external SLAs, satisfaction scores, and churn reduction. In an economic slowdown, delighting customers and delivering an exceptional experience is even more crucial.

When in cost-cutting mode, CX departments find themselves under increasing pressure to do more with less. As before, existing systems and products require high-level support and training, new solutions brought in-house add to the complexity – but scaling the team and hiring new resources is out of the question.

And this is where technology comes to the fore. To maintain flexibility and remain recession-proof, businesses have started looking towards AI-powered conversational assistants that can digitize and modernize the CX service.

Re-assessing investments in Al and ML

Over the last few years, investments in business automation, AI, and ML have been at the top of priority lists. Successful AI adoption brought significant benefits, high returns, and increased customer satisfaction. This worked during financially sound times – but now investments in AI/ML projects need to be reassessed.

There are several important things to consider:

  • Speed of adoption: for many companies, the main AI adoption challenge rests in significant timelines involved in the project development and launch, which affects ROI. The longer the life cycle is, the more time it will take to start reaping the benefits from AI solutions – if they ever come through.
  • Ease of integration: an AI solution needs to be easily laid on top of existing IT systems so that the business can move forward, without suffering operational disruptions.
  • High accuracy level: in mission-critical industries where knowledge and data are highly nuanced, terminology is complex, and requirements for the dialog are stringent, accuracy is paramount. AI-powered assistants must be able to support contextual conversations and learn fast.
  • Personalized CX: to exceed customer expectations, the virtual assistant should provide human-like personalized conversations based on the user’s data.

Increasing productivity with voice and text in-app assistants

Alan AI enables enterprises to easily address business bottlenecks in productivity and knowledge sharing. In-app (IA) assistants built with the Alan AI Platform can be designed and implemented fast – in a matter of days – with no disruption to existing business systems and infrastructure.

Alan’s IA assistants are built on top of the existing applications, empowering customers to interact through voice, text, or both. IA assistants continuously learn from the organization’s data and its domain to become extremely accurate over time and leverage the application context to provide highly contextual, personalized conversations.

With both web and mobile deployment options, Alan AI assistants help businesses and customers with:

  • Always-on customer service: provide automated, first-class support with virtual agents available 24/7/365 and a self-help knowledge base; empower users to find answers to questions and learn from IA.
  • Resolving common issues without escalation: let IA resolve common issues immediately, without involving live agents from CX or support teams.
  • Onboarding and training: show the users how to complete tasks and find answers, guiding them through the application and updating visuals as the dialog is being held.
  • Personalized customer experience: build engaging customer experiences in a friendly conversational tone, becoming an integral part of the company’s brand.

Although it may seem the opposite, a recession can be a good time to increase customer satisfaction, reduce overhead and have a robust ROI. So, consider investing in true AI and intelligence with voice and text IA assistants by Alan AI.

Ramco Systems and Alan AI bringing Voice AI Capabilities to the Aviation Industry
Thu, 08 Sep 2022
https://alan.app/blog/ramco-systems-and-alan-ai-bringing-voice-ai-capabilities-to-the-aviation-industry/

We’re proud to announce that Ramco Systems, in partnership with Alan AI, is hosting a live webinar on the launch of voice AI solutions for the aviation industry. Industry leaders Mark Schulz, Ramu Sunkara from Alan AI, and Michael Clark from Ramco Systems will be answering your questions about cost savings, operational efficiency, and hands-free use with voice AI.

Technicians can complete line maintenance hands-free in just minutes to determine airworthiness (they have 50 minutes to complete inspections each time a plane or helicopter lands), file discrepancies, and retrieve any content from fault isolation manuals with voice commands. Airlines will derive immediate cost savings, get things done right the first time, and see faster onboarding and training of technicians with voice AI deployments.

The webinar is scheduled for 12:00 IST on September 15th, 2022; register here.

In the webinar, you will see the future of aviation maintenance operations with Ramco aviation applications and Alan AI intelligent voice assistants.

We’re looking forward to answering your questions in the webinar.

Voice AI: 100 Use Cases of Alan’s Deployment Into Business Apps Today
Thu, 04 Aug 2022
https://alan.app/blog/voice-ai-100-use-cases-of-alans-deployment-into-business-apps-today/

As time goes on, more companies are actively integrating voice assistants into their platforms. Businesses are taking steps forward and incorporating Alan’s artificial intelligence into their applications. With voice, you experience a shift in daily operations, resolutions, productivity, efficiency, customer loyalty, and communication. Alan AI enables improvements that help companies grow while supplying them with the tools to add voice and deliver high-quality experiences for all who engage with conversational AI.

As the voice interface platform, we have collected 100+ use cases. Here is a selection that highlights how voice can change your enterprise for the better:

Healthcare: 

  1. Rapidly deploy results from tests/scans for patients to understand their diagnosis. 
  2. Schedule medical appointments. 
  3. Check essential doctor’s notes and messages. 
  4. Regulate patient prescription and dose amount. 
  5. Monitor patient recovery, programs, and exercises. 
  6. Correctly enter patient history and pre-op information. 
  7. Patients can swiftly find specific doctors that cater to their needs.
  8. Virtually listen to patient symptoms and requests.
  9. Hands-free navigation while commuting to a secondary appointment. 
  10. Doctors can provide an immediate review of patient diagnoses before meeting with them. 
  11. Conduct scheduled mental health check-ins. 
  12. Increase inclusivity amongst patients who better understand through a conversational experience. 
  13. Conscientious workers can maintain a germ-free environment by providing hands-free technology for patients. 
  14. Scale patient engagement after each appointment. 
  15. Hands-free referral management for providers. 
  16. Navigate your way to the nearest pharmacy to pick up prescriptions. 
  17. Conveniently check appointment times that work for you. 
  18. Log symptoms before meeting with healthcare specialists. 
  19. Virtually send emergency images of scars, rashes, stitches, etc.
  20. Check patient immunization records on the spot with zero hassle. 

Healthcare Use Cases: Healthcare | Alan AI  

Education Technology: 

  1. Students can easily navigate their way around large textbooks. 
  2. Virtually create prep for upcoming exams and quizzes. 
  3. Check availability of course textbooks. 
  4. Learn how to solve mathematical equations and gauge understanding of class concepts.
  5. Teachers can quickly assess who submitted homework and who was late. 
  6. Virtually read aloud books and stories to students. 
  7. Personalize the learning experience by catering to the learning styles of each student. 
  8. Set continuous reminders for upcoming study groups. 
  9. Students can organize class materials based on class syllabi.
  10. Accommodate lesson plans for students who speak multiple languages with the help of spoken language understanding. 
  11. Access extensive data and records located within your platform. 
  12. Organize notes based on subject and importance. 
  13. Increase routine learning by memorizing spelling, common phrases, languages, etc. 
  14. Reschedule meetings with teachers or administrators. 
  15. Conduct interactive surveys on students’ best learning practices. 
  16. Alleviate teacher’s workload by providing tips and solutions for navigating a problem. 
  17. Plan a better curriculum based on students’ strengths and weaknesses. 
  18. Better understand the pronunciation of different languages from a human-like voice.
  19. Customize exams and quizzes based on students’ needs. 
  20. Students can revisit the teacher’s instructions on projects and assignments.

Education Technology Use Case: EdTech | Alan AI

Related Blogs: Voice Interface: Educational Institution Apps

Manufacturing and Logistics: 

  1. Complete work orders right from a mobile device. 
  2. Immediately check in at worksites by getting convenient directions to different locations. 
  3. Log daily activities while drilling and servicing sites. 
  4. Decrease language barriers by providing spoken language understanding. 
  5. Gain access to the highest priority incidents that need assistance first. 
  6. View in-depth details on problems on the job and ask questions to resolve the issue. 
  7. Plan future production projects seamlessly with zero distractions. 
  8. First responders stay up to date on safety guidelines for upcoming emergencies. 
  9. Receive detailed feedback on how to troubleshoot manufacturing errors. 
  10. Submit a scaffolding ticket when hands are preoccupied with another task. 
  11. Report discrepancies that occur on a job site. 
  12. Hands-free access to invoices and requests for quotes. 
  13. Provide directions for those who are visually impaired. 
  14. Schedule field operations quickly in advance. 
  15. Supply contractor estimation software to employees. 
  16. Quickly reroute based on cancellations or rescheduling. 
  17. Message clients on the go about your estimated time of arrival. 
  18. Check production status on new manufacturing tools. 
  19. First responders can easily navigate their way around disastrous complications. 
  20. Check incident status as you’re on the way to an emergency while keeping your eyes on the road.

Manufacturing and Logistics Use Cases: Manufacturing | Alan AI  and Logistics | Alan AI

Related Blog: Intelligent Voice Interfaces: Higher Productivity in MRO

Food and Restaurants: 

  1. Customers can self-order from an in-store kiosk or display system. 
  2. Release suggestions from previous orders submitted. 
  3. Adjust orders based on dietary restrictions. 
  4. Make suggestions from consumers’ cravings.
  5. Easily order cultural food in the native language with the assistance of spoken language understanding. 
  6. Consumers can locate the closest restaurants for requested food options.
  7. Pay for food/delivery on the go. 
  8. Customers can access delivery order time and check how fast they can receive food services.
  9. Food services can measure feedback from interactive survey questions.  
  10. Acquire recommendations based on the popularity of the product. 
  11. Generate order suggestions based on special ingredients. 
  12. Ask questions and get answers on how to take advantage of app rewards before purchasing.
  13. Search for food based on the highest-rated restaurants on delivery services. 
  14. Rate customer satisfaction after ordering food.
  15. Look up which pizzas are available for delivery and which are for pickup.
  16. Replace/add food or beverages before heading to check out. 
  17. Track daily invoices and production progress from the previous month.
  18. Advertise new food items and virtually gauge interest from consumers.  
  19. Generate daily operational checklist for restaurant employees.
  20. Create store-specific grocery lists while driving to the store. 

Food and Restaurant Use Cases: Food Ordering | Alan AI 

Others: 

  1. Employees can make quick phone calls to superiors. 
  2. Consumers can review credit scores from banking applications. 
  3. Personalize users’ experiences by catering to their specific needs. 
  4. Learn how to increase your credit score and how long it will take. 
  5. Move between pages on your apps. 
  6. Sign consumers up for promotional offers that boost revenue.
  7. Suggest potential gifts for customers based on likes and dislikes. 
  8. Report malfunctions from appliances and request new shipments. 
  9. Decrease onboarding errors from newly hired employees by equipping them with the same procedures.
  10. Receive tips on how to improve customer satisfaction based on survey questions given to users.
  11. Generate 24/7 customer support straight from your application. 
  12. Sign up for promotions within brands. 
  13. Schedule package pickups from home or in the office. 
  14. Search real estate based on special locations.
  15. Easy access to staff applications located in numerous files. 
  16. Efficiently log and access daily reports. 
  17. Order a new credit card on the spot. 
  18. Receive quick updates on patient news. 
  19. Receive directions on troubleshooting through errors. 
  20. Reduce inventory errors with voice confirmation. 

Other Use Cases: Onboarding and Adoption Use Case | Alan AI

If you are looking for a voice-based solution for your enterprise, the team at Alan AI will be able to deliver precisely that. Email us at sales@alan.app

Alan AI has patent protections for its unique contextual Spoken Language Understanding (SLU) technology to accurately recognize and understand the human voice within a given context. Alan’s SLU transcoder leverages the context to convert voice directly to meaning by using raw input from speech recognition services, imparting the accuracy required for mission-critical enterprise deployments and enabling human-like conversations rather than robotic ones. Voice-based interactions, coupled with the ability to allow users to verify the entered details without having the system reiterate inputs, provide an unmatched end-user experience.

]]>
https://alan.app/blog/voice-ai-100-use-cases-of-alans-deployment-into-business-apps-today/feed/ 0 5587
Alan AI brings intelligent Voice Interface User Experience to Ramco Systems https://alan.app/blog/alan-ai-brings-next-gen-user-experience-to-ramco-systems/ https://alan.app/blog/alan-ai-brings-next-gen-user-experience-to-ramco-systems/#respond Wed, 04 May 2022 02:47:42 +0000 https://alan.app/blog/?p=5303 Voice is no longer just for consumers. Alan’s Voice Assistants deployed in Ramco’s key enterprise business applications scale user productivity and deliver ROI.  Alan AI and global enterprise software provider Ramco Systems have announced a key partnership to deploy in-app voice assistants for key applications. In its initial stages of...]]>

Voice is no longer just for consumers. Alan’s Voice Assistants deployed in Ramco’s key enterprise business applications scale user productivity and deliver ROI. 

Alan AI and global enterprise software provider Ramco Systems have announced a key partnership to deploy in-app voice assistants for key applications. In its initial stages of partnership, the organizations will primarily focus on building business use cases for Ramco’s Aviation, and Aerospace & Defense sector, followed by those for other industry verticals including Global Payroll and HR, ERP, and Logistics.

Alan’s voice assistant technology works seamlessly with Ramco’s applications as a simple overlay over the existing UI. Alan provides enterprise-grade accuracy in understanding spoken language for daily operations, synchronization of voice with existing graphical interfaces, and a hands-free app experience that will truly delight the user from the very first interaction. Alan’s Voice UX also enables rapid and continuous iterations based on real-time user feedback via the analytics feature - a huge improvement over the painstakingly slow process of software development and release cycles for graphical user interfaces. Alan’s AI rapidly learns the nuances of the app’s domain language and can be deployed in a matter of days. 

 Commenting on the Alan AI-Ramco partnership, Ramesh Sivasubramanian, Vice-President – Technology & Innovation, Ramco Systems, said, “Voice recognition is a maturing technology and has been witnessing huge adoption socially, in our day-to-day personal lives. However, its importance in enterprise software has been a real breakthrough and a result of multitudinous innovations. We are excited to enable clients with this voice user interface along with Alan AI, thereby ensuring a futuristic digital enterprise”.

Alan’s voice interface leverages the user context and existing UI of applications, a key to understanding responses for next-gen human voice conversations. Alan has patent protections for its unique contextual Spoken Language Understanding (SLU) technology to accurately recognize and understand the human voice within a given context. Alan’s SLU transcoder leverages the context to convert voice directly to meaning by using raw input from speech recognition services, imparting the accuracy required for mission-critical enterprise deployments and enabling human-like conversations rather than robotic ones. Voice-based interactions, coupled with the ability to allow users to verify the entered details without having the system reiterate inputs, provide an unmatched end-user experience.

Maintenance, Repair, and Operations (MRO) employees in aviation and other industries increasingly use mobile and other device-based apps to plan projects, write reports based on their observations, research repair issues, and write logs to databases. This is exactly where Alan’s voice interface can help - with a hands-free option to increase productivity and support safety, eliminating the distraction of touch and type while working on a task.

For example, Alan’s intelligent voice interface responds to spoken human language commands such as:

User: “Hey Alan, can you help me record a discrepancy?”

Alan: “Hi Richard, sure! Navigating to the ‘Discrepancy Screen’.”

User: “Enter description- ‘Motor damage’.”

Alan: “Updated ‘Motor damage’ in the description field.”

User: “Enter corrective action- ‘Motor replaced’.”

Alan: “Updated ‘Motor replaced’ in corrective action field.”

User: “Set action as closed.”

Alan: “Updated ‘Closed’ in quick action field.”

User: “Go ahead and record the discrepancy.”

Alan: “Sure, Richard. Creating the discrepancy… You’re done. Discrepancy has been registered against the task. Please review this at the bottom of the screen.”
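
To make the flow above concrete, here is a minimal Swift sketch of what the app side of such a command flow might look like. Everything here - the "DiscrepancyFormDelegate" protocol, the method names, and the command strings - is hypothetical and invented for illustration; it is not the Alan SDK’s actual event format, nor Ramco’s code.

[code language="objc" title="DiscrepancyCommandHandler.swift (illustrative sketch)"]
import Foundation

// Hypothetical protocol: the discrepancy screen adopts this so that
// voice commands drive the same UI the user already sees.
protocol DiscrepancyFormDelegate: AnyObject {
    func setDescription(_ text: String)
    func setCorrectiveAction(_ text: String)
    func setQuickAction(_ status: String)
    func submitDiscrepancy()
}

// Routes parsed voice commands to UI updates on the active form.
final class DiscrepancyCommandHandler {
    weak var delegate: DiscrepancyFormDelegate?

    // `command` and `payload` are assumed to arrive from the voice script;
    // the shape of this event is illustrative, not the SDK's real format.
    func handle(command: String, payload: String) {
        switch command {
        case "enterDescription":      delegate?.setDescription(payload)
        case "enterCorrectiveAction": delegate?.setCorrectiveAction(payload)
        case "setAction":             delegate?.setQuickAction(payload)
        case "recordDiscrepancy":     delegate?.submitDiscrepancy()
        default: break
        }
    }
}
[/code]

The point of the pattern is that voice never bypasses the app: each spoken command lands in the same UI code path a tap would, so the user can visually verify every entry.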

Alan enables friendly conversations between humans and software. It helps create outstanding outcomes by allowing users to go hands-free as well as error-free, with the ability to instantly review generated actions.

Alan plans to continuously augment the voice experience to improve employee productivity in daily operations. Voice can now support a vision of a hands-free, productive, and safe environment for humans.

Please view and share the Alan-Ramco partnership announcement on LinkedIn and Twitter.

]]>
https://alan.app/blog/alan-ai-brings-next-gen-user-experience-to-ramco-systems/feed/ 0 5303
Intelligent Voice Interfaces: Higher Productivity in MRO https://alan.app/blog/voice-assistants-increase-productivity-for-mro-workers/ https://alan.app/blog/voice-assistants-increase-productivity-for-mro-workers/#respond Tue, 26 Apr 2022 22:52:10 +0000 https://alan.app/blog/?p=5266 Smart technology is changing the way work gets done, regardless of the industry. It enables simple requests and provides efficient services for various industries, including the maintenance, repair, and operations (MRO) industry.  The equipment in the MRO industry needs regular servicing. Industries such as aviation have particularly complex maintenance procedures. ...]]>

Smart technology is changing the way work gets done, regardless of the industry. It enables simple requests and provides efficient services for various industries, including the maintenance, repair, and operations (MRO) industry. 

The equipment in the MRO industry needs regular servicing. Industries such as aviation have particularly complex maintenance procedures. Servicing them requires organized knowledge of the user guides and manuals. The procedures might differ for each unit, demanding patience, thoroughness, and the right set of skills - all in ample measure. 

For example, finding the correct manual or the right procedure might not be easy, especially when you are strapped for time. These processes require the complete attention and focus of the technician or engineer. 

Now, how does having an intelligent voice interface sound? What if you could use your voice to request information? Voice interfaces are ripe to go mainstream with advances in technology. The technician can say “Walk me through the inspection of X machine” to the voice assistant and get a guided workflow. They can get the work done in peace without wondering if they are following the right steps. 

Industry stats indicate that deploying voice interfaces in MRO apps results in a 2X increase in productivity, a 50% reduction in unplanned downtime, and a significant 20% increase in revenue streams.

How Voice AI helps the maintenance, repair and operations industry: 

1. Increases productivity:

When maintenance workers engage with hands-free apps, they can accomplish tasks faster and have the opportunity to multitask. Overall business productivity increases by leaps and bounds. Moreover, voice enables smoother and faster onboarding, helping new employees become productive in a shorter span of time.

2. Allows a wide range of MRO activities:

Voice interfaces have a device-based implementation - workers can be remote and still collect data or listen to guided workflows. Supported devices include laptops, smartphones, tablets, and other smart devices that can install and run a mobile application. 

The ability to have a voice interface on these devices, regardless of connectivity, allows voice-enabled applications to fit a wide range of MRO deployments in the field. 

3. Provides detailed troubleshooting:

One more critical advantage of using voice interfaces in the MRO industry is how speech recognition provides detailed error messages. The voice assistant warns when input data falls outside acceptable ranges. It can even pre-load information collected on previous screens and provide detailed instructions for new screens. 

4. Allows for smoother operations:

Voice assistants seamlessly integrate responses within a maintenance or inspection procedure, while following the latest guidelines. The technical operator gets additional information during a complicated repair process. Since voice assistants can provide the information as audio, there is no interruption. 

5. Eradicate language barriers:

Some technicians might not be fully versed in the language the maintenance procedure handbook is written in, which can be a barrier to getting the work done properly. Performing maintenance without following the procedures exactly can result in problems. Listening to instructions via voice eases the strain of reading and interpreting text and allows for better comprehension.

6. Immediate solutions:

When the operator uses an intelligent voice interface, they can simply ask for any information already fed into the voice assistant, and it will provide the corresponding content. You get exactly what you asked for. This eliminates the need for manual search, reducing the time taken for the procedure. 

7. Better training opportunities: 

Apart from providing assistance to service personnel, voice assistants can also act as a great training tool for new operators. Newly hired operators can learn to operate machines while listening to audio synchronized with visual instructions from the voice assistant. 

Wrapping up:

The advantages of using voice assistants in the MRO industry are many. The flexibility and capability that voice assistants offer enable greater attention to work, help maintain focus on the job, and reduce the time usually wasted moving between applications and correcting errors. Give your workers an error-free, productive, and safer environment with intelligent voice assistants. 

If your industrial enterprise is looking for a voice-based solution that will make operations safer and more effective, the Alan Platform is the right solution for you. Check out the Ramco Systems testimonial on their partnership with Alan AI for enterprise MRO software apps.

The team at Alan AI will be more than happy to assist you with any questions or provide a personalized demo of our intelligent voice assistant platform. Just email us at sales@alan.app

]]>
https://alan.app/blog/voice-assistants-increase-productivity-for-mro-workers/feed/ 0 5266
Ramco Systems partners with Alan AI to deploy Intelligent Voice Interfaces to 1K Enterprises https://alan.app/blog/ramco-systems-partners-with-alan-ai-to-enhance-next-gen-enterprise-user-experience/ https://alan.app/blog/ramco-systems-partners-with-alan-ai-to-enhance-next-gen-enterprise-user-experience/#respond Fri, 08 Apr 2022 16:24:39 +0000 https://alan.app/blog/?p=5244 Bolstering its enterprise applications with intelligent voice interfaces SUNNYVALE, CA 94582, USA, March 29, 2022 /EINPresswire.com/ —  Alan AI, a Silicon Valley company enabling the next generation of voice interfaces for Enterprise Apps, has announced a strategic partnership with Ramco Systems1, a leading global cloud enterprise software provider, to embed intelligent...]]>

Bolstering its enterprise applications with intelligent voice interfaces SUNNYVALE, CA 94582, USA, March 29, 2022 /EINPresswire.com/ — 

Alan AI, a Silicon Valley company enabling the next generation of voice interfaces for Enterprise Apps, has announced a strategic partnership with Ramco Systems1, a leading global cloud enterprise software provider, to embed intelligent voice interfaces for its enterprise offerings. In its initial stages of partnership, the organizations will primarily focus on building business use cases for the Aviation, Aerospace & Defense sector, followed by use cases for other industry verticals.

Ramco Systems offers an integrated and smart platform engineered to develop robust and scalable solutions, thereby offering a competitive edge to its end users. By embedding Alan AI’s voice interface, Ramco’s customers will be able to interact with their applications in natural human language and receive intelligent responses for daily workflows. Features such as accuracy in understanding spoken language, synchronization of voice with existing graphical interfaces, and a hands-free app experience will truly delight the user from the very first interaction. The voice assistant will drive smoother app onboarding and higher user engagement, and scale adoption and loyalty.

Commenting on the partnership, Ramesh Sivasubramanian, Vice-President – Technology & Innovation, Ramco Systems, said, “Voice recognition is a maturing technology and has been witnessing huge adoption socially, in our day-to-day personal lives. However, its importance in enterprise software has been a real breakthrough and a result of multitudinous innovations. We are excited to enable clients with this voice user interface along with Alan AI, thereby ensuring a futuristic digital enterprise”.

“We are so excited to be able to help support Ramco’s applications and empower their customers with intelligent voice interfaces. Our advanced Voice AI Platform enables enterprises to deploy and manage intelligent and contextual voice interfaces for their applications in days, not months or years,” said Blake Wheale, Chief Revenue Officer, Alan AI. “This partnership is a great testament to how voice can support a vision of a hands-free, productive and safe environment for humans.”
Learn more about Alan AI

#VoiceAssistant #VoiceAI #RamcoSystems #AlanAI

]]>
https://alan.app/blog/ramco-systems-partners-with-alan-ai-to-enhance-next-gen-enterprise-user-experience/feed/ 0 5244
SLU: The Intelligence Foundation for Your Business https://alan.app/blog/slu-the-intelligence-foundation-for-your-business/ https://alan.app/blog/slu-the-intelligence-foundation-for-your-business/#respond Thu, 15 Oct 2020 09:54:28 +0000 https://alan.app/blog/?p=4067 Imagine a future where AI has advanced to comprehend what we say, what we need to do next, and then provide the needed assistance to expeditiously complete tasks to produce the desired outcomes. In order to accomplish this, we need a "Spoken Language Understanding" (SLU), the Intelligence Foundation for every business.]]>

Every business and individual is unique. They have their own unique spoken languages and workflows for their daily operations. When you visit a doctor, you speak in specific ways to complete a set of workflows. When you visit an eCommerce site, you follow the workflows of that site to complete your shopping.

In each of these businesses, employees and customers use unique combinations of language and workflows as they go through their transactions. Over time, employees of an enterprise get smarter with cumulative knowledge of how things are done, and they transfer this accumulated knowledge to newcomers.

Now, imagine a future where AI has advanced to comprehend what we say, foresee what we need to do next, and then provide the needed assistance to expeditiously complete these tasks and produce the desired outcomes. In order to accomplish this, we need Spoken Language Understanding (SLU).

Here are some of the key abilities of SLU:

  1. Support Unique Terminologies: In any business context, we use terminologies that are only understood by colleagues and other people in that specific environment. For example, at a doctor’s office, a patient engages with the front desk, where the doctor’s assistant inquires about various facets and symptoms of the patient’s health. The assistant uses terminologies, both words and abbreviations, that are understood by the patient as well as by doctors and other medical staff. However, saying these same terms outside of the office will only confuse people. 
  2. Assist Unique Workflows: Each business defines its own set of workflows for its daily operations and the answers to specific business-related questions define subsequent workflows. However, these answers can differ, with each variation leading to a different step in succeeding workflows. The entire workflow is unique to a user, based on context, answers, and needs. For example, in a doctor’s office, if a patient has a cold, the subsequent set of questions (in the workflow) will vary, based on expressed symptoms and their severity.
  3. Provide Collective Intelligence: We all possess knowledge. When we share it with others, we contribute to the collective intelligence of a group. That knowledge and intelligence keep increasing over time, with every interaction. For example, in a doctor’s office, a nurse learns a medical procedure by asking questions such as “how do I do this”, “what do I use”, and “when does it need to be done”. The knowledge accumulated by the nurse, when shared with colleagues, who later practice it, increases the collective intelligence of the group.

In summary, an SLU Intelligence Foundation for each business should a) support unique terminology, b) assist unique workflows, and c) promote collective intelligence. Similar to the healthcare examples provided above, SLU can be applied to a variety of industries such as eCommerce, banking, health & fitness, and many more.
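
As a rough mental model only, these three abilities can be summarized as a Swift-style interface. None of the names below come from the Alan platform; they are invented here purely for illustration.

[code language="objc" title="SLUFoundation.swift (illustrative sketch)"]
import Foundation

// Purely illustrative: a type-level summary of the three SLU abilities.
protocol SLUFoundation {
    // a) Support unique terminology: resolve a domain-specific term.
    func resolveTerm(_ term: String, inDomain domain: String) -> String?

    // b) Assist unique workflows: given an answer, choose the next step.
    func nextWorkflowStep(afterAnswer answer: String, context: [String: String]) -> String

    // c) Promote collective intelligence: fold each interaction back in.
    mutating func learn(fromUtterance utterance: String, outcome: String)
}
[/code]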

Alan AI is an SLU-based Conversational Voice AI Platform that understands and learns nuances of spoken language in context, to deliver superior responses to questions, commands, and requests. It is also a discovery tool that can help and guide users with human-like conversational voice through any workflow process.

Using Alan’s self-service platform, you can build an SLU Intelligence Foundation for any business and application. With continuous usage and data coming from the application and its previous user interactions, it will become the intelligence foundation for your enterprise.

Learn how you can deploy the SLU Intelligence Foundation into your business applications by visiting the Alan AI platform or contacting us at sales@alan.app.

]]>
https://alan.app/blog/slu-the-intelligence-foundation-for-your-business/feed/ 0 4067
The New World of Voice Interface for Enterprise Apps https://alan.app/blog/the-new-world-of-conversational-voice-ai/ https://alan.app/blog/the-new-world-of-conversational-voice-ai/#comments Wed, 23 Sep 2020 12:59:48 +0000 https://alan.app/blog/?p=3971 During the last decade, voice assistants have become the new way to interact with computers, and over the past several years, we have heard that AI (Deep Learning) has helped speech recognition accuracy reach 95% for specific workloads and applications.]]>

During the last decade, voice interfaces have become the new way to interact with computers, and over the past several years, we have heard that AI (Deep Learning) has helped speech recognition accuracy reach 95% for specific workloads and applications.

However, usage of voice interfaces is limited and unreliable. Today’s voice assistants help us by bringing content from other applications into their apps, and experience tells us that their reliability is very low; simple tasks often cannot be completed in applications using voice commands, and complex ones are almost impossible. Considering these challenges, it comes as no surprise that almost all applications we use daily are built on touch and type experiences.

The current voice assistants cannot provide a complete and reliable experience for all applications. This is because in the real world, our daily conversations rely heavily on context, whereas these voice assistants do not have any context of the user and the application content.

Humans use not only spoken language but also the visual context of the environment to communicate effectively, e.g., facial expressions, gestures, and content. Using these principles, Alan AI has pioneered the technology to converse with users while leveraging the visual context in any application to provide unprecedented reliability and predictability of conversational voice experiences.

User Context in every application has three dimensions: Visual, Dialog, and Workflow. Visual Context limits the scope of the entity values for AI models, improving the accuracy of interpretation. Dialog and Workflow Contexts give us further insights, to drive better accuracy for predictions made by Alan AI models.
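
A minimal sketch of how these three dimensions might be modeled in app code follows; the type names are invented for illustration and are not part of the Alan SDK.

[code language="objc" title="UserContext.swift (illustrative sketch)"]
import Foundation

// Illustrative only: a compact model of the three context dimensions.
struct VisualContext {
    // What is on screen right now - this limits the scope of entity
    // values the AI models need to consider.
    var visibleEntities: [String]
}

struct DialogContext {
    // What has been said so far in the conversation.
    var recentUtterances: [String]
}

struct WorkflowContext {
    // Where the user currently is in the business workflow.
    var currentStep: String
}

struct UserContext {
    var visual: VisualContext
    var dialog: DialogContext
    var workflow: WorkflowContext
}
[/code]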

Alan AI is able to provide conversational voice experiences that enable humans to interact with applications of their choice, using their own voices. In other words, for the first time, our innovations make the conversational voice experience reliable and complete - for any application. 

The Alan AI Platform brings conversational voice experiences to all applications, using fundamental AI technology for contextual language understanding. With thousands of developers signing up to the self-service platform, we will soon reach the tipping point to make reliable and complete conversational voice experiences ubiquitous, across all applications. Alan AI is poised to be the oxygen required to thrive and excel in this new world.

]]>
https://alan.app/blog/the-new-world-of-conversational-voice-ai/feed/ 1 3971
Safety of Frontline Utility Workers https://alan.app/blog/safety-of-frontline-utility-workers/ https://alan.app/blog/safety-of-frontline-utility-workers/#respond Thu, 07 May 2020 17:59:35 +0000 https://alan.app/blog/?p=3548 In the Post COVID-19 world, every business is conscious about protecting the safety of its employees and customers. This is especially true for the Utility Industry. To this end, the new normal is a Contactless user experience for many business workflows.  Today, Alan Conversational Voice UX can rapidly enhance a...]]>

In the Post COVID-19 world, every business is conscious about protecting the safety of its employees and customers. This is especially true for the Utility Industry. To this end, the new normal is a Contactless user experience for many business workflows. 

Today, Alan Conversational Voice UX can rapidly enhance a business application with a Contactless user experience, enabling: 

  1. Safety for Frontline employees: A touch-free experience guards against cross-contamination and virus infection. 
  2. Field maintenance effectiveness: Contactless UX enables the work crew to receive and upload timely work data in environments of high temperature and voltage, even in adverse weather conditions. 
  3. Works with existing applications: Most important from an implementation viewpoint, the Contactless UX is achieved using the existing application enhanced with the Alan Conversational Voice AI Platform. There is no re-design of the application to enable Contactless UX. 

With Alan, any developer can add an engaging conversational experience front-end to their app in a matter of hours. 
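
To give a feel for how small that front-end addition is, here is a sketch that reuses the AlanConfig and AlanButton types shown in the SAP tutorial later in this feed. The key string is a placeholder, and the exact frame math is up to your app.

[code language="objc" title="Minimal Alan button setup (sketch)"]
import UIKit
import AlanSDK

// Sketch only: attach the Alan voice button to an app's main window.
// Replace the placeholder key with your project's SDK key from Alan Studio.
func addAlanButton(to window: UIWindow) {
    let config = AlanConfig(key: "YOUR_ALAN_SDK_KEY", isButtonDraggable: false)
    let button = AlanButton(config: config)
    button.frame = CGRect(x: window.frame.maxX - 84,
                          y: window.frame.maxY - 84,
                          width: 64,
                          height: 64)
    window.addSubview(button)
    window.bringSubviewToFront(button)
}
[/code]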

We invite you to lead transformation in the Utility industry with Voice in your Applications today, with the Alan Platform!


]]>
https://alan.app/blog/safety-of-frontline-utility-workers/feed/ 0 3548
Conversational Voice for Utilities with Alan AI and Utegration https://alan.app/blog/webinar-april-8-at-11am-pst-conversational-voice-for-utilities-with-alan-ai-and-utegration/ https://alan.app/blog/webinar-april-8-at-11am-pst-conversational-voice-for-utilities-with-alan-ai-and-utegration/#respond Sat, 04 Apr 2020 20:35:23 +0000 https://alan.app/blog/?p=3194 When you’re wearing heavy gloves or hanging high from a utility pole, don’t you wish your mobile application understood your spoken voice the same as if it was a typed command? The Alan AI team recognized the benefit of mobile applications having intelligent conversational voice enablement, and took it far...]]>

When you’re wearing heavy gloves or hanging high from a utility pole, don’t you wish your mobile application understood your spoken voice the same as if it was a typed command?

The Alan AI team recognized the benefit of giving mobile applications intelligent conversational voice enablement, and took it far beyond today’s consumer-level voice command technology. Alan Voice control uses patented AI methods to improve the usability of your mobile applications while enhancing safety and accuracy.

We’re hosting a webinar April 8th at 11AM PST to show you how Utegration and Alan AI can enhance your existing mobile application to understand your unique industry jargon and be ready for your complex mobile processes.

Update: for those who missed the webinar, you can view it here.


]]>
https://alan.app/blog/webinar-april-8-at-11am-pst-conversational-voice-for-utilities-with-alan-ai-and-utegration/feed/ 0 3194
Technology and Innovation for Public Safety https://alan.app/blog/technology-and-innovation-for-public-safety/ https://alan.app/blog/technology-and-innovation-for-public-safety/#respond Fri, 27 Mar 2020 14:27:35 +0000 https://alan.app/blog/?p=3172

The public safety sector is constantly looking for new and better ways to protect cities, infrastructure, businesses, and citizens. What should not be overlooked is the potential of technology to prepare, anticipate, and respond to these needs. In terms of adopting new technology, public safety is considered to be moving at a slower pace than…

]]>

The public safety sector is constantly looking for new and better ways to protect cities, infrastructure, businesses, and citizens. What should not be overlooked is the potential of technology to prepare, anticipate, and respond to these needs.

In terms of adopting new technology, public safety is considered to be moving at a slower pace than other industries. However, more and more stakeholders are turning to technological innovation for public safety. They are starting to realize its potential in bringing intelligence, situational awareness, and operational efficiency to public safety. In this article, we’ll explore digital transformation in the security industry and what to expect in the future.


What is Public Safety?

The goal of public safety agencies is to protect all elements of society, such as citizens, the economy, and critical infrastructure. The framework ensuring the welfare and protection of society consists of police and emergency services, city or government departments, transport operators, and community safety groups.

The efficiency of these agencies depends on multiple factors. As a way to develop a security roadmap and address all relevant aspects, we can define an incident management flow. Here is a sequence of activities to proactively solve public safety issues: 

  1. Protect – It involves being prepared for physical and cybersecurity issues far in advance. This is a crucial step to ensure key community resources form a strong support system. 
  2. Prevent – To anticipate emerging threats, agencies need to carry out more precise actions, such as analytics, integration of data, sensors, and smart networks.
  3. Detect – Set up routine monitoring to have real-time security threat detection and immediate results. Establish a clear operational picture of what could go wrong and what kind of response would be needed in that case.
  4. Respond – This is a two-part process. During an incident, maintain communication flows so that the right departments and specialists have all the information they need to react appropriately. Then, after the incident, public safety agencies need to prioritize resource allocation for investigating the situation.

Why We Can’t Imagine Public Safety Without Technology

Public safety is a constantly evolving issue. At this point, we can identify four main factors shaping our security landscape:

  • Threats (crime and cybercrime targeted at government, enterprise, and people);
  • Budgets (government funding, business models, and maximizing budget utility);
  • Digital transformation (development of digital cities and an increase in Internet connectivity); 
  • Usage of most modern security technology – IoT devices, powered up with AI analysis.

As you can see, two out of the four factors are associated with a surge in digitization. The right public safety technology gives agencies better opportunities to coordinate human and technical resources and manage their operations. 

Traditional security technologies have made a shift from analog to digital systems. As a result, many security solutions have become more reliable, relatively portable, and have fewer latency-related issues. 

One of the technological advances that has already penetrated the public safety sector and continues to evolve is communication systems, including high-speed Internet connectivity, wireless devices and applications, and voice-only communications.


The Role of AI and Machine Learning in Public Safety

The digitization of security operations leads to a more networked and data-led approach. AI and machine learning help you make sense of data, identify patterns, and make decisions with minimal human intervention. 

The benefits of having more information on the ground and drawing conclusions through AI are applicable to:

  • Government and public-sector institutions
  • Public safety equipment vendors
  • Police and emergency services
  • Regulators

AI is a concept that any kind of organization should be looking into, regardless of its current effectiveness (since the technology is still quite new). In the context of public safety, it’s crucial to make sure we possess as many resources as we can to achieve the best outcome. 

By automating data-intensive processing assignments, certain resource constraints get resolved. This includes everything from interpreting large amounts of text to displaying a situation overview in an easy-to-comprehend manner. This reduces the burden of some tasks for people and frees them for higher-impact work that requires human intelligence and reasoning. 


Technology and Public Safety: Pros and Cons 

Does technology help us achieve more and better results using fewer human resources? Benefits supporting this statement include:

  • More information with fewer human resources – The success of a security operation relies on the ability to communicate seamlessly and access relevant data. Cutting-edge technology helps first responders, as well as other participants, gather more intelligence and respond to situations faster.
  • Provides cost efficiency – The limitations of government budgets force agencies to be innovative. New digital technologies are efficient in reducing operational expenditures.
  • Fosters security innovation – As noted earlier, public safety services need to constantly employ better, more advanced techniques. Since crime and associated challenges are evolving, agencies need to keep up.
  • Brings convenience – Technology opens up opportunities that once seemed impossible but are now easily realized. Digital devices revolutionize the way we approach public safety; for example, police officers can access huge databases just by talking to an application. 

On the other hand, we can expect certain hardships that come with the increased use of technology. Here is what we should take into account and try to minimize:

  • Relying on signal strength – Technology relies heavily on access points, especially when it comes to mobile devices. Remote locations may not have a signal, so responders will not be able to rely on technology at all times.
  • Lack of personal contact – There are some risks of misunderstandings, miscommunication, and assumptions. While this also happens with face-to-face communication, it’s something worth noting.
  • Privacy concerns for police and citizens – Data breaches have an immense effect. All applications and gathered data should be protected with appropriate security technologies and strategies.

The Latest Trends in Public Safety for 2020

There are many reasons for tech companies and emergency services to foster clear communication and collaboration. As tech companies implement new functionality in their products, these advances are well-suited to the realities of emergency services. Below are several directions we expect the public safety tech sector to move in.

The Internet of Things (IoT)

The growing network of IP-addressable sensors and electronic devices is projected to keep expanding. These devices are powerful yet cost-effective, and provide more opportunities for police intervention and better support for decision-makers.

IoT devices are also known to improve situational awareness, which is essential for this type of work. Plus, they ensure better officer safety and offer more insight during investigative operations. This utility is what makes us believe they will continue to penetrate security services.

Artificial Intelligence (AI)

The pure collection of data is not enough; interpreting vast amounts of data is of great importance. Instead of manually reviewing it, AI can take over that task. AI can synthesize data by acting on user-configured thresholds, which means it’s not going to roam around freely. It operates based on specific objectives and addresses information overload in a way that is managed by people.

AI can generate analytics, such as specific locations and times of occurrence so that human staff can then make their own conclusions. Also, in 2020, AI-powered apps will only get better, which will make mobile devices more useful to police and other services than ever before. 

More Data = More Transparency

Law enforcement will need to be more open with the public as to which data sources they are using, as well as when and why they are being used. The range of options for technology use is likely to increase as web-connected technology proliferates. Therefore, agencies will need to make information transparency and the way it’s processed their top priority.


Alan’s Solutions for Public Safety

Advanced technologies help police and fire departments and other institutions capture and exchange relevant data. These tools combine information from multiple sources to form a full picture of the situation. 

Our solution aims to close the communication gap for first responders and optimize their productivity. Through quick voice integration into mobile and web applications, the Alan voice AI-powered platform enables the following services to perform at the top of their abilities:

  • Law enforcement
  • Fire stations
  • Private security 
  • Airports
  • Schools
  • EMS

For example, police departments can receive real-time updates using voice-operated software by Alan. Officers can have convenient access to information on priority incidents, traffic reports, city plans, maps, and more. The system can be configured to help with crisis and disaster management, such as determining the closest incident location, which crews are available, and other important data.

Instead of wasting time fiddling with a radio, first responders can simply tap an icon to get all of the essential incident details. Plus, they aren’t distracted since the information is given through voice. This interface can address all their needs while syncing its words with visuals on the screen when there is no time to lose. 

Businesses that are interested in implementing voice into their application have two options: let the Alan team take over the task or request the SDK so that their developers can take care of it. In any case, it should be a great opportunity for you to enhance your workflow.

]]>
https://alan.app/blog/technology-and-innovation-for-public-safety/feed/ 0 3172
Incture and Alan partnership: Bringing Voice AI to Field Asset Management https://alan.app/blog/alan-ceo-talks-to-incture/ https://alan.app/blog/alan-ceo-talks-to-incture/#respond Fri, 13 Mar 2020 14:55:14 +0000 https://alan.app/blog/?p=3096 Co-founder and CEO of Alan Ramu Sunkara, sat down to talk about how Alan and Incture are bringing the world’s first conversational voice experience to field asset management. Together, Incture and Alan have deployed touchless mobile field asset management in the Energy industry with the Murphy Oil IOP application for...]]>

Co-founder and CEO of Alan, Ramu Sunkara, sat down to talk about how Alan and Incture are bringing the world’s first conversational voice experience to field asset management.

Together, Incture and Alan have deployed touchless mobile field asset management in the Energy industry with the Murphy Oil IOP application for Oil Well operations. This solution is now a finalist for the 2020 SAP Innovation Award.

Touchless, Conversational Voice in mobile applications makes it easy and safe for employees to enter operations data while on the go. The solution uses Machine Learning and Artificial Intelligence to recognize unique terms and phrases specific to Oil Well operations with Alan’s proprietary Language Understanding Models.

After the introduction of the Conversational Voice User Experience, employee adoption of and engagement with mobile applications increased. This led to an increase in revenue, productivity, and operational effectiveness. As a result of this deployment, Murphy Oil was able to get complete real-time visibility into their production operations and make informed business decisions quickly.

Field Asset Management Operations can benefit with hands-free:

  • Check-ins when employees start work
  • Task management
  • Communication to other employees in the field using voice comments
  • Onboarding for new employees on proper procedures and protocol
  • Training for existing employees on new processes

Learn more about the Incture and Alan Touchless Mobile solution for Murphy Oil here.

]]>
https://alan.app/blog/alan-ceo-talks-to-incture/feed/ 0 3096
Touchless Mobile Apps for Oil Well Operations at Murphy Oil https://alan.app/blog/touchless-mobile-apps-for-oil-well-operations-at-murphy-oil/ https://alan.app/blog/touchless-mobile-apps-for-oil-well-operations-at-murphy-oil/#respond Mon, 27 Jan 2020 09:01:51 +0000 https://alan.app/blog/?p=2617 Murphy Oil Corporation is a leading energy company based in Houston, Texas, operating in the asset-intensive energy industry. The production oil wells need frequent planned and unplanned maintenance activities by their field employees. These operational field activities are essential in keeping the oil wells at targeted production capacity as well...]]>

Murphy Oil Corporation is a leading energy company based in Houston, Texas, operating in the asset-intensive energy industry. The production oil wells need frequent planned and unplanned maintenance activities by their field employees. These operational field activities are essential in keeping the oil wells at targeted production capacity as well as ensure the safety and regulatory compliance of oil wells.

Murphy Oil partnered with SAP Partner Incture to build a mobile app to assign and track the completion of these activities. However, they faced the challenge of mobile employees not entering information into their apps in a timely, efficient, and safe manner. The mobile app saw reduced usage and engagement from employees due to the challenging nature of their work: maintaining and repairing machinery with gloves, and driving from one oil well to another with a constant focus on safety and compliance.

Stopping to use their existing mobile applications interrupted their daily work routine, which caused a decrease in overall technology adoption and data entry. The lack of data entry led to delays in production operations and business decisions.

Touchless, Conversational Voice in mobile applications made it easy and safe for employees to enter operations data while on the go. The solution uses Machine Learning and Artificial Intelligence to recognize unique terms and phrases specific to Oil Well operations.

Murphy Oil was able to get complete real-time visibility into their production operations and make informed business decisions quickly. After the introduction of the Alan Conversational Voice User Experience, employees’ adoption and engagement with their mobile applications increased. 

The Murphy Oil case study video demonstrates how the Alan Voice Experience seamlessly integrates with the current Visual UI of a Field Service app in a contextual voice conversation that guides the business workflow. For the detailed Murphy Oil and Alan AI case study, please review the SAP Innovation Award 2020 entry.

Alan AI provides a complete platform for adding Conversational Voice to any mobile employee application. Conversational Voice makes field operations safer, improves operational effectiveness, and helps companies adhere to and exceed compliance standards.

Krishna Sunkammurali

VP, Growth and Adoption

]]>
https://alan.app/blog/touchless-mobile-apps-for-oil-well-operations-at-murphy-oil/feed/ 0 2617
Video Spotlight – SAP Deliveries https://alan.app/blog/videospotlight-sap/ https://alan.app/blog/videospotlight-sap/#respond Thu, 19 Sep 2019 15:00:00 +0000 http://alan.app/blog/?p=2292 Video demonstration of a Visual Voice experience for the SAP sample application built using the Alan platform.]]>
SAP Deliveries Demo

Today we want to spotlight a video demonstration of a Visual Voice experience for the SAP sample application built using the Alan platform.

Click here for documentation:
https://docs.alan.app/docs/integrations/ios.html

Make sure to subscribe to our YouTube channel here: https://www.youtube.com/channel/UCrsg0b32nL6L2j5jG7uxOuw

]]>
https://alan.app/blog/videospotlight-sap/feed/ 0 2292
Why Alan AI is the Right Choice for your Enterprise https://alan.app/blog/why-alan-ai-is-the-right-choice-for-your-enterprise/ https://alan.app/blog/why-alan-ai-is-the-right-choice-for-your-enterprise/#respond Thu, 11 Jul 2019 16:57:51 +0000 http://alan.app/blog/?p=2206 Voice Artificial Intelligence is  becoming a huge part of all enterprises and companies futures. By 2020, there will be 200 billion connected devices, with voice as the primary interface. Voice offers capabilities unparalleled by tedious touch and type, and enterprises are starting to catch on and realize all the benefits...]]>

Voice Artificial Intelligence is becoming a huge part of all enterprises’ and companies’ futures. By 2020, there will be 200 billion connected devices, with voice as the primary interface. Voice offers capabilities unparalleled by tedious touch and type, and enterprises are starting to catch on and realize all the benefits voice will bring to their future.

Voice has made waves in recent years, with voice assistants like Siri, Alexa, and Cortana becoming more popular in the mainstream. Despite their increasing popularity, these voice assistants are not very capable: they can only handle basic questions and tasks, like asking for the weather or setting an alarm. Although these assistants can understand basic commands, due to privacy concerns they can only offer a single command-and-response exchange, resulting in very short conversations with limited knowledge of what the user could be asking. The current voice options can accomplish these basic tasks, but they are not, and will never be, designed to be integrated into an application. According to an article written in TechTalks, “The future of voice is the integration of artificial intelligence in plenty of narrow settings and tasks instead of a broad, general purpose AI assistant that can fulfill anything and everything you can think of.” In the race for voice, Alan AI has pioneered the technology to add voice AI conversations within your application.

At Alan, we have developed advanced and unique voice AI technology to overcome the shortcomings of the current voice options. Spoken Language Understanding (SLU) is our patent-pending technology. It filters out anaphoras and filler words people use in everyday conversation, such as “like” or “um”, using probabilistic models to help determine what the user said, as well as their intent. Alan takes in context to determine user intents much more accurately, unlike any other Voice AI platform. We expect Alan-enabled voice experiences to mature by 2020 to pass the Turing Test. Alan will take your enterprise to the next level.
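
As a toy illustration of the filler-word filtering step described above - the real SLU pipeline is probabilistic and context-driven, not a word list - consider:

[code language="objc" title="FillerFilter.swift (toy illustration)"]
import Foundation

// Toy example only: strips common single-word fillers from a transcript.
// Alan's actual SLU uses probabilistic models over context, not a list.
func stripFillers(from transcript: String) -> String {
    let fillers: Set<String> = ["like", "um", "uh", "er"]
    return transcript
        .lowercased()
        .split(separator: " ")
        .filter { !fillers.contains(String($0)) }
        .joined(separator: " ")
}

// stripFillers(from: "um show me like the open orders")
// returns "show me the open orders"
[/code]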

]]>
https://alan.app/blog/why-alan-ai-is-the-right-choice-for-your-enterprise/feed/ 0 2206
Add Visual Voice Experiences to your SAP Mobile Applications https://alan.app/blog/add-a-visual-voice-experience-to-your-sap-mobile-applications-with-alan/ https://alan.app/blog/add-a-visual-voice-experience-to-your-sap-mobile-applications-with-alan/#respond Fri, 17 May 2019 18:05:05 +0000 http://alan.app/blog/?p=2041 Recently, we created a full visual voice experience for the SAP sample application provided through the SAP Cloud Platform SDK for iOS. We did this with the new integration from the Alan Voice AI platform. Here, we’ll go over the steps we took to create this visual voice experience. You can find the full source code of this application and the supporting Alan Visual Voice scripts here.

1. Download and Install the SAP Cloud Platform SDK for iOS

Head over to SAP’s Developer page and click on “SAP Cloud Platform SDK for iOS”. Click on the top link there to download the SAP Cloud Platform SDK for iOS. Add to your Applications folder and open the Application.

2. Create the SAP Sample Application

Now, open the SAP Cloud Platform SDK on your computer, click “Create new”, then click “Sample Application”. Then follow the steps to add your SAP account, Application details, and the name of your Xcode project. This will create an Xcode project with the Sample application.

Once this is done, open the Xcode project and take a look around. Build the project and you can see it’s an application with Suppliers, Categories, Products, and Ordering information.

Now let’s integrate with Alan.

3. Integrate the application with Alan Platform

Go to Alan Studio at https://studio.alan.app. If you don’t have an account, create one to get started.

Once you login, create a Project named “SAP”. Now, we’re just going to be integrating our SAP sample application with Alan. Later we will create the voice experience.

At the top of the screen, switch from “Development” to “Production”. Now open the “Embed Code </>” menu, then click on the “iOS” tab and review the steps.

Then, download the iOS SDK Framework. Once downloaded, go back to your Xcode project. In your Xcode project, create a new group named “Alan”. Drag and drop the iOS SDK Framework into this group.

Next, go to the “Embedded Binaries” section and add the SDK Framework. Make sure the framework also appears in your project’s “Linked Frameworks and Libraries” section.

Now, we need to show a message asking for microphone access. To do this, go to the file Info.plist. In the “Key” column, right click and select “Add Row”. For the name of the Key, input “NSMicrophoneUsageDescription”. The value here will be the message that your users will see when they press the Alan button for the first time in the application. For this, use a message like “Alan needs microphone access to provide the voice experience for this application”.

Go back to the group you created earlier named “Alan,” right click it, and select “New File”. Select “Swift” as the filetype and name it “WindowUI+Alan”. All of the Alan button’s functions will be stored in this file, including the size, color styles, and voice states. You can find the code for this file here:

[code language="objc" collapse="true" title="WindowUI+Alan.swift"]
//
//  UIWindow+Alan.swift
//  MyDeliveries
//
//  Created by Sergey Yuryev on 22/04/2019.
//  Copyright © 2019 SAP. All rights reserved.
//

import UIKit
import AlanSDK

public final class ObjectAssociation<T: Any> {
    
    private let policy: objc_AssociationPolicy
    
    public init(policy: objc_AssociationPolicy = .OBJC_ASSOCIATION_RETAIN_NONATOMIC) {
        self.policy = policy
    }
    
    public subscript(index: AnyObject) -> T? {
        get { return objc_getAssociatedObject(index, Unmanaged.passUnretained(self).toOpaque()) as! T? }
        set { objc_setAssociatedObject(index, Unmanaged.passUnretained(self).toOpaque(), newValue, policy) }
    }
    
}

extension UIWindow {
    
    private static let associationAlanButton = ObjectAssociation<AlanButton>()
    private static let associationAlanText = ObjectAssociation<AlanText>()
    
    var alanButton: AlanButton? {
        get {
            return UIWindow.associationAlanButton[self]
        }
        set {
            UIWindow.associationAlanButton[self] = newValue
        }
    }
    
    var alanText: AlanText? {
        get {
            return UIWindow.associationAlanText[self]
        }
        set {
            UIWindow.associationAlanText[self] = newValue
        }
    }
    
    func moveAlanToFront() {
        if let button = self.alanButton {
            self.bringSubviewToFront(button)
        }
        if let text = self.alanText {
            self.bringSubviewToFront(text)
        }
    }
    
    func addAlan() {
        let buttonSpace: CGFloat = 20
        let buttonWidth: CGFloat = 64
        let buttonHeight: CGFloat = 64
        let textWidth: CGFloat = self.frame.maxX - buttonWidth - buttonSpace * 3
        let textHeight: CGFloat = 64
        
        let config = AlanConfig(key: "", isButtonDraggable: false)

        self.alanButton = AlanButton(config: config)
        if let button = self.alanButton {
            let safeHeight = self.frame.maxY - self.safeAreaLayoutGuide.layoutFrame.maxY
            let realX = self.frame.maxX - buttonWidth - buttonSpace
            let realY = self.frame.maxY - safeHeight - buttonHeight - buttonSpace
            
            button.frame = CGRect(x: realX, y: realY, width: buttonWidth, height: buttonHeight)
            self.addSubview(button)
            self.bringSubviewToFront(button)
        }
        
        self.alanText = AlanText(frame: CGRect.zero)
        if let text = self.alanText {
            let safeHeight = self.frame.maxY - self.safeAreaLayoutGuide.layoutFrame.maxY
            let realX = self.frame.minX + buttonSpace
            let realY = self.frame.maxY - safeHeight - textHeight - buttonSpace
            
            text.frame = CGRect(x: realX, y: realY, width: textWidth, height: textHeight)
            self.addSubview(text)
            self.bringSubviewToFront(text)
            
            text.layer.shadowColor = UIColor.black.cgColor
            text.layer.shadowOffset = CGSize(width: 0, height: 0)
            text.layer.shadowOpacity = 0.3
            text.layer.shadowRadius = 4.0
            
            for subview in text.subviews {
                if let s = subview as? UILabel {
                    s.backgroundColor = UIColor.white
                }
            }
        }
    }
}
[/code]

The next thing to do is to open the project’s “ApplicationUIManager.swift” file and add a few methods required to use the voice button in the application. Here are the sections that each method should be added to:

[code language="objc" collapse="true" title="ApplicationUIManager.swift" highlight="28,92"]
//
// AlanDeliveries
//
// Created by SAP Cloud Platform SDK for iOS Assistant application on 24/04/19
//

import SAPCommon
import SAPFiori
import SAPFioriFlows
import SAPFoundation

class SnapshotViewController: UIViewController {}

class ApplicationUIManager: ApplicationUIManaging {
    // MARK: - Properties

    let window: UIWindow

    /// Save ViewController while splash/onboarding screens are presented
    private var _savedApplicationRootViewController: UIViewController?
    private var _onboardingSplashViewController: (UIViewController & InfoTextSettable)?
    private var _coveringViewController: UIViewController?

    // MARK: - Init

    public init(window: UIWindow) {
        self.window = window
        self.window.addAlan()
    }

    // MARK: - ApplicationUIManaging

    func hideApplicationScreen(completionHandler: @escaping (Error?) -> Void) {
        // Check whether the covering screen is already presented or not
        guard self._coveringViewController == nil else {
            completionHandler(nil)
            return
        }

        self.saveApplicationScreenIfNecessary()
        self._coveringViewController = SnapshotViewController()
        self.window.rootViewController = self._coveringViewController

        completionHandler(nil)
    }

    func showSplashScreenForOnboarding(completionHandler: @escaping (Error?) -> Void) {
        // splash already presented
        guard self._onboardingSplashViewController == nil else {
            completionHandler(nil)
            return
        }

        setupSplashScreen()

        completionHandler(nil)
    }

    func showSplashScreenForUnlock(completionHandler: @escaping (Error?) -> Void) {
        guard self._onboardingSplashViewController == nil else {
            completionHandler(nil)
            return
        }

        self.saveApplicationScreenIfNecessary()

        setupSplashScreen()

        completionHandler(nil)
    }

    func showApplicationScreen(completionHandler: @escaping (Error?) -> Void) {
        // Check if an application screen has already been presented
        guard self.isSplashPresented else {
            completionHandler(nil)
            return
        }

        // Restore the saved application screen or create a new one
        let appViewController: UIViewController
        if let savedViewController = self._savedApplicationRootViewController {
            appViewController = savedViewController
        } else {
            let appDelegate = (UIApplication.shared.delegate as! AppDelegate)
            let splitViewController = UIStoryboard(name: "Main", bundle: Bundle.main).instantiateViewController(withIdentifier: "MainSplitViewController") as! UISplitViewController
            splitViewController.delegate = appDelegate
            splitViewController.modalPresentationStyle = .currentContext
            splitViewController.preferredDisplayMode = .allVisible
            appViewController = splitViewController
        }
        self.window.rootViewController = appViewController
        self.window.moveAlanToFront()
        self._onboardingSplashViewController = nil
        self._savedApplicationRootViewController = nil
        self._coveringViewController = nil

        completionHandler(nil)
    }

    func releaseRootFromMemory() {
        self._savedApplicationRootViewController = nil
    }

    // MARK: - Helpers

    private var isSplashPresented: Bool {
        return self.window.rootViewController is FUIInfoViewController || self.window.rootViewController is SnapshotViewController
    }

    /// Helper method to capture the real application screen.
    private func saveApplicationScreenIfNecessary() {
        if self._savedApplicationRootViewController == nil, !self.isSplashPresented {
            self._savedApplicationRootViewController = self.window.rootViewController
        }
    }

    private func setupSplashScreen() {
        self._onboardingSplashViewController = FUIInfoViewController.createSplashScreenInstanceFromStoryboard()
        self.window.rootViewController = self._onboardingSplashViewController

        // Set the splash screen for the specific presenter
        let modalPresenter = OnboardingFlowProvider.modalUIViewControllerPresenter
        modalPresenter.setSplashScreen(self._onboardingSplashViewController!)
        modalPresenter.animated = true
    }
}
[/code]

For the final step of the integration, return to your project in Alan Studio, open the “Embed Code </>” menu, select the “iOS” tab, and copy the “Alan SDK Key”. Make sure you copy the “Production” key for this step!

Now go back to the “WindowUI+Alan.swift” file in your Xcode project and paste the key between the quotes in the line let config = AlanConfig(key: "", isButtonDraggable: false).
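
For reference, the finished line should look like the sketch below. The key shown is a hypothetical placeholder; substitute the actual Production key you copied from Alan Studio.

[code language="objc"]
// Hypothetical placeholder key; replace with your Production "Alan SDK Key" from Alan Studio.
let config = AlanConfig(key: "YOUR_PRODUCTION_SDK_KEY", isButtonDraggable: false)
[/code]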

It’s time to build the application and see how it looks: press the big Play button in the upper left of Xcode.

You should see the Alan button in the bottom right of the application. Now it’s time to create the full Visual Voice experience.

4. Create the Visual Voice experience in Alan

The Visual Voice experience for this application will let users ask about products, orders, and suppliers. We’ve already created the scripts for this, which you can find here. Copy and paste these scripts into your project in Alan Studio and save. Then create a new version with this script and set it to “Production”.

With that done, we need to add handlers that let voice commands control the application. Note that the handlers for your own application will differ slightly. Here are examples of ours:

[code language="objc" collapse="true" title="WindowUI+Alan.swift" highlight="27-36,40,41,45-61,90-113,157,160-228"]
//
//  UIWindow+Alan.swift
//  MyDeliveries
//
//  Created by Sergey Yuryev on 22/04/2019.
//  Copyright © 2019 SAP. All rights reserved.
//

import UIKit
import AlanSDK

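/// Allows extensions (which cannot declare stored properties) to attach values
/// to existing objects through the Objective-C associated-objects runtime.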
public final class ObjectAssociation<T: Any> {
    
    private let policy: objc_AssociationPolicy
    
    public init(policy: objc_AssociationPolicy = .OBJC_ASSOCIATION_RETAIN_NONATOMIC) {
        self.policy = policy
    }
    
    public subscript(index: AnyObject) -> T? {
        get { return objc_getAssociatedObject(index, Unmanaged.passUnretained(self).toOpaque()) as! T? }
        set { objc_setAssociatedObject(index, Unmanaged.passUnretained(self).toOpaque(), newValue, policy) }
    }
    
}

protocol ProductViewDelegate {
    func highlightProductId(_ id: String?)
    func showProductCategory(_ category: String)
    func showProductIds(_ ids: [String])
}

protocol NavigateViewDelegate {
    func navigateCategory(_ category: String)
    func navigateBack()
}

extension UIWindow {
    
    private static let navigateDelegate = ObjectAssociation<NavigateViewDelegate>()
    private static let productDelegate = ObjectAssociation<ProductViewDelegate>()
    private static let associationAlanButton = ObjectAssociation<AlanButton>()
    private static let associationAlanText = ObjectAssociation<AlanText>()
    
    var navigateViewDelegate: NavigateViewDelegate? {
        get {
            return UIWindow.navigateDelegate[self]
        }
        set {
            UIWindow.navigateDelegate[self] = newValue
        }
    }
    
    var productViewDelegate: ProductViewDelegate? {
        get {
            return UIWindow.productDelegate[self]
        }
        set {
            UIWindow.productDelegate[self] = newValue
        }
    }
    
    var alanButton: AlanButton? {
        get {
            return UIWindow.associationAlanButton[self]
        }
        set {
            UIWindow.associationAlanButton[self] = newValue
        }
    }
    
    var alanText: AlanText? {
        get {
            return UIWindow.associationAlanText[self]
        }
        set {
            UIWindow.associationAlanText[self] = newValue
        }
    }
    
    func moveAlanToFront() {
        if let button = self.alanButton {
            self.bringSubviewToFront(button)
        }
        if let text = self.alanText {
            self.bringSubviewToFront(text)
        }
    }
    
    func setVisual(_ data: [String: Any]) {
        print("setVisual: \(data)");
        if let button = self.alanButton {
            button.setVisual(data)
        }
    }
    
    func playText(_ text: String) {
        if let button = self.alanButton {
            button.playText(text)
        }
    }
    
    func playData(_ data: [String: String]) {
        if let button = self.alanButton {
            button.playData(data)
        }
    }
    
    func call(method: String, params: [String: Any], callback: @escaping ((Error?, String?) -> Void)) {
        if let button = self.alanButton {
            button.call(method, withParams: params, callback: callback)
        }
    }
    
    func addAlan() {
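        // Lay out the Alan voice button in the bottom-right corner and the Alan
        // text panel along the bottom edge, both inset from the safe area.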
        let buttonSpace: CGFloat = 20
        let buttonWidth: CGFloat = 64
        let buttonHeight: CGFloat = 64
        let textWidth: CGFloat = self.frame.maxX - buttonWidth - buttonSpace * 3
        let textHeight: CGFloat = 64
        
        let config = AlanConfig(key: "", isButtonDraggable: false)

        self.alanButton = AlanButton(config: config)
        if let button = self.alanButton {
            let safeHeight = self.frame.maxY - self.safeAreaLayoutGuide.layoutFrame.maxY
            let realX = self.frame.maxX - buttonWidth - buttonSpace
            let realY = self.frame.maxY - safeHeight - buttonHeight - buttonSpace
            
            button.frame = CGRect(x: realX, y: realY, width: buttonWidth, height: buttonHeight)
            self.addSubview(button)
            self.bringSubviewToFront(button)
        }
        
        self.alanText = AlanText(frame: CGRect.zero)
        if let text = self.alanText {
            let safeHeight = self.frame.maxY - self.safeAreaLayoutGuide.layoutFrame.maxY
            let realX = self.frame.minX + buttonSpace
            let realY = self.frame.maxY - safeHeight - textHeight - buttonSpace
            
            text.frame = CGRect(x: realX, y: realY, width: textWidth, height: textHeight)
            self.addSubview(text)
            self.bringSubviewToFront(text)
            
            text.layer.shadowColor = UIColor.black.cgColor
            text.layer.shadowOffset = CGSize(width: 0, height: 0)
            text.layer.shadowOpacity = 0.3
            text.layer.shadowRadius = 4.0
            
            for subview in text.subviews {
                if let s = subview as? UILabel {
                    s.backgroundColor = UIColor.white
                }
            }
        }
        
        NotificationCenter.default.addObserver(self, selector: #selector(self.handleEvent(_:)), name:NSNotification.Name(rawValue: "kAlanSDKEventNotification"), object:nil)
    }
    
    @objc func handleEvent(_ notification: Notification) {
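        // The Alan SDK delivers voice commands through this notification. The
        // userInfo carries a JSON string; unwrap it step by step and dispatch
        // on its "command" field to the registered delegates.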
        guard let userInfo = notification.userInfo else {
            return
        }
        guard let event = userInfo["onEvent"] as? String else {
            return
        }
        guard event == "command" else {
            return
        }
        guard let jsonString = userInfo["jsonString"] as? String else {
            return
        }
        guard let data = jsonString.data(using: .utf8) else {
            return
        }
        guard let unwrapped = try? JSONSerialization.jsonObject(with: data, options: [])  else {
            return
        }
        guard let d = unwrapped as? [String: Any] else {
            return
        }
        guard let json = d["data"] as? [String: Any] else {
            return
        }
        guard let command = json["command"] as? String else {
            return
        }
        
        if command == "showProductCategory" {
            if let value = json["value"] as? String {
                if let d = self.productViewDelegate {
                    d.showProductCategory(value)
                }
            }
        }
        else if command == "showProductIds" {
            if let value = json["value"] as? [String] {
                if let d = self.productViewDelegate {
                    d.showProductIds(value)
                }
            }
        }
        else if command == "highlightProductId" {
            if let value = json["value"] as? String {
                if let d = self.productViewDelegate {
                    d.highlightProductId(value)
                }
            }
            else {
                if let d = self.productViewDelegate {
                    d.highlightProductId(nil)
                }
            }
        }
        else if command == "navigate" {
            if let value = json["screen"] as? String {
                if let d = self.navigateViewDelegate {
                    d.navigateCategory(value)
                }
            }
        }
        else if command == "goBack" {
            if let d = self.navigateViewDelegate {
                d.navigateBack()
            }
        }
    }
}
[/code]
[code language="objc" collapse="true" title="ProductMasterViewController.swift" highlight="13,25,29-31,36-39,115-165"]
//
// AlanDeliveries
//
// Created by SAP Cloud Platform SDK for iOS Assistant application on 24/04/19
//

import Foundation
import SAPCommon
import SAPFiori
import SAPFoundation
import SAPOData

class ProductMasterViewController: FUIFormTableViewController, SAPFioriLoadingIndicator, ProductViewDelegate {
    var espmContainer: ESPMContainer<OnlineODataProvider>!
    public var loadEntitiesBlock: ((_ completionHandler: @escaping ([Product]?, Error?) -> Void) -> Void)?
    private var entities: [Product] = [Product]()
    private var allEntities: [Product] = [Product]()
    private var entityImages = [Int: UIImage]()
    private let logger = Logger.shared(named: "ProductMasterViewControllerLogger")
    private let okTitle = NSLocalizedString("keyOkButtonTitle",
                                            value: "OK",
                                            comment: "XBUT: Title of OK button.")
    var loadingIndicator: FUILoadingIndicatorView?

    var highlightedId: String?
    
    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        if let window = UIApplication.shared.keyWindow {
            window.productViewDelegate = nil
        }
    }
    
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        if let window = UIApplication.shared.keyWindow {
            window.setVisual(["screen": "Product"])
            window.productViewDelegate = self
        }
    }
    
    override func viewDidLoad() {
        super.viewDidLoad()
        self.edgesForExtendedLayout = []
        // Add refreshcontrol UI
        self.refreshControl?.addTarget(self, action: #selector(self.refresh), for: UIControl.Event.valueChanged)
        self.tableView.addSubview(self.refreshControl!)
        // Cell height settings
        self.tableView.rowHeight = UITableView.automaticDimension
        self.tableView.estimatedRowHeight = 98
        self.updateTable()
    }

    var preventNavigationLoop = false
    var entitySetName: String?

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        self.clearsSelectionOnViewWillAppear = self.splitViewController!.isCollapsed
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    // MARK: - Table view data source

    override func tableView(_: UITableView, numberOfRowsInSection _: Int) -> Int {
        return self.entities.count
    }

    override func tableView(_: UITableView, canEditRowAt _: IndexPath) -> Bool {
        return true
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let product = self.entities[indexPath.row]
        let cell = CellCreationHelper.objectCellWithNonEditableContent(tableView: tableView, indexPath: indexPath, key: "ProductId", value: "\(product.productID!)")
        cell.preserveDetailImageSpacing = true
        cell.headlineText = product.name
        cell.footnoteText = product.productID
        let backgroundView = UIView()
        backgroundView.backgroundColor = UIColor.white
        
        if let image = image(for: indexPath, product: product) {
            cell.detailImage = image
            cell.detailImageView.contentMode = .scaleAspectFit
        }
        if let hid = self.highlightedId, let current = product.productID, hid == current {
            backgroundView.backgroundColor = UIColor(red: 235 / 255, green: 245 / 255, blue: 255 / 255, alpha: 1.0)
        }
        cell.backgroundView = backgroundView
        return cell
    }

    override func tableView(_ tableView: UITableView, commit editingStyle: UITableViewCell.EditingStyle, forRowAt indexPath: IndexPath) {
        if editingStyle != .delete {
            return
        }
        let currentEntity = self.entities[indexPath.row]
        self.espmContainer.deleteEntity(currentEntity) { error in
            if let error = error {
                self.logger.error("Delete entry failed.", error: error)
                AlertHelper.displayAlert(with: NSLocalizedString("keyErrorDeletingEntryTitle", value: "Delete entry failed", comment: "XTIT: Title of deleting entry error pop up."), error: error, viewController: self)
            } else {
                self.entities.remove(at: indexPath.row)
                tableView.deleteRows(at: [indexPath], with: .fade)
            }
        }
    }

    // MARK: - Data accessing
    
    func highlightProductId(_ id: String?) {
        self.highlightedId = id
        DispatchQueue.main.async {
            self.tableView.reloadData()
            self.logger.info("Alan: Table updated successfully!")
        }
    }

    internal func showProductCategory(_ category: String) {
        if category == "All" {
            self.entityImages.removeAll()
            self.entities.removeAll()
            self.entities.append(contentsOf: self.allEntities)
        }
        else {
            let filtered = self.allEntities.filter {
                if let c = $0.category, c == category {
                    return true
                }
                return false
            }
            self.entityImages.removeAll()
            self.entities.removeAll()
            self.entities.append(contentsOf: filtered)
        }
        DispatchQueue.main.async {
            let range = NSMakeRange(0, self.tableView.numberOfSections)
            let sections = NSIndexSet(indexesIn: range)
            self.tableView.reloadSections(sections as IndexSet, with: .automatic)
            self.logger.info("Alan: Table updated successfully!")
        }
        
    }
    
    internal func showProductIds(_ ids: [String]) {
        let filtered = self.allEntities.filter {
            if let productId = $0.productID, ids.contains(productId) {
                return true
            }
            return false
        }
        self.entityImages.removeAll()
        self.entities.removeAll()
        self.entities.append(contentsOf: filtered)
        DispatchQueue.main.async {
            let range = NSMakeRange(0, self.tableView.numberOfSections)
            let sections = NSIndexSet(indexesIn: range)
            self.tableView.reloadSections(sections as IndexSet, with: .automatic)
            self.logger.info("Alan: Table updated successfully!")
        }
    }
    
    
    func requestEntities(completionHandler: @escaping (Error?) -> Void) {
        self.loadEntitiesBlock!() { entities, error in
            if let error = error {
                completionHandler(error)
                return
            }
            
            self.entities = entities!
            self.allEntities.append(contentsOf: entities!)
            
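            // Serialize the loaded products and push them to the voice script's
            // updateProductEntities method, so voice queries can filter against
            // the same data the table displays.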
            let encoder = JSONEncoder()
            if let encodedEntityValue = try? encoder.encode(self.entities) {
                if let json = String(data: encodedEntityValue, encoding: .utf8) {
                    print(json)
                    if let window = UIApplication.shared.keyWindow {
                        window.call(method: "script::updateProductEntities", params: ["json": json] , callback: { (error, result) in
                        })
                    }
                }
            }
            
            completionHandler(nil)
        }
    }

    // MARK: - Segues

    override func prepare(for segue: UIStoryboardSegue, sender _: Any?) {
        if segue.identifier == "showDetail" {
            // Show the selected Entity on the Detail view
            guard let indexPath = self.tableView.indexPathForSelectedRow else {
                return
            }
            self.logger.info("Showing details of the chosen element.")
            let selectedEntity = self.entities[indexPath.row]
            let detailViewController = segue.destination as! ProductDetailViewController
            detailViewController.entity = selectedEntity
            detailViewController.navigationItem.leftItemsSupplementBackButton = true
            detailViewController.navigationItem.title = self.entities[(self.tableView.indexPathForSelectedRow?.row)!].productID ?? ""
            detailViewController.allowsEditableCells = false
            detailViewController.tableUpdater = self
            detailViewController.preventNavigationLoop = self.preventNavigationLoop
            detailViewController.espmContainer = self.espmContainer
            detailViewController.entitySetName = self.entitySetName
        } else if segue.identifier == "addEntity" {
            // Show the Detail view with a new Entity, which can be filled to create on the server
            self.logger.info("Showing view to add new entity.")
            let dest = segue.destination as! UINavigationController
            let detailViewController = dest.viewControllers[0] as! ProductDetailViewController
            detailViewController.title = NSLocalizedString("keyAddEntityTitle", value: "Add Entity", comment: "XTIT: Title of add new entity screen.")
            let doneButton = UIBarButtonItem(barButtonSystemItem: .done, target: detailViewController, action: #selector(detailViewController.createEntity))
            detailViewController.navigationItem.rightBarButtonItem = doneButton
            let cancelButton = UIBarButtonItem(title: NSLocalizedString("keyCancelButtonToGoPreviousScreen", value: "Cancel", comment: "XBUT: Title of Cancel button."), style: .plain, target: detailViewController, action: #selector(detailViewController.cancel))
            detailViewController.navigationItem.leftBarButtonItem = cancelButton
            detailViewController.allowsEditableCells = true
            detailViewController.tableUpdater = self
            detailViewController.espmContainer = self.espmContainer
            detailViewController.entitySetName = self.entitySetName
        }
    }

    // MARK: - Image loading

    private func image(for indexPath: IndexPath, product: Product) -> UIImage? {
        if let image = self.entityImages[indexPath.row] {
            return image
        } else {
            espmContainer.downloadMedia(entity: product, completionHandler: { data, error in
                if let error = error {
                    self.logger.error("Download media failed. Error: \(error)", error: error)
                    return
                }
                guard let data = data else {
                    self.logger.info("Media data is empty.")
                    return
                }
                if let image = UIImage(data: data) {
                    // store the downloaded image
                    self.entityImages[indexPath.row] = image
                    // update the cell
                    DispatchQueue.main.async {
                        self.tableView.beginUpdates()
                        if let cell = self.tableView.cellForRow(at: indexPath) as? FUIObjectTableViewCell {
                            cell.detailImage = image
                            cell.detailImageView.contentMode = .scaleAspectFit
                        }
                        self.tableView.endUpdates()
                    }
                }
            })
            return nil
        }
    }

    // MARK: - Table update

    func updateTable() {
        self.showFioriLoadingIndicator()
        DispatchQueue.global().async {
            self.loadData {
                self.hideFioriLoadingIndicator()
            }
        }
    }

    private func loadData(completionHandler: @escaping () -> Void) {
        self.requestEntities { error in
            defer {
                completionHandler()
            }
            if let error = error {
                AlertHelper.displayAlert(with: NSLocalizedString("keyErrorLoadingData", value: "Loading data failed!", comment: "XTIT: Title of loading data error pop up."), error: error, viewController: self)
                self.logger.error("Could not update table. Error: \(error)", error: error)
                return
            }
            DispatchQueue.main.async {
                self.tableView.reloadData()
                self.logger.info("Table updated successfully!")
            }
        }
    }

    @objc func refresh() {
        DispatchQueue.global().async {
            self.loadData {
                DispatchQueue.main.async {
                    self.refreshControl?.endRefreshing()
                }
            }
        }
    }
}

extension ProductMasterViewController: EntitySetUpdaterDelegate {
    func entitySetHasChanged() {
        self.updateTable()
    }
}
[/code]
[code language="objc" collapse="true" title="CollectionsViewController.swift" highlight="20,36-91,107-110"]
//
// AlanDeliveries
//
// Created by SAP Cloud Platform SDK for iOS Assistant application on 24/04/19
//

import Foundation
import SAPFiori
import SAPFioriFlows
import SAPOData

protocol EntityUpdaterDelegate {
    func entityHasChanged(_ entity: EntityValue?)
}

protocol EntitySetUpdaterDelegate {
    func entitySetHasChanged()
}

class CollectionsViewController: FUIFormTableViewController, NavigateViewDelegate {
    private var collections = CollectionType.all

    // Variable to store the selected index path
    private var selectedIndex: IndexPath?

    private let okTitle = NSLocalizedString("keyOkButtonTitle",
                                            value: "OK",
                                            comment: "XBUT: Title of OK button.")

    var isPresentedInSplitView: Bool {
        return !(self.splitViewController?.isCollapsed ?? true)
    }

    // Navigate
    
    func navigateBack() {
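        // Pop the deepest visible navigation stack; in this split view setup a
        // navigation controller may be nested inside the detail navigation controller.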
        DispatchQueue.main.async {
            if let navigation1 = self.splitViewController?.viewControllers.last as? UINavigationController {
                if let navigation2 = navigation1.viewControllers.last as? UINavigationController {
                    if navigation2.viewControllers.count < 2 {
                        navigation1.popViewController(animated: true)
                    }
                    else {
                        if let last = navigation2.viewControllers.last {
                            last.navigationController?.popViewController(animated: true)
                        }
                    }
                }
            }
        }
    }
    
    func navigateCategory(_ category: String) {
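        // Map the screen name from the voice "navigate" command to its fixed
        // row in the collections table (row order follows CollectionType.all).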
        var indexPath = IndexPath(row: 0, section: 0)
        if( category == "Sales") {
            indexPath = IndexPath(row: 6, section: 0)
        }
        else if( category == "PurchaseOrderItems") {
            indexPath = IndexPath(row: 3, section: 0)
        }
        else if( category == "ProductText") {
            indexPath = IndexPath(row: 2, section: 0)
        }
        else if( category == "PurchaseOrderHeaders") {
            indexPath = IndexPath(row: 4, section: 0)
        }
        else if( category == "Supplier") {
            indexPath = IndexPath(row: 0, section: 0)
        }
        else if( category == "Product") {
            indexPath = IndexPath(row: 9, section: 0)
        }
        else if( category == "Stock") {
            indexPath = IndexPath(row: 5, section: 0)
        }
        else if( category == "ProductCategory") {
            indexPath = IndexPath(row: 1, section: 0)
        }
        else if( category == "SalesOrder") {
            indexPath = IndexPath(row: 8, section: 0)
        }
        else if( category == "Customer") {
            indexPath = IndexPath(row: 7, section: 0)
        }
        DispatchQueue.main.async {
            self.navigationController?.popToRootViewController(animated: true)
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
                self.collectionSelected(at: indexPath)
            }
        }
    }
    
    // MARK: - Lifecycle

    override func viewDidLoad() {
        super.viewDidLoad()
        self.preferredContentSize = CGSize(width: 320, height: 480)

        self.tableView.rowHeight = UITableView.automaticDimension
        self.tableView.estimatedRowHeight = 44
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        self.makeSelection()
        
        if let window = UIApplication.shared.keyWindow {
            window.setVisual(["screen": "Main"])
            window.navigateViewDelegate = self
        }
    }

    override func viewWillTransition(to _: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        coordinator.animate(alongsideTransition: nil, completion: { _ in
            let isNotInSplitView = !self.isPresentedInSplitView
            self.tableView.visibleCells.forEach { cell in
                // To refresh the disclosure indicator of each cell
                cell.accessoryType = isNotInSplitView ? .disclosureIndicator : .none
            }
            self.makeSelection()
        })
    }

    // MARK: - UITableViewDelegate

    override func numberOfSections(in _: UITableView) -> Int {
        return 1
    }

    override func tableView(_: UITableView, numberOfRowsInSection _: Int) -> Int {
        return collections.count
    }

    override func tableView(_: UITableView, heightForRowAt _: IndexPath) -> CGFloat {
        return 44
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: FUIObjectTableViewCell.reuseIdentifier, for: indexPath) as! FUIObjectTableViewCell
        cell.headlineLabel.text = self.collections[indexPath.row].rawValue
        cell.accessoryType = !self.isPresentedInSplitView ? .disclosureIndicator : .none
        cell.isMomentarySelection = false
        return cell
    }

    override func tableView(_: UITableView, didSelectRowAt indexPath: IndexPath) {
        self.collectionSelected(at: indexPath)
    }

    // CollectionType selection helper
    private func collectionSelected(at indexPath: IndexPath) {
        // Load the EntityType specific ViewController from the specific storyboard"
        var masterViewController: UIViewController!
        guard let espmContainer = OnboardingSessionManager.shared.onboardingSession?.odataController.espmContainer else {
            AlertHelper.displayAlert(with: "OData service is not reachable, please onboard again.", error: nil, viewController: self)
            return
        }
        self.selectedIndex = indexPath

        switch self.collections[indexPath.row] {
        case .suppliers:
            let supplierStoryBoard = UIStoryboard(name: "Supplier", bundle: nil)
            let supplierMasterViewController = supplierStoryBoard.instantiateViewController(withIdentifier: "SupplierMaster") as! SupplierMasterViewController
            supplierMasterViewController.espmContainer = espmContainer
            supplierMasterViewController.entitySetName = "Suppliers"
            func fetchSuppliers(_ completionHandler: @escaping ([Supplier]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchSuppliers(matching: query, completionHandler: completionHandler)
                }
            }
            supplierMasterViewController.loadEntitiesBlock = fetchSuppliers
            supplierMasterViewController.navigationItem.title = "Supplier"
            masterViewController = supplierMasterViewController
        case .productCategories:
            let productCategoryStoryBoard = UIStoryboard(name: "ProductCategory", bundle: nil)
            let productCategoryMasterViewController = productCategoryStoryBoard.instantiateViewController(withIdentifier: "ProductCategoryMaster") as! ProductCategoryMasterViewController
            productCategoryMasterViewController.espmContainer = espmContainer
            productCategoryMasterViewController.entitySetName = "ProductCategories"
            func fetchProductCategories(_ completionHandler: @escaping ([ProductCategory]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchProductCategories(matching: query, completionHandler: completionHandler)
                }
            }
            productCategoryMasterViewController.loadEntitiesBlock = fetchProductCategories
            productCategoryMasterViewController.navigationItem.title = "ProductCategory"
            masterViewController = productCategoryMasterViewController
        case .productTexts:
            let productTextStoryBoard = UIStoryboard(name: "ProductText", bundle: nil)
            let productTextMasterViewController = productTextStoryBoard.instantiateViewController(withIdentifier: "ProductTextMaster") as! ProductTextMasterViewController
            productTextMasterViewController.espmContainer = espmContainer
            productTextMasterViewController.entitySetName = "ProductTexts"
            func fetchProductTexts(_ completionHandler: @escaping ([ProductText]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchProductTexts(matching: query, completionHandler: completionHandler)
                }
            }
            productTextMasterViewController.loadEntitiesBlock = fetchProductTexts
            productTextMasterViewController.navigationItem.title = "ProductText"
            masterViewController = productTextMasterViewController
        case .purchaseOrderItems:
            let purchaseOrderItemStoryBoard = UIStoryboard(name: "PurchaseOrderItem", bundle: nil)
            let purchaseOrderItemMasterViewController = purchaseOrderItemStoryBoard.instantiateViewController(withIdentifier: "PurchaseOrderItemMaster") as! PurchaseOrderItemMasterViewController
            purchaseOrderItemMasterViewController.espmContainer = espmContainer
            purchaseOrderItemMasterViewController.entitySetName = "PurchaseOrderItems"
            func fetchPurchaseOrderItems(_ completionHandler: @escaping ([PurchaseOrderItem]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchPurchaseOrderItems(matching: query, completionHandler: completionHandler)
                }
            }
            purchaseOrderItemMasterViewController.loadEntitiesBlock = fetchPurchaseOrderItems
            purchaseOrderItemMasterViewController.navigationItem.title = "PurchaseOrderItem"
            masterViewController = purchaseOrderItemMasterViewController
        case .purchaseOrderHeaders:
            let purchaseOrderHeaderStoryBoard = UIStoryboard(name: "PurchaseOrderHeader", bundle: nil)
            let purchaseOrderHeaderMasterViewController = purchaseOrderHeaderStoryBoard.instantiateViewController(withIdentifier: "PurchaseOrderHeaderMaster") as! PurchaseOrderHeaderMasterViewController
            purchaseOrderHeaderMasterViewController.espmContainer = espmContainer
            purchaseOrderHeaderMasterViewController.entitySetName = "PurchaseOrderHeaders"
            func fetchPurchaseOrderHeaders(_ completionHandler: @escaping ([PurchaseOrderHeader]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchPurchaseOrderHeaders(matching: query, completionHandler: completionHandler)
                }
            }
            purchaseOrderHeaderMasterViewController.loadEntitiesBlock = fetchPurchaseOrderHeaders
            purchaseOrderHeaderMasterViewController.navigationItem.title = "PurchaseOrderHeader"
            masterViewController = purchaseOrderHeaderMasterViewController
        case .stock:
            let stockStoryBoard = UIStoryboard(name: "Stock", bundle: nil)
            let stockMasterViewController = stockStoryBoard.instantiateViewController(withIdentifier: "StockMaster") as! StockMasterViewController
            stockMasterViewController.espmContainer = espmContainer
            stockMasterViewController.entitySetName = "Stock"
            func fetchStock(_ completionHandler: @escaping ([Stock]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchStock(matching: query, completionHandler: completionHandler)
                }
            }
            stockMasterViewController.loadEntitiesBlock = fetchStock
            stockMasterViewController.navigationItem.title = "Stock"
            masterViewController = stockMasterViewController
        case .salesOrderItems:
            let salesOrderItemStoryBoard = UIStoryboard(name: "SalesOrderItem", bundle: nil)
            let salesOrderItemMasterViewController = salesOrderItemStoryBoard.instantiateViewController(withIdentifier: "SalesOrderItemMaster") as! SalesOrderItemMasterViewController
            salesOrderItemMasterViewController.espmContainer = espmContainer
            salesOrderItemMasterViewController.entitySetName = "SalesOrderItems"
            func fetchSalesOrderItems(_ completionHandler: @escaping ([SalesOrderItem]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchSalesOrderItems(matching: query, completionHandler: completionHandler)
                }
            }
            salesOrderItemMasterViewController.loadEntitiesBlock = fetchSalesOrderItems
            salesOrderItemMasterViewController.navigationItem.title = "SalesOrderItem"
            masterViewController = salesOrderItemMasterViewController
        case .customers:
            let customerStoryBoard = UIStoryboard(name: "Customer", bundle: nil)
            let customerMasterViewController = customerStoryBoard.instantiateViewController(withIdentifier: "CustomerMaster") as! CustomerMasterViewController
            customerMasterViewController.espmContainer = espmContainer
            customerMasterViewController.entitySetName = "Customers"
            func fetchCustomers(_ completionHandler: @escaping ([Customer]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchCustomers(matching: query, completionHandler: completionHandler)
                }
            }
            customerMasterViewController.loadEntitiesBlock = fetchCustomers
            customerMasterViewController.navigationItem.title = "Customer"
            masterViewController = customerMasterViewController
        case .salesOrderHeaders:
            let salesOrderHeaderStoryBoard = UIStoryboard(name: "SalesOrderHeader", bundle: nil)
            let salesOrderHeaderMasterViewController = salesOrderHeaderStoryBoard.instantiateViewController(withIdentifier: "SalesOrderHeaderMaster") as! SalesOrderHeaderMasterViewController
            salesOrderHeaderMasterViewController.espmContainer = espmContainer
            salesOrderHeaderMasterViewController.entitySetName = "SalesOrderHeaders"
            func fetchSalesOrderHeaders(_ completionHandler: @escaping ([SalesOrderHeader]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchSalesOrderHeaders(matching: query, completionHandler: completionHandler)
                }
            }
            salesOrderHeaderMasterViewController.loadEntitiesBlock = fetchSalesOrderHeaders
            salesOrderHeaderMasterViewController.navigationItem.title = "SalesOrderHeader"
            masterViewController = salesOrderHeaderMasterViewController
        case .products:
            let productStoryBoard = UIStoryboard(name: "Product", bundle: nil)
            let productMasterViewController = productStoryBoard.instantiateViewController(withIdentifier: "ProductMaster") as! ProductMasterViewController
            productMasterViewController.espmContainer = espmContainer
            productMasterViewController.entitySetName = "Products"
            func fetchProducts(_ completionHandler: @escaping ([Product]?, Error?) -> Void) {
                // Only request the first 20 values. If you want to modify the requested entities, you can do it here.
                
                let query = DataQuery().selectAll().top(20)
                do {
                    espmContainer.fetchProducts(matching: query, completionHandler: completionHandler)
                }
            }
            productMasterViewController.loadEntitiesBlock = fetchProducts
            productMasterViewController.navigationItem.title = "Product"
            masterViewController = productMasterViewController
        case .none:
            masterViewController = UIViewController()
        }

        // Load the NavigationController and present with the EntityType specific ViewController
        let mainStoryBoard = UIStoryboard(name: "Main", bundle: nil)
        let rightNavigationController = mainStoryBoard.instantiateViewController(withIdentifier: "RightNavigationController") as! UINavigationController
        rightNavigationController.viewControllers = [masterViewController]
        self.splitViewController?.showDetailViewController(rightNavigationController, sender: nil)
    }

    // MARK: - Handle highlighting of selected cell

    private func makeSelection() {
        if let selectedIndex = selectedIndex {
            tableView.selectRow(at: selectedIndex, animated: true, scrollPosition: .none)
            tableView.scrollToRow(at: selectedIndex, at: .none, animated: true)
        } else {
            selectDefault()
        }
    }

    private func selectDefault() {
        // Automatically select first element if we have two panels (iPhone plus and iPad only)
        if self.splitViewController!.isCollapsed || OnboardingSessionManager.shared.onboardingSession?.odataController.espmContainer == nil {
            return
        }
        let indexPath = IndexPath(row: 0, section: 0)
        self.tableView.selectRow(at: indexPath, animated: true, scrollPosition: .middle)
        self.collectionSelected(at: indexPath)
    }
}

[/code]

Once you’ve added your handlers in Xcode, save and build the application.

Test a few of the voice commands:

  • Open products
  • What products do you have?
  • Show notebooks less than 1500 euros
  • What’s the price of the Notebook Basic 15?
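
If you want to exercise the handlers without speaking, you can post the same notification the Alan SDK emits when a command fires. This is a minimal test sketch based on the payload shape that handleEvent unwraps above; the “Notebooks” category is an assumed example value:

[code language="objc"]
// Simulate an Alan voice command for testing; this helper is not part of the Alan SDK.
let payload: [String: Any] = ["data": ["command": "showProductCategory", "value": "Notebooks"]]
if let data = try? JSONSerialization.data(withJSONObject: payload),
   let jsonString = String(data: data, encoding: .utf8) {
    NotificationCenter.default.post(
        name: NSNotification.Name(rawValue: "kAlanSDKEventNotification"),
        object: nil,
        userInfo: ["onEvent": "command", "jsonString": jsonString])
}
[/code]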

And that concludes our integration and Visual Voice experience for this SAP sample application. This application was created as part of Alan and SAP’s partnership to voice-enable the enterprise. Here’s a full video on the integration. For more details, please check out Alan’s documentation here.

Feel free to send feedback or support questions to sergey@alan.app.

]]>
https://alan.app/blog/add-a-visual-voice-experience-to-your-sap-mobile-applications-with-alan/feed/ 0 2041
Alan partners with SAP to provide Visual Voice experience for Enterprises https://alan.app/blog/alan-teams-up-with-sap-to-voice-enable-enterprise-applications/ Fri, 03 May 2019 03:28:23 +0000

Alan has partnered with SAP to usher in the Visual Voice experience for enterprise applications. Recently we took the SAP Cloud SDK for iOS and integrated Alan with one of the sample applications, “SAP Deliveries”, in a few steps. Users can now talk to the mobile application using natural voice commands like “Show me all the phones” or “Show me phones that cost less than $500” without having to touch or type on their devices. Voice responses are synchronized with the visual elements in the SAP Deliveries application. Alan uses Spoken Language Understanding (SLU) for voice (unlike text-based NLP) and leverages any application’s visual and dialog context to provide next-generation visual voice experiences. You can find more details about the future Visual Voice experience using Alan for SAP applications here.

]]>
https://alan.app/blog/alan-teams-up-with-sap-to-voice-enable-enterprise-applications/feed/ 0 1990
Central Square Technologies – #1 Public Safety Vendor https://alan.app/blog/central-square-1-public-safety-vendor/ Thu, 18 Apr 2019 22:42:30 +0000

CentralSquare: A New Industry Leader

We attended CentralSquare 2019 and learned about the nature of the Public Safety market and the current state of its products. Central Square Technologies (CST) is the #1 Public Safety player in the US and Canada. We were able to interact with the leaders at CST, including Simon Angove (CEO), Jatin Atre (CMO), and Steve Seoane (EVP), to get a pulse on their vision for the future. We also interacted with some of CST’s 300+ employees, a few of the 575 Public Safety Agencies, and some of the 1300+ Public Safety professionals attending the annual event.

The following are some key facts we learned about the daily lives of the First Responders.

  • A 1-second decrease in response time translates to saving 10,000 lives.
  • Distracted driving is a major cause of fatalities among First Responders.

The Alan Safety solution was developed as a reference application to integrate with Public Safety CAD deployments from CST and other vendors. Alan Safety brings a Voice Interface to first responder applications so that responders can interact in hands-free mode and get the data they need a few seconds faster while on the go, thereby decreasing both response times and fatalities among First Responders.

Alan is delighted to have the opportunity to bring its state-of-the-art Voice Interface to the daily lives of the 2.25M First Responders in the US. We look forward to working with Central Square Technologies to further their vision of building safer and smarter communities. CST was formed by the merger of Tritech, Superion, and Zuercher, with investments from Bain Private Equity and Vista Equity Partners; all three companies have pioneered solutions for the Public Safety market over the last three decades.

]]>
https://alan.app/blog/central-square-1-public-safety-vendor/feed/ 0 1970
Alan Safety™ is the world’s first Visual Voice Solution for First Responders https://alan.app/blog/article-alan-is-the-worlds-first-voice-solution-for-first-responders/ Wed, 03 Apr 2019 21:48:23 +0000

Alan was recently mentioned as the top voice solution for first responders in an article from VOICE Summit, the leading voice blog and host of the yearly VOICE Summit event.

Police, firefighters, and paramedics are always on the go and need a solution that keeps them up to date on incidents they’re responding to, all hands-free, with the ability to show visuals and drill through multiple screens quickly.

Alan was selected as the voice solution to accomplish this — but why Alan over big tech companies like Amazon, Google, or Apple?

Alan is the only voice company that offers a fully integrated voice experience to any existing application. Public Safety professionals have been using Mobile and Desktop apps for years, and have multi-step, complex workflows that they use to get all the information they need when addressing a 911 call. Alan facilitates these workflows using “visual voice synchronization”: when users give voice commands, Alan changes the screen to what the user requests and also highlights relevant information. At each step of the workflow, Alan recognizes what the user is viewing and adjusts accordingly for better voice responses.
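
In application code, this synchronization is the same two-way pattern shown in the SAP tutorial above. Here is a minimal sketch, assuming the UIWindow extension from that integration; the “Incident” screen name is a hypothetical example:

[code language="objc"]
// 1) The app tells Alan which screen the user is currently viewing:
window.setVisual(["screen": "Incident"])
// 2) Alan drives the app: voice commands arrive as SDK notifications and are
//    routed to navigation and highlighting handlers, as in handleEvent above.
[/code]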

Other solutions do not have a visual experience, and the ones that do require visuals and information to be imported into their application instead, which cuts off multi-step workflows and requires the ‘importing’ of visuals and data at each step. For enterprises, this is costly, slows down the development process, and dilutes their branded experience. Alan leverages everything, including existing visuals and data, without requiring any rebuilding.

And because of the sensitive nature of the information that Public Safety professionals work with, they also needed a private and secure solution that is compliant with the standards they already have in place. Alan was the only voice solution that was able to do this, as all data is encrypted and isolated within their own app and in the cloud.

Public Safety has many phrases and jargon terms like “10-100”, “BOLO”, and “L002”. These make up a Domain Language Model: a set of terms and phrases specific to one enterprise. Alan specializes in training on any enterprise’s Domain Language Model using Machine Learning, which improves the ability to respond correctly to any given voice command.

A full visual voice experience in existing apps, combined with secure, enterprise-level Machine Learning, makes Alan an attractive solution for businesses looking to voice-enable without losing control of their brand, experience, or workflows.

Read the full article from VOICE Summit here to see how Alan is changing the Public Safety industry with voice.

]]>
https://alan.app/blog/article-alan-is-the-worlds-first-voice-solution-for-first-responders/feed/ 0 1944
Enterprise: Where No Voice Has Gone Before https://alan.app/blog/enterprise-where-no-voice-has-gone-before/ Mon, 23 Jul 2018 22:34:23 +0000

Looking forward to the talk at the VOICE Summit in Newark on July 24 about how Voice and AI will transform every enterprise. We have entered the voice-first era, where voice will not just be used for simple tasks like controlling a meeting or booking a conference room; it will be used to complete mission-critical business tasks. I will weigh in on how enterprises can add voice to their applications and get a fully immersive voice-visual experience during the VOICE18 panel “Enterprise: Where No Voice Has Gone Before”.

]]>
https://alan.app/blog/enterprise-where-no-voice-has-gone-before/feed/ 1 1802
2018 will be the year of Voice in Enterprise https://alan.app/blog/2018-will-be-the-year-of-voice-in-enterprise/ Sat, 10 Feb 2018 00:19:24 +0000

2017 brought a Cambrian explosion of voice-enabled devices. The Echo Dot was Amazon’s best-seller during the holiday period. CES this year offered a glimpse into the future of conversational voice: a world where consumers can use voice to control everything, from thermostats, lamps, sprinklers, and security systems to smartwatches, fitness bracelets, smart TVs, and their vehicles.

2018 will be the year for Voice to enter the Enterprise and transform how work gets done. Here are a few important factors for Voice to make a seminal impact in the enterprise:

  • Conversational Experience: Enterprise Voice AI should provide a human-like conversational experience. Unlike a request-response system, it should let you interrupt the assistant at any time. While getting your daily briefing by saying “What does my day look like?” and listening to the summary of your meetings, urgent emails, direct messages, and important tasks, you should be able to interrupt at any point and say “Text Laura that I am running 10 minutes late, and move my next meeting by half an hour”.
  • Contextually Intelligent: Today, we use a whole array of applications to get things done at work. Enterprise Voice AI needs to preserve the context of the conversation so the user can have a deep, meaningful dialogue that completes the business workflow, drawing information from different data sources without switching applications. For example, a technician may ask “Where is my next appointment?” followed by “Which cable modem should I bring?”; the Enterprise Voice Assistant needs to preserve the customer context from the first question to pull up the right modem information.
  • Proactive Predictions: Voice AI for the enterprise should not just respond to user commands; it should also leverage Artificial Intelligence (AI) to proactively make recommendations and predictions that accelerate the task. For example, when a delivery driver asks “How do I get to the next delivery?”, the Voice Assistant should provide directions and also proactively add, “By the way, the door code is 1290 to get access to the building.”
  • Domain Specific: Every enterprise and every industry is unique and has its own lingua franca. Enterprise Voice AI should apply Machine Learning (ML) to learn the unique vocabulary of each domain and improve speech-to-meaning accuracy. The assistant should learn cumulatively from each user interaction, so it can recommend to colleagues what to say to complete a task.

Voice AI will not just be used for simple work tasks like controlling a meeting or booking a conference room. It will be used to complete complex business workflows, because the more complex the scenario, the more it benefits from voice AI helping users get what they need quickly without switching applications.

2018 will be the groundbreaking year when business leaders in every industry wake up to the importance of voice AI.

]]>
https://alan.app/blog/2018-will-be-the-year-of-voice-in-enterprise/feed/ 1 1413
Future of Voice in Enterprise https://alan.app/blog/future-of-voice-in-business/ Thu, 16 Nov 2017 06:35:32 +0000

We talk with our co-workers, customers, partners, and other stakeholders in business every day. In every meeting, whether in person, over a web conference, or on the phone, we use voice as the primary medium of communication. Be it project meetings, sales calls, support calls, or interviews, precious information is contained in these conversations. Today, most of this information is lost, as it is neither feasible nor practical to capture voice conversations with existing technology. And even if we are able to capture the conversations, we can’t search what is inside them, we can’t see who said what, and we can’t get to the key moments. Have you ever tried to find a single sentence in an hour-long recording? Voice conversations are like dark matter.

Why? First, it isn’t easy to handle voice in all situations, and the devices we use to capture voice determine the quality. To get the right quality, voice needs to be handled differently in each kind of conversation. It’s easy to capture voice in a web conference, but how can it be done for in-person meetings or phone calls? Once voice is captured, how can we uniquely identify each speaker? How can we make sure each speaker’s voice is heard equally well?

Second, transforming voice into a visual, searchable stream of information is hard. The words we use in our conversations depend on the context. They also depend on our business domain. For example, the word “checkin” in a software context means putting software in a repository, but in an airline context, it is two words: “check-in”. The intents and entities differ based on the context, the user, and the domain. And each of us pauses differently between sentences, which makes it hard to segment speech into sentences.

At Synqq, we have pioneered new technology to handle voice in all situations and have made seminal advancements in Natural Language Processing that are going to change the way we handle voice conversations in the enterprise. The current era of infinite computing makes it affordable. Voice conversations will no longer be the dark matter of the past; they can become the searchable record for every enterprise.

]]>
https://alan.app/blog/future-of-voice-in-business/feed/ 1 1213