    Cognitive Services

    Generative AI

    Key Business Use Cases: Generative AI

    Optimized pricing and lower latency

    GSX Cognitive Architecture allows you to use any LLM and Gen AI model, meaning you get a future-proofed AI strategy, lower latency, and optimized pricing.

    Prevent hallucinations with monitoring and Eval Agents

    Agent messages are interrogated and evaluated for compliance purposes, which can lead to message suppression and alert management when an agent attempts to break protocol or commits a compliance violation.

    Build and deploy faster with built-in generative AI functionality

    GSX Platform has Gen AI baked-in: Test AI agents via our Test Agent, build CSS and custom style components through Gen AI tools, get help from our GSX AI Agent Helper, and have suggested next steps at your fingertips for easy development.

    Overview: Generative AI Capabilities in GSX

    Our team has been working with generative AI since OpenAI first released a public version of GPT in 2020. We have a generative AI toolkit with over a dozen capabilities that allow designers to apply generative AI in discrete ways, rather than treating it as a silver bullet (a simple and seemingly magical solution to a complicated problem). We’ve also natively integrated it into our platform functionality – for example, it is in Step UIs, allowing you to generate content for A/B testing; it is in our NLP engine to turbocharge training knowledge models; and it helps conversational designers with “suggested next steps.” It is core to our product, and almost everything in our product touches GPT in some way. The OneReach.ai platform also has steps and integrations so that customers can build solutions that connect with all of the major LLMs and generative AI systems (ChatGPT, GPT-3/3.5/4, Anthropic, models from DeepMind, etc.). Our LLM capabilities are extensive because our team has been using generative AI in production for years.

    Leverage LLMs / Generative AI to create the flow of conversation

    Train generative AI models on your own data set
    Generative AI models, such as ChatGPT, are trained on a massive dataset from the internet. This makes for good generalist AI agents, but it means that they require numerous guardrails before being used for enterprise solutions. Our generative AI steps allow you to train the generative models on specific documents so that the content generated by the model is narrowed down to what will be helpful for the context of the AI agent.

    Small talk modules
    We offer prebuilt small talk modules (traditional and LLM-enabled). However, we strongly recommend avoiding the use of small talk modules in any enterprise deployment.

    Goal oriented decision making
    As LLMs evolve in code reliability, they can take on increasingly capable agentic roles, allowing for a mix of generative and programmatic AI agents. This hybrid model—where agents receive objectives rather than detailed instructions—empowers them to interpret high-level goals using natural language. For instance, a “get well” agent might automatically decide to send an email, a physical card, or even flowers, based on factors like the employee’s needs, context, and company policies. This flexibility allows agents to exercise a degree of autonomous decision-making, while retaining certain guardrails through precise, structured language where needed.

    Set prompt/corpus at the beginning of the conversation, or adjust as the conversation progresses
    Many generative AI models allow you to set a prompt or corpus to give the model some guardrails and expectations (e.g. “You are a friendly customer service agent, but you can’t discuss pricing”). Our steps allow you to set a prompt at the beginning of a conversation or to adjust the prompt as you get feedback and responses from the user.
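The pattern above can be sketched with a chat-style message list: the system prompt sets the guardrails up front and can be revised mid-conversation. The function name below is illustrative, not the actual GSX step interface.

```python
# Sketch of setting and adjusting a guardrail prompt mid-conversation.
# The message format follows the common chat-completion convention.

def set_system_prompt(messages, prompt):
    """Replace (or insert) the system prompt at the head of the conversation."""
    rest = [m for m in messages if m["role"] != "system"]
    return [{"role": "system", "content": prompt}] + rest

messages = set_system_prompt([], "You are a friendly customer service agent, "
                                 "but you can't discuss pricing.")
messages.append({"role": "user", "content": "Hi, my order is late."})

# Later, after feedback from the user, tighten the guardrails:
messages = set_system_prompt(messages,
    "You are a friendly customer service agent. Do not discuss pricing "
    "or make delivery promises; offer to connect a human instead.")
```

The conversation history is preserved; only the system message at the head of the list changes between turns.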

    Generative AI for Reasoning – Agent Based Approach

    AI agent based approach
    Agents are AI steps that are capable of making decisions and reasoning based on context. Using an agent-based approach, there isn’t a strict scripted call flow that would need to be tracked and optimized. Instead, the system determines the flow and script of the conversation with the specific individual based on user attributes, inputs, and external factors. This allows the agent to take into account user history, previous goal attainment context, and other inputs to craft a bespoke journey for each user.

    “Swarms” of AI agents
    We have built an infrastructure that supports “swarms” of agents working together dynamically. As organizations scale to thousands of agents, measured outcomes, feedback loops, and a human-in-the-loop (HiTL) structure will ensure safe, reliable operations. This feedback loop is essential for gradually transitioning from HiTL to autonomous functionality, agent by agent.

    Leverage LLMs for context handling and intent extraction

    New intent discovery
    We offer native clustering algorithms via traditional NLP models as well as LLMs to discover new intents in live customer data. This enables customers to continually improve the completeness of the model’s intent architecture and extends the ability to automate additional use cases.

    Suggested training data
    Generate training data for rapid training and development of applications.

    Auto generated test script
    A typical conversational process may have thousands of permutations, all of which require thorough testing. Most solutions, however, are not entirely tested because of the large number of man-hours required. We have released autogenerated conversational simulation testing using generative AI, wherein one AI agent tests another LLM-enabled AI agent. This way, designers can see possible interaction paths and areas where the AI agent or its intents may fail, without having humans spend hundreds of hours testing every conversation variant.

    Intercept and manage LLM prompts and responses

    Intercept and adjust generative AI response before sending back to the user
    As mentioned, some generative AI models are trained on much of the internet. There are major risks in using something like this in enterprise solutions, and one way we mitigate that risk is by intercepting the response the model generates. We can then search the message for anything off-topic, vulgar, or otherwise inappropriate. We can also intercept it to trigger other actions and capture intents that require follow-up actions/automations.

    Intercept and adjust the user response before sending to generative AI model
    Similarly, we also intercept the message from the user before sending it to generative AI models, to redact PII, catch anything inappropriate, etc., and to capture intents that require follow-up actions the generative AI model can’t help with. In most cases this is necessary, and we strongly recommend it to our customers.
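A minimal sketch of both interception points, using regex-based redaction and a simple blocklist; a production deployment would layer NLP models and LLM-based checks on top. The pattern set and blocklist terms here are illustrative.

```python
import re

# Inbound: redact obvious PII before the utterance reaches the model.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text):
    """Replace each PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Outbound: suppress off-topic or inappropriate model responses.
BLOCKLIST = {"pricing", "refund policy"}  # illustrative off-topic terms

def screen_response(text):
    """Swap a blocked response for a safe handoff message."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "Let me connect you with a human agent for that."
    return text

safe_in = redact_pii("My email is jane@example.com and SSN 123-45-6789")
safe_out = screen_response("Our pricing starts at $99/month.")
```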

    Leverage LLMs and Generative AI for Content Generation

    Q&A content management
    Content and knowledge management is fully supported and includes the ability to go straight from utterance to knowledge, as well as addressing the ongoing workflows to manage knowledge. We recently deployed an approach for Q&A management that combines LLMs with Elasticsearch capability and places a heavy focus on knowledge workflow management. LLMs, external connectors to Q&A search tools, encoder NLU, and graph-based knowledge management are all part of the toolkit that customers can use to handle simple to complex management of knowledge. We encourage our customers to use LLM decoders in combination with a graph DB and taxonomy to create a hypergraph with knowledge domains that point to unstructured data documents.

    Drafting copy, documents, etc using LLMs
    We fully support the ability to draft, iterate on, and facilitate collaboration on content and documents. This includes the ability to test and optimize content, and to customize content based on an agent-based approach. As these artifacts are created, sub-content categories can be indexed and stored for future reference (think dynamic snippets of text and images that can be used to ensure consistency across mediums).

    Leverage LLMs and Generative AI for Agent Assist

    Sentiment analysis and emotion detection
    We provide built-in sentiment analysis and emotion detection information to live agents using a multitude of different technologies. This includes traditional NLP methods, along with LLMs.

    Passing information and engagement summaries to agents
    We provide any aspect of the conversation and/or prior customer records to better inform the agent. Information like conversation topic, issues involved, and customer status is often used, but you can go beyond that and add things like sentiment analysis or recommended next actions for the agent. We can pass all forms of data for screen pops, CTI, or any other functionality you may use on our standalone agent interface or your third-party system, via custom SIP headers, APIs, or more legacy methods of data transfer.

    Improving Agent performance
    Grammar correction, spelling correction, phrase suggestion, etc.

    Drafting optimal responses
    Agents are presented with one or more versions of a response that incorporate best practices and language that is statistically most likely to achieve the desired outcome.

    Compliance
    Since each interaction flows through our Communication Fabric, agent messages can be interrogated and evaluated for compliance purposes, which can lead to message suppression and alert management when an agent attempts to break protocol or commits a compliance violation.

    Management of agent workflow
    Agent assist doesn’t have to exist only within the context of a transaction. It can maintain relationship context and aid the agent beyond the conversation or transaction itself, helping with things like covering shifts, managing agent staff, and more.

    Other Generative AI Functionality

    Custom micromodels
    These are very small generative AI models trained on smaller, more specific, fine-tuned datasets to reduce latency and hallucinations.

    AI-enabled language translation
    We leverage language detection and machine translation for NLU on a user’s input for both voice and text, via our partner ecosystem and via LLMs and generative AI. Best practice is to have individual models for individual languages, but as a kickstarter this can be helpful, and OneReach.ai fully supports it for both voice and text.

    Out-of-the-box model usage
    Use ChatGPT Integrations and APIs to access and make use of ChatGPT as is.

    Conversational limitations
    A major differentiator in our approach to Generative AI is our work in ensuring that Steps that utilize LLMs have limitations on the type of conversation the AI agent can have with the user.

    Using generative AI to author cypher queries
    We have Steps in our platform to assist users in writing Cypher to assist with building customized Graph Databases.
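In spirit, such a step assembles a schema-aware prompt and hands it to the configured model. The prompt template and the `call_llm` placeholder below are illustrative, not the actual GSX step interface.

```python
# Build a prompt asking an LLM to author a Cypher query against a known
# graph schema. The model call itself is left as a placeholder.

def cypher_prompt(schema_description, request):
    """Assemble a schema-grounded prompt for Cypher generation."""
    return (
        "You are an expert in the Cypher query language.\n"
        f"Graph schema:\n{schema_description}\n"
        f"Write a single Cypher query that: {request}\n"
        "Return only the query, no explanation."
    )

prompt = cypher_prompt(
    "(:Customer)-[:OPENED]->(:Ticket {status, created_at})",
    "counts open tickets per customer, highest count first",
)
# query = call_llm(prompt)  # placeholder for the configured LLM step
```

Grounding the prompt in the actual graph schema keeps the generated Cypher aligned with the database rather than the model's guesses.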

    Customer Security with LLMs and Generative AI

    Prevent data ingestion
    We use security measures like redaction and masking, and we limit the collection of, and access to, sensitive data by LLMs. We support the ability to determine on a turn-by-turn basis which conversational data can be passed to an LLM.

    Private LLMs
    Locally-installed LLMs can be hosted by the customer or in our private cloud.

    Protection against bias
    Prompt chaining and using models that detect and/or correct bias can be used. We provide toolkits for these models, and also encourage the use of prompt engineering. There is no ‘silver bullet’ for unbiasing a model completely, so the best method is to use a secondary model to detect bias.

    Masking and suppressing PII
    We can redact PII in live conversations and transcripts using several methods, including NLP, regex, and LLMs or any combination thereof. The approach is determined on a case by case basis, informed by business, regulatory and security requirements. PII can also be masked on the backend (e.g. logs and reporting) and also on the front end when applicable (e.g. client-side webchat interfaces). We can mute and/or use tone overlays on voice recordings and their transcripts on a per utterance basis via our native voice stack.

    Security standards
    We work with customers with the strictest data security and compliance requirements including medical, insurance, and government organizations. We can configure the infrastructure/environment to match their specifications.

    GPT-4 Toolkit for Enterprise

    Facilitating the use of generative AI for enterprises
    Generative AI and LLMs have proven to be an important ingredient in conversation design, but they require an incredible amount of governance, management, and oversight to use in an enterprise deployment. Our strategy is to enable the use of these models in an appropriate way for real world deployments (human-in-the-loop, content moderation, tagging, caching, etc.).

    GPT-4 Toolkit with steps to optimize UX
    We have a 14-step generative AI toolkit which includes tone rephrase (the process of changing the tone or sentiment without changing its meaning), stay on subject (a rerouting of the conversation towards the goals and scope), and phrase to question (a tool to impact the tenor of the conversation). When combined with Human-in-the-Loop (including generative summarization) the user experience can be optimized.

    1. Entity Extraction
    2. Synonym phrase
    3. Data transformations
    4. Q&A
    5. Tone rephrase
    6. Confidence phrase
    7. Summarize
    8. Antonyms
    9. Embed Question
    10. Stay on Subject
    11. Phrase to question
    12. Sentiment
    13. Simple Query
    14. Analyze phone outcomes

    AI Agent Builder

    At OneReach.ai, we see agents as temporary functions, orchestrated to achieve specific objectives without persistent, predefined scripts. This design allows agents to interact and collaborate on the fly, where the system essentially writes and executes the necessary code for one-time execution, adapting to both the task and environment in real-time.

    Key Business Use Cases: AI Agent Builder

    24/7 Customer Service AI Agents

    Deploy AI agents in minutes across chat, voice, SMS, and web channels, automating customer inquiries, authentication, appointment management, and more. GSX excels here because of its robust orchestration, seamless integration with legacy and cloud systems, human-in-the-loop capabilities, and the ability to maintain context and conversational memory across all channels.

    Service Desk and Employee Support

    Build intelligent agents that handle IT helpdesk, HR requests, and internal operations entirely through self-service, automating common workflows such as password resets, device provisioning, policy queries, and ticket triaging. GSX is natively designed for complex, multi-agent workflows and granular enterprise governance, allowing enterprises to rapidly roll out secure, scalable automations.

    Enterprise-Grade Multi-Agent System

    GSX provides the agent runtime environment needed to build and manage hundreds of AI agents. With GSX Agent Builder you can build once and deploy everywhere, perform rigorous testing right from the Agent UI, and set up feedback loops for monitoring and improvement.

    Types of AI agents supported

    Fully autonomous AI agents
    After rigorous training on industry-specific data and scenarios, agents are capable of executing tasks such as fraud detection, personalized recommendations, or ticket creation without human intervention. This autonomy has become a common solution for our customers, as it streamlines operations, reduces costs, and enhances efficiency across various business functions.

    Semi-autonomous AI agents / Human-in-the-Loop
    Agents can be designed to work hand-in-hand with human agents. Here are some common examples we’ve seen from our customers:

    • Real-Time Assistance with Industry Insights: Our platform provides industry-specific recommendations based on real-time analysis. For instance, during a sales call, agents might be notified that mentioning a free return policy within the next 30 seconds can boost conversion likelihood by 22%.
    • Enhanced Contextual Awareness: Human agents gain access to conversational history, customer context, sentiment analysis, and other relevant insights, enabling more informed interactions.
    • Auto-Generated Response Suggestions: Based on conversation context, the AI agent generates response suggestions to streamline interactions and improve efficiency.
    • Whisper Agent and Custom Widgets: GSX includes a configurable “whisper agent” to assist human agents in real-time and allows for widget use (e.g., data collection) within the interface, providing a cohesive, bot-like functionality for seamless information gathering.
    • Human-in-the-Loop (HITL): HITL enables agents to monitor conversations and receive automatic prompts if AI assistance is needed to clarify customer intent. If, for instance, an NLU confidence score drops below a set threshold, agents can clarify intent or take over the conversation temporarily.

    Autonomous with partnership of other AI agents
    GSX is a multi-agent system designed for swarms of AI agents working together. We believe the market is ultimately headed towards this type of “agent ecosystem”, where agents can lean on one another for the answers and knowledge needed for goal completion. Our vision is to create a composable, flexible platform for agents, capable of supporting two-way interaction and constant adaptation, creating an intelligent and responsive multi-agent ecosystem. This architecture will enable businesses to leverage agents in increasingly complex, goal-oriented, and context-aware ways, gradually moving towards a fully autonomous, self-sustaining cognitive environment.

    Build AI Agents in Minutes

    Define agent objectives
    The Objective parameter is crucial for guiding the agent’s behavior and decision-making process. It defines the purpose or goal that the agent aims to achieve. An objective should clearly answer the question “What should be done?” and not “How should it be done?” This helps ensure that the agent’s interactions are focused and relevant to the goal.

    Define agent actions
    Actions are specific operations that the agent can execute to achieve its objectives. They serve as a bridge between the AI component and the software component of the application. Each action is described in an imperative form to outline its purpose.

    An action includes:

    • Description: A clear, imperative statement of what the action does.
    • Inputs Schema: Defines the necessary inputs for the action, aiding the agent in understanding what entities it needs to extract from user inputs.
    • Outputs Schema: Specifies what data should be expected as the result of the action.
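An action along these lines can be described with JSON-Schema-style input and output definitions. The field names below are illustrative, not the exact GSX schema.

```python
# Sketch of an action definition: an imperative description plus
# input/output schemas the agent uses to know which entities to extract
# and what the result will look like.

reset_password_action = {
    "description": "Reset the user's account password and email a reset link.",
    "inputs_schema": {
        "type": "object",
        "properties": {
            "username": {"type": "string"},
            "account_type": {"type": "string", "enum": ["employee", "customer"]},
        },
        "required": ["username"],
    },
    "outputs_schema": {
        "type": "object",
        "properties": {
            "reset_link_sent": {"type": "boolean"},
            "expires_at": {"type": "string", "format": "date-time"},
        },
    },
}
```

The inputs schema tells the agent it must obtain a `username` from the conversation before the action can run; the outputs schema tells it what the software side will hand back.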

    Define agent knowledge
    Connect any Knowledge Base to your AI agent. Knowledge bases are collections of content that you can upload to your GSX account. This step allows you to create parameters and guardrails so that your AI agent provides accurate and relevant answers.

    To learn more about Knowledge bases, visit the Lookup service in OneReach.ai. There you can create a new collection and add notebooks to it by uploading content from a URL or your computer, importing data from Notion, etc.

    WebSearch for AI agents
    The WebSearch feature enables the agent to search for and retrieve information directly from the web. This tool is designed to provide up-to-date, accurate, and contextually relevant information for queries requiring real-time data or highly specific details not included in the agent’s pre-trained knowledge base.

    Define generative models for AI agents
    In the Model Settings section, you can select and configure the language model that powers your agent’s capabilities:

    • Provider: Choose the AI model provider (e.g., OpenAI).
    • Model: Select the specific language model to use (e.g., GPT-4o-mini).
    • Temperature: Adjust the creativity level of the model’s responses by setting the temperature. Lower values make output more focused, while higher values increase variability.
    • Max Token Length: Determine the maximum length of the response in tokens.
    • Frequency Penalty: Control the likelihood of repeating the same phrases. Higher values reduce repetition.
    • Presence Penalty: Influence the model’s willingness to introduce new topics. Higher values make it more likely to move on to topics it hasn’t mentioned yet.
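Collected as code, those settings map onto the sampling parameters most chat-completion APIs expose. The names below follow the common API convention; the GSX UI labels may differ slightly, and the values are illustrative defaults.

```python
# Model settings expressed as the parameter dict a typical
# chat-completion API would accept.
model_settings = {
    "provider": "OpenAI",        # which vendor hosts the model
    "model": "gpt-4o-mini",      # specific language model to use
    "temperature": 0.3,          # low = focused, high = more varied
    "max_tokens": 512,           # cap on response length in tokens
    "frequency_penalty": 0.5,    # > 0 discourages repeating phrases
    "presence_penalty": 0.0,     # > 0 nudges the model toward new topics
}
```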

    Adjust AI agent settings
    In the Agent Settings section, you can customize the core operating parameters of your agent:

    • Max Retries: Set the maximum number of retry attempts if an action fails.
    • Max Iterations: Define the maximum number of iterations for a conversation thread before termination.
    • Agent History Length: Specify how much of the conversation history the agent should consider when making decisions.
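A sketch of how those limits bound the agent's run loop; `try_action` stands in for whatever action the agent selects, and the escalation string is illustrative.

```python
# Minimal run loop honoring Max Retries and Max Iterations.

def run_agent(try_action, max_retries=3, max_iterations=10):
    transcript = []
    for _ in range(max_iterations):          # hard cap on thread length
        for _attempt in range(max_retries):  # retry a failing action
            result = try_action()
            if result is not None:
                break
        else:
            result = "escalate-to-human"     # all retries exhausted
        transcript.append(result)
        if result == "done":
            break
    return transcript

# A stub action that fails twice, then succeeds on the third attempt.
attempts = iter([None, None, "done"])
transcript = run_agent(lambda: next(attempts), max_retries=3, max_iterations=5)
```

The two limits fail safely in different directions: retries absorb transient action failures, while the iteration cap prevents a thread from looping forever.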

    NLU, Language Tools, Deterministic Methods

    Key Business Use Cases: NLU, Language Tools, Deterministic Methods

    Future-proofed AI Strategy via GSX Cognitive Architecture

    GSX’s open, modular Cognitive Architecture allows you to connect with any LLM or NLU tool on the market. As the marketplace evolves, you can always use the latest and greatest tools.

    Operate in Any Language

    GSX supports 160+ languages and provides tools for dialects and ontologies (word and phrase sets) specific to your industry, region, or company.

    Improve Quickly with Supervision and Feedback Loops

    Generative Studio X provides manual and automated learning loops to improve AI agent performance, accuracy, and task completion.

    General NLU and Language Tools

    Deterministic and probabilistic capabilities
    GSX combines deterministic and probabilistic capabilities, allowing builders to invoke the appropriate systems based on the desired outcome and tolerance for generated content. For example, our agents can use tools, but they must be given explicit permission to do so, which helps with governance and safety.

    Native NLU capabilities
    We have our own first-party NLU capabilities, built on our own stack, that allow customers to use the best-performing engine on a case-by-case basis. We often recommend using our first-party NLU to lower costs on high-volume NLP use cases, such as transcription insights (conversation summary, sentiment, assisted annotation, etc.).

    Native NLU amalgamation engine
    OneReach.ai has its own proprietary NLU amalgamation engine that uses multiple provider solutions at the same time, including our own, and automatically selects the best performing engine to answer each client question. This ensures that our clients always have the highest confidence scores for cognitive performance and do not need to rely on a single vendor in any situation.
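In spirit, the selection works like the sketch below: query every engine, compare confidence, keep the best result. The engines are stubbed with canned results for illustration.

```python
# Stubbed NLU engines, each returning (intent, confidence) for an utterance.
def engine_a(utterance):
    return ("billing_question", 0.62)

def engine_b(utterance):
    return ("billing_question", 0.91)

def best_intent(utterance, engines):
    """Query every engine and keep the highest-confidence result."""
    results = [engine(utterance) for engine in engines]
    return max(results, key=lambda r: r[1])

intent, confidence = best_intent("why was I charged twice?", [engine_a, engine_b])
```

Because selection happens per question, no single vendor has to win every case; the amalgamation layer simply routes each utterance to whichever engine understood it best.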

    Automatic disambiguation
    OneReach.ai offers many methods of disambiguation with granular controls. For simple disambiguation between two or more high-confidence intents, OneReach.ai automatically asks clarifying questions to determine the user’s goal. We use every disambiguated response collected to train our model automatically, improving its NLU capabilities. This capability extends to entity clarification as well, to ensure we fully confirm our understanding of a user’s needs.

    Multiple-intent matching
    OneReach.ai’s first-party NLU engine supports the detection of as many intents as exist in an utterance. We then allow the designer to determine how many intents to acknowledge at once and how to coordinate the task management that results from these intents. This can be done with our no-code builder, and therefore without engineers or NLU experts. It’s also important to note that these controls are granular, based on individual scenarios, and are not universal rules.

    Antagonistic training
    Use multiple NLU engines to train each other based on the highest percentage answer.

    Knowledge model training
    Educate your AI agent with relevant information. Create phrase/response pairs that allow your solution to understand your users’ needs. You can train with a variety of response types, such as text, JSON, or a Flow.

    Suggested and unrecognized handling
    Set up your flow to train and update your model easily based on confidence levels gathered from interactions with the user.

    Confidence scores
    Your AI agent reports how certain it is of its understanding. This allows the designer to create logic that lets the AI agent answer when confidence meets or exceeds a set percentage; otherwise, it forwards the request to a human to ensure the highest confidence in the answers.
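The routing logic is essentially a threshold check; a minimal sketch (the threshold value is illustrative and would be set per use case):

```python
def route(intent, confidence, threshold=0.75):
    """Answer automatically above the threshold; otherwise hand off."""
    if confidence >= threshold:
        return ("answer", intent)
    return ("handoff-to-human", intent)

decision = route("reset_password", 0.40)  # low confidence: goes to a human
```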

    Custom model creation
    Connect to an existing model that has already been created

    Leverage other knowledge models
    Use any models you have created using NLU engines like MS LUIS, Dialog Flow, AWS Lex, RASA, and others.

    By choosing OneReach.ai you are choosing the market for your NLU capabilities, future-proofing your organization against a rapidly evolving NLU marketplace. OneReach.ai’s truly open architecture enables seamless integration with any third-party or custom NLU engine, allowing our customers to avoid vendor lock-in. We can connect to these systems through our native amalgamation engine or via direct connections.

    Import knowledge model
    Upload massive amounts of information to educate your AI agent. Upload a CSV or JSON file of an existing model that will populate phrases, responses, and context into a new knowledge model, which you can then use in your solution.
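Parsing such a file into phrase/response pairs can be sketched as follows; the column names are illustrative, not the required GSX import format.

```python
import csv, io

# Illustrative CSV with phrase, response, and context columns.
raw = """phrase,response,context
"what are your hours","We're open 9-5 weekdays.",hours
"when do you open","We're open 9-5 weekdays.",hours
"""

def load_knowledge_model(csv_text):
    """Parse CSV text into (phrase, response, context) tuples."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["phrase"], row["response"], row["context"]) for row in reader]

pairs = load_knowledge_model(raw)
```

Note how several phrasings map to one response; the import preserves that many-to-one structure so the model learns synonyms of the same question.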

    Build custom ontologies
    We have built-in Graph DBs with the ability to create custom schemas or use library schemas to kick-start your ontology. Even though companies may primarily use certain languages when communicating with customers and employees, every company has its own company lexicon (i.e. product names, acronyms, etc.). Our system allows users to extend schemas to include custom ontologies. This is important especially because for most companies, this is how their employees and customers relate to information (referring to products, features, locations, or processes common to the company).

    Graph DB: Native database for ontologies
    OneReach.ai utilizes Graph Databases to store, organize and manage ontologies (for example, you can use a graph database to make connections between different internal operations, Q&A pairs, customers and their preferences/history, open tickets, etc.). These are incredibly powerful tools for tracking data and predicting trends or behavior across the customer journey, uncovering meaningful relationships within customer data that previously would have been impossible to discover.

    NLU engine customization
    We have several options available for optimizing NLU engines. All customizations are extremely granular, down to individual intents, entities, and even the dialog step. All of these options can be customized by our customers without coding.

    • Model Feature Configuration: No Code, Customer Accessible
    • NLU Patterns: No Code, Customer Accessible
    • Entity Groupings / Roles: No Code, Customer Accessible
    • Customizable Entities: No Code, Customer Accessible
    • Flow Entities: No Code, Customer Accessible
    • LLM and Generative AI Utterance Suggestions (synthetic data): No Code, Customer Accessible
    • Suggested Utterances (from live data): No Code, Customer Accessible
    • Clustered Unrecognized Utterances (from live data): No Code, Customer Accessible

    Entity Features

    Multi-language entity support
    Support for capturing data (entities) within an end-user utterance across various languages. Create multilingual AI agent solutions and capture the required data from an end user’s utterance in multiple languages. This allows for automating an end-user task based on captured data.

    Map entities to parameters
    Capture entities from an end user utterance which then can be used to automate a task. Fully automate valuable tasks by capturing end user intent and data (entities) required to perform a task such as resetting passwords, transferring funds between accounts, etc.

    Multiple mechanisms for entity extraction
    OneReach.ai allows you to use flow entities, which are created using a combination of technologies. For example, you don’t have to choose between machine-learned, regex, large language model, or grammar-based entities. OneReach.ai enables a combination of all of these capabilities to produce the most accurate entity extraction approach for each unique use case. For example, when extracting phone numbers, OneReach.ai could use regex techniques to extract the number, machine learning techniques to locate the number in context, and check these extractions against a table of accepted numbers for verification, all within one flow entity. We can also use generative AI models to look out for specific entities.
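The phone-number example can be sketched as two of those stages combined: regex finds candidates, then a lookup table verifies them. The pattern and the accepted-number table are illustrative.

```python
import re

# Stage 1: regex finds candidate phone numbers in the utterance.
PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

# Stage 2: verification against numbers on file (illustrative table).
ACCEPTED = {"303-555-0142"}

def extract_verified_phone(utterance):
    """Regex extraction followed by verification against known numbers."""
    for candidate in PHONE.findall(utterance):
        if candidate in ACCEPTED:
            return candidate
    return None

match = extract_verified_phone("call me back at 303-555-0142 after 5pm")
```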

    Entity extraction
    Gather periphery information based on a user’s input. You can improve the performance of your model by identifying relevant words or phrases as entities in your context. You can choose from different types of entities like machine learning, list, RegExp, Pattern or Prebuilt.

    Here is a full list of the entity extractions commonly used on our platform:

    • Machine Learning Entities: Custom entities built with unstructured conversational data
    • List Entities: Custom entities built from a simple list of terms and bolstered by a custom feature model
    • Regex Entities: Custom entities built from regex. We have a library of prebuilt regex patterns for common use cases like passwords
    • Pattern Entities: Custom entities built from common language patterns
    • Entity extraction using Generative AI Models: Models can be trained to look out for and collect specific entities. These are used for entities that traditional methods extract inaccurately (e.g. names, emails, etc.). This is very quick to build and can be powerful when combined with traditional entity extraction methods.
    • Prebuilt Entities: Utilize OneReach.ai native or external prebuilt entities from any vendor (Lex, LUIS, Dialogflow, etc). You can use any combination of vendor entities in a single conversation, giving extreme flexibility and avoiding vendor lock in.
    • Hierarchical Relationships: We can create sophisticated role based entities with our annotation GUI and even create hierarchical entity structures with parent child relationships that can extend to infinite levels of depth
    • Flow Entities: Custom entities built with OneReach.ai’s flow builder that can take into account formal logic, context, and even combine multiple pre-built, machine learned, regex, list, and pattern entities. These flow built entities have unlimited potential for customization as they have all the functionality of OneReach.ai’s 600+ function library and are accessible for granular access at the source code.
    • Entity Marketplace: As additional entity models are created, we include them in our marketplace, so clients can easily leverage them for their solutions, further future proofing your NLU capabilities.

    Create synonym questions
    Mimic natural language by asking the same question in different ways.

    Linguistics Features

    Spelling check
    Inspect end-user responses and correct misspellings so the user’s intended meaning is preserved, improving the user experience.

    Autogenerate disambiguation questions
    If your AI agent is confused, it can generate questions to gain clarity and move forward. For low-confidence responses, the AI agent can ask follow-up questions of the user to improve the confidence level and then auto-train based on the end-user’s acknowledgments.
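    A minimal sketch of this low-confidence flow, with a hypothetical `disambiguate` helper and confidence threshold (not the platform's actual API):

```python
CONFIDENCE_THRESHOLD = 0.6  # illustrative value, not a platform default

def disambiguate(ranked_intents):
    """Return a clarifying question when the top intent is low-confidence.

    ranked_intents: list of (intent_name, confidence) sorted best-first.
    """
    if not ranked_intents:
        return "Sorry, could you rephrase that?"
    top_intent, score = ranked_intents[0]
    if score >= CONFIDENCE_THRESHOLD:
        return None  # confident enough; no follow-up needed
    # Offer the most likely intents back to the user for confirmation.
    options = [name.replace("_", " ") for name, _ in ranked_intents[:2]]
    if len(options) == 1:
        return f"Just to confirm, did you want to {options[0]}?"
    return f"Did you want to {options[0]} or {options[1]}?"
```

    The user's answer can then be fed back as a labeled example for auto-training, as described above.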

    Generate text
    Generate training examples that extend the NLU model. Greatly improve training datasets by generating many examples from a few manually entered utterances.
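    Since the actual generation runs through an LLM, a sketch can only show the scaffolding; below is a hypothetical prompt builder (not the platform's API) for expanding a few seed utterances into a larger training set:

```python
def build_augmentation_prompt(intent, seeds, n=10):
    """Compose a prompt asking a generative model to expand seed utterances.

    The returned string would be sent to whichever LLM is configured.
    """
    examples = "\n".join(f"- {s}" for s in seeds)
    return (
        f"Generate {n} new training utterances for the intent '{intent}'.\n"
        f"Vary wording and formality but keep the meaning of these examples:\n"
        f"{examples}"
    )
```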

    AI enabled language translation
    We leverage language detection and machine translation for NLU on a user’s input, for both voice and text, via our partner ecosystem and via autoregressive language models (generative models such as GPT-4). Best practice is to maintain individual models for individual languages, but translation is a helpful starting point, and OneReach.ai fully supports it for both voice and text.
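    The detect-then-translate flow can be sketched as follows, with hypothetical detector and translator callables standing in for real vendor services:

```python
def detect_language(text, detectors):
    """Poll registered detectors; return the first confident answer."""
    for detect in detectors:
        lang, confidence = detect(text)
        if confidence >= 0.8:  # illustrative confidence cutoff
            return lang
    return "und"  # undetermined

def normalize_to_english(text, detectors, translate):
    """Detect the language, then translate non-English input to English."""
    lang = detect_language(text, detectors)
    if lang in ("en", "und"):
        return text, lang
    return translate(text, source=lang, target="en"), lang
```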

    Machine Learning

    Cognitive Architecture
    Not all cognitive services are equal, and the pace of change is so fast that betting on a single vendor guarantees suboptimal performance. Our platform provides a Cognitive Architecture: the ability to amalgamate language services (e.g., NLU, TTS, ASR, localization) with other cognitive services (e.g., computer vision and generative AI). A cognitive architecture lets you add vendors, manage cognitive services, and use them in combination in one place, freeing designers to create user experiences previously out of reach.

    Since the conversational logic, integration engine, channel connectors, and knowledge management are fully decoupled from any single ML provider, our customers benefit from a future-proofed architecture that always lets them plug in the best ML capabilities.
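    This decoupling can be illustrated with a minimal broker sketch (hypothetical interfaces, not the platform internals): flows call a broker, and vendor adapters plug in behind a common interface.

```python
class NLUProvider:
    """Common interface every vendor adapter implements."""
    def classify(self, utterance):
        raise NotImplementedError

class CognitiveBroker:
    """Routes NLU calls to whichever registered vendor a flow selects."""
    def __init__(self):
        self._providers = {}

    def register(self, name, provider):
        self._providers[name] = provider

    def classify(self, utterance, provider):
        # Flows call the broker, never a vendor SDK directly, so vendors
        # can be swapped without touching conversational logic.
        return self._providers[provider].classify(utterance)
```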

    Collect unrecognized utterances for training
    We collect any utterances that prompt disambiguation techniques in a simple GUI for further review. We have complete control over choosing an unsupervised or supervised learning approach to utilize this data for retraining our NLU models. OneReach.ai’s unsupervised learning algorithms automatically create suggested training data based on live conversations, vastly accelerating AI agent performance. They also identify unrecognized user phrases and can proactively alert business users, in a channel of their choosing, to potential additional use cases.
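    A minimal sketch of such a review queue (hypothetical structure, not the platform GUI):

```python
class ReviewQueue:
    """Collects low-confidence utterances for human review and retraining."""
    def __init__(self, threshold=0.5):  # illustrative threshold
        self.threshold = threshold
        self.pending = []

    def record(self, utterance, confidence):
        """Queue only utterances the NLU model was unsure about."""
        if confidence < self.threshold:
            self.pending.append(utterance)

    def export_for_training(self):
        """Drain the queue into a batch for supervised or unsupervised retraining."""
        batch, self.pending = self.pending, []
        return batch
```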

    Reinforcement learning
    We also support the continuous training of models via reinforcement learning through simple methods such as a thumbs up or thumbs down from users, live agents grading suggested responses from our models, dynamic A/B testing that automatically reinforces a model’s behavior based on certain success criteria, etc.
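    The thumbs-up/thumbs-down mechanism can be sketched as a simple feedback-weighted selector (hypothetical, not the platform's training pipeline):

```python
class FeedbackSelector:
    """Picks the response variant with the best thumbs-up ratio."""
    def __init__(self, variants):
        # Start counts at 1/1 (Laplace smoothing) so unrated variants
        # are not ruled out before they collect any feedback.
        self.stats = {v: {"up": 1, "down": 1} for v in variants}

    def record(self, variant, thumbs_up):
        """Reinforce or penalize a variant based on one user's rating."""
        self.stats[variant]["up" if thumbs_up else "down"] += 1

    def best(self):
        """Return the variant with the highest approval ratio."""
        def ratio(v):
            s = self.stats[v]
            return s["up"] / (s["up"] + s["down"])
        return max(self.stats, key=ratio)
```

    The same accumulate-and-prefer pattern generalizes to agent-graded responses and success-criteria-driven A/B tests.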

    Cognitive Broker: Other Cognitive Services (Computer Vision, Sentiment Analysis, etc.)

    Sentiment analysis
    We fully support sentiment analysis; since we have access to multiple sentiment analysis engines, you will always have access to the best-performing options. These can be combined with contextual awareness to further improve the analysis (e.g., knowing that someone has called in three times today to ask a question about their bill likely means they are not highly satisfied). We have also created a custom sentiment analysis model with generative AI engines.
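    Blending an engine's score with context can be sketched as follows (hypothetical weights, not the platform's model); the repeat-caller example above maps to a per-contact penalty:

```python
def contextual_sentiment(engine_score, contacts_today):
    """Blend an engine score in [-1, 1] with a repeat-contact penalty.

    Each contact beyond the first nudges the score toward negative,
    since repeated contacts about the same issue suggest frustration.
    The 0.15 weight is illustrative only.
    """
    penalty = 0.15 * max(contacts_today - 1, 0)
    return max(-1.0, min(1.0, engine_score - penalty))
```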

    Tonal analysis, demographic matching, voice stress analysis, etc.
    As with sentiment analysis, we can support any additional analysis engines and amalgamate multiple of them.

    Computer vision
    Derive information quickly and accurately from files (PDFs, PNGs, JPEGs, etc.), images, and other media.

    OneReach.ai enables users to engage in dialogues and ask questions about images based on visual content through its advanced computer vision capabilities. The platform can analyze images and extract relevant information from them. This allows users to have natural language conversations and ask questions about the content of the images. For example, users can ask questions like “What size is that one?” or “Is that the nearest location to me?”. Using computer vision partnerships, our platform can then process the image, understand the content, and provide accurate and contextual answers. This feature is particularly useful in scenarios where visual information is important, such as e-commerce, customer support, or content moderation.
