    Hosting and Runtime

    Key Business Use Cases: Hosting and Runtime

    Regulatory compliance and data residency control

    Enterprises in highly regulated industries such as healthcare, financial services, and government can enforce compliance with regional and global data governance rules by controlling where their data resides and how long it is retained.

    Enterprise-grade reliability with failover and disaster recovery

    Global businesses that operate across multiple time zones and regions can use GSX’s containerized architecture for uninterrupted operations. Enterprises can set up redundant environments in different regions and automatically shift traffic during outages.

    Federated multitenant identity and access management

    Large organizations with multiple business units or teams can utilize PDEs to support a federated but well-integrated architecture under a single umbrella. Each business unit can have its own accounts and environments (Dev, QA, Staging, Production) while maintaining enterprise-wide governance and security policies.

    Private Dedicated Environments

    Containerized architecture gives each OneReach.ai customer granular control over infrastructure-level considerations such as data residency (geographic locations), data retention, access controls, and availability (DR and Failover).

    Private Dedicated Environment (PDE)
    The OneReach.ai platform is implemented on top of cloud infrastructure and uses a unique blend of containerized and virtualized services to provide a highly flexible, scalable, and redundant AI agent platform. This architecture allows OneReach.ai to provide each customer with one or more Private Dedicated Environments (PDEs): unique instances of our platform in geographic regions specific to that customer. Each PDE is multitenant, meaning a single enterprise can run multiple accounts within it, which is how we offer well-integrated solutions that are federated across different teams and business units.

    Separate environments for dev, QA, staging and production
    This separation allows us to provide distinct environments for Dev, QA, Staging, and Production. It also means customers can run fully redundant environments in separate regions and either load-balance solution traffic between them or designate one region as primary and the other as failover to ensure solution availability.
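
    The primary/failover pattern described above can be sketched in a few lines. The region names, health-check shape, and routing function here are illustrative assumptions, not OneReach.ai APIs:

```typescript
// Hypothetical sketch: route traffic to a primary PDE region unless its
// health check fails, then shift to the failover region.
type Region = { name: string; healthy: () => boolean };

function pickRegion(primary: Region, failover: Region): string {
  // Prefer the primary region; shift traffic only when it reports unhealthy.
  return primary.healthy() ? primary.name : failover.name;
}

const usEast: Region = { name: "us-east-1", healthy: () => true };
const euWest: Region = { name: "eu-west-1", healthy: () => true };
// With both regions healthy, traffic stays on the primary region.
```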

    Full control over infrastructure and publishing
    Each customer has a dedicated environment, and therefore full control over their infrastructure.

    Extensible Cognitive Architecture

    Unlike traditional computing systems, an extensible cognitive architecture for conversational AI isn’t confined to a single physical case or motherboard.

    Elastically scaled servers
    The architectures created on the OneReach.ai platform consist of a network of elastically scaled servers. These servers are flexible and capable of performing a multitude of tasks, reflecting the diverse and dynamic requirements of conversational AI applications.

    Network of microservices
    A network of microservices is linked to various resources: databases for storage and retrieval, file storage systems, networking components for communication, and in-memory databases like Redis, which serve a purpose similar to RAM in a traditional setup.

    Create swarms of AI agents as needed
    The elastic scalability of this architecture allows it to expand or contract resources as needed to accommodate varying loads and tasks. AI agents can also be orchestrated to work as a swarm, where different skills are activated and work in tandem as needed. This swarming activity can generate and require access to a large volume of files, requiring a scalable storage solution to accommodate growing data needs without compromising performance.

    Serverless

    Each experience built is a standalone application and microservice deployed to your PDE. These can be expressed as HTTPS endpoints, enabling you to build your own APIs.

    Serverless architecture
    What you build on OneReach.ai is output as standalone applications and microservices that are deployed into your Private Dedicated Environments hosted within AWS. Our services make heavy use of containers, Kubernetes, and serverless technologies.

    Microservice architecture and event based model
    Every flow built in our platform is a microservice and is triggered by the centralized event-based system. This means any flow (and step within a flow) can subscribe to events, emit events, and those events can then trigger other flows to run.
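
    The subscribe/emit pattern described above can be sketched minimally. The `EventBus` class and event names are illustrative assumptions, not the platform’s actual API:

```typescript
// Minimal publish/subscribe sketch: flows subscribe to named events, and
// emitting an event runs every subscribed flow, which may in turn emit
// further events, chaining automations together.
type Flow = (payload: unknown) => void;

class EventBus {
  private subs = new Map<string, Flow[]>();

  subscribe(event: string, flow: Flow): void {
    const flows = this.subs.get(event) ?? [];
    flows.push(flow);
    this.subs.set(event, flows);
  }

  emit(event: string, payload: unknown): void {
    // Notify every flow subscribed to this event.
    for (const flow of this.subs.get(event) ?? []) flow(payload);
  }
}
```

    For example, a flow subscribed to a hypothetical "form.completed" event would run each time any other flow emits that event.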

    Event-based architecture
    Because of the event-based architecture, you can create flows that handle complex schedules: automations for specific dates, such as the third Wednesday of the month, or for specific scenarios, such as sending an email three days after a form is completed if the user has visited the website more than twice that week.
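
    The "third Wednesday of the month" date from the example above can be computed with plain date arithmetic. This standalone sketch is illustrative, not platform code:

```typescript
// Compute the third Wednesday of a given month using the standard Date
// API; getDay() returns 3 for Wednesday.
function thirdWednesday(year: number, monthIndex: number): Date {
  const first = new Date(year, monthIndex, 1);
  // Days from the 1st to the first Wednesday (0 if the month starts on one).
  const offset = (3 - first.getDay() + 7) % 7;
  // First Wednesday + 14 days = third Wednesday.
  return new Date(year, monthIndex, 1 + offset + 14);
}

// thirdWednesday(2024, 0) → January 17, 2024
```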

    Composability
    Using a microservices, event-based approach with autoscaling managed through tools like Terraform, every aspect of the platform autoscales. When a customer deploys a new skill, there is no need to estimate traffic or involve IT to provision services: the platform is fully elastic. The only consideration is the soft limits put in place to guard against excessive load, intentional or unintentional. We’ve implemented very granular controls that let customers manage soft caps, down to flow concurrency and read/writes. Out of the box, if not otherwise configured, the entire solution automatically scales to hundreds of requests per second and 2,000 concurrent calls (both are soft limits that can be raised on demand).
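
    A soft concurrency cap like the one described can be sketched as a simple in-flight counter. The `SoftCap` class and its limit are assumptions for illustration, not the platform’s implementation:

```typescript
// Illustrative soft concurrency cap: requests beyond the configured limit
// are rejected until in-flight work completes and releases a slot.
class SoftCap {
  private inFlight = 0;
  constructor(private limit: number) {}

  tryAcquire(): boolean {
    if (this.inFlight >= this.limit) return false; // over the soft cap
    this.inFlight++;
    return true;
  }

  release(): void {
    this.inFlight = Math.max(0, this.inFlight - 1);
  }
}
```

    Raising a soft limit then amounts to raising `limit`, with no re-provisioning of infrastructure.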

    Build your own APIs
    Each Flow can be expressed as an HTTPS endpoint, enabling you to build your own APIs.
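
    Expressing a flow as an HTTP endpoint amounts to mapping a route to a handler. The route, request/response types, and handler below are hypothetical illustrations, not the platform’s API:

```typescript
// Sketch: a flow is just a request handler, so exposing it as an API
// means registering it under a route.
type FlowRequest = { path: string; body: Record<string, unknown> };
type FlowResponse = { status: number; body: unknown };

const routes = new Map<string, (req: FlowRequest) => FlowResponse>();

// A hypothetical flow registered as an endpoint:
routes.set("/flows/order-status", (req) => ({
  status: 200,
  body: { orderId: req.body.orderId, state: "shipped" },
}));

function handle(req: FlowRequest): FlowResponse {
  const flow = routes.get(req.path);
  return flow ? flow(req) : { status: 404, body: "no such flow" };
}
```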

    High Availability, Redundancy, & Scalability

    PDEs can be hosted in one or more AWS regions and configured for the highest levels of availability. Scalability is limited only by your imagination and the limits of AWS.

    High availability
    Each PDE has its own SLA, so the availability of a solution depends on how many PDEs it utilizes.
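
    Assuming PDEs fail independently, the availability gained from redundancy can be quantified; the 99.9% figure below is an assumed example, not an actual SLA:

```typescript
// Illustrative math: if each PDE is independently available a fraction
// `perPde` of the time, n redundant PDEs are all down simultaneously
// with probability (1 - perPde)^n.
function combinedAvailability(perPde: number, count: number): number {
  return 1 - Math.pow(1 - perPde, count);
}

// One PDE at 99.9% stays at 99.9%; two independent PDEs reach 99.9999%.
```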

    Multi-version publishing control
    OneReach.ai is unique in its ability to publish new iterations of experiences with no interruption while production traffic is running. This enables far more frequent iteration on live AI agents, which is crucial for fine-tuning the user experience (see Private Dedicated Environment above).

    RESTful Architecture

    Every flow built in our platform is a microservice triggered by a centralized event-based system. Any flow (or any step within a flow) can subscribe to events and emit events. When an event occurs (or is published) all the agents or skills subscribed to that event are notified and can then act upon it. Event-based models allow IDWs to be reactive. They remain idle or in a low-resource state until an event they are subscribed to is triggered. This leads to efficient resource utilization, as IDWs are activated only when needed.

    Embed API calls into workflows
    Include integrations with external systems, enterprise-wide systems, and more in any workflow built on our platform. With deep integrations, the platform can listen and respond to thousands of different events that can alter and trigger automations.
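
    A workflow step that calls an external system and branches on the result might look like this sketch; the CRM client shape and queue names are hypothetical, and the client is injected so the step stays testable:

```typescript
// Sketch of embedding an external API call as a workflow step: look up a
// customer in a (hypothetical) CRM and choose which automation runs next.
type CrmClient = { lookupCustomer: (id: string) => { tier: string } };

function routeByTier(client: CrmClient, customerId: string): string {
  const customer = client.lookupCustomer(customerId);
  // Branch on the external system's response.
  return customer.tier === "vip" ? "priority-queue" : "standard-queue";
}
```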
