Bit Cloud announced the general availability of Hope AI, its new AI-powered development agent that enables professional developers and organizations to build, share, deploy, and maintain complex applications using natural language prompts, specifications, and design files. Hope AI takes AI-driven development further, beyond basic websites or application prototypes. It designs complete system architectures, assembles reusable software components, and generates scalable, production-ready applications — from CRM systems to e-commerce platforms to healthcare surgery room management systems — dramatically reducing both time to market and maintenance costs. Hope AI functions as an intelligent software architect, leveraging existing, proven components to compose professional and practical software solutions, enforcing consistency and simplifying long-term maintainability. Bit’s solution turns components into reusable digital assets, so teams don’t need to rebuild functionality from scratch every time. Key innovations of Hope AI include Natural Language to Professional Code, Composable Solutions, Team Collaboration, and DevOps Integration.
EnCharge AI’s accelerator uses precise and scalable analog in-memory computing to deliver 200+ TOPS of total compute power for on-device computing with up to ~20x better performance per watt across various AI workloads
EnCharge AI announced the EnCharge EN100, the industry’s first AI accelerator built on precise and scalable analog in-memory computing. Designed to bring advanced AI capabilities to laptops, workstations, and edge devices, EN100 leverages transformational efficiency to deliver 200+ TOPS of total compute power within the power constraints of edge and client platforms such as laptops. By fundamentally reshaping where AI inference happens, developers can now deploy sophisticated, secure, personalized applications locally. This breakthrough enables organizations to rapidly integrate advanced capabilities into existing products, democratizing powerful AI technologies and bringing high-performance inference directly to end users. EN100, the first of the EnCharge EN series of chips, features an optimized architecture that efficiently processes AI tasks while minimizing energy use. It is available in two form factors, each engineered to transform on-device capabilities. The M.2 module for laptops delivers up to 200+ TOPS of AI compute in an 8.25W power envelope, enabling sophisticated AI applications without compromising battery life or portability. The PCIe card for workstations features four NPUs reaching approximately 1 PetaOPS, delivering GPU-level compute capacity at a fraction of the cost and power consumption, making it ideal for professional AI applications that use complex models and large datasets. Compared to competing solutions, EN100 demonstrates up to ~20x better performance per watt across various AI workloads. With up to 128GB of high-density LPDDR memory and bandwidth reaching 272 GB/s, EN100 efficiently handles sophisticated AI tasks, such as generative language models and real-time computer vision, that typically require specialized data center hardware.
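The announced figures imply a rough compute density; a quick back-of-envelope check using only the numbers stated above (not measured data):

```python
# Back-of-envelope efficiency from the announced EN100 figures.
m2_tops = 200          # 200+ TOPS in the M.2 form factor
m2_watts = 8.25        # stated power envelope for laptops

tops_per_watt = m2_tops / m2_watts
print(f"M.2 efficiency: ~{tops_per_watt:.1f} TOPS/W")  # ~24.2 TOPS/W

# PCIe card: four NPUs reaching approximately 1 PetaOPS in total
pcie_tops = 1.0 * 1000   # 1 PetaOPS expressed in TOPS
per_npu = pcie_tops / 4
print(f"Per-NPU compute: ~{per_npu:.0f} TOPS")  # ~250 TOPS
```

At roughly 24 TOPS/W, the claimed ~20x advantage would put typical competing edge accelerators at around 1–2 TOPS/W, which is the right order of magnitude for conventional digital NPUs.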
Token Monster’s AI chatbot platform uses a third-party service to connect to multiple LLMs and routes user prompts to the LLMs and linked tools that are best suited to answer them to deliver enhanced output leveraging the strengths of multiple models
Token Monster, a new AI chatbot platform, has launched its alpha preview, aiming to change how users interact with LLMs. Developed by Matt Shumer, co-founder and CEO of OthersideAI and its hit AI writing assistant HyperWrite, Token Monster’s key selling point is its ability to route user prompts to the best available LLMs for the task at hand, delivering enhanced outputs by leveraging the strengths of multiple models. Seven major LLMs are presently available through Token Monster. Once a user types something into the prompt entry box, Token Monster uses pre-prompts developed through iteration by Shumer himself to automatically analyze the user’s input, decide which combination of the available models and linked tools is best suited to answer it, and then provide a combined response leveraging the strengths of those models. Unlike other chatbot platforms, Token Monster automatically identifies which LLM is best for a specific task — as well as which LLM-connected tools would be helpful, such as web search or coding environments — and orchestrates a multi-model workflow. The alpha preview, currently free to sign up for at tokenmonster.ai, allows users to upload a range of file types, including Excel, PowerPoint, and Docs.
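Token Monster’s routing logic has not been published; the general pattern it describes — classify the prompt, fan out to the best-suited models and tools, then combine the results — can be sketched as follows. The model names, routing table, and `call_model` helper are all invented for illustration, not real APIs.

```python
# Hypothetical sketch of a prompt-routing layer in the style Token Monster
# describes: a pre-prompt-like classifier analyzes the input, then the router
# fans out to the models/tools mapped to that task and combines the drafts.

ROUTES = {
    "coding":   ["model-a", "code-sandbox"],
    "research": ["model-b", "web-search"],
    "writing":  ["model-c"],
}

def classify(prompt: str) -> str:
    """Stand-in for the LLM-based pre-prompt that analyzes the user's input."""
    lowered = prompt.lower()
    if "def " in prompt or "bug" in lowered:
        return "coding"
    if "latest" in lowered or "news" in lowered:
        return "research"
    return "writing"

def call_model(name: str, prompt: str) -> str:
    """Placeholder for a real provider API call."""
    return f"[{name}] response to: {prompt!r}"

def route(prompt: str) -> str:
    task = classify(prompt)
    drafts = [call_model(m, prompt) for m in ROUTES[task]]
    # A production system would ask another model to synthesize the drafts;
    # here we simply concatenate them.
    return "\n".join(drafts)

print(route("Fix this bug in my parser"))
```

In a real deployment the `classify` step would itself be an LLM call guided by the hand-tuned pre-prompts the article mentions, rather than keyword matching.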
ElevenLabs enterprise multimodal AI voice agents can access external knowledge bases and retrieve relevant information instantly, using built-in Retrieval-Augmented Generation (RAG) system
ElevenLabs, the well-funded voice and AI sound effects startup founded by former Palantir engineers, debuted Conversational AI 2.0, a significant upgrade to its platform for building advanced voice agents for enterprise use cases, such as customer support, call centers, and outbound sales and marketing. This update introduces a host of new features designed to create more natural, intelligent, and secure interactions, making it well-suited for enterprise-level applications. A key highlight of Conversational AI 2.0 is its state-of-the-art turn-taking model. This technology is designed to handle the nuances of human conversation, eliminating awkward pauses or interruptions that can occur in traditional voice systems. By analyzing conversational cues like hesitations and filler words in real-time, the agent can understand when to speak and when to listen. This feature is particularly relevant for applications such as customer service, where agents must balance quick responses with the natural rhythms of a conversation. Conversational AI 2.0 also introduces integrated language detection, enabling seamless multilingual discussions without the need for manual configuration. One of the more powerful additions is the built-in Retrieval-Augmented Generation (RAG) system. This feature allows the AI to access external knowledge bases and retrieve relevant information instantly, while maintaining minimal latency and strong privacy protections. In addition to these core features, ElevenLabs’ new platform supports multimodality, meaning agents can communicate via voice, text, or a combination of both. This flexibility reduces the engineering burden on developers, as agents only need to be defined once to operate across different communication channels.
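ElevenLabs has not published the internals of its RAG pipeline; the general pattern the article describes — retrieve the most relevant knowledge-base entry and ground the agent’s reply in it — looks roughly like this. The knowledge base, embedding, and retrieval below are a minimal toy sketch, not the actual system.

```python
# Minimal RAG retrieval sketch (illustrative only): embed documents and the
# query as bag-of-words vectors, then return the most similar knowledge-base
# entry to prepend to the agent's context. Real systems use learned embeddings
# and a vector index to keep latency low.
from collections import Counter
import math

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via phone and chat.",
    "Premium plans include priority call routing.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    q = embed(query)
    return max(KNOWLEDGE_BASE, key=lambda doc: cosine(q, embed(doc)))

context = retrieve("how long do refunds take?")
print(f"Context for the agent: {context}")
```

For a voice agent, this retrieval step has to complete within the conversational turn, which is why the article emphasizes minimal latency alongside privacy.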
Cube Dev’s AI agent built on top of a semantic layer provides self-serve, natural language-driven analytics for any user by generating a SQL query to look for contextual insights and presents them in interactive visualizations
Cube Dev Inc., the creator of an open-source semantic layer that simplifies access to data from disparate systems, is launching D3, an “agentic analytics” platform that uses AI to automate data analytics tasks. With D3, Cube says, it can scale up the productivity of business workers and enable them to explore data independently, without needing to seek help from data professionals first. The platform introduces the concept of “AI data co-workers” that can automate and enhance analytics tasks, with support for natural language queries, full explainability for every insight, and comprehensive governance. With Cube’s semantic layer, developers can perform calculations on many different datasets in real time without wrangling each underlying system directly. It also provides an in-memory cache that saves the results of frequent calculations, so users don’t have to rerun them constantly, which lowers computing costs. Now, Cube is adding AI agents into the mix. At launch, Cube D3 features two AI agents. The first is an AI Data Analyst, which provides self-serve, natural language-driven analytics for any user: users ask about their data in plain language, and the agent generates a semantic SQL query that digs up the insights they need, presenting them in easily digestible, interactive visualizations. It can also perform tasks such as refining existing reports. The biggest advantage of building AI agents on top of a semantic layer is that they gain more context, allowing them to perform tasks for users more effectively. There’s also an AI Data Engineer for more advanced users, able to automate the development of semantic data models that can quickly leverage disparate data sources, enabling higher velocity and flexibility for the semantic data layer.
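The context advantage Cube describes comes from resolving business terms against governed definitions before any SQL is generated, so “revenue” means the same thing no matter who asks. A hypothetical sketch of that pattern (the measure and dimension definitions below are invented for illustration, not Cube’s actual schema or API):

```python
# Hypothetical sketch of how a semantic layer grounds NL-to-SQL generation:
# the agent extracts business terms from a plain-language question, and the
# semantic layer maps each term to a single governed SQL definition.

SEMANTIC_LAYER = {
    "revenue": {"sql": "SUM(orders.amount)", "type": "measure"},
    "orders":  {"sql": "COUNT(orders.id)",   "type": "measure"},
    "region":  {"sql": "customers.region",   "type": "dimension"},
}

def compile_query(measures: list, dimensions: list) -> str:
    select = [f"{SEMANTIC_LAYER[d]['sql']} AS {d}" for d in dimensions]
    select += [f"{SEMANTIC_LAYER[m]['sql']} AS {m}" for m in measures]
    sql = (f"SELECT {', '.join(select)}\n"
           "FROM orders JOIN customers ON orders.customer_id = customers.id")
    if dimensions:
        # group by ordinal position of each dimension column
        sql += "\nGROUP BY " + ", ".join(str(i + 1) for i in range(len(dimensions)))
    return sql

# An agent parsing "show me revenue by region" would extract these terms;
# the semantic layer guarantees the resulting query is always consistent.
print(compile_query(measures=["revenue"], dimensions=["region"]))
```

Because the LLM only has to pick terms, not write raw SQL against unfamiliar tables, the generated queries stay explainable and governed, which is the point of the “semantic SQL” step described above.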
Intuit’s agentic AI can get customers paid 45% faster, an average of five days sooner through automated transaction matching and invoice review enabling it to offer personalized experiences in real-time
Intuit has been working on a generative AI revolution for over a decade, culminating in the creation of Intuit’s GenOS, launched in June 2023. GenOS powers all of Intuit’s generative AI and agentic experiences, abstracting away the complexity of various underlying systems to allow for large-scale deployment of AI agents. The system now powers production-ready AI agents, including accounts receivable and accounts payable agents designed to automate cash flow management tasks. According to chief AI and data officer Ashok Srivastava, Intuit’s agentic system can get customers paid 45% faster, an average of five days sooner, thanks to automated transaction matching and invoice review. This tangible outcome matters more than the underlying model count, as it allows Intuit to personalize AI experiences in real time, delivering cash flow forecasts, intelligent recommendations, and context-aware automation tailored to the customer’s immediate needs. Intuit’s commitment to open source is another pillar of its strategy, with projects like Admiral, Numaproj, and Argoproj contributing to the broader community and leveraging the best available technologies. Intuit has received the End User Award from the Cloud Native Computing Foundation twice, and its platform powers a suite of widely used products including QuickBooks, TurboTax, Mailchimp, and Credit Karma. Srivastava believes that AI agents can help small businesses and consumers do better, as many US firms are under pressure from economic changes and face reduced access to capital. He also remains optimistic about the use of AI in the field of art, seeing it as just another medium, not a replacement.
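Taken together, the two figures above pin down an implied baseline payment cycle. The quick check below interprets “45% faster” as 45% less time, which is an assumption (the article does not define the metric):

```python
# Sanity check on the stated figures: if five days saved corresponds to a
# 45% reduction in time-to-payment, the implied baseline cycle is ~11 days.
speedup = 0.45        # assumed: "45% faster" read as 45% less time
days_saved = 5

baseline_days = days_saved / speedup
print(f"Implied baseline cycle: ~{baseline_days:.1f} days")      # ~11.1 days
print(f"New cycle: ~{baseline_days - days_saved:.1f} days")      # ~6.1 days
```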
OpenAI plans to turn ChatGPT into a “super-assistant” that is personalized to each user and available to them via the chatbot’s website, the company’s native apps, phone, email and third-party platforms
OpenAI reportedly plans to turn ChatGPT into a “super-assistant” that is personalized to each user and available to them via the chatbot’s website, the company’s native apps, phone, email and third-party resources like Apple’s Siri. The plan is described in an OpenAI internal document from late 2024 that came to light in the Department of Justice’s antitrust case against Google. The super-assistant will be able to handle tedious daily tasks like answering questions and managing calendars, and more complicated ones like coding. It will be, the document said: “One that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do.” OpenAI has announced several updates over the last month that suggest the company aims to expand the capabilities of its artificial intelligence tools. Chief Operating Officer Brad Lightcap said that OpenAI wants to build an “ambient computer layer” that doesn’t require users to look at a screen.
Veris AI’s platform allows developers to train and test AI agents using dynamic, realistic, high-fidelity simulated experiences rather than prompt engineering and human-generated data to enable deploying more accurate agents
Veris AI, a platform that lets companies safely train and test AI agents through novel high-fidelity simulated experiences, emerged from stealth and has raised $8.5M in seed funding. Veris allows developers to train agents using experience rather than prompt engineering and human-generated data. Veris’ dynamic, realistic, simulated environments give enterprises a safe space for reinforcement learning and continuous improvement, ultimately helping them deploy and scale more accurate AI agents. Mehdi Jamei, CEO and co-founder of Veris, said: “We are building Veris to unlock the potential of agentic AI for enterprises – both by solving existing problems and improving the speed and quality in which new agents can come into production.”
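Veris has not published implementation details; the pattern it describes — an agent improving against a simulated environment rather than static human-generated data — follows the standard reinforcement-learning loop, sketched here with a toy environment. Everything below (the actions, the simulator, the update rule) is invented for illustration, not Veris’ API.

```python
# Toy sketch of the train-in-simulation loop described above: the agent acts
# in a simulated environment, receives a reward, and updates a simple value
# table -- no human-labeled data required.
import random

random.seed(0)

ACTIONS = ["approve", "escalate"]
q = {a: 0.0 for a in ACTIONS}          # running value estimate per action
counts = {a: 0 for a in ACTIONS}

def simulated_env(action: str) -> float:
    """Stand-in for a high-fidelity simulator: 'escalate' is usually correct."""
    return 1.0 if action == "escalate" else 0.2

for step in range(200):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[a])
    reward = simulated_env(action)
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]   # incremental average

print(q)   # the agent learns to prefer 'escalate' purely from simulated reward
```

The appeal of the approach is that risky behavior (bad escalations, wrong approvals) incurs cost only inside the simulator, so the agent can explore safely before it ever touches production systems.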