LandingAI, a pioneer in agentic vision technologies, announced major upgrades to Agentic Document Extraction (ADE). Unlike traditional optical character recognition (OCR), ADE reads a PDF or other document visually and uses an iterative workflow to accurately extract a document's text, diagrams, charts, form fields, and so on, producing LLM-ready output. ADE combines layout-aware parsing, visual grounding, and no-template setup, allowing quick deployment and dependable results without fine-tuning or model training. Eolas Medical, a leading healthcare platform provider, is processing over 100,000 clinical guidelines in the form of PDFs and complex documents with ADE, streamlining the creation of structured summaries with a view to supporting over 1.2 million queries per month from healthcare professionals on its platform. Its QA chatbot, powered by ADE, provides answers with direct references to the original documents, improving information traceability and reliability. In financial services, ADE is being used to automate document onboarding for use cases like Know Your Customer (KYC), mortgage and loan processing, and client due diligence. Visual grounding enables full auditability by linking extracted data directly to its source location in the document.
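To make the visual-grounding idea concrete, here is a minimal sketch of how a downstream application might consume grounded extraction output. This is not LandingAI's actual SDK; the chunk structure and field names are assumptions chosen to illustrate how an extracted value can be traced back to its page and bounding box.

```python
# Illustrative sketch (not LandingAI's API): each extracted chunk carries a
# grounding box tying it back to its source page, which is what makes audit
# trails for use cases like KYC review possible.

from dataclasses import dataclass

@dataclass
class GroundedChunk:
    text: str    # extracted content, LLM-ready
    page: int    # 1-based page number in the source PDF
    bbox: tuple  # (x0, y0, x1, y1) coordinates normalized to page size

def audit_reference(chunk: GroundedChunk) -> str:
    """Render a human-readable citation for a chunk, as a QA chatbot might."""
    x0, y0, x1, y1 = chunk.bbox
    return f"p.{chunk.page} [{x0:.2f},{y0:.2f}-{x1:.2f},{y1:.2f}]: {chunk.text}"

chunk = GroundedChunk(
    text="Total cholesterol target: <5 mmol/L",
    page=3,
    bbox=(0.12, 0.40, 0.88, 0.45),
)
print(audit_reference(chunk))
```

A chatbot answer built from such chunks can cite the exact page and region of the source document, which is the traceability property described above.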
Snyk launches real-time governance and adaptive policy enforcement, crucial for managing evolving risks in AI-driven software development
Cybersecurity company Snyk launched the Snyk AI Trust Platform, an AI-native agentic platform designed to empower organizations to accelerate AI-driven innovation, mitigate business risk, and secure agentic and generative AI. The platform introduces several innovations, including Snyk Assist, an AI-powered chat interface offering contextual guidance, next-step recommendations, and security intelligence. Another feature, Snyk Agent, extends these capabilities by automating fixes and security actions throughout the development lifecycle, leveraging Snyk's testing engines. The offering also includes Snyk Guard, which provides real-time governance and adaptive policy enforcement, crucial for managing evolving AI risks. Complementing these capabilities is the Snyk AI Readiness Framework, which helps organizations assess and mature their secure AI development strategies over time. Snyk is also launching two curated AI Trust environments to support the platform: Snyk Labs, an innovation hub for researching, experimenting with, and incubating the future of AI security, and Snyk Studio, where technology partners and developers collaborate with Snyk's security experts to build secure AI-native applications for mutual customers, embedding critical security context and controls into AI-generated code and AI-powered workflows.
Mistral AI’s ‘plug and play’ platform offers built-in connectors to run Python code, create custom visuals, access documents stored in cloud and retrieve information from web for easy customization of AI agents
French AI startup Mistral AI is introducing its Agents API, a "plug and play" platform that enables third-party software developers to quickly add autonomous generative AI capabilities to their existing applications. The API uses Mistral's proprietary Medium 3 model as the "brains" of each agent, allowing for easy customization and integration of AI agents into enterprise and developer workflows. The API complements Mistral's existing Chat Completion API and focuses on agentic orchestration, built-in connectors, persistent memory, and the ability to coordinate multiple AI agents to tackle complex tasks. This approach aims to overcome the limitations of traditional language models. The Agents API comes equipped with several built-in connectors:
- Code Execution: securely runs Python code, enabling applications in data visualization, scientific computing, and other technical tasks.
- Image Generation: leverages Black Forest Labs' FLUX1.1 [pro] Ultra to create custom visuals for marketing, education, or artistic uses.
- Document Library: accesses documents stored in Mistral Cloud, enhancing retrieval-augmented generation (RAG) features.
- Web Search: allows agents to retrieve up-to-date information from online sources, news outlets, and other reputable platforms.
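The connector pattern described above can be sketched generically. This is not the Mistral Agents API itself; the connector names come from the announcement, but the dispatch function and its stub implementations are invented here purely to illustrate how an agent runtime routes a task to a named connector.

```python
# Hypothetical sketch of connector dispatch in an agent runtime. The stub
# lambdas stand in for real tool integrations; only the connector names
# (code execution, image generation, document library, web search) come
# from the announcement above.

CONNECTORS = {
    "code_execution": lambda task: f"ran Python for: {task}",
    "image_generation": lambda task: f"generated image for: {task}",
    "document_library": lambda task: f"retrieved docs for: {task}",
    "web_search": lambda task: f"searched web for: {task}",
}

def dispatch(connector: str, task: str) -> str:
    """Route a task to a named connector, failing loudly on unknown names."""
    if connector not in CONNECTORS:
        raise ValueError(f"unknown connector: {connector}")
    return CONNECTORS[connector](task)

print(dispatch("web_search", "latest EU AI Act news"))
```

In a real deployment the agent model, not the caller, would choose which connector to invoke for each step of a task.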
Amazon Bedrock Data Automation and Amazon Bedrock Knowledge Bases enable building multimodal applications for natural language querying through a RAG-based Q&A interface
Organizations face challenges in processing large amounts of unstructured data, including documents, images, audio files, and video files. Generative AI technologies are revolutionizing this by automatically processing, analyzing, and extracting insights from these diverse formats. Amazon Bedrock Data Automation and Amazon Bedrock Knowledge Bases enable organizations to build powerful multimodal RAG applications with minimal effort. These tools automate workflows, store extracted information in a unified repository, and enable natural language querying through a RAG-based Q&A interface.

Real-world use cases
The integration of Amazon Bedrock Data Automation and Amazon Bedrock Knowledge Bases enables powerful solutions for processing large volumes of unstructured data across various industries. Financial institutions process thousands of documents daily, from loan applications to financial statements. Amazon Bedrock Data Automation extracts key financial metrics and compliance information, while Amazon Bedrock Knowledge Bases allows analysts to ask questions like “What are the risk factors mentioned in the latest quarterly reports?” or “Show me all loan applications with high credit scores.”
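A question like the one quoted above would typically reach a Knowledge Base through the RetrieveAndGenerate API of boto3's `bedrock-agent-runtime` client. The knowledge base ID and model ARN below are placeholders, and the request shape reflects the documented API as I understand it; treat it as a sketch rather than a verified integration.

```python
# Sketch of querying an Amazon Bedrock Knowledge Base via RetrieveAndGenerate.
# The IDs/ARNs are placeholders; the payload builder is shown separately so
# its shape can be checked without AWS credentials.

def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Assemble kwargs for bedrock-agent-runtime retrieve_and_generate."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

request = build_rag_request(
    "What are the risk factors mentioned in the latest quarterly reports?",
    kb_id="KB123EXAMPLE",  # placeholder knowledge base ID
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/EXAMPLE",  # placeholder
)

# With AWS credentials configured, the call would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-agent-runtime")
#   response = client.retrieve_and_generate(**request)
#   print(response["output"]["text"])
print(request["input"]["text"])
```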
Bit Cloud's Hope AI development agent can transform product mockups, specifications, and reference images directly into complete composable solutions, from backend systems to UI components, using natural language prompts, for use in existing or new applications
Bit Cloud announced the general availability of Hope AI, its new AI-powered development agent that enables professional developers and organizations to build, share, deploy, and maintain complex applications using natural language prompts, specifications and design files. Hope AI takes AI-driven development further, beyond basic websites or application prototypes. It designs complete system architectures, assembles reusable software components, and generates scalable, production-ready applications — from CRM systems to e-commerce platforms to healthcare surgery room management systems — dramatically reducing both time to market and maintenance costs. Hope AI functions as an intelligent software architect, leveraging existing, proven components to compose professional and practical software solutions, enabling consistency and simplifying long-term maintainability. Bit’s solution turns components into reusable digital assets, so teams don’t need to rebuild functionality from scratch every time. Key innovations of Hope AI include: Natural Language to Professional Code, Composable Solutions, Team Collaboration, DevOps Integration.
EnCharge AI’s accelerator uses precise and scalable analog in-memory computing to deliver 200+ TOPS of total compute power for on-device computing with up to ~20x better performance per watt across various AI workloads
EnCharge AI announced the EnCharge EN100, the industry's first AI accelerator built on precise and scalable analog in-memory computing. Designed to bring advanced AI capabilities to laptops, workstations, and edge devices, EN100 delivers 200+ TOPS of total compute power within the power constraints of edge and client platforms such as laptops. By fundamentally reshaping where AI inference happens, developers can now deploy sophisticated, secure, personalized applications locally. This breakthrough enables organizations to rapidly integrate advanced capabilities into existing products, democratizing powerful AI technologies and bringing high-performance inference directly to end users. EN100, the first of the EnCharge EN series of chips, features an optimized architecture that efficiently processes AI tasks while minimizing energy use. It is available in two form factors engineered to transform on-device capabilities:
- M.2 for Laptops: delivers 200+ TOPS of AI compute power in an 8.25 W power envelope, enabling sophisticated AI applications on laptops without compromising battery life or portability.
- PCIe for Workstations: featuring four NPUs reaching approximately 1 PetaOPS, the EN100 PCIe card delivers GPU-level compute capacity at a fraction of the cost and power consumption, making it ideal for professional AI applications utilizing complex models and large datasets.
Compared to competing solutions, EN100 demonstrates up to ~20x better performance per watt across various AI workloads. With up to 128 GB of high-density LPDDR memory and bandwidth reaching 272 GB/s, EN100 efficiently handles sophisticated AI tasks, such as generative language models and real-time computer vision, that typically require specialized data center hardware.
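A quick arithmetic check on the quoted M.2 numbers helps put the efficiency claim in context. Note that the "~20x better performance per watt" figure in the announcement is a comparison against competing solutions, not something derivable from these two numbers alone.

```python
# Sanity arithmetic on the quoted M.2 figures: 200+ TOPS in an 8.25 W
# envelope works out to roughly 24 TOPS per watt.

tops = 200    # total compute, trillions of operations per second (quoted floor)
watts = 8.25  # M.2 power envelope (quoted)

tops_per_watt = tops / watts
print(f"{tops_per_watt:.1f} TOPS/W")
```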
Token Monster’s AI chatbot platform uses a third-party service to connect to multiple LLMs and routes user prompts to the LLMs and linked tools that are best suited to answer them to deliver enhanced output leveraging the strengths of multiple models
Token Monster, a new AI chatbot platform, has launched its alpha preview, aiming to change how users interact with LLMs. It was developed by Matt Shumer, co-founder and CEO of OthersideAI, maker of the hit AI writing assistant HyperWrite. Token Monster's key selling point is its ability to route user prompts to the best available LLMs for the task at hand, delivering enhanced outputs by leveraging the strengths of multiple models. Seven major LLMs are presently available through Token Monster. Once a user types something into the prompt entry box, Token Monster uses pre-prompts, developed through iteration by Shumer himself, to automatically analyze the user's input, decide which combination of the available models and linked tools is best suited to answer it, and then provide a combined response leveraging the strengths of those models. Unlike other chatbot platforms, Token Monster automatically identifies which LLM is best for specific tasks, as well as which LLM-connected tools would be helpful, such as web search or coding environments, and orchestrates a multi-model workflow. The alpha preview, currently free to sign up for at tokenmonster.ai, allows users to upload a range of file types, including Excel, PowerPoint, and Docs files.
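The routing idea can be illustrated with a toy classifier. Token Monster's actual pre-prompts and model lineup are not public at this level of detail; the model names and keyword rules below are invented for illustration, and a real router would use an LLM rather than keyword matching.

```python
# Toy sketch of prompt-to-model routing. Model names and keyword rules are
# hypothetical; this only illustrates the "pick the best model per task"
# pattern described above.

ROUTES = [
    ({"code", "debug", "function"}, "code-specialist-llm"),
    ({"poem", "story", "essay"}, "creative-llm"),
    ({"latest", "news", "today"}, "web-search-llm"),
]

def route(prompt: str, default: str = "general-llm") -> str:
    """Pick a model based on keyword overlap with the prompt."""
    words = set(prompt.lower().split())
    for keywords, model in ROUTES:
        if words & keywords:
            return model
    return default

print(route("Debug this function for me"))  # code-specialist-llm
```

A production router would also decide when to fan a prompt out to several models and merge their answers, which is the combined-response behavior described above.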
ElevenLabs enterprise multimodal AI voice agents can access external knowledge bases and retrieve relevant information instantly, using built-in Retrieval-Augmented Generation (RAG) system
ElevenLabs, the well-funded voice and AI sound effects startup founded by former Palantir engineers, debuted Conversational AI 2.0, a significant upgrade to its platform for building advanced voice agents for enterprise use cases, such as customer support, call centers, and outbound sales and marketing. This update introduces a host of new features designed to create more natural, intelligent, and secure interactions, making it well-suited for enterprise-level applications. A key highlight of Conversational AI 2.0 is its state-of-the-art turn-taking model. This technology is designed to handle the nuances of human conversation, eliminating awkward pauses or interruptions that can occur in traditional voice systems. By analyzing conversational cues like hesitations and filler words in real-time, the agent can understand when to speak and when to listen. This feature is particularly relevant for applications such as customer service, where agents must balance quick responses with the natural rhythms of a conversation. Conversational AI 2.0 also introduces integrated language detection, enabling seamless multilingual discussions without the need for manual configuration. One of the more powerful additions is the built-in Retrieval-Augmented Generation (RAG) system. This feature allows the AI to access external knowledge bases and retrieve relevant information instantly, while maintaining minimal latency and strong privacy protections. In addition to these core features, ElevenLabs’ new platform supports multimodality, meaning agents can communicate via voice, text, or a combination of both. This flexibility reduces the engineering burden on developers, as agents only need to be defined once to operate across different communication channels.
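The turn-taking problem can be made concrete with a toy heuristic. ElevenLabs' actual turn-taking model is learned, not rule-based, and this sketch has nothing to do with their API; it only illustrates the kind of conversational cue (a trailing filler word signaling the speaker isn't done) the paragraph describes.

```python
# Toy turn-taking heuristic: respond only if the transcript doesn't trail
# off into a filler word. A real system uses a trained model over audio
# and timing features, not a word list.

FILLERS = {"um", "uh", "hmm", "like", "so"}

def should_respond(transcript: str) -> bool:
    """True if the utterance looks finished rather than trailing off."""
    words = transcript.lower().rstrip(".,!?").split()
    return bool(words) and words[-1] not in FILLERS

print(should_respond("What are your opening hours?"))  # True
print(should_respond("I was wondering, um"))           # False
```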
Cube Dev’s AI agent built on top of a semantic layer provides self-serve, natural language-driven analytics for any user by generating a SQL query to look for contextual insights and presents them in interactive visualizations
Cube Dev Inc., the creator of an open-source semantic layer that simplifies access to data from disparate systems, is launching D3, an "agentic analytics" platform that uses AI to automate data analytics tasks. With D3, Cube says, it can scale up the productivity of business workers and enable them to explore data independently, without needing to seek help from data professionals first. The platform introduces the concept of "AI data co-workers" that can automate and enhance analytics tasks, with support for natural language queries, full explainability for every insight, and comprehensive governance. With Cube's platform, developers can perform calculations on many different datasets in real time, without the usual data-integration hassles. It also provides an in-memory cache that saves the results of frequent calculations, so users don't have to rerun them constantly, meaning lower computing costs. Now, Cube is adding AI agents into the mix. At launch, Cube D3 features two AI agents. The first is an AI Data Analyst, which provides self-serve, natural language-driven analytics for any user. Users ask about their data in plain language, and the agent generates a SQL query against the semantic layer to dig up the insights they need, presenting them in easily digestible, interactive visualizations. It can also perform tasks such as refining existing reports. The biggest advantage of building AI agents on top of a semantic layer is that they gain more context, allowing them to perform tasks for users more effectively. There is also an AI Data Engineer for more advanced users, which automates the development of semantic models that can quickly leverage disparate data sources, enabling higher velocity and flexibility for the semantic data layer.
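To illustrate what "natural language to a semantic-layer query" can look like, here is a naive sketch. The JSON shape below mirrors Cube's query format (measures, dimensions, timeDimensions) as I understand it, but the mapping function itself is invented; D3's agent uses an LLM with full semantic-layer context, not keyword matching.

```python
# Naive sketch of mapping a plain-language question to a Cube-style
# semantic query. The member names (Orders.count, Orders.status,
# Orders.createdAt) are hypothetical examples.

def nl_to_semantic_query(question: str) -> dict:
    """Map a question to a Cube-style query dict (illustrative only)."""
    q = question.lower()
    query = {"measures": ["Orders.count"], "dimensions": []}
    if "by status" in q:
        query["dimensions"].append("Orders.status")
    if "monthly" in q:
        query["timeDimensions"] = [
            {"dimension": "Orders.createdAt", "granularity": "month"}
        ]
    return query

q = nl_to_semantic_query("Show me monthly orders by status")
print(q)
```

The key point is that the agent emits a query against governed semantic-layer members rather than raw SQL over raw tables, which is where the extra context and explainability come from.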
Intuit's agentic AI can get customers paid 45% faster, an average of five days sooner, through automated transaction matching and invoice review, enabling it to offer personalized experiences in real time
Intuit has been working toward a generative AI revolution for over a decade, culminating in the creation of Intuit's GenOS, launched in June 2023. GenOS powers all of Intuit's generative AI and agentic experiences, abstracting away the complexity of various underlying systems to allow for large-scale deployment of AI agents. The system now powers production-ready AI agents, including accounts receivable and accounts payable agents designed to automate cash flow management tasks. According to chief AI and data officer Ashok Srivastava, Intuit's agentic system can get customers paid 45% faster, an average of five days sooner, thanks to automated transaction matching and invoice review. This tangible outcome matters more than the underlying model count, as it allows Intuit to personalize AI experiences in real time, delivering cash flow forecasts, intelligent recommendations, and context-aware automation tailored to the customer's immediate needs. Intuit's commitment to open source is another pillar of its strategy, with projects like Admiral, Numaproj, and Argoproj contributing to the broader community and leveraging the best available technologies. Intuit has received the End User Award from the Cloud Native Computing Foundation twice, and its platform powers a suite of widely used products including QuickBooks, TurboTax, Mailchimp, and Credit Karma. Srivastava believes that AI agents can help small businesses and consumers do better, as many US firms are under pressure from economic changes and face reduced access to capital. He also remains optimistic about the use of AI in the field of art, seeing it as just another medium, not a replacement.