
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


Google brings air‑gapped, multimodal AI to Distributed Cloud so regulated enterprises can deploy GenAI on-premises without sacrificing data sovereignty

August 29, 2025 // by Finnovate

Google announced the general availability of its Gemini artificial intelligence models on Google Distributed Cloud (GDC), extending its most advanced AI capabilities into enterprise and government data centers. The launch, which makes Gemini generally available on GDC in an air-gapped configuration and in preview on GDC connected, allows organizations with strict data residency and compliance requirements to deploy generative AI without sacrificing control over sensitive information. By bringing the models on-premises, Google is addressing a longstanding dilemma faced by regulated industries: choosing between adopting modern AI tools and maintaining full sovereignty over their data.

The integration provides access to Gemini’s multimodal capabilities across text, images, audio and video. Google says this unlocks a range of use cases, including multilingual collaboration, automated document summarization, intelligent chatbots and AI-assisted code generation. The release also includes built-in safety tools that help enterprises improve compliance, detect harmful content and enforce policy adherence. Google argues that delivering these capabilities securely requires more than just models, positioning GDC as a full AI platform that combines infrastructure, model libraries and prebuilt agents such as the preview of Agentspace search.

Under the hood, GDC runs on Nvidia Corp.’s Hopper and Blackwell graphics processing units, paired with automated load balancing and zero-touch updates for high availability. Confidential computing is supported on both central processing units and GPUs, ensuring that sensitive data stays encrypted even during processing. Customers also gain audit logging and granular access controls for end-to-end visibility of their AI workloads. Along with Gemini 2.5 Flash and Pro, the platform supports Vertex AI’s task-specific models and Google’s open-source Gemma family, and enterprises can deploy their own open-source or proprietary models on managed virtual machines and Kubernetes clusters as part of a unified environment.
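As a rough illustration of the document-summarization use case above, the following is a minimal Python sketch of a client calling a Gemini model hosted inside an air-gapped GDC environment. The endpoint host name, model path, bearer-token authentication and response schema are assumptions for illustration (they mirror the public generateContent request shape), not documented GDC specifics; the point is simply that the request never has to leave the organization's own data center.

```python
# Minimal sketch: summarizing a document with a Gemini model served from an
# on-premises GDC endpoint. Host name, model path, auth scheme and response
# schema are assumptions for illustration; consult the actual deployment.
import os
import requests

# Hypothetical endpoint inside the air-gapped environment.
GDC_ENDPOINT = os.environ.get(
    "GDC_GEMINI_ENDPOINT",
    "https://gemini.gdc.internal.example/v1/models/gemini-2.5-flash:generateContent",
)
# Token issued by the organization's own identity provider (assumption).
AUTH_TOKEN = os.environ["GDC_AUTH_TOKEN"]


def summarize(document_text: str) -> str:
    """Ask the locally hosted model for a short summary of a document."""
    payload = {
        "contents": [
            {
                "role": "user",
                "parts": [
                    {"text": "Summarize this document in three bullet points:\n\n" + document_text}
                ],
            }
        ]
    }
    resp = requests.post(
        GDC_ENDPOINT,
        headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()
    # Assumes the on-prem service keeps the same candidates/content/parts
    # response structure as the public Gemini API.
    return data["candidates"][0]["content"]["parts"][0]["text"]


if __name__ == "__main__":
    print(summarize("Quarterly compliance report: ..."))
```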


Category: AI & Machine Economy, Innovation Topics


