Google LLC’s two major research units have made a significant advance in LLM privacy with the introduction of a new model called VaultGemma, the world’s most powerful “differentially private” LLM. VaultGemma was trained from scratch under a differential privacy framework to ensure that it cannot memorize or leak sensitive data. This is a critical feature with serious implications for AI applications in regulated industries such as finance and healthcare, the researchers said.

One of the key innovations behind VaultGemma is that the researchers adapted its training protocols to handle the instability caused by the noise that differential privacy injects. Google’s research shows how differential privacy alters the learning dynamics of LLMs, and the team developed several techniques to mitigate the performance costs that private training imposes, which could lower the barrier to adoption of private models.

Architecturally, VaultGemma is a decoder-only transformer model based on Google’s Gemma 2 architecture, featuring 26 layers and using Multi-Query Attention. One of the key design choices was to limit the sequence length to just 1,024 tokens, which helps manage the intense computational requirements of private training, the researchers said.
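The article does not include VaultGemma's training code, but the standard mechanism behind differentially private training, DP-SGD, is straightforward to sketch: clip each example's gradient to bound any single record's influence, then add calibrated Gaussian noise before averaging. The function below is a minimal NumPy illustration of that recipe, not Google's actual implementation; the parameter names and defaults are assumptions for the example.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD update (clip per-example gradients, add noise).

    per_example_grads: array of shape (batch, dim), one gradient per example.
    The noise stddev scales with clip_norm * noise_multiplier, the usual
    Gaussian-mechanism calibration that bounds any single example's influence.
    """
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale down gradients whose norm exceeds clip_norm; leave smaller ones alone.
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * factors
    summed = clipped.sum(axis=0)
    # Gaussian noise masks the contribution of any individual training record.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```

The clipping step is also the source of the training instability the researchers mention: bounding and perturbing gradients changes the optimization dynamics relative to ordinary SGD, which is why private training needs adapted protocols.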
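The article names Multi-Query Attention without elaborating. The idea is that all query heads share a single key/value head, which shrinks the key/value cache and memory traffic; that frugality pairs naturally with the short 1,024-token context chosen to keep private training tractable. Below is a minimal NumPy sketch of the mechanism under assumed shapes, purely illustrative and not VaultGemma's code.

```python
import numpy as np

def multi_query_attention(x, wq, wk, wv, num_heads):
    """Multi-Query Attention: many query heads share ONE key/value head.

    x:  (seq, d_model) token representations
    wq: (d_model, num_heads * d_head) -- a separate projection per query head
    wk, wv: (d_model, d_head)         -- a single shared key/value projection
    """
    seq, _ = x.shape
    d_head = wk.shape[1]
    q = (x @ wq).reshape(seq, num_heads, d_head)  # per-head queries
    k = x @ wk                                    # shared keys   (seq, d_head)
    v = x @ wv                                    # shared values (seq, d_head)
    scores = np.einsum("qhd,kd->hqk", q, k) / np.sqrt(d_head)
    # Causal mask for a decoder-only model: no attending to future tokens.
    mask = np.triu(np.ones((seq, seq), dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = np.einsum("hqk,kd->qhd", weights, v)    # (seq, heads, d_head)
    return out.reshape(seq, num_heads * d_head)
```

Compared with standard multi-head attention, only the K/V projections change: one shared (d_model, d_head) matrix instead of one per head, so the cached keys and values are num_heads times smaller.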