Google has agreed to pay $68 million to settle a class-action lawsuit alleging its voice-activated assistant improperly recorded private conversations. The agreement was disclosed in a preliminary settlement filed Friday in federal court in San Jose, California, and requires approval by U.S. District Judge Beth Labson Freeman.
The lawsuit centered on so-called “false accepts,” instances where Google Assistant is alleged to have activated and recorded audio even when users had not intentionally triggered the assistant with prompt words like “Hey Google” or “Okay Google.”
Plaintiffs claimed that these unintended recordings captured portions of private conversations, which Google then used or shared for targeted advertising without users’ consent. False accepts have long been a point of contention for privacy advocates, who allege that Google’s handling of the resulting recordings violated privacy laws.
Google has denied any wrongdoing in the case but opted to settle to avoid prolonged litigation risks, costs, and uncertainty, according to court filings. The settlement covers individuals who, since May 18, 2016, purchased Assistant-enabled Google hardware, including Pixel phones, Google Home smart speakers, and Nest Hub displays, or who experienced false accepts. Lawyers for the plaintiffs may request up to about $22.7 million of the settlement for attorneys’ fees under the proposed terms.
The allegations are not unusual; other major tech companies around the world have faced similar privacy disputes. In a comparable lawsuit, Apple agreed in December 2024 to pay $95 million to settle claims that its Siri assistant recorded conversations without proper activation and shared audio for internal review.
Voice assistant false activations have long been a technical and public relations problem for AI assistants from Google, Apple, and Amazon. Investigative reporting by VRT NWS in 2019 revealed that human contractors were reviewing some of the audio clips captured during these false accepts, sometimes hearing private and sensitive discussions. Such revelations have intensified user distrust and fueled legal action.
Consumer trust in voice assistants has been eroding overall. A recent PYMNTS analysis on voice assistant adoption and trust found that confidence in AI helpers has declined, with an increasing share of users expressing skepticism about privacy and reliability. The drop in trust has influenced how people engage with always-listening technology across age groups.
Google says that Assistant is designed to listen only for its keywords and that users can manage or delete saved recordings via account settings. Users can also disable specific audio activity logging, configure automatic deletion, and use on-device mute controls on supported hardware. Critics, however, argue that these measures came too late or were not sufficiently transparent in earlier product iterations.