Consumer AI
A loss of trust: Google is pushing back on viral social media posts and articles, like this one by Malwarebytes, which claim Google has changed its policy to use your Gmail messages and attachments to train AI models and that the only way to opt out is by disabling “smart features” like spell checking.
If someone had malicious intent, they could have extracted every single file used by Margolis lawyers: countless records protected by HIPAA and other legal standards, internal memos and payroll files, literally millions of the most sensitive documents this law firm has in its possession. Documents protected by court orders! This could have been a real nightmare for both the law firm and the clients whose data would have been exposed.
To companies that feel pressure to rush into the AI craze in their industry: be careful! Always ensure that the companies you give your most sensitive information to actually secure that data.
Google Search VP Robby Stein says the company’s biggest AI advantage is using connected services like Gmail to tailor answers to each person. He calls the ability to “know you better” the core of search’s future, more useful than generic results. Gemini already mines emails, documents, photos, location history, and browsing activity to feed features such as Gemini Deep Research and Workspace suggestions. Users can limit access through the “Connected Apps” setting, yet the privacy policy warns that human reviewers may read submitted data. TechCrunch cautions that the line between personalized help and unwanted surveillance is narrowing as Google embeds AI deeper into every product. Stein plans to flag personalized responses and even push sales alerts, illustrating how escaping Google’s data collection will only become harder.
OpenAI just rolled out its group chat feature across all subscription tiers after an initial test period, allowing up to 20 users to collaborate with each other and with ChatGPT in the same thread.

The details:
- Shared chats are accessed through invite links, with ChatGPT gauging the conversation flow and interjecting when appropriate or when directly mentioned.
- Rate limits apply to AI responses rather than human messages, with usage counting against the user who triggered the model’s reply.
- Privacy features isolate group sessions from individual memory: ChatGPT does not retain information from collaborative threads or apply personal context.
- The feature launched as a trial in four Asia-Pacific markets last week and is now expanding to the Free, Go, Plus, and Pro tiers.

Why it matters: Group projects just got a powerful new collaboration tool for the AI age. It may take some time to find the flow of using ChatGPT alongside friends or coworkers, but before long we’ll likely see (and welcome) contributions from models in collaborative work as naturally as those from any other human participant.