Gemini
A loss of trust: Google is pushing back on viral social media posts and articles (like this one by Malwarebytes) claiming that Google has changed its policy to use your Gmail messages and attachments to train AI models, and that the only way to opt out is by disabling “smart features” like spell checking.
Google Search VP Robby Stein says the company’s biggest AI advantage is using connected services like Gmail to tailor answers to each person. He calls the ability to “know you better” the core future of search and more useful than generic results. Gemini already mines emails, documents, photos, location history and browsing to feed features such as Gemini Deep Research and Workspace suggestions. Users can limit access through the “Connected Apps” setting, yet the privacy policy warns that human reviewers may read submitted data. TechCrunch warns that the line between personalized help and unwanted surveillance is narrowing as Google embeds AI deeper into every product. Stein plans to flag personalized responses and even push sale alerts, illustrating how escaping Google’s data collection will only become harder.
Google releases its Gemini 3 model and unveils Antigravity, an agent-based coding platform that can autonomously execute tasks on a user’s computer. The launch moves the conversation beyond text generation to AI that plans, codes, and coordinates work with human oversight. In real-world tests, Gemini 3 built a playable game from a single prompt and created a full website that summarized years of blog posts, all while routing approvals through an inbox interface. Antigravity reads local files, writes code, conducts web research, and even controls the browser to validate its output. The model also cleaned messy research data, devised fresh hypotheses, executed statistical analysis, and delivered a 14-page journal-style paper with minimal guidance. The author says managing Gemini 3 feels like supervising a capable graduate assistant rather than coaxing a chatbot.
Apple reportedly finalized plans to deploy a custom 1.2T parameter version of Google's Gemini model for its long-delayed Siri overhaul, according to Bloomberg — committing roughly $1B annually to license the technology.
The details:
Gemini will handle summarization and multi-step planning within Siri, running on Apple's Private Cloud Compute infrastructure to keep user info private.
Apple also trialed models from OpenAI and Anthropic, with the 1.2T parameter count far exceeding the 150B used in the current Apple Intelligence model.
Bloomberg said the partnership is “unlikely to be promoted publicly”, with Apple intending for Google to be a “behind-the-scenes” tech supplier.
The new Siri could arrive as soon as next spring, with Apple planning to use Gemini as a stopgap while it builds a capable internal model of its own.

Why it matters: After years of delays and uncertainty around Siri’s upgrade, Gemini is the model set to bring the voice assistant into the AI world (at least in some capacity). Apple views the move as temporary, but given the company’s struggles and employee exodus, shipping its own replacement certainly doesn’t feel like a given.
Google launched a redesigned Build mode in AI Studio that lets anyone generate and deploy a web app from a simple text prompt. The update, branded as “vibe coding,” is available now at ai.studio/build and requires no payment info to begin. Users can mix Gemini 2.5 Pro with tools like Veo, Imagen, and Flashlight, edit the full React/TypeScript source, and push directly to GitHub or Cloud Run. An “I’m Feeling Lucky” button auto-creates app concepts for inspiration, while advanced models and Cloud Run deployment unlock only after adding a paid API key. The hands-on demo showed a novice building a working dice-rolling app in 65 seconds, highlighting how far the barrier to AI app creation has fallen. That speed and simplicity position Google’s offering as a direct challenger to developer-oriented tools like OpenAI’s Codex and Anthropic’s Claude Code, according to the article.
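For a sense of scale, the core logic of an app like the demo’s dice roller fits in a few lines of TypeScript. This is a hypothetical sketch of that kind of generated logic, not the actual code Build mode produced in the demo:

```typescript
// Hypothetical sketch: the core logic a "vibe coded" dice-rolling app
// might contain. Math.random() returns a float in [0, 1), so the
// result is a uniformly distributed integer in [1, sides].
function rollDice(sides: number = 6): number {
  return Math.floor(Math.random() * sides) + 1;
}

console.log(rollDice()); // prints an integer from 1 to 6
```

The generated React app wraps logic like this in UI components; the editable source in Build mode means users can tweak such details by hand after generation.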
Recent Pew Research found that when Google shows an AI Overview summary, only 8% of users click through to actual websites, versus 15% when there is no AI summary. That is roughly a 50% drop in clicks. For questions starting with “who,” “what,” “when,” or “why,” Google now triggers AI summaries 60% of the time. Users rarely click the sources cited in AI summaries; it happens in just 1% of visits to pages with AI Overviews.
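The “roughly 50%” figure is a relative drop, computed from Pew’s two click-through rates:

```typescript
// Pew's figures: 8% of visits with an AI Overview produce a click
// through to a website, versus 15% without one.
const withOverview = 0.08;
const withoutOverview = 0.15;

// Relative drop: (0.15 - 0.08) / 0.15 = 7/15 ≈ 0.467
const relativeDrop = (withoutOverview - withOverview) / withoutOverview;
console.log(`${(relativeDrop * 100).toFixed(0)}% relative drop`); // prints "47% relative drop"
```

So the precise figure is about 47%, which the coverage rounds to “a 50% drop.”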