Business AI
Apple has finalized plans to deploy a custom 1.2T-parameter version of Google's Gemini model for its long-delayed Siri overhaul, according to Bloomberg, committing roughly $1B annually to license the technology.
The details:
Gemini will handle summarization and multi-step planning within Siri, running on Apple's Private Cloud Compute infrastructure to keep user info private.
Apple also trialed models from OpenAI and Anthropic, with the 1.2T parameter count far exceeding the 150B used in the current Apple Intelligence model.
Bloomberg said the partnership is “unlikely to be promoted publicly”, with Apple intending for Google to be a “behind-the-scenes” tech supplier.
The new Siri could arrive as soon as next spring, with Apple planning to use Gemini as a stopgap while it builds a capable model of its own.
Why it matters: After years of delays and uncertainty around Siri’s upgrade, Gemini is the model set to bring the voice assistant into the AI world (at least in some capacity). Apple views the move as temporary, but given the company’s struggles and employee exodus, building its own solution hardly feels like a given.
Google launched a redesigned Build mode in AI Studio that lets anyone generate and deploy a web app from a simple text prompt. The update, branded as “vibe coding,” is available now at ai.studio/build and requires no payment info to begin. Users can mix Gemini 2.5 Pro with tools like Veo, Imagen, and Flash Lite, edit the full React/TypeScript source, and push directly to GitHub or Cloud Run. An “I’m Feeling Lucky” button auto-creates app concepts for inspiration, while advanced models and Cloud Run deployment unlock only after adding a paid API key. The hands-on demo showed a novice building a working dice-rolling app in 65 seconds, highlighting how far the barrier to AI app creation has fallen. That speed and simplicity position Google’s offering as a direct challenger to developer-oriented tools like OpenAI’s Codex and Anthropic’s Claude Code, according to the article.
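To make the React/TypeScript angle concrete, here is a minimal sketch of the kind of component a prompt like the dice-rolling demo might produce. The component name, layout, and styling are assumptions for illustration, not the actual code AI Studio generates.

```tsx
// Hypothetical sketch of a Build-mode-style dice-rolling app component.
// Names and structure are illustrative assumptions, not AI Studio's output.
import { useState } from "react";

// Roll a single six-sided die.
function rollDie(): number {
  return Math.floor(Math.random() * 6) + 1;
}

export default function DiceRoller() {
  const [value, setValue] = useState<number | null>(null);

  return (
    <div style={{ textAlign: "center", fontFamily: "sans-serif" }}>
      <h1>Dice Roller</h1>
      {/* Show the last roll, or a die emoji before the first roll */}
      <p style={{ fontSize: "4rem" }}>{value ?? "🎲"}</p>
      <button onClick={() => setValue(rollDie())}>Roll</button>
    </div>
  );
}
```

In Build mode, the user would be editing source roughly like this directly in the browser before pushing it to GitHub or deploying to Cloud Run.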
Recent Pew Research found that when Google shows an AI Overview summary, only 8% of users click through to actual websites (versus 15% when there's no AI summary). That's nearly a 50% drop in clicks. For questions starting with “who,” “what,” “when,” or “why,” Google now triggers AI summaries 60% of the time. Users rarely click the sources cited in AI summaries; it happens in just 1% of visits to pages with AI Overviews.
Google’s AI note-taking and research assistant NotebookLM now lets users customize the tone of their Audio Overviews, which are podcasts with AI virtual hosts that summarize and discuss documents shared with NotebookLM, such as course readings or legal briefs. When generating an Audio Overview, users can now choose whether they want their AI podcasts to be formatted as a “Deep Dive,” “Brief,” “Critique,” or “Debate.”
Teachers and Parents Can’t See AI Chat Transcripts
While Gemini may be “student safe,” only administrators can review chat histories. That’s a huge blind spot. If a student is confused by a Gemini response, misuses the tool, or gets inaccurate information, teachers and parents won’t know unless the student says something.
Is AI doing the thinking, or the student?
Many features encourage speed and convenience but could inadvertently promote over-reliance. Students can get summaries, answers, and explanations so easily that critical thinking risks taking a backseat.
There’s no way to track edits or usage
Gemini doesn’t offer version history for AI-generated content. That means teachers can’t see how a document evolved, or how much of it came from AI.
Equity gaps may widen
Some schools have tech coaches, training time, and infrastructure to support thoughtful AI use. Others don’t. Without equitable implementation support, Gemini’s benefits may be limited to already well-resourced districts.