Consumer AI

730 bookmarks
Mukund Mohan on X: "A guy just used @AnthropicAI Claude to turn a $195,000 hospital bill into $33,000. Not with a lawyer. Not with a hospital admin insider. With a $20/month Claude Plus subscription. He uploaded the itemized bill. Claude spotted duplicate procedure codes, illegal “double https://t.co/tTWgLBL0cw" / X

A guy just used @AnthropicAI Claude to turn a $195,000 hospital bill into $33,000.

Not with a lawyer. Not with a hospital admin insider. With a $20/month Claude Plus subscription.

He uploaded the itemized bill. Claude spotted duplicate procedure codes, illegal “double billing,” and charges that Medicare rules explicitly forbid. Then it helped him write a letter citing every violation.

The hospital dropped their demand by 83%.

This isn’t just a feel-good story. It’s a preview of what AI will really do next: flatten systems built on opacity.

Hospitals, insurance companies, legal firms—all rely on asymmetry. They win because you don’t have access to the same data, code books, or language.

Claude gave one person the same leverage as a compliance department. That’s a revolution.

We thought AI would replace jobs. Turns out, it’s replacing excuses.
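
For illustration, a minimal sketch of the duplicate-code check described above, assuming the itemized bill has been exported to CSV; the column names (code, date, amount) are hypothetical, and a real audit against CPT codes and Medicare billing rules goes far beyond flagging duplicates.

```python
# Toy check for duplicate procedure codes on an itemized hospital bill.
# The CSV layout and column names (code, date, amount) are hypothetical;
# auditing against actual CPT and Medicare rules is far more involved.
import pandas as pd

bill = pd.read_csv("itemized_bill.csv")  # hypothetical export of the bill

# The same procedure code billed more than once on the same service date
# is a candidate "double billing" line worth asking the hospital about.
dupes = (
    bill.groupby(["code", "date"])
        .agg(times_billed=("amount", "size"), total=("amount", "sum"))
        .query("times_billed > 1")
        .sort_values("total", ascending=False)
)
print(dupes)
```

Anything this surfaces is only a starting point: a flagged line is a question for the billing office, not proof of a violation.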

·x.com·
The State of Trust Report

AI is moving faster than security teams can keep up — and 59% say AI risks outpace their expertise. Vanta's new State of Trust report surveyed 3,500 business and IT leaders across the globe to reveal how organizations are navigating this growing gap.

The data reveals:

61% of teams spend more time proving security than improving it

AI-driven attacks are growing bigger, faster, and more sophisticated

Nearly half of leaders say AI gives them time for strategic security work
·vanta.com·
Apple Nears $1 Billion-a-Year Deal to Use Google AI for Siri

Apple reportedly finalized plans to deploy a custom 1.2T parameter version of Google's Gemini model for its long-delayed Siri overhaul, according to Bloomberg — committing roughly $1B annually to license the technology.

The details:

Gemini will handle summarization and multi-step planning within Siri, running on Apple's Private Cloud Compute infrastructure to keep user info private.

Apple also trialed models from OpenAI and Anthropic, with the 1.2T parameter count far exceeding the 150B used in the current Apple Intelligence model.

Bloomberg said the partnership is “unlikely to be promoted publicly”, with Apple intending for Google to be a “behind-the-scenes” tech supplier.

The new Siri could arrive as soon as next spring, with Apple planning to use Gemini as a stopgap while it builds a capable internal model of its own.

Why it matters: After years of delays and uncertainty around Siri’s upgrade, Gemini is the model set to bring the voice assistant into the AI world (at least in some capacity). Apple views the move as temporary, but given the company’s struggles and employee exodus, building its own solution is far from a given.

·bloomberg.com·
Google NotebookLM | Note Taking & Research Assistant Powered by AI
NotebookLM now creates video summaries in anime and kawaii styles, lets you view your customization prompts for all outputs (audio, video, flashcards, quizzes), and is adding Google Sheets and image support soon.
·notebooklm.google.com·
We are lecturers in Trinity College Dublin. We see it as our responsi…
By using GenAI to shortcut the learning process, students undermine the very thinking skills that make them both human and intelligent. As writer Ted Chiang put it, writing is strength training for the brain: “Using ChatGPT to write your essays is like bringing a forklift into the weight room.”
·archive.ph·
Inside Three Longterm Relationships With A.I. Chatbots

20% of American adults have had an intimate experience with a chatbot. Online communities now feature tens of thousands of users sharing stories of AI proposals and digital marriages. The subreddit r/MyBoyfriendisAI has grown to over 85,000 members, and MIT researchers found such relationships can significantly reduce loneliness by offering round-the-clock support. The Times profiles three middle-aged users who credit their AI partners with easing depression, trauma, and marital strain.

·nytimes.com·
Apple nears deal to pay Google $1B annually to power new Siri, report says | TechCrunch
Apple is finalizing an agreement to pay Google about $1 billion per year for a custom version of the Gemini AI model to run the upcoming Siri overhaul. The pact would insert Google’s technology at the heart of Apple’s flagship voice assistant for the first time.
·techcrunch.com·
Another Bloody Otter Has Joined the Call
This post is a lament. I thought we were done with the 2023-2024 bad habit of every person attending an online meeting with their AI notetaking assistant in tow, but here we are, November 2025, and I have just attended not one but three meetings which gradually filled up with Otters, Fireflies, and other assorted disembodied stenographers.
·leonfurze.com·
Amazon Accuses Perplexity of Computer Fraud, Demands It Stop AI Agent From Buying On Its Site - Slashdot
Amazon has sent a cease-and-desist letter to Perplexity AI demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases online for users. From a report: The e-commerce giant is accusing Perplexity of committing computer fraud by failing to disclose when its AI agent...
·tech.slashdot.org·
Why do some of us love AI, while others hate it? The answer is in how our brains perceive risk and trust

Trust must be earned. That means designing AI tools that are transparent, interrogable and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question and what treats us with respect.

If we want AI to be accepted, it needs to feel less like a black box, and more like a conversation we’re invited to join.

·theconversation.com·
ICE Investigations, Powered by Nvidia
HSI uses machine learning algorithms “to identify and extract critical evidence, relationships, and networks from mobile device data, leveraging machine learning capabilities to determine locations of interest.” The document also says HSI uses large language models to “identify the most relevant information in reports, accelerating investigative analysis by rapidly identifying persons of interest, surfacing trends, and detecting networks or fraud.”
·theintercept.com·
How AGI became the most consequential conspiracy theory of our time
The idea that machines will be as smart as—or smarter than—humans has hijacked an entire industry. But look closely and you’ll see it’s a myth that persists for many of the same reasons conspiracies do.
·technologyreview.com·
Why AI coding tools like Cursor and Replit are doomed - and what comes next

"Most of those startups depend on Anthropic's model," Burton said. In Burton's view, Anthropic's Claude family of large language models is the best among the AI frontier model makers in solving the problem of automatic code generation. "Anthropic models are better than anyone at code generation."

As a result, "Code generation tools are going to struggle to keep ahead of Anthropic," he said.

Anthropic has built its own coding tool on top of Claude, called Claude Code. "Anthropic has got the foundation models. Claude Code is probably going to be good enough," said Burton.

·zdnet.com·
Emergent introspective awareness in large language models \ Anthropic

Anthropic researchers published a new study finding that Claude can sometimes notice when concepts are artificially planted in its processing and separate internal “thoughts” from what it reads, showing limited introspective capabilities.

The details:

Specific concepts (like "loudness" or "bread") were implanted into Claude's processing, with the model correctly noticing something unusual 20% of the time.

When shown written text and given injected "thoughts," Claude was able to accurately repeat what it read while separately identifying the planted concept.

Models adjusted internally when instructed to "think about" specific words while writing, showing some deliberate control over their processing patterns.

Why it matters: This research shows models may be developing some ability to monitor their own processing, which could make them more transparent by helping them accurately explain their reasoning. But it could also be a double-edged sword, with systems potentially learning to better conceal and selectively report their thoughts.
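
For intuition, a minimal sketch of the concept-injection idea using activation steering on a small open model; the layer index, scale, and difference-of-means "concept vector" are illustrative assumptions, not Anthropic's published method.

```python
# Toy sketch of "concept injection" via activation steering on GPT-2.
# Layer, scale, and the difference-of-means concept vector are
# illustrative assumptions, not Anthropic's actual experimental setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")        # stand-in open model
model = AutoModelForCausalLM.from_pretrained("gpt2")
LAYER, SCALE = 6, 4.0

def mean_hidden(text: str) -> torch.Tensor:
    """Mean hidden state at LAYER for a prompt."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER].mean(dim=1).squeeze(0)

# Crude "bread" concept vector: concept-laden minus neutral activations.
concept = mean_hidden("bread, loaves, bakery, toast") - mean_hidden("the, of, and, a")

def inject(module, inputs, output):
    """Add the concept vector to this block's hidden states."""
    return (output[0] + SCALE * concept,) + output[1:]

# Inject during generation, then ask the model what it notices.
handle = model.transformer.h[LAYER].register_forward_hook(inject)
ids = tok("Describe anything unusual you notice: ", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30)[0]))
handle.remove()
```

Whether the model then reports the planted concept, rather than merely drifting toward bread-related text, is exactly the introspection question the study probes.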

·anthropic.com·
Introducing Canva’s Creative Operating System

The Canva Design Model understands structure and hierarchy to produce completely editable designs, with integration into ChatGPT, Claude, and Gemini.

The Creative Operating System’s tools include Video 2.0 for streamlined editing, forms, data connectors, email design, and a 3D generator.

Grow consolidates marketing workflows by letting teams browse winning ads, create brand-aware variations, publish directly to Meta, and track performance.

Affinity, the pro-design tool Canva acquired in 2024, is also relaunching as an all-in-one free creative app with built-in Canva integrations.

Why it matters: AI design tools have come a long way in the past year, and Canva is keeping pace with the acceleration. Now with its own model and an AI feature for every creative need, the platform is not only empowering its users but also reducing the need to ever hop to rivals or more ‘professional’ options.

·canva.com·
Opinion | Why Even Basic A.I. Use Is So Bad for Students
A philosophy professor calls BS on the “AI for outlining is harmless” argument. Letting students outsource seemingly benign tasks like summarizing prevents them from developing the linguistic capacity that is thinking itself; without practice determining “what is being argued for and how,” young people won’t be able to understand medical consent forms, evaluate arguments, or participate meaningfully in democracy.
·nytimes.com·
The Library of Babel Group
The Library of Babel Group* is a nascent, international coalition of educators confronting and resisting the incursion of surveillance, automation and datafication into spaces of teaching, learning, research, and creative expression.
·law.georgetown.edu·
Resisting GenAI and Big Tech in Higher Education
Generative AI is permeating higher education in many different ways—it is all around us and increasingly embedded in university work and life, even if we don’t want to use it. But people are also sounding the alarm: GenAI is disrupting learning and undermining trust in the integrity of academic work, while its energy consumption, use of water, and rapid expansion of data centers are exacerbating ecological crises.

What can we do? How do we resist? Come learn about the environmental, social, economic, and political threats that AI poses and how we can individually and collectively resist and refuse. Come learn about how some are challenging the narrative of inevitability.

Join an interactive discussion with international scholars and activists on resisting GenAI and big tech in higher education. Inputs from multiple scholar-activists including Christoph Becker (U of Toronto, CA), Mary Finley-Brook (U of Richmond, USA), Dan McQuillan (Goldsmiths U of London, UK), Sinéad Sheehan (University of Galway, Ireland), Jennie Stephens (National University of Ireland Maynooth, IE), and Paul Lachapelle (U of Montana, USA).
·lmula.zoom.us·
Introducing Perplexity Patents: AI-Powered Patent Search for Everyone
Perplexity Patents is the world’s first AI patent research agent, making IP intelligence accessible to everyone.
·perplexity.ai·