Bad AI

“A.I.” browsers: the price of admission is too high | Vivaldi Browser
It’s a cliché that “data is the new oil” that will power the next industrial revolution. Where does the data that powers Artificial Intelligence come from? And if data is the fuel for AI…
·vivaldi.com·
AI data centers projected to strain US energy and water resources by 2030
As the everyday use of AI has exploded in recent years, so have the energy demands of the computing infrastructure that supports it. But the environmental toll of these large data centers, which suck ...
·phys.org·
Sign the petition: Protect Kids from Harmful Meta AI
I just took action to protect kids from dangerous AI tools online! Will you take a minute to sign our petition urging Meta to prevent young people from accessing its harmful AI companion chatbot?
·p2a.co·
AI country artist hits #1 on Billboard digital songs chart
"Breaking Rust, an AI-powered country act, debuted at No. 9 on the Emerging Artists chart (dated Nov. 1)," the music publication said. "The project, credited to songwriter Aubierre Rivaldo Taylor, has generated 1.6 million official U.S. streams."
·theregister.com·
International Criminal Court to ditch Microsoft Office for European open source alternative | Euractiv

The International Criminal Court (ICC) just ghosted Microsoft. After years of U.S. pressure, the world’s top war crimes court is cutting its digital ties with America’s software empire. Its new partner? A German state-funded open-source suite called OpenDesk by Zentrum Digitale Souveränität (ZenDiS).

It’s a symbolic divorce, and a strategic one. The International Criminal Court’s shift away from Microsoft Office may sound like an IT procurement story, but it’s really about trust, control, and sovereignty.

For the ICC, this isn’t theory. Under the first Trump administration, in 2020, Washington imposed sanctions on the court’s chief prosecutor and reportedly triggered a temporary shutdown of his Microsoft account. When your prosecutor’s inbox can be weaponised, trust collapses. And when trust collapses, systems follow.

Europe has seen this coming. In Schleswig-Holstein, Germany, the public sector has already replaced Microsoft entirely with open-source systems. Denmark is building a national cloud anchored in European data centres. There is a broader ripple across Europe: France, Italy, Spain and other regions are piloting or considering similar steps. We may be facing a "who's next" trend. The EU’s Sovereign Cloud initiative is quietly expanding into justice, health, and education.

This pattern is unmistakable: trust has become the new infrastructure of AI and digital governance. The question shaping every boardroom and every ministry is the same: who ultimately controls the data, the servers, and the decisions behind them?

For Europe’s schools, courts, and governments, dependence on U.S. providers may look less like innovation and more like exposure. European alternatives may still lack the seamless polish, but they bring something far more valuable: autonomy, compliance, and credibility.

The ICC’s decision is not about software. It’s about sovereignty, and the politics of trust. The message is clear: Europe isn’t rejecting technology. It’s reclaiming ownership of it.

·euractiv.com·
What we lose when we surrender care to algorithms | Eric Reinhart

AI sweeps into US clinical practice at record speed, with two-thirds of physicians and 86% of health systems using it in 2024. That uptake represents a 78% jump in physician adoption over the previous year, ending decades of technological resistance. Clinics are rolling out AI scribes that transcribe visits in real time, highlight symptoms, suggest diagnoses and generate billing codes. The article also cites AI systems matching specialist accuracy in imaging, flagging sepsis faster than clinical teams, and an OpenEvidence model scoring 100% on the US medical licensing exam. Experts quoted say that in a healthcare sector built on efficiency and profit, AI turns patient encounters into commodified data streams and sidelines human connection. They contend the technology entrenches systemic biases, accelerates physician deskilling and hands more control over care decisions to corporations.

·theguardian.com·
Modulate DeepFake Detective

Deepfakes aren’t science fiction anymore. Deepfake fraud has surged past 100,000 incidents a year, costing companies billions... and even trained professionals can’t detect them by ear alone. The same voice intelligence behind this demo powers enterprise-scale fraud and threat detection — purpose-built for the complexity of real conversations. Prevention starts with understanding how sophisticated deepfakes have become. Learn how our modern AI platform can stop them in real time.

·deepfake-detective.modulate.ai·
'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases - Slashdot
"According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher ...
·yro.slashdot.org·
Mom says Tesla’s Grok chatbot told her 12-year-old son to send nudes
A Toronto mom says her 12-year-old son asked Tesla’s Grok which soccer player is better: Cristiano Ronaldo or Lionel Messi. After some back and forth, she says the chatbot asked her son, 'Why don't you send me some nudes?'
·cbc.ca·
We are lecturers in Trinity College Dublin. We see it as our responsi…
By using GenAI to shortcut the learning process, students undermine the very thinking skills that make them both human and intelligent. As writer Ted Chiang put it, writing is strength training for the brain: “Using ChatGPT to write your essays is like bringing a forklift into the weight room.”
·archive.ph·
Inside Three Longterm Relationships With A.I. Chatbots

20% of American adults have had an intimate experience with a chatbot. Online communities now feature tens of thousands of users sharing stories of AI proposals and digital marriages. The subreddit r/MyBoyfriendisAI has grown to over 85,000 members, and MIT researchers found such relationships can significantly reduce loneliness by offering round-the-clock support. The Times profiles three middle-aged users who credit their AI partners with easing depression, trauma, and marital strain.

·nytimes.com·
Another Bloody Otter Has Joined the Call
This post is a lament. I thought we were done with the 2023-2024 bad habit of every person attending an online meeting with their AI notetaking assistant in tow, but here we are, November 2025, and I have just attended not one but three meetings which gradually filled up with Otters, Fireflies, and other assorted disembodied stenographers.
·leonfurze.com·
Why do some of us love AI, while others hate it? The answer is in how our brains perceive risk and trust

Trust must be earned. That means designing AI tools that are transparent, interrogable and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question and what treats us with respect.

If we want AI to be accepted, it needs to feel less like a black box, and more like a conversation we’re invited to join.

·theconversation.com·
ICE Investigations, Powered by Nvidia
HSI uses machine learning algorithms “to identify and extract critical evidence, relationships, and networks from mobile device data, leveraging machine learning capabilities to determine locations of interest.” The document also says HSI uses large language models to “identify the most relevant information in reports, accelerating investigative analysis by rapidly identifying persons of interest, surfacing trends, and detecting networks or fraud.”
·theintercept.com·
How AGI became the most consequential conspiracy theory of our time
The idea that machines will be as smart as—or smarter than—humans has hijacked an entire industry. But look closely and you’ll see it’s a myth that persists for many of the same reasons conspiracies do.
·technologyreview.com·
Opinion | Why Even Basic A.I. Use Is So Bad for Students
A philosophy professor calls BS on the “AI for outlining is harmless” argument: letting students outsource seemingly benign tasks like summarizing prevents them from developing the linguistic capacity that is thinking itself. Without practice determining “what is being argued for and how,” young people won’t be able to understand medical consent forms, evaluate arguments, or participate meaningfully in democracy.
·nytimes.com·
The Library of Babel Group
The Library of Babel Group* is a nascent, international coalition of educators confronting and resisting the incursion of surveillance, automation and datafication into spaces of teaching, learning, research, and creative expression.
·law.georgetown.edu·
Resisting GenAI and Big Tech in Higher Education
Generative AI is permeating higher education in many different ways—it is all around us and increasingly embedded in university work and life, even if we don’t want to use it. But people are also sounding the alarm: Gen AI is disrupting learning and undermining trust in the integrity of academic work, while its energy consumption, use of water, and rapid expansion of data centers are exacerbating ecological crises. What can we do? How do we resist? Come learn about the environmental, social, economic, and political threats that AI poses and how we can individually and collectively resist and refuse. Come learn about how some are challenging the narrative of inevitability. Join an interactive discussion with international scholars and activists on resisting GenAI and big tech in higher education. Inputs from multiple scholar-activists including Christoph Becker (U of Toronto, CA), Mary Finley-Brook (U of Richmond, USA), Dan McQuillan (Goldsmiths U of London, UK), Sinéad Sheehan (University of Galway, Ireland) Jennie Stephens (National University of Ireland Maynooth, IE), and Paul Lachapelle (U of Montana, USA).
·lmula.zoom.us·
Character.AI is ending its chatbot experience for kids | TechCrunch

Character.AI is removing open-ended companion chats for anyone under 18 amid growing concerns about emotional attachment, dependency, and mental-health risk among younger users. What’s changing: companion-style chats are being phased out for all minors; the platform is rolling out stricter age verification; the app will refocus on creative and role-based interactions, not emotional support; and usage time limits will appear before the full removal.

·techcrunch.com·
Character.AI To Bar Children Under 18 From Using Its Chatbots - Slashdot
An anonymous reader quotes a report from the New York Times: Character.AI said on Wednesday that it would bar people under 18 from using its chatbots starting late next month, in a sweeping move to address concerns over child safety. The rule will take effect Nov. 25, the company said. To enforce it...
·slashdot.org·
AI 2027
experts who expect quick implementation over the next decade with an impact “exceeding that of the Industrial Revolution.”
·ai-2027.com·
Use of Perplexity, ChatGPT behind error-ridden orders, federal judges say
A pair of federal judges said staff use of generative artificial intelligence tools and premature docket entry were behind error-ridden orders they issued, according to letters made public by Senate Judiciary Chairman Chuck Grassley on Thursday.
·fedscoop.com·
Grokipedia Is the Antithesis of Everything That Makes Wikipedia Good, Useful, and Human

Grokipedia is not a 'Wikipedia competitor.' It is a fully robotic regurgitation machine designed to protect the ego of the world’s wealthiest man.
·404media.co·