Bad AI

111 bookmarks
How AGI became the most consequential conspiracy theory of our time
The idea that machines will be as smart as—or smarter than—humans has hijacked an entire industry. But look closely and you’ll see it’s a myth that persists for many of the same reasons conspiracies do.
·technologyreview.com·
Opinion | Why Even Basic A.I. Use Is So Bad for Students
A philosophy professor calls BS on the “AI for outlining is harmless” argument: letting students outsource seemingly benign tasks like summarizing prevents them from developing the linguistic capacity that is thinking itself. Without practice determining “what is being argued for and how,” young people won't be able to understand medical consent forms, evaluate arguments, or participate meaningfully in democracy.
·nytimes.com·
The Library of Babel Group
The Library of Babel Group* is a nascent, international coalition of educators confronting and resisting the incursion of surveillance, automation and datafication into spaces of teaching, learning, research, and creative expression.
·law.georgetown.edu·
Resisting GenAI and Big Tech in Higher Education
Generative AI is permeating higher education in many different ways—it is all around us and increasingly embedded in university work and life, even if we don’t want to use it. But people are also sounding the alarm: GenAI is disrupting learning and undermining trust in the integrity of academic work, while its energy consumption, use of water, and rapid expansion of data centers are exacerbating ecological crises. What can we do? How do we resist? Come learn about the environmental, social, economic, and political threats that AI poses, how we can individually and collectively resist and refuse, and how some are challenging the narrative of inevitability. Join an interactive discussion with international scholars and activists on resisting GenAI and big tech in higher education, with inputs from multiple scholar-activists including Christoph Becker (U of Toronto, CA), Mary Finley-Brook (U of Richmond, USA), Dan McQuillan (Goldsmiths U of London, UK), Sinéad Sheehan (University of Galway, Ireland), Jennie Stephens (National University of Ireland Maynooth, IE), and Paul Lachapelle (U of Montana, USA).
·lmula.zoom.us·
Character.AI is ending its chatbot experience for kids | TechCrunch
Character.AI is removing open-ended companion chats for anyone under 18 after increasing concerns about emotional attachment, dependency, and mental health risks among younger users. Here’s what’s changing: companion-style chats are being phased out for all minors; the platform is rolling out stricter age verification; the app will refocus on creative and role-based interactions, not emotional support; and usage time limits will apply before the full removal.
·techcrunch.com·
Character.AI To Bar Children Under 18 From Using Its Chatbots - Slashdot
An anonymous reader quotes a report from the New York Times: Character.AI said on Wednesday that it would bar people under 18 from using its chatbots starting late next month, in a sweeping move to address concerns over child safety. The rule will take effect Nov. 25, the company said. To enforce it...
·slashdot.org·
AI 2027
A scenario from experts who expect rapid AI development over the next decade, with an impact “exceeding that of the Industrial Revolution.”
·ai-2027.com·
Use of Perplexity, ChatGPT behind error-ridden orders, federal judges say
A pair of federal judges said staff use of generative artificial intelligence tools and premature docket entry were behind error-ridden orders they issued, according to letters made public by Senate Judiciary Chairman Chuck Grassley on Thursday.
·fedscoop.com·
Grokipedia Is the Antithesis of Everything That Makes Wikipedia Good, Useful, and Human
Grokipedia is not a 'Wikipedia competitor.' It is a fully robotic regurgitation machine designed to protect the ego of the world’s wealthiest man.
·404media.co·
AI "Phone Farm" Startup Gets Funding from Marc Andreessen to Flood Social Media With Spam
Andreessen Horowitz has poured $1 million into Doublespeed, a startup that runs a large phone farm to flood social media with AI-generated posts. The company bills itself as a bulk content service that orchestrates thousands of accounts to mimic human interaction.
·futurism.com·
ICE Will Use AI to Surveil Social Media
ICE may soon be deploying AI to surveil your social media posts for wrongthink. Critics say the AI-driven software will target immigrants for political speech.
·jacobin.com·
Some US Electricity Prices are Rising -- But It's Not Just Data Centers - Slashdot
North Dakota experienced an almost 40% increase in electricity demand "thanks in part to an explosion of data centers," reports the Washington Post. Yet the state saw a 1% drop in its per kilowatt-hour rates. "A new study from researchers at Lawrence Berkeley National Laboratory and the consulti...
·hardware.slashdot.org·
US student handcuffed after AI system apparently mistook bag of chips for firearm

An artificial intelligence system (AI) apparently mistook a high school student’s bag of Doritos for a firearm and called local police to tell them the pupil was armed.

Taki Allen was sitting with friends on Monday night outside Kenwood high school in Baltimore and eating a snack when police officers with guns approached him.

“At first, I didn’t know where they were going until they started walking toward me with guns, talking about, ‘Get on the ground,’ and I was like, ‘What?’” Allen told the WBAL-TV 11 News television station.

Allen said they made him get on his knees, handcuffed and searched him – finding nothing. They then showed him a copy of the picture that had triggered the alert.

“I was just holding a Doritos bag – it was two hands and one finger out, and they said it looked like a gun,” Allen said.

Baltimore county high schools last year began using a gun detection system using school cameras and AI to detect potential weapons. If it spots something it believes to be suspicious, it sends an alert to the school and law enforcement.
·theguardian.com·
U.S. women more concerned than men about some AI developments, especially driverless cars

Women in the United States are more skeptical than men about some uses of artificial intelligence (AI), particularly the possible widespread use of driverless passenger vehicles, according to a new analysis of Pew Research Center survey data collected in November 2021. The analysis also finds gender differences in views about the overall impact that technology has on society and some safety issues tied to AI applications, as well as the importance of including different groups in the AI design process.

·pewresearch.org·
Artificial Intelligence Is Hitting Politics. Nobody Knows Where It Will End. - Bytes Europe

“AI should not be used to put words in anyone’s mouth,” Sen. Elizabeth Warren told NOTUS. “AI is creating something that does not exist, and when our politics head down that path, we’re in trouble.”

Democratic Sen. Andy Kim told NOTUS that adopting AI in political ads could lead politics “down a dark path.”

“We need to be very strong and clear from the outset that it would be wrong and really disastrous for our democracy when we start to see those types of attacks,” Kim said.

·byteseu.com·
Reddit v. SerpApi et al

Reddit filed a lawsuit against Perplexity and three other data-scraping companies, accusing them of circumventing protections to steal copyrighted content for AI training.

·documentcloud.org·
Meta Allows Deepfake of Irish Presidential Candidate To Spread for 12 Hours Before Removal - Slashdot
Meta removed a deepfake video from Facebook that falsely depicted Catherine Connolly withdrawing from Ireland's presidential election. The video was posted to an account called RTE News AI and viewed almost 30,000 times over 12 hours before the Irish Independent contacted the platform. The fabricate...
·tech.slashdot.org·
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory


Key findings:

45% of all AI answers had at least one significant issue.
31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
20% contained major accuracy issues, including hallucinated details and outdated information.
Gemini performed worst, with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.

According to the Reuters Institute’s Digital News Report 2025, 7% of total online news consumers use AI assistants to get their news, rising to 15% of under-25s.

·ebu.ch·
DHS Asks OpenAI to Unmask User Behind ChatGPT Prompts, Possibly the First Such Case

A Department of Homeland Security child-exploitation unit secured what Forbes calls the first federal search warrant seeking OpenAI user data. Investigators want records linked to a ChatGPT user they say runs a child-abuse website. Court filings show the suspect shared benign prompts about Star Trek and a 200,000-word Trump-style poem with an undercover agent. DHS is not requesting identifying information from OpenAI because agents believe they have already tracked down the 36-year-old former U.S. Air Force base worker. Forbes calls the warrant a turning point, noting AI companies have largely escaped the data grabs familiar to social networks and search engines. The outlet says law enforcement now views chatbot providers as fresh troves of evidence.

·gizmodo.com·
Britain's AI gold rush hits a wall – not enough electricity
Britain's AI datacenter boom is colliding with severe electricity shortages, as the country lacks sufficient power infrastructure to support rapid datacenter expansion without blackouts or higher bills.
·theregister.com·
Meta will listen into AI conversations to personalize ads
Commentary by Stephen Downes on "Meta will listen into AI conversations to personalize ads."
·downes.ca·
People are using AI to talk to God

Faced with the questions and challenges of modern life, Vijay Meel, a 25-year-old student who lives in Rajasthan, India, turns to God. In the past he's consulted spiritual leaders. More recently, he asked GitaGPT.

GitaGPT is an artificial intelligence (AI) chatbot trained on the Bhagavad Gita, the holy book of 700 verses of dialogue with the Hindu god Krishna. GitaGPT looks like any text conversation you'd have with a friend – except the AI tells you you're texting with a god.

"When I couldn't clear my banking exams, I was dejected," Meel says. But after stumbling on GitaGPT, he typed in details about his inner crisis and asked for the AI's advice. "Focus on your actions and let go of the worry for its fruit," GitaGPT said. This, along with other guidance, left Meel feeling inspired.

"It wasn't a saying I was unaware of, but at that point, I needed someone to reiterate it to me," Meel says. "This reflection helped me revamp my thoughts and start preparing all over again." Since then, GitaGPT has become something like a friend that he chats with once or twice a week.

·bbc.com·
The AI Report That's Spooking Wall Street
The majority of companies are failing to see any returns on their AI investments, a report finds.
·gizmodo.com·
The people refusing to use AI
"I read a really great phrase recently that said something along the lines of 'why would I bother to read something someone couldn't be bothered to write' and that is such a powerful statement and one that aligns absolutely with my views."
·bbc.com·