Bad AI
Character.AI is removing open-ended companion chats for anyone under 18 amid growing concerns about emotional attachment, dependency, and mental health risk among younger users. Here's what's changing:
- Companion-style chats are being phased out for all minors.
- The platform is rolling out stricter age verification.
- The app will refocus on creative and role-based interactions rather than emotional support.
- Usage time limits will apply in the run-up to the full removal.
Grokipedia is not a 'Wikipedia competitor.' It is a fully robotic regurgitation machine designed to protect the ego of the world’s wealthiest man.
Baltimore County high schools last year began using a gun detection system that pairs school cameras with AI to flag potential weapons. If it spots something it deems suspicious, it sends an alert to the school and to law enforcement.
An artificial intelligence (AI) system apparently mistook a high school student’s bag of Doritos for a firearm and called local police to tell them the pupil was armed.
Taki Allen was sitting with friends on Monday night outside Kenwood High School in Baltimore and eating a snack when police officers with guns approached him.
“At first, I didn’t know where they were going until they started walking toward me with guns, talking about, ‘Get on the ground,’ and I was like, ‘What?’” Allen told the WBAL-TV 11 News television station.
Allen said they made him get on his knees, handcuffed and searched him – finding nothing. They then showed him a copy of the picture that had triggered the alert.
“I was just holding a Doritos bag – it was two hands and one finger out, and they said it looked like a gun,” Allen said.
Women in the United States are more skeptical than men about some uses of artificial intelligence (AI), particularly the prospect of widespread driverless passenger vehicles, according to a new analysis of Pew Research Center survey data collected in November 2021. The analysis also finds gender differences in views about technology’s overall impact on society, about some safety issues tied to AI applications, and about the importance of including different groups in the AI design process.
“AI should not be used to put words in anyone’s mouth,” Sen. Elizabeth Warren told NOTUS. “AI is creating something that does not exist, and when our politics head down that path, we’re in trouble.”
Democratic Sen. Andy Kim told NOTUS that adopting AI in political ads could lead politics “down a dark path.”
“We need to be very strong and clear from the outset that it would be wrong and really disastrous for our democracy when we start to see those types of attacks,” Kim said.
Reddit filed a lawsuit against Perplexity and three other data-scraping companies, accusing them of circumventing protections to steal copyrighted content for AI training.
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory.
Key findings:
- 45% of all AI answers had at least one significant issue.
- 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
- 20% contained major accuracy issues, including hallucinated details and outdated information.
- Gemini performed worst, with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers use AI assistants to get their news, rising to 15% of under-25s.
A Department of Homeland Security child-exploitation unit secured what Forbes calls the first federal search warrant seeking OpenAI user data. Investigators want records linked to a ChatGPT user they say runs a child-abuse website. Court filings show the suspect shared benign prompts about Star Trek and a 200,000-word Trump-style poem with an undercover agent. DHS is not requesting identifying information from OpenAI because agents believe they have already tracked down the 36-year-old former U.S. Air Force base worker. Forbes calls the warrant a turning point, noting AI companies have largely escaped the data grabs familiar to social networks and search engines. The outlet says law enforcement now views chatbot providers as fresh troves of evidence.
Faced with the questions and challenges of modern life, Vijay Meel, a 25-year-old student who lives in Rajasthan, India, turns to God. In the past he's consulted spiritual leaders. More recently, he asked GitaGPT. GitaGPT is an artificial intelligence (AI) chatbot trained on the Bhagavad Gita, the Hindu holy book of 700 verses of dialogue with the god Krishna. GitaGPT looks like any text conversation you'd have with a friend – except the AI tells you you're texting with a god. "When I couldn't clear my banking exams, I was dejected," Meel says. But after stumbling on GitaGPT, he typed in details about his inner crisis and asked for the AI's advice. "Focus on your actions and let go of the worry for its fruit," GitaGPT said. This, along with other guidance, left Meel feeling inspired. "It wasn't a saying I was unaware of, but at that point, I needed someone to reiterate it to me," Meel says. "This reflection helped me revamp my thoughts and start preparing all over again." Since then, GitaGPT has become something like a friend, one he chats with once or twice a week.