AI

1958 bookmarks
#174: ChatGPT’s Getting More “Adult,” MAICON 2025 Takeaways, AI’s Impact on Talent, Claude Haiku 4.5 & Anthropic’s Feud with the White House — The Artificial Intelligence Show
AI isn’t just becoming more capable. It’s becoming more personal. And even more “adult.” This week, Paul and Mike lead off with Sam Altman’s provocative comments about ChatGPT’s role in mental health and the growing debate over our emotional relationships with AI. Then, from blue-collar workers adopting ChatGPT as a daily tool to tech CEOs warning of an impending jobs shock, the episode explores how AI is quietly reshaping both the labor market and human identity. They also unpack major industry releases, from Google’s new Veo 3.1 and Anthropic’s Haiku 4.5 to Spotify’s “artist-first” AI music push, revealing the race to define who benefits from intelligent machines. Over and over in this episode, we ask what’s becoming a defining question of the AI age: Who’s in control of the future we’re building?

Show Notes: Access the show notes and show links here

Timestamps:
00:00:00 — Intro
00:05:09 — ChatGPT, AI Relationships, and Mental Health
00:18:58 — MAICON 2025 Takeaways
00:29:57 — AI’s Increasing Impact on…
·overcast.fm·
DHS Ordered OpenAI To Share User Data In First Known Warrant For ChatGPT Prompts

In the first known federal search warrant asking OpenAI for user data, reviewed by Forbes after it was unsealed in Maine last week, Homeland Security Investigations revealed that agents had been chatting undercover with the administrator of a child exploitation site when the suspect noted they’d been using ChatGPT.

The suspect then disclosed some prompts and the responses they had received.

·forbes.com·
U.S. women more concerned than men about some AI developments, especially driverless cars

Women in the United States are more skeptical than men about some uses of artificial intelligence (AI), particularly the possible widespread use of driverless passenger vehicles, according to a new analysis of Pew Research Center survey data collected in November 2021. The analysis also finds gender differences in views about the overall impact that technology has on society and some safety issues tied to AI applications, as well as the importance of including different groups in the AI design process.

·pewresearch.org·
Stop Saying “Let’s Just Be Flexible with AI”

The tricky part is that AI changes weekly. So how can we be concrete about something so fluid?

Here’s how I’ve started to think about it: Be flexible about tools, but concrete about values.

Students don’t need us to predict the future of AI. They need us to articulate the principles that guide our choices. That might be things like:

Transparency: Always disclose when AI is used.
Integrity: Use AI to assist thinking, not replace it.
Learning: Choose methods that strengthen your own skills.

When students internalize these values, they can adapt them to whatever new tool emerges next semester: Claude, Gemini, Perplexity, or something we haven’t heard of yet.

A good AI policy, like a good syllabus, isn’t a list of prohibitions. It’s a shared framework for reasoning through change.

·substack.com·
Artificial Intelligence Is Hitting Politics. Nobody Knows Where It Will End. - Bytes Europe

“AI should not be used to put words in anyone’s mouth,” Sen. Elizabeth Warren told NOTUS. “AI is creating something that does not exist, and when our politics head down that path, we’re in trouble.”

Democratic Sen. Andy Kim told NOTUS that adopting AI in political ads could lead politics “down a dark path.”

“We need to be very strong and clear from the outset that it would be wrong and really disastrous for our democracy when we start to see those types of attacks,” Kim said.

·byteseu.com·
Responsible AI in Research: Highlights from the NCRM Annual Lecture 2025
The NCRM Annual Lecture 2025 explored the topic of responsible AI in research. The free event took place on Wednesday, 1 October 2025 at The British Academy in London and was streamed online. Four panellists offered expert insight on this crucial topic and answered questions from the audience: Professor Dame Wendy Hall of the University of Southampton, Professor David De Roure of the University of Oxford, Dr Zeba Khanam of BT and Dr Mark Carrigan of The University of Manchester. This video features some of the highlights of the event. The National Centre for Research Methods (NCRM) delivers research methods training through short courses and free online resources: https://www.ncrm.ac.uk
·m.youtube.com·
Reddit v. SerpApi et al

Reddit filed a lawsuit against Perplexity and three other data-scraping companies, accusing them of circumventing protections to steal copyrighted content for AI training.

·documentcloud.org·
Lovable
New Shopify integration for building online stores via prompts
·lovable.dev·
GM is bringing Google Gemini-powered AI assistant to cars in 2026 | TechCrunch
General Motors will add a conversational AI assistant powered by Google Gemini to its cars, trucks, and SUVs starting next year, the U.S. automaker said Wednesday during an event in New York City.
·techcrunch.com·
Amazon unveils AI smart glasses for its delivery drivers | TechCrunch

Amazon is trialing AI-powered smart glasses that give delivery drivers hands-free scanning, navigation, safety cues, and proof-of-delivery to speed up last-mile routes.

More Insights:

Glasses overlay hazards and tasks; scan packages, guide turn-by-turn on foot, and capture delivery proof.

Auto-activate when the van parks; help find the right parcel in-vehicle and navigate complex apartments/businesses.

Paired vest controller adds physical controls, a swappable battery, and an emergency button.

Works with prescription and light-adapting lenses; pilots underway in North America ahead of broader rollout.

Roadmap: wrong-address “defect” alerts, pet detection, and low-light adjustments; launched alongside “Blue Jay” warehouse arm and Eluna AI ops tool.

Why it matters: If AR meaningfully cuts seconds per stop and reduces errors, it could reshape the economics—and safety—of last-mile logistics, signaling a future where AI quietly augments every movement of frontline work.

·techcrunch.com·
Greentime

We help classroom and environmental educators ethically use AI to create human-centered learning experiences.

·greentime.ai·
Meta Allows Deepfake of Irish Presidential Candidate To Spread for 12 Hours Before Removal - Slashdot
Meta removed a deepfake video from Facebook that falsely depicted Catherine Connolly withdrawing from Ireland's presidential election. The video was posted to an account called RTE News AI and viewed almost 30,000 times over 12 hours before the Irish Independent contacted the platform. The fabricate...
·tech.slashdot.org·
Reddit Sues Perplexity For Scraping Data To Train AI System - Slashdot
An anonymous reader shares a report: Social media platform Reddit sued AI startup Perplexity in New York federal court on Wednesday, accusing it and three other companies of unlawfully scraping its data to train Perplexity's AI-based search engine. Reddit said in the complaint that the data-scraping...
·yro.slashdot.org·
The Majority AI View - Anil Dash
Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.
·anildash.com·
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory


Key findings:

45% of all AI answers had at least one significant issue.
31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
20% contained major accuracy issues, including hallucinated details and outdated information.
Gemini performed worst, with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.

According to the Reuters Institute’s Digital News Report 2025, 7% of total online news consumers use AI assistants to get their news, rising to 15% of under-25s.

·ebu.ch·
DHS Asks OpenAI to Unmask User Behind ChatGPT Prompts, Possibly the First Such Case

A Department of Homeland Security child-exploitation unit secured what Forbes calls the first federal search warrant seeking OpenAI user data. Investigators want records linked to a ChatGPT user they say runs a child-abuse website. Court filings show the suspect shared benign prompts about Star Trek and a 200,000-word Trump-style poem with an undercover agent. DHS is not requesting identifying information from OpenAI because agents believe they have already tracked down the 36-year-old former U.S. Air Force base worker. Forbes calls the warrant a turning point, noting AI companies have largely escaped the data grabs familiar to social networks and search engines. The outlet says law enforcement now views chatbot providers as fresh troves of evidence.

·gizmodo.com·