AI-GenAI

1579 bookmarks
U.S. women more concerned than men about some AI developments, especially driverless cars

Women in the United States are more skeptical than men about some uses of artificial intelligence (AI), particularly the possible widespread use of driverless passenger vehicles, according to a new analysis of Pew Research Center survey data collected in November 2021. The analysis also finds gender differences in views about the overall impact that technology has on society and some safety issues tied to AI applications, as well as the importance of including different groups in the AI design process.

·pewresearch.org·
Stop Saying “Let’s Just Be Flexible with AI”

The tricky part is that AI changes weekly. So how can we be concrete about something so fluid?

Here’s how I’ve started to think about it: Be flexible about tools, but concrete about values.

Students don’t need us to predict the future of AI. They need us to articulate the principles that guide our choices. That might be things like:

Transparency: Always disclose when AI is used.
Integrity: Use AI to assist thinking, not replace it.
Learning: Choose methods that strengthen your own skills.

When students internalize these values, they can adapt them to whatever new tool emerges next semester: Claude, Gemini, Perplexity, or something we haven’t heard of yet.

A good AI policy, like a good syllabus, isn’t a list of prohibitions. It’s a shared framework for reasoning through change.

·substack.com·
Artificial Intelligence Is Hitting Politics. Nobody Knows Where It Will End. - Bytes Europe

“AI should not be used to put words in anyone’s mouth,” Sen. Elizabeth Warren told NOTUS. “AI is creating something that does not exist, and when our politics head down that path, we’re in trouble.”

Democratic Sen. Andy Kim told NOTUS that adopting AI in political ads could lead politics “down a dark path.”

“We need to be very strong and clear from the outset that it would be wrong and really disastrous for our democracy when we start to see those types of attacks,” Kim said.

·byteseu.com·
Responsible AI in Research: Highlights from the NCRM Annual Lecture 2025
The NCRM Annual Lecture 2025 explored the topic of responsible AI in research. The free event took place on Wednesday, 1 October 2025 at The British Academy in London and was streamed online. Four panellists offered expert insight on this crucial topic and answered questions from the audience. The panellists were: Professor Dame Wendy Hall of the University of Southampton, Professor David De Roure of the University of Oxford, Dr Zeba Khanam of BT and Dr Mark Carrigan of The University of Manchester. This video features some of the highlights of the event. The National Centre for Research Methods (NCRM) delivers research methods training through short courses and free online resources: https://www.ncrm.ac.uk
·m.youtube.com·
Reddit v. SerpApi et al

Reddit filed a lawsuit against Perplexity and three other data-scraping companies, accusing them of circumventing protections to steal copyrighted content for AI training.

·documentcloud.org·
Lovable
New Shopify integration for building online stores via prompts
·lovable.dev·
GM is bringing Google Gemini-powered AI assistant to cars in 2026 | TechCrunch
General Motors will add a conversational AI assistant powered by Google Gemini to its cars, trucks, and SUVs starting next year, the U.S. automaker said Wednesday during an event in New York City.
·techcrunch.com·
Amazon unveils AI smart glasses for its delivery drivers | TechCrunch

Amazon is trialing AI-powered smart glasses that give delivery drivers hands-free scanning, navigation, safety cues, and proof-of-delivery to speed up last-mile routes.

More Insights:

Glasses overlay hazards and tasks; scan packages, guide turn-by-turn on foot, and capture delivery proof.

Auto-activate when the van parks; help find the right parcel in-vehicle and navigate complex apartments/businesses.

Paired vest controller adds physical controls, a swappable battery, and an emergency button.

Works with prescription and light-adapting lenses; pilots underway in North America ahead of broader rollout.

Roadmap: wrong-address “defect” alerts, pet detection, and low-light adjustments; launched alongside “Blue Jay” warehouse arm and Eluna AI ops tool.

Why it matters: If AR meaningfully cuts seconds per stop and reduces errors, it could reshape the economics—and safety—of last-mile logistics, signaling a future where AI quietly augments every movement of frontline work.

·techcrunch.com·
Greentime

We help classroom and environmental educators ethically use AI to create human-centered learning experiences.

·greentime.ai·
Meta Allows Deepfake of Irish Presidential Candidate To Spread for 12 Hours Before Removal - Slashdot
Meta removed a deepfake video from Facebook that falsely depicted Catherine Connolly withdrawing from Ireland's presidential election. The video was posted to an account called RTE News AI and viewed almost 30,000 times over 12 hours before the Irish Independent contacted the platform.
·tech.slashdot.org·
Reddit Sues Perplexity For Scraping Data To Train AI System - Slashdot
Social media platform Reddit sued AI startup Perplexity in New York federal court on Wednesday, accusing it and three other companies of unlawfully scraping its data to train Perplexity's AI-based search engine.
·yro.slashdot.org·
The Majority AI View - Anil Dash
Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.
·anildash.com·
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory

Key findings:

45% of all AI answers had at least one significant issue.
31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
20% contained major accuracy issues, including hallucinated details and outdated information.
Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.

According to the Reuters Institute’s Digital News Report 2025, 7% of total online news consumers use AI assistants to get their news, rising to 15% of under-25s.

·ebu.ch·
DHS Asks OpenAI to Unmask User Behind ChatGPT Prompts, Possibly the First Such Case

A Department of Homeland Security child-exploitation unit secured what Forbes calls the first federal search warrant seeking OpenAI user data. Investigators want records linked to a ChatGPT user they say runs a child-abuse website. Court filings show the suspect shared benign prompts about Star Trek and a 200,000-word Trump-style poem with an undercover agent. DHS is not requesting identifying information from OpenAI because agents believe they have already tracked down the 36-year-old former U.S. Air Force base worker. Forbes calls the warrant a turning point, noting AI companies have largely escaped the data grabs familiar to social networks and search engines. The outlet says law enforcement now views chatbot providers as fresh troves of evidence.

·gizmodo.com·
Netflix ‘all in’ on leveraging AI as the tech creeps into entertainment industry

Netflix’s latest earnings letter tells shareholders the company is “all in” on generative AI across its streaming platform. It frames the technology as essential to sharpening recommendations, boosting its ads business, and accelerating content creation. The service points to Happy Gilmore 2, where AI de-aged characters, and to the Billionaires’ Bunker series, where AI guided wardrobe and set design, as proof of early gains. On the earnings call, CEO Ted Sarandos stressed that AI enhances production speed but “can’t automatically make you a great storyteller.”

·cnbc.com·