Consumer AI

730 bookmarks
OpenAI no longer forced to save deleted chats—but some users still affected

OpenAI will stop saving most #ChatGPT users’ deleted #chats

OpenAI will finally stop saving most ChatGPT users' deleted and temporary chats after a court fight compelled the #AI firm to retain the logs "indefinitely."

The preservation order came in a lawsuit filed by The New York Times and other news plaintiffs, who alleged that users trying to skirt #paywalls with ChatGPT would be most likely to set their chats to temporary or delete the logs.

copyright

https://arstechnica.com/tech-policy/2025/10/openai-no-longer-forced-to-save-deleted-chats-but-some-users-still-affected/

·arstechnica.com·

The AI Application Spending Report: Where Startup Dollars Really Go | Andreessen Horowitz

We then identified the top 50 AI-native application layer companies – similar to our Top 100 Gen AI Consumer Apps, but built around spend data versus web traffic data.

Unlike infrastructure providers, which reflect the capabilities startups are enabling (compute, models, developer tools), these companies show where AI is actually being applied in products and workflows, and that distinction matters: this ranking gives us a real-time signal of what early-stage startups are “buying” in AI.

·a16z.com·

AI Incidents Are Up 30%. It's Time to Build a Playbook for When AI Fails.
AI incidents and hazards surged by 30% in the last six months, according to OECD data. These failures are already causing real harm: chatbots allegedly helping craft explosives, Microsoft disrupting $4 billion in AI-enabled fraud, and health insurance systems allegedly incorrectly denying coverage for critical care.
·linkedin.com·

Microsoft adds Copilot adoption benchmarks to Viva Insights

According to Microsoft, an "active Copilot user" is one who "performed an intentional action for an AI-powered capability in Copilot within Microsoft Teams, Microsoft 365 Copilot Chat (work), Outlook, Word, Excel, PowerPoint, OneNote, or Loop."

It makes sense to track Copilot use – those licenses aren't cheap – but benchmarking adoption may be seen by some as a step too far for something still struggling to prove its worth, especially with the risk of turning it into a leaderboard game.

·theregister.com·

Octave 2: next-generation multilingual voice AI • Hume AI
Today we’re launching Octave 2, the second generation of our frontier voice AI model for text-to-speech. We just made a preview of Octave 2 available on our platform and through our API.
·hume.ai·

Police issue warning over AI home invasion prank
A trend going viral on TikTok has people using AI to prank their loved ones into believing an intruder is in their home. Police are warning users that the prank could result in criminal charges. NBC News' Ellison Barber has more on the viral trend.
·nbcnews.com·

Classrooms embraced AI - training didn’t keep up, CDT warns

Prior research and experts warn that spending too much time with AI bots can have a negative effect on in-real-life (IRL) social skills - an outcome that may be more severe for young, developing minds. Teachers who responded to CDT's research appear to agree: 71 percent said they're worried AI weakens key academic skills such as writing and critical thinking... Only 11 percent of teachers said their training covered how to respond if they suspect a student's use of AI is harming their well-being, for example by hurting self-esteem or encouraging risky behavior.

·theregister.com·

The New Talk: The Need To Discuss AI With Kids

“[I]t is a massively more powerful and scary thing than I knew about.” That’s how Adam Raine’s dad characterized ChatGPT when he reviewed his son’s conversations with the AI tool. Adam tragically died by suicide. His parents are now suing OpenAI and Sam Altman, the company’s CEO, based on allegations that the tool contributed to his death.

·thefulcrum.us·

AI 2027
Experts expect quick implementation over the next decade, with an impact “exceeding that of the Industrial Revolution.”
·ai-2027.com·

Gemini at Work 2025
Today, at our Google Cloud event, we’re announcing Gemini Enterprise, the new front door for AI in the workplace.
·blog.google·

Sora 2 Watermark Removers Flood the Web
Bypassing Sora 2's rudimentary safety features is easy and experts worry it'll lead to a new era of scams and disinformation.
·404media.co·

Employees regularly paste company secrets into ChatGPT
According to a study by security biz LayerX, a large number of corporate users paste personally identifiable information (PII) or payment card industry (PCI) data right into ChatGPT, even when using the bot without permission.
·go.theregister.com·