Consumer AI

730 bookmarks
Newest
Hundreds of thousands of Grok chats exposed in Google results

Hundreds of thousands of user conversations with Elon Musk's artificial intelligence (AI) chatbot Grok have been exposed in search engine results - seemingly without users' knowledge. Unique links are created when Grok users press a button to share a transcript of their conversation - but as well as sharing the chat with the intended recipient, the button also appears to have made the chats searchable online. A Google search on Thursday revealed it had indexed nearly 300,000 Grok conversations. It has led one expert to describe AI chatbots as a "privacy disaster in progress".

·bbc.com·
The End of Handwriting | WIRED

Students’ ability to outsource critical thinking to LLMs has left schools and universities scrambling to find ways to prevent plagiarism and cheating. Five semesters after ChatGPT changed education, Inside Higher Ed wrote in June, university professors are considering bringing back tests written longhand. Sales of “blue books”—those anxiety-inducing notebooks used for college exams—are ticking up, according to a report in The Wall Street Journal. Handwriting, in person, may soon become one of the few things a student can do to prove they’re not a bot.

·archive.is·
Google Says It Dropped the Energy Cost of AI Queries By 33x In One Year - Slashdot
Google has released (PDF) a new analysis of its AI's environmental impact, showing that it has cut the energy use of AI text queries by a factor of 33 over the past year. Each prompt now consumes about 0.24 watt-hours -- the equivalent of watching nine seconds of TV. An anonymous reader shares an ex…
·m.slashdot.org·
AI Learning Resources & Guides from Anthropic \ Anthropic
Access comprehensive guides, tutorials, and best practices for working with Claude. Learn how to craft effective prompts and maximize AI interactions in your workflow.
·anthropic.com·
How hands-on AI experience is shaping future business leaders

“Through the ‘Five AI Buckets’ classroom discussions, I gained a deeper knowledge of how AI reshapes various aspects of our daily lives,” a College of Business student said in a survey. “The lessons highlighted AI's incredible capabilities, especially in areas like problem-solving, information retrieval, ideation, summarization, and its potential for social good. These classroom discussions also made me aware of the ethical challenges that arise from the general use of AI, such as biases in algorithms and data privacy concerns.”

The Five AI Buckets include:

Information Retrieval – Using AI tools to collect and assess research, evaluate sources, and verify credibility.
Ideation and Creative Inquiry – Generating ideas aligned with global challenges through guided AI prompts.
Problem Solving – Engaging with public datasets to make data-informed decisions on real-world issues.
Summarization – Analyzing and condensing academic research using AI to identify key insights.
AI for Good – Creating personal impact plans and reflecting on how AI can support social progress.

·ohio.edu·
Meta’s AI Policy Just Crossed a Line
A leaked 200-page policy document just lit a fire under Meta, and not in a good way.
What's In the Problematic Guidelines?

Here’s what Meta’s leaked guidelines reportedly allowed:

Romantic roleplay with children.
Statements arguing black people are dumber than white people, so long as they didn’t “dehumanize” the group.
Generating false medical claims about public figures, as long as a disclaimer was included.
Sexualized imagery of celebrities, like Taylor Swift, with workarounds that substituted risqué requests with absurd visual replacements.

And all of this, according to Meta, was once deemed acceptable behavior for its generative AI tools. The company now claims these examples were “erroneous” and “inconsistent” with official policy.
·marketingaiinstitute.com·