Meta Launches Vibes, an Endless Feed of AI Slop for Your Viewing Displeasure - Slashdot
Meta has rolled out Vibes, an endless feed of AI-generated videos within its Meta AI app and meta.ai website. Users can create short-form synthetic videos from scratch or remix existing AI content from the feed, adding music and adjusting styles before redistributing the artificial output to Instagram...
Culture Magazine Urges Professional Writers to Resist AI, Boycott and Stigmatize AI Slop - Slashdot
The editors of the culture magazine n + 1 decry the "well-funded upheaval" caused by a large and powerful coalition of pro-AI forces. ("According to the logic of market share as social transformation, if you move fast and break enough things, nothing can contain you...")
"An extraordinary amount o...
Google announces new $4 billion investment in Arkansas
Google is announcing a new $4 billion investment in Arkansas through 2027, which will include Google's first data center in the state, located in West Memphis, along with cloud and AI infrastructure and local programs to increase energy resilience and affordability for local residents.
We're launching a $25 million Energy Impact Fund to help scale energy efficiency and affordability initiatives for residents in Crittenden County and the surrounding area. In addition, we're collaborating with Entergy to bring a new 600 MW solar project to the grid and implement programs to reduce power usage during peak hours.
Beyond infrastructure, Google's investing directly in Arkansas's people. We're boosting the state's talent pipeline by offering no-cost access to Google AI courses and Career Certificates for all residents, in partnership with the Arkansas Department of Commerce. This effort, starting with students at the University of Arkansas and Arkansas State University, is designed to unlock substantial economic opportunity and ensure Arkansas plays a key role in advancing the U.S. as a world leader in AI innovation.
AI Has Already Run Out of Training Data, Goldman's Data Chief Says - Slashdot
AI has run out of training data, according to Neema Raphael, Goldman Sachs' chief data officer and head of data engineering. "We've already run out of data," Raphael said on the bank's podcast. He said this shortage is already shaping how developers build new AI systems. China's DeepSeek may have ke...
Cops: Accused Vandal Confessed To ChatGPT - Slashdot
alternative_right shares a report from the Smoking Gun: Minutes after vandalizing 17 cars in a Missouri college parking lot, a 19-year-old sophomore had a lengthy ChatGPT conversation during which he confessed to the crime, asked about the possibility of getting caught, and wondered, "is there any w...
Bay Area University Issues Warning Over Man Using Meta AI Glasses On Campus - Slashdot
The University of San Francisco issued a campuswide alert after reports of a man using Meta Ray-Ban AI glasses to film students while making "unwanted comments and inappropriate dating questions." Although no violence has been reported, officials said he may be uploading footage to TikTok and Instagram...
Weapons of Mass Delusion Are Helping Kids Opt Out of Reality
Emily Tavoulareas says AI firms are actively enabling young children to trade real relationships for an illusion — or perhaps more aptly, for a delusion.
A leaked 200-page policy document just lit a fire under Meta, and not in a good way.
What's In the Problematic Guidelines?
Here’s what Meta’s leaked guidelines reportedly allowed:
Romantic roleplay with children.
Statements arguing black people are dumber than white people, so long as they didn’t “dehumanize” the group.
Generating false medical claims about public figures, as long as a disclaimer was included.
Sexualized imagery of celebrities, like Taylor Swift, with workarounds that swapped risqué requests for absurd visual replacements.
And all of this, according to Meta, was once deemed acceptable behavior for its generative AI tools.
The company now claims these examples were “erroneous” and “inconsistent” with official policy.
Colleges And Schools Must Block And Ban Agentic AI Browsers Now. Here’s Why.
Commentary by Stephen Downes on "Colleges And Schools Must Block And Ban Agentic AI Browsers Now. Here's Why."
Generative Artificial Intelligence in Qualitative Data Analysis: Analyzing—Or Just Chatting? - Duc Cuong Nguyen, Catherine Welch, 2025
In this paper, we take a step back and ask what sort of technological artifact is GenAI and evaluate whether it is appropriate for qualitative data analysis. We provide an accessible, technologically informed analysis of GenAI, specifically large language models (LLMs), and put to the test the claimed transformative potential of using GenAI in qualitative data analysis. Our evaluation illustrates significant shortcomings that, if the technology is adopted uncritically by management researchers, will introduce unacceptable epistemic risks. We explore these epistemic risks and emphasize that the essence of qualitative data analysis lies in the interpretation of meaning, an inherently human capability.
We keep catching employees pasting client data and internal docs into ChatGPT, even after repeated training sessions and warnings. It feels like a losing battle. The productivity gains are obvious, but the risk of data leakage is massive.
Yikes, this is worrisome. Generative AI assistants packaged up as browser extensions harvest personal data with minimal safeguards, researchers warn. Some of these extensions may violate their own …
Meta Plans To Sell Targeted Ads Based On Data In Your AI Chats - Slashdot
Meta will begin using data from AI chatbot conversations and other AI-powered products to fuel targeted advertising across Facebook and Instagram, with no way to opt out. The policy change, effective December 16, excludes users in South Korea, the UK, and the EU due to stricter privacy laws. TechCrunch…
A 'Godfather of AI' Remains Concerned as Ever About Human Extinction - Slashdot
Yoshua Bengio called for a pause on AI model development two years ago to focus on safety standards. Companies instead invested hundreds of billions of dollars into building more advanced models capable of executing long chains of reasoning and taking autonomous action. The A.M. Turing Award winner ...
Use of Generative AI in Scams - Schneier on Security
New report: “Scam GPT: GenAI and the Automation of Fraud.” This primer maps what we currently know about generative AI’s role in scams, the communities most at risk, and the broader economic and cultural shifts that are making people more willing to take risks, more vulnerable to deception, and more likely to either perpetuate scams or fall victim to them. AI-enhanced scams are not merely financial or technological crimes; they also exploit social vulnerabilities whether short-term, like travel, or structural, like precarious employment. This means they require social solutions in addition to technical ones. By examining how scammers are changing and accelerating their methods, we hope to show that defending against them will require a constellation of cultural shifts, corporate interventions, and effective legislation...