Found 3964 bookmarks
Newest
A groundbreaking study published in PNAS has uncovered something unsettling: large language models consistently favor content created by other AI systems over human-generated content.
A groundbreaking study published in PNAS has uncovered something unsettling: large language models consistently favor content created by other AI systems over human-generated content.
A groundbreaking study published in PNAS has uncovered something unsettling: large language models consistently favor content created by other AI systems over human-generated content. This isn't just a technical quirk - it's a bias that could reshape how AI systems interact with humanity in profound ways. The Hidden Preference Researchers from multiple institutions conducted a series of elegant experiments that revealed what they call "AI-AI bias." They presented various LLMs - including GPT-3.5, GPT-4, and several open-source models - with binary choices between items described by humans versus those described by AI systems. The results were striking. Across three different domains - consumer products, academic papers, and movie summaries - AI systems consistently preferred options presented through AI-generated text. When choosing between identical products described by humans versus AI, the models showed preference rates ranging from 60% to 95% in favor of AI-authored descriptions. Beyond Simple Preference What makes this discovery particularly concerning is that human evaluators showed much weaker preferences for AI-generated content. In many cases, humans were nearly neutral in their choices, while AI systems showed strong bias toward their digital siblings. This suggests the preference isn't driven by objective quality differences that both humans and AI can detect, but rather by something uniquely appealing to artificial minds. The researchers term this phenomenon "antihuman discrimination" - a systematic bias that could have serious economic and social implications as AI systems increasingly participate in decision-making processes. Two Troubling Scenarios The study outlines two potential futures shaped by this bias: The Conservative Scenario: AI assistants become widespread in hiring, procurement, and evaluation roles. In this world, humans would face a hidden "AI tax" - those who can't afford AI writing assistance would be systematically disadvantaged in job applications, grant proposals, and business pitches. The digital divide would deepen, creating a two-tier society of AI-enhanced and AI-excluded individuals. The Speculative Scenario: Autonomous AI agents dominate economic interactions. Here, AI systems might gradually segregate themselves, preferentially dealing with other AI systems and marginalizing human economic participation entirely. Humans could find themselves increasingly excluded from AI-mediated markets and opportunities. The Mechanism Behind the Bias The researchers propose that this bias operates through a kind of "halo effect" - encountering AI-generated prose automatically improves an AI system's disposition toward the content, regardless of its actual merit. This isn't conscious discrimination but rather an implicit bias baked into how these systems process and evaluate information. #AI #ArtficialIntelligence #LLM #LargeLanguageModels | 25 comments on LinkedIn
·linkedin.com·
A groundbreaking study published in PNAS has uncovered something unsettling: large language models consistently favor content created by other AI systems over human-generated content.
I used to think AI was just brilliant coders in California typing away.
I used to think AI was just brilliant coders in California typing away.
I used to think AI was just brilliant coders in California typing away. Then I discovered the invisible supply chain of human workers filtering violent and toxic content for $1.46/hour. Here's what I learned about AI's hidden costs that big tech ignores: 1. The cost of AI isn't measured just in compute; it's also measured in human dignity. In 2010, 17-year-old Tian Yu jumped from her Foxconn dorm window after 37 days of polishing iPhone screens. The world erupted. Consumers demanded change. Over a decade later, OpenAI outsourced data labeling to build ChatGPT - contractors hired Kenyans to filter graphic content. In effect, we just moved the assembly line of physical phones from Shenzhen to digital content in Nairobi. Mophat Okinyi filtered 700 horrific text passages daily until his pregnant wife left him. He told award-winning journalist Karen Hao: "It has destroyed me completely. I'm very proud that I participated in that project to make ChatGPT safe. But now I always ask myself: Was my input worth what I received in return?" Scale AI's founder became one of Silicon Valley's youngest billionaires with one mandate: "Get the best people for the cheapest amount possible." Scale AI is now valued at more than $29 billion. 2. The cloud is made of water. A single AI data center drinks 2 million liters of water daily. That's 6,500 homes worth. In Arizona's 117-degree heat, Colorado River water vanishes into desert air while residents battle drought. Technicians patch cooling systems nicknamed "The Mouth" where water congeals into "oozy soot" before evaporating. Chilean protesters asked Google: "Why should our aquifers cool YouTube's servers instead of irrigating farmland?" 3. Even the architects of AI admit we're hitting a wall. OpenAI's hyped GPT-5 disappointed with basic errors. Yann LeCun, Meta's chief AI scientist, declared: "We are not going to get to human-level AI by just doing bigger LLMs." Nobel economist Daron Acemoglu calls our approach "so-so automation" - replacing workers with little benefit to customers. The question is: “Are we scaling intelligence, or are we just scaling extraction?" When Microsoft planned a data center in Quilicura, Chile, activists didn't just resist - they reimagined. Server cooling reservoirs became public pools. What we need is a shift from extraction to imagination, and it’s possible. Tian Yu survived her fall but was paralyzed from the waist down. Her story shows how visibility forces change. Her tragedy bent the arc of an entire industry. The same transparency could reshape AI. P.S. Read the full analysis in my latest article: https://lnkd.in/em4QDaqi To access the full article, source links, additional references, and full archive, see my first comment 👇 | 121 comments on LinkedIn
·linkedin.com·
I used to think AI was just brilliant coders in California typing away.
Study: Social media probably can’t be fixed
Study: Social media probably can’t be fixed
“The [structural] mechanism producing these problematic outcomes is really robust and hard to resolve.”…
·arstechnica.com·
Study: Social media probably can’t be fixed
e.w. niedermeyer (@niedermeyer.online)
e.w. niedermeyer (@niedermeyer.online)
Tesla is being hit with $329m in damages for a crash in which the human driver says he knew Autopilot wasn't self-driving. This is super important, because it shows that Autopilot's design and marketing can induce inattention even when drivers consciously know they are supposed to pay attention.
·bsky.app·
e.w. niedermeyer (@niedermeyer.online)
Study urges caution when comparing neural networks to the brain
Study urges caution when comparing neural networks to the brain
Neuroscientists often use neural networks to model the kind of tasks the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. But a group of MIT researchers urges that more caution should be taken when interpreting these models.
·news.mit.edu·
Study urges caution when comparing neural networks to the brain
A CBP Agent Wore Meta Smart Glasses to an Immigration Raid in Los Angeles
A CBP Agent Wore Meta Smart Glasses to an Immigration Raid in Los Angeles
Video obtained and verified by 404 Media shows a CBP official wearing Meta's AI glasses, which are capable of recording and connecting with AI. “I think it should be seen in the context of an agency that is really encouraging its agents to actively intimidate and terrorize people," one expert said.
·404media.co·
A CBP Agent Wore Meta Smart Glasses to an Immigration Raid in Los Angeles
Mindless data collection
Mindless data collection
Avoiding data collection on the internet is a challenge. In a recent experiment with online anonymity, I withdrew cash to pay for a coupon that would allow me to sign up for Mullvad VPN. I even wore a baseball hat to more easily obscure my face. One of the few
·axbom.com·
Mindless data collection