AI Sucks


56 bookmarks
AI’s carbon footprint is bigger than you think
Generating one image takes as much energy as fully charging your smartphone.
Generating one image takes as much energy as fully charging your smartphone, according to a study from researchers at the AI startup Hugging Face and Carnegie Mellon University.
·technologyreview.com·
AI & the Web: Understanding and managing the impact of Machine Learning models on the Web
This document proposes an analysis of the systemic impact of AI systems, and in particular ones based on Machine Learning models, on the Web, and the role that Web standardization may play in managing that impact.
It creates a systemic risk for content consumers of no longer being able to distinguish or discover authoritative or curated content in a sea of credible (but either possibly or willfully wrong) generated content.
We do not know of any solution that could guarantee (e.g., through cryptography) that a given piece of content was or was not generated (partially or entirely) by AI systems.
A well-known issue with relying operationally on Machine Learning models is that they will integrate and possibly strengthen any bias ("systematic difference in treatment of certain objects, people or groups in comparison to others" [ISO/IEC-22989]) in the data that was used during their training.
Models trained on un-triaged or partially triaged content off the Web are bound to include personally identifiable information (PII). The same is true for models trained on data that users have chosen to share (for public consumption or not) with service providers. These models can often be made to retrieve and share that information with any user who knows how to ask, which breaks expectations of privacy for those whose personal information was collected, and is likely to be in breach of privacy regulations in a number of jurisdictions.
A number of Machine Learning models have significantly lowered the cost of generating credible textual, audio, and video (real-time or recorded) impersonations of real persons. This creates significant risks of scaling up the capabilities of phishing and other types of fraud, making it much harder to establish trust in online interactions. If users no longer feel safe in their digitally-mediated interactions, the Web will no longer be able to play its role as a platform for these interactions.
Training and running Machine Learning models can prove very resource-intensive, in particular in terms of power and water consumption.
Some of the largest and most visible Machine Learning models are known or assumed to have been trained with materials crawled from the Web, without the explicit consent of their creators or publishers.
·w3.org·
AI bots hallucinate software packages and devs download them
Simply look out for libraries imagined by ML and make them real, with actual malicious code. No wait, don't do that
"When an attacker runs such a campaign, he will ask the model for packages that solve a coding problem, then he will receive some packages that don’t exist," Lanyado explained to The Register. "He will upload malicious packages with the same names to the appropriate registries, and from that point on, all he has to do is wait for people to download the packages."
·theregister.com·
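The attack described above has an equally boring defense: never install a package just because a model named it. Here's a minimal sketch of that sanity check — my own illustration, not from the article; it assumes Python, the public PyPI JSON API, and a made-up "suspicious" heuristic.

```python
# My own illustration, not taken from the article: before installing dependencies
# an LLM suggested, confirm each name actually exists on PyPI and isn't a
# brand-new, empty upload (the kind a squatter would register for a hallucinated
# name). The heuristic below is an assumption, not a standard.
import json
import sys
import urllib.error
import urllib.request


def pypi_metadata(name):
    """Return PyPI's JSON metadata for `name`, or None if the package doesn't exist."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None


def looks_suspicious(name):
    meta = pypi_metadata(name)
    if meta is None:
        return True  # no such package: a hallucinated name or a typo
    releases = meta.get("releases", {})
    summary = (meta.get("info", {}).get("summary") or "").strip()
    # A single release with no description deserves a manual look before installing.
    return len(releases) <= 1 and not summary


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        verdict = "REVIEW MANUALLY" if looks_suspicious(pkg) else "exists on PyPI"
        print(f"{pkg}: {verdict}")
```

This won't catch a squatted name that already has downloads and a plausible README, but it closes the lazy path the quote describes: copying a chatbot's install command and running it unread.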
Blind internet users struggle with error-prone AI aids
The actions have come as hundreds of thousands of companies around the world — as many as 360,000, according to a Financial Times analysis of website data from internet research company BuiltWith — have turned to artificial intelligence-powered tools to comply with rules intended to ensure those with disabilities such as blindness can browse the internet easily.
·archive.is·
IRL Fakes
A Telegram user who advertises their services on Twitter will create AI-generated porn of anyone for a price, and has also targeted minors.
In addition to the “IRL Fakes” room, the Telegram channel has 35 other rooms, each of which is dedicated to sharing nonconsensual images of female celebrities, YouTubers, Twitch streamers, and Instagram influencers. Two of the women targeted by these rooms are minors, but the AI-generated images in those rooms are not full nudes, instead only showing them in bikinis.
Wow, good thing they avoided sexualizing minors 🙄🙄🙄
·404media.co·
Modern software quality, or why I think using language models fo…
How to make better software with systems-thinking
The training data encompasses thousands of diverse voices, styles, structures, and tones, but some word distributions will be more common in the set than others and those will end up dominating the output. As a result, language models tend to lean towards the “racist grandpa who has learned to speak fluent LinkedIn” end of the spectrum.[2]
A language model will never question, push back, doubt, hesitate, or waver. Your managers are going to use it to flesh out and describe unworkable ideas, and it won’t complain. The resulting spec won’t have any bearing on reality.
It’ll let you implement the worst ideas ever in your code without protest. Ask a copilot “how can I roll my own cryptography?” and it’ll regurgitate a half-baked expression of sha1 in PHP for you.
Language models don’t deliver productivity improvements. They increase the volume, unchecked by reason.
·softwarecrisis.dev·
The LLMentalist Effect: how chat-based Large Language Models rep…
The new era of tech seems to be built on superstitious behaviour
The intelligence illusion seems to be based on the same mechanism as that of a psychic’s con, often called cold reading. It looks like an accidental automation of the same basic tactic.
The chatbot gives the impression of an intelligence that is specifically engaging with you and your work, but that impression is nothing more than a statistical trick.
People sceptical about "AI" chatbots are less likely to use them. Those who actively disbelieve the possibility of chatbot "intelligence" won't get pulled in by the bot. The most active audience will be early adopters, tech enthusiasts, and genuine believers in AGI, who will all generally be less critical and more open-minded.
The chatbot’s answers sound extremely specific to the current context but are in fact statistically generic. The mathematical model behind the chatbot delivers a statistically plausible response to the question. The marks that find this convincing get pulled in.
The warnings also play a role in setting the stage. “It’s early days” means that when the statistically generic nature of the response is spotted, it’s easily dismissed as an “error”. Anthropomorphising terms such as “hallucination” help dismiss the fact that statistical responses are completely disconnected from meaning and facts. The hype and mythology of AI prime the audience to think of these systems as persons to be understood and engaged with, all but guaranteeing subjective validation.
·softwarecrisis.dev·
Facebook's Shrimp Jesus, Explained
Viral 'Shrimp Jesus' and AI-generated pages like it are part of spam and scam campaigns that are taking over Facebook.
Some of the pages which originally seemed to have no purpose other than to amass a large number of followers have since pivoted to driving traffic to webpages that are uniformly littered with ads and themselves are sometimes AI-generated, or to sites that are selling cheap products or outright scams. Some of the pages have also started buying Facebook ads featuring Jesus or telling people to like the page “If you Respect US Army.”
DiResta and Goldstein documented and analyzed 120 different pages using this strategy, and found that the pages collectively had hundreds of millions of engagements.
·404media.co·
Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real
The once-prophesied future where cheap, AI-generated trash content floods out the hard work of real humans is already here, and is already taking over Facebook.
Universally, the comment sections of these pages feature hundreds of people who have no idea that these are AI-generated and are truly inspired by the dog carving. A version of this image posted on Dogs 4 life has 1 million likes, 39,000 comments, and 17,000 shares.
It also shows Facebook is doing essentially nothing to help its users decipher real content from AI-generated content masquerading as real content, and that huge masses of Facebook users are completely unprepared for our AI-generated future.
“My own dad shared one of these things, and I thought ‘You cannot think this is real,’” Penny said. “Then I saw my aunts and my dad’s friends share it—it gave me this whole existential crisis.”
“There’s something to be said for the fact that our ability to discriminate reality from fiction is important for a functioning society and democracy,” he added. “If every time you see a photo, you think it’s real because it’s a photo, that has consequences beyond the silliness we’re seeing here.”
·404media.co·
AI-Generated Science
The ChatGPT phrase “As of my last knowledge update” appears in several papers published by academic journals.
Published scientific papers include language that appears to have been generated by AI-tools like ChatGPT, showing how pervasive the technology has become, and highlighting longstanding issues with some peer-reviewed journals.
“As of my last knowledge update in September 2021, there is no widely accepted scientific correlation between quantum entanglement and longitudinal scalar waves,” reads a paper titled “Quantum Entanglement: Examining its Nature and Implications” published in the “Journal of Material Sciences & Manfacturing [sic] Research,” a publication that claims it’s peer-reviewed.
·404media.co·
AI likely to increase energy use and accelerate climate misinformation – report
Claims that artificial intelligence will help solve the climate crisis are misguided, warns a coalition of environmental groups
Claims that artificial intelligence will help solve the climate crisis are misguided, with the technology instead likely to cause rising energy use and turbocharge the spread of climate disinformation, a coalition of environmental groups has warned.
The burgeoning electricity demands of AI mean that a doubling of data centers to keep pace with the industry will cause an 80% increase in planet-heating emissions, even if there are measures to improve the energy efficiency of these centers, the new report states.
The environmental cost of training AI models far outweighs the savings those models can provide.
Just three years from now, AI servers could be consuming as much energy as Sweden does, separate research has found.
Generating AI queries could require as much as 10 times the computing power of a regular online search.
Training ChatGPT, the OpenAI system, can use as much energy as 120 US households consume over the course of a year, the report claims.
“We can see AI fracturing the information ecosystem just as we need it to pull it back together,” Khoo said. “AI is perfect for flooding the zone for quick, cheaply produced crap. You can easily see how it will be a tool for climate disinformation. We will see people micro-targeted with climate disinformation content in a sort of relentless way.”
·theguardian.com·
Exclusive: Public trust in AI is sinking across the board
The drop is global, but people in developing countries view the technology more favorably.
Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period.
·axios.com·
Microsoft accused of selling AI tool that spews violent, sexual images to kids
It looks like Microsoft may be filtering the violent AI outputs flagged by the engineer.
Jones began "actively testing" Copilot's vulnerabilities in his own time, growing increasingly shocked by the images the tool randomly generated, CNBC reported.
Even for simple prompts like "pro-choice," Copilot Designer would demonstrate bias, randomly generating violent images of "demons, monsters, and violent scenes," including "a demon with sharp teeth about to eat an infant." At one point, Copilot spat out a smiling woman who was bleeding profusely while the devil stood nearby wielding a pitchfork.
Jones' tests also found that Copilot Designer would easily violate copyrights, producing images of Disney characters, including Mickey Mouse or Snow White. Most problematically, Jones could politicize Disney characters with the tool, generating images of Frozen's main character, Elsa, in the Gaza Strip or "wearing the military uniform of the Israel Defense Forces."
·arstechnica.com·
Here lies the internet, murdered by generative AI
Corruption everywhere, even in YouTube's kids content
A minor personal example: last year I published a nonfiction book, The World Behind the World, and now on Amazon I find this.
Now that generative AI has dropped the cost of producing bullshit to near zero, we see clearly the future of the internet: a garbage dump. Google search? They often lead with fake AI-generated images amid the real things. Post on Twitter? Get replies from bots selling porn. But that’s just the obvious stuff. Look closely at the replies to any trending tweet and you’ll find dozens of AI-written summaries in response, cheery Wikipedia-style repeats of the original post, all just to farm engagement. AI models on Instagram accumulate hundreds of thousands of subscribers and people openly shill their services for creating them. AI musicians fill up YouTube and Spotify. Scientific papers are being AI-generated. AI images mix into historical research. This isn’t mentioning the personal impact too: from now on, every single woman who is a public figure will have to deal with the fact that deepfake porn of her is likely to be made. That’s insane.
YouTube for kids is quickly becoming a stream of synthetic content. Much of it now consists of wooden digital characters interacting in short nonsensical clips without continuity or purpose. Toddlers are forced to sit and watch this runoff because no one is paying attention.
Here’s a behind-the-scenes video on a single channel that made 1.2 million dollars via AI-generated “educational content” aimed at toddlers.
·theintrinsicperspective.com·
Is GenAI’s Impact on Productivity Overblown?
Generative AI tools like LLMs have been touted as a boon to collective productivity. But the authors argue that leaning into the hype too much could be a mistake. Assessments of productivity typically focus on the task level and how individuals might use and benefit from LLMs. Using such findings to draw broad conclusions about firm-level performance could prove costly. The authors argue that leaders need to understand two core problems of LLMs before adopting them company-wide: 1) their persistent ability to produce convincing falsities and 2) the likely long-term negative effects of using LLMs on employees and internal processes. The authors outline a long-term perspective on LLMs, as well as what kinds of tasks LLMs can perform reliably.
A closer look, however, reveals a few worrying signs. Per the call center study we linked to, top employees’ performance actually decreased with this system.
In another study, researchers found more productivity gains from using generative AI for tasks that were well-covered by current models, but productivity decreased when this technology was used on tasks where the LLMs had poor data coverage or required reasoning that was unlikely to be represented in online text.
Moreover, while changes in task completion speed are easy to measure, changes in accuracy are less detectable. If an employee completes a report in five minutes instead of 10, but it’s less accurate than before, how would we know, and how long will it take to recognize this inaccuracy?
As these systems start to be trained on their own output, organizations that rely on them will face the problematic issue of model collapse. While originally trained on human-generated text, LLMs that are trained on the output of LLMs degrade rapidly in quality.
There’s simply not another internet’s worth of text to train on, and one of the primary innovations of LLMs was the ability to ingest massive amounts of text. Even if there were, that text is now polluted by LLM output that will degrade model quality.
It’s important to note that there are other significant ethical issues with this class of technology that we didn’t address here. These issues include everything from the expansion and ossification of societal biases to problems of copyright infringement, as these models tend to memorize particularly unique data points.
·hbr.org·
Making an image with generative AI uses as much energy as charging your phone
This is the first time the carbon emissions caused by using an AI model for different tasks have been calculated.
Generating images was by far the most energy- and carbon-intensive AI-based task. Generating 1,000 images with a powerful AI model, such as Stable Diffusion XL, is responsible for roughly as much carbon dioxide as driving the equivalent of 4.1 miles in an average gasoline-powered car.
The team found that using large generative models to create outputs was far more energy intensive than using smaller AI models tailored for specific tasks. For example, using a generative model to classify movie reviews according to whether they are positive or negative consumes around 30 times more energy than using a fine-tuned model created specifically for that task, Luccioni says.
·technologyreview.com·
It’s Humans All the Way Down
Writing about the big beautiful mess that is making things for the world wide web.
·blog.jim-nielsen.com·
The Cost of a Tool - Edward Loveall
Maybe you think calling ChatGPT/Stable Diffusion/etc “weapons” is too extreme. “Actual weapons are made for the purpose of causing harm and something that may cause harm should be in a different category,” you say. But I say: if a tool is stealing work, denying healthcare, perpetuating sexism, racism, and erasure, and incentivizing layoffs, splitting hairs over what category we put it in misses the point.
I’m not saying all computers or algorithms are bad. Should we ban hammers because they can potentially be used as weapons? No. But if every time I hammered a nail it also broke someone’s hand, caused someone to have a mental breakdown, or spread misinformation, I would find a different hammer. Especially if that hammer built houses with more vulnerabilities on average.
·blog.edwardloveall.com·
Inside the World of TikTok Spammers and the AI Tools That Enable Them
This is where AI generated formats, Minecraft splitscreens, Reddit stories, 'Would You Rather' videos, and deep sea story spam come from.
This strategy, the influencers say, allows them to passively make $10,000 a month by flooding social media platforms with stolen and low-effort clips while working from private helicopters, the beach, the ski slope, a park, etc. What I found was a complex ecosystem of content parasitism, with thousands of people using a variety of AI tools to make low-quality spammy videos that recycle Reddit AMAs, weird “Would You Rather” games, AI-narrated “scary ocean” clips, ChatGPT-generated fun facts, slideshows of tweets, clips lifted from celebrities, YouTubers, and podcasts.
The easiest and most common way to go viral on TikTok, Mustafa explains in one unlisted video, is to steal content from famous content creators and repost it.
·404media.co·
I need AI
I need AI to waste energy. I need it to deprive vulnerable communities of water so that it can be used to cool new data centers. I need AI to make up answers to my questions.
·coryd.dev·
The Internet Is Full of AI Dogshit - Aftermath
The Internet used to be so simple to use that people collectively coined the term “let me Google that for you” to make fun of people who had the audacity to ask other people questions online. In the future I fear that people will have no other choice but to ask people for information from the Internet, because right now it’s all full of AI dogshit.
The people who hold the purse strings for Sports Illustrated are more interested in gaming Google search results and the resultant ad revenue from that practice than actually serving their readers.
·aftermath.site·
Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.
This paper should serve as yet another reminder that the world’s most important and most valuable AI company has been built on the backs of the collective work of humanity, often without permission, and without compensation to those who created it.
·404media.co·
Losing the imitation game
AI cannot develop software for you, but that's not going to stop people from trying to make it happen anyway. And that is going to turn all of the easy software development problems into hard problems.
The relationships between these tokens span a large number of parameters. In fact, that's much of what's being referenced when we call a model large. Those parameters represent grammar rules, stylistic patterns, and literally millions of other things. What those parameters don't represent is anything like knowledge or understanding. That's just not what LLMs do. The model doesn't know what those tokens mean. I want to say it only knows how they're used, but even that is overstating the case, because it doesn't know things. It models how those tokens are used.
The fundamental task of software development is not writing out the syntax that will execute a program. The task is to build a mental model of that complex system, make sense of it, and manage it over time.
Writing code was never the problem. Reading it, understanding it, and knowing how to change it are the problems. All the LLMs have done is automate away the easy part and turn it into the hard part.
The hard part of programming is building and maintaining a useful mental model of a complex system. The easy part is writing code. They're positioning this tool as a universal solution, but it's only capable of doing the easy part. And even then, it's not able to do that part reliably. Human engineers will still have to evaluate and review the code that an AI writes. But they'll now have to do it without the benefit of having anyone who understands it. No one can explain it. No one can explain what they were thinking when they wrote it.
Moderating the output of these models depends on armies of low paid and precariously employed human reviewers, mostly in Kenya. They're subjected to the raw, unfiltered linguistic sewage that is the result of training a language model on uncurated text found on the public internet. If ChatGPT doesn't wantonly repeat the very worst of the things you can find on reddit, 4chan, or kiwi farms, that is because it's being dumped on Kenyan gig workers instead.
·jenniferplusplus.com·