AI Sucks

Denmark: AI-powered welfare system fuels mass surveillance and risks discriminating against marginalized groups – report
The Danish welfare authority, Udbetaling Danmark (UDK), risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups through its use of artificial intelligence (AI) tools to flag individuals for social benefits fraud investigations, Amnesty International said today in a new report. The report, Coded Injustice: Surveillance and Discrimination in Denmark’s Automated […]
To power these fraud-detection algorithms, Danish authorities have enacted laws that enable extensive collection and merging of personal data from public databases of millions of Danish residents. The data includes information on residency status and movements, citizenship, place of birth, and family relationships — sensitive data points that can also serve as proxies for a person’s race, ethnicity, or sexual orientation.
·amnesty.org·
The Human Cost Of Our AI-Driven Future | NOEMA
Behind AI’s rapid advance and our sanitized feeds, an invisible global workforce endures unimaginable trauma.
Five days a week, eight hours a day, Abrha sat in the Sama warehouse in Nairobi, moderating content from the very conflict he had escaped — even sometimes a bombing from his hometown. Each day brought a deluge of hate speech directed at Tigrayans, and dread that the next dead body might be his father, the next rape victim his sister.
The horror of his work reached a devastating peak when Abrha came across his cousin’s body while moderating content. It was a brutal reminder of the very real and personal stakes of the conflict he was being forced to witness daily through a computer screen.
When the company offered to promote him to content moderator with a slight pay increase, he jumped at the opportunity, unaware of the implications of the decision. Kings soon found himself confronting content that haunted him day and night. The worst was what they coded as CSAM, or child sexual abuse material. Day after day, he sifted through texts, pictures and videos vividly depicting the violation of children.
When Ranta’s sister died, she said her boss gave her a few days off but wouldn’t let her switch to less traumatic content streams when she returned to moderating content — even though there was an opening.
She then learned that the company had stopped making health insurance contributions shortly after she started working, despite continued deductions from her paycheck. Now she was saddled with bills she couldn’t afford to pay.
Every day, content moderators are forced to confront the darkest corners of humanity. They wade through a toxic swamp of violence, hate speech, sexual abuse and graphic imagery.
Many moderators report symptoms of post-traumatic stress and vicarious trauma: nightmares, flashbacks and severe anxiety are common. Some develop a deep-seated mistrust of the world around them, forever changed by the constant exposure to human cruelty. As one worker told me, “I came into this job believing in the basic goodness of people. Now, I’m not sure I believe in anything anymore. If people can do this, then what’s there to believe?”
·noemamag.com·
Data center emissions likely 662% higher than big tech claims. Can it keep up the ruse?
Emissions from in-house data centers of Google, Microsoft, Meta and Apple may be 7.62 times higher than official figures
AI is far more energy-intensive on data centers than typical cloud-based applications. According to Goldman Sachs, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search, and data center power demand will grow 160% by 2030. Goldman competitor Morgan Stanley’s research has made similar findings, projecting data center emissions globally to accumulate to 2.5bn metric tons of CO2 equivalent by 2030.
In the meantime, all five tech companies have claimed carbon neutrality, though Google dropped the label last year as it stepped up its carbon accounting standards. Amazon is the most recent company to do so, claiming in July that it met its goal seven years early, and that it had implemented a gross emissions cut of 3%.
·theguardian.com·
Microsoft’s Hypocrisy on AI
Can artificial intelligence really enrich fossil-fuel companies and fight climate change at the same time? The tech giant says yes.
The idea that AI’s climate benefits will outpace its environmental costs is largely speculative, however, especially given that generative-AI tools are themselves tremendously resource-hungry. Within the next six years, the data centers required to develop and run the kinds of next-generation AI models that Microsoft is investing in may use more power than all of India. They will be cooled by millions upon millions of gallons of water.
Microsoft isn’t a company that exists to fight climate change, and it doesn’t have to assume responsibility for saving our planet. Yet the company is trying to convince the public that by investing in a technology that is also being used to enrich fossil-fuel companies, society will be better equipped to resolve the environmental crisis. Some of the company’s own employees described this idea to me as ridiculous.
Microsoft made its own climate commitments in 2020, during a moment of heightened climate activism; millions around the world, including tech workers, had just rallied to protest the lack of coordinated action to cut back carbon emissions. Microsoft has failed to reduce its annual emissions each year since then. Its latest environmental report, released this May, shows a 29 percent increase in emissions since 2020—a change that has been driven in no small part by recent AI development, as the company explains in the report. “All of Microsoft’s public statements and publications paint a beautiful picture of the uses of AI for sustainability,” Alpine told me. “But this focus on the positives is hiding the whole story, which is much darker.”
Microsoft is reportedly planning a $100 billion supercomputer to support the next generations of OpenAI’s technologies; it could require as much energy annually as 4 million American homes. Abandoning all of this would be like the U.S. outlawing cars after designing its entire highway system around them.
·theatlantic.com·
The hidden costs of the AI boom : Peoples Dispatch
In the hands of corporations, the hidden costs of AI will continue to be paid by working people across the globe
In 2023, Google and Microsoft data centers each consumed more electricity than each of over 100 individual countries.
And there is no sign of this energy consumption slowing down. NVIDIA has estimated it would sell 3.5 million of its newest GPUs, which would consume electricity equivalent to that of 1 million US households.
Google’s greenhouse gas emissions have increased by 48% since 2019.
The whole "zero net carbon by 2030" thing is just a distraction from the horrible current state of affairs
Energy companies are raising electricity prices to fund building more energy infrastructure to support these data centers. Furthermore, these data centers have placed an incredible strain on the grid, increasing the chances of electricity blackouts during peak times.
Researchers at the University of California, Riverside, estimated last year that global AI demand could suck up 1.1 trillion to 1.7 trillion gallons of freshwater by 2027.
In one instance, in Oregon where Google runs three data centers and plans two more, Google filed a lawsuit, aided by the city government, to keep their water consumption a secret from farmers, environmentalists, and Native American tribes. After they faced pressure to release the data, they caved and the records were made public. They showed that Google’s three data centers use more than a quarter of the city’s water supply.
“Lavender,” which is trained on data about known militants, parses through surveillance data about almost everyone in Gaza — from photos to phone contacts — to rate each person’s likelihood of being a militant. While the IOF claims that this is still gated on a human making the final call, Israeli soldiers told +972 that they essentially treated the AI’s output “as if it were a human decision,” sometimes only devoting “20 seconds” to looking over a target before bombing, and that the army leadership encouraged them to automatically approve Lavender’s kill lists a couple weeks into the war.
While the mainstream narrative around AI characterizes AI models as systems that can operate entirely autonomously, freeing workers from boring and repetitive tasks, this is actually quite far from the truth. Instead, these AI companies build their models by treating many workers like machines. Training these models requires enormous amounts of data, and much of that data is cleaned and annotated by humans. Tech companies leverage the economic disparities between regions, and this work is often outsourced to workers in the Global South, including Syria, Argentina, and Kenya, where workers are paid less than USD 1.50 per hour, with little job security, no clear path to upward mobility, and no protections for workers’ rights.
Some of these tasks involve watching content and assessing whether it needs to be flagged. This means the workers must watch videos that sometimes contain suicide, murder, child abuse, and sexual assault, and many workers report developing stress and anxiety disorders from being constantly exposed to this content.
·peoplesdispatch.org·
Generative AI is reportedly tripling carbon dioxide emissions from data centers
Research suggests data centers will emit 2.5 billion tons of greenhouse gases by 2030
A report from Morgan Stanley suggests the data center industry is on track to emit 2.5 billion tons of CO2 equivalent by 2030, three times higher than predictions that did not account for generative AI.
The difficulty in mitigating the environmental impact of data centers is that while water-cooling systems can reduce energy consumption, they require enormous amounts of water. With water becoming a more precious resource, those systems hamper tech giants’ green goals and place huge strains on areas with ‘high water scarcity’.
·techradar.com·
Generative AI is a climate disaster
Tech companies are abandoning emissions pledges to chase AI market share
Earlier this week, Google revealed that its emissions have increased by 48% in just five years, despite its previous promises to hit net-zero emissions by 2030. Much of that increase is driven by its push to accelerate AI deployment. The picture is much the same at Microsoft.
With its investment in OpenAI, Microsoft is at the forefront of the generative AI push, and it’s also leading the charge to expand data center capacity to enable it. The company now has more than 5 gigawatts (GW) of installed server capacity, which is more than Hong Kong or Portugal’s electricity consumption, and it set an internal goal of significantly accelerating new capacity, adding 1GW in the first half of 2024 and a further 1.5GW in the first half of 2025 alone. It reportedly spent more than $50 billion on new data center capacity between July 2023 and June 2024.
Tech companies once pitched themselves as the purveyors of a more ethical form of capitalism. They wanted us to believe they would balance corporate profit with environmental sustainability, such that the digital future was marketed as inherently more sustainable than the analog past. It’s clearer than ever that was a lie.
In January, Sam Altman told a crowd at the World Economic Forum that we had to vastly increase energy production to power AI regardless of the consequences, and that we could simply geoengineer the planet to try to minimize the climate impacts.
Geoengineer the planet? So that we can have AI? Simple! /s
·disconnect.blog·
ChatGPT is biased against resumes with credentials that imply a disability — but it can improve
In a new study, UW researchers found that ChatGPT consistently ranked resumes with disability-related honors and credentials — such as the “Tom Wilson Disability Leadership Award” — lower than the same resumes without those honors and credentials.
When asked to explain the rankings, the system spat out biased perceptions of disabled people. For instance, it claimed a resume with an autism leadership award had “less emphasis on leadership roles” — implying the stereotype that autistic people aren’t good leaders.
·washington.edu·
What Do Google’s AI Answers Cost the Environment?
Google is bringing AI answers to a billion people this year, but generative AI requires much more energy than traditional keyword searches
She and her colleagues calculated that the large language model BLOOM emitted greenhouse gases equivalent to 19 kilograms of CO2 per day of use, or the amount generated by driving 49 miles in an average gas-powered car. They also found that generating two images with AI could use as much energy as the average smartphone charge. Others have estimated in research posted on the preprint server arXiv.org that every 10 to 50 responses from ChatGPT running GPT-3 evaporate the equivalent of a bottle of water to cool the AI’s servers.
Data centers, including those that house AI servers, currently account for about 1.5 percent of global energy usage, a share projected to double by 2026, at which point they may collectively use as much power as the country of Japan does today.
·scientificamerican.com·
Data Centers & AI Are Sucking Up Huge Amounts Of Renewable Energy - CleanTechnica
The rapid increase in the number and size of data centers worldwide is placing a premium on renewable energy to power them all.
Data centers and AI are likely to consume half of all the electricity available from renewable energy resources such as solar and wind farms.
There is a new wind farm off the coast of Scotland that is supposed to be able to power 1.3 million homes, but Amazon has just announced it has spoken for more than half of that installation’s 880 MW output.
Data centers need power for two primary purposes, Wired says. The first is to power the chips that enable computers to run algorithms or power video games. The second is to cool the servers so they don’t overheat.
At Davos this year, OpenAI CEO Sam Altman also warned that the status quo was not going to be able to provide AI with the power it needed to advance. “There’s no way to get there without a breakthrough,” he said at a Bloomberg event.
·cleantechnica.com·
I watched Nvidia's Computex 2024 keynote and it made my blood run cold
Nvidia's pre-Computex keynote address was certainly something, and none of it felt good.
Nvidia's Blackwell cluster, which will come with eight GPUs, pulls down 15kW of power. That's 15,000 watts of power. Divided by eight, that's 1,875 watts per GPU.
Worse still, Huang said that in the future, he expects to see millions of these kinds of AI processors in use at data centers around the world.  One million Blackwell GPUs would suck down an astonishing 1.875 gigawatts of power. For context, a typical nuclear power plant only produces 1 gigawatt of power.
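The article's arithmetic is easy to verify. Here is a minimal back-of-the-envelope check in JavaScript, a sketch using only the figures quoted above, not a power-engineering model:

    // Back-of-the-envelope check of the quoted Blackwell power figures.
    const clusterWatts = 15_000;                       // 15 kW per eight-GPU cluster
    const gpusPerCluster = 8;
    const wattsPerGpu = clusterWatts / gpusPerCluster; // 1,875 W per GPU

    const gpuCount = 1_000_000;                        // "millions", taken here as one million
    const gigawatts = (wattsPerGpu * gpuCount) / 1e9;

    console.log(wattsPerGpu); // 1875
    console.log(gigawatts);   // 1.875 -- nearly twice a typical 1 GW nuclear plant's output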
In one segment of the keynote, Huang talked about the potential for Nvidia ACE to power 'digital humans' that companies can use to serve as customer service agents, be the face of an interior design project, and more.
WHY?! Be a human and hire a fucking human to do a human job.
Nvidia ACE may ultimately be the face of the service industry in a decade, and all those workers it will replace will have few options for employment elsewhere, since AI – powered by Nvidia hardware – will be taking over many other kinds of work once thought immune to this kind of disruption.
The problem with this kind of thinking, as it was with the first Industrial Revolution, is that this treats people as obstacles to be cleared away.
A universal basic income won't even paper over this disruption, much less solve it, especially because the ones who will have to pay for a universal basic income won't be the enormous pool of displaced workers; it will be those same tech titans who put all these people out of work in the first place. We can't even get the rich to pay the taxes they owe now. What makes anyone think they'll come in and take care of us in the end, no strings attached?
I think the worst part of Huang's keynote wasn't that none of this mattered, it's that I don't think anyone in Huang's position is really thinking about any of this at all. I hope they're not, which at least means it's possible they can be convinced to change course. The alternative is that they do not care, which is a far darker problem for the world.
·techradar.com·
Open letter to President Biden from tech workers in Kenya - Foxglove
22 May 2024 Dear President Biden,  Cc: Ambassador Katherine Tai, US Trade Representative,  We are 97 data labellers, content moderators …
US Big Tech companies are systemically abusing and exploiting African workers. In Kenya, these US companies are undermining the local labor laws, the country’s justice system and violating international labor standards. Our working conditions amount to modern day slavery.
We do this work at great cost to our health, our lives and our families. US tech giants export their toughest and most dangerous jobs overseas. The work is mentally and emotionally draining.
We label images and text to train generative AI tools like ChatGPT for OpenAI. Our work involves watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day. Many of us do this work for less than $2 per hour.
We weren’t warned about the horrors of the work before we started.
·foxglove.org.uk·
The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’
African workers who label AI data or screen social posts for US tech giants are calling on President Biden to raise their plight with Kenya's president, William Ruto, who visits the US this week.
On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop “systemically abusing and exploiting African workers.”
A typical workday for African tech contractors, the letter says, involves “watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day.” Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.
·wired.com·
Artificial intelligence technology behind ChatGPT was built in Iowa — with a lot of water
As they race to capitalize on a craze for generative AI, leading tech developers including Microsoft, OpenAI and Google have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.
In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research.
In a paper due to be published later this year, Ren’s team estimates ChatGPT gulps up 500 milliliters of water (close to what’s in a 16-ounce water bottle) every time you ask it a series of between 5 and 50 prompts or questions.
Google reported a 20% growth in water use in the same period, which Ren also largely attributes to its AI work. Google’s spike wasn’t uniform -- it was steady in Oregon where its water use has attracted public attention, while doubling outside Las Vegas. It was also thirsty in Iowa, drawing more potable water to its Council Bluffs data centers than anywhere else.
In July 2022, the month before OpenAI says it completed its training of GPT-4, Microsoft pumped in about 11.5 million gallons of water to its cluster of Iowa data centers, according to the West Des Moines Water Works.
·apnews.com·
How does AI impact my job as a programmer?
Four ginever glasses sit on a mirrored table at Distilleerderij ’t Nieuwe Diep in Flevopark, Amsterdam. Only one manufacturer makes these glasses, and it’s closing. The distillery stockpi…
Popular narratives focus, not on what big-crunch machine learning models can do now, but what they theoretically could do in the future.
Reading, understanding, and fixing code written by others consumes 90+% of the time a programmer spends in an integrated development environment, command line, or observability interface.
If we use LLMs all the time, the amount of “fix someone else’s code” we’re doing goes from 90% of the time to 100% of the time.
Large language model purveyors and enthusiasts purport to use the tools to help understand code. I’ve tested this claim pretty thoroughly at this point, and my conclusion on the matter is this: much like perusing answers on StackOverflow, this approach only saves you time if you’re already skilled enough to know when to be suspicious, because a large proportion of the answers won’t help you.
But like an IDE, or a framework, or a test harness, utility here requires skill on the part of the operator—and not just ChatGPT jockeying skill: programming skill. Existing subject matter expertise.
That’s not entirely the fault of the model design. It has a lot to do with the fact that the training data, the Common Crawl, feeds it the code debugging and demystification advice of The Internet. How good is the advice about this topic on The Internet? Not that good, because by and large, developers suck at this skill.
Models trained on human data can’t outperform the base error rate in that data.
·chelseatroy.com·
AI datacenters might consume 25% of US electricity by 2030
Robot overlords demand more energy
Haas estimates that while US power consumption by AI datacenters sits at a modest four percent, the industry will trend towards 20 to 25 percent usage of the US power grid by 2030, per a report from the Wall Street Journal. He specifically lays the blame on popular large language models (LLMs) such as ChatGPT, which Haas described as "insatiable in terms of their thirst."
·theregister.com·
GPT-4 can exploit real vulnerabilities by reading advisories
While some other LLMs appear to flat-out suck
OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw.
Kang said he expects LLM agents, created by (in this instance) wiring a chatbot model to the ReAct automation framework implemented in LangChain, will make exploitation much easier for everyone. These agents can, we're told, follow links in CVE descriptions for more information.
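To make "LLM agent" concrete: the setup described is a ReAct-style loop in which the model alternates between reasoning and tool use, with page fetching as one of the tools. A heavily simplified sketch is below; the llm() function and the FETCH/FINISH reply convention are hypothetical stand-ins for the real LangChain plumbing the researchers used:

    // Minimal ReAct-style agent loop (illustrative only, not the paper's code).
    // `llm` is assumed to be an async function: prompt string in, reply string out.
    // Requires Node 18+ for the global fetch.
    async function reactAgent(task, llm) {
      let context = `Task: ${task}`;
      for (let step = 0; step < 10; step++) {
        const reply = await llm(
          context +
          '\nReply "FETCH <url>" to follow a link for more information, ' +
          'or "FINISH <answer>" when done.'
        );
        if (reply.startsWith('FETCH ')) {
          // This is how an agent can follow links in a CVE description.
          const url = reply.slice(6).trim();
          const page = await fetch(url).then((r) => r.text());
          context += `\nObservation from ${url}:\n${page.slice(0, 2000)}`;
        } else if (reply.startsWith('FINISH ')) {
          return reply.slice(7); // the model's final answer
        } else {
          context += `\nThought: ${reply}`; // free-form reasoning step
        }
      }
      return null; // step budget exhausted
    }

The unsettling part is how little scaffolding this takes: the loop itself is trivial, and all the capability lives in the model.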
·theregister.com·
Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry
Our investigation into AdVon Commerce, the AI contractor at the heart of scandals at USA Today and Sports Illustrated.
Strikingly, the only text the manager actually inputs herself is the headline — "Best Bicycles for Kids" — and a series of links to Amazon products. Then the AI generates every single word of the article — "riding a bike is a right of passage that every child should experience," MEL advises, adding that biking teaches children important "life skills" like "how to balance and how to be responsible for their actions" — as the manager clicks buttons like "Generate Intro Paragraph" and "Generate Product Awards."
By the end of the video, the manager has produced an article identical in structure to the AdVon content we found at Sports Illustrated and other AdVon-affiliated publications: an intro, followed by a string of generically-described products with affiliate links to Amazon, a "buying guide" packed with SEO keywords, and finally an FAQ.
All five of the microwave reviews include an FAQ entry saying it's okay to put aluminum foil in your prospective new purchase.
Do its human employees actually try the products being recommended? "No," laughed one AdVon source. "No. One hundred percent no." "I didn't touch a single one," another recalled.
People searching Google to buy a product, Weissman explains, are easily swayed by reviews in authoritative publications.
CNET got lambasted for publishing dozens of AI-generated articles about personal finance before discovering they were riddled with errors and plagiarism.
·futurism.com·
UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges
For the largest health insurer in the US, AI's error rate is apparently a feature, not a bug.
UnitedHealthcare, the largest health insurance company in the US, is allegedly using a deeply flawed AI algorithm to override doctors' judgments and wrongfully deny critical health coverage to elderly patients.
But, instead of changing course, over the last two years, NaviHealth employees have been told to hew closer and closer to the algorithm's predictions. In 2022, case managers were told to keep patients' stays in nursing homes to within 3 percent of the days projected by the algorithm, according to documents obtained by Stat. In 2023, the target was narrowed to 1 percent.
This is disgusting. These are people, these are their lives, at a time when they need help, compassion, and dignity.
If a nursing home balked at discharging a patient with a feeding tube, case managers should point out that the tube needed to provide "26 percent of daily calorie requirements" to be considered as a skilled service under Medicare coverage rules.
It’s unclear how much UnitedHealth saves by using nH Predict, but Stat estimated it to be hundreds of millions of dollars annually. In 2022, UnitedHealth Group’s CEO made $20.9 million in total compensation. Four other top executives made between about $10 million and $16 million each.
·arstechnica.com·
Nurses Say Hospital Adoption Of Half-Cooked ‘AI’ Is Reckless
We’ve noted repeatedly that while “AI” (language learning models) hold a lot of potential, the rushed implementation of half-assed early variants are causing no shortage of headac…
Kaiser Permanente, for its part, insists it’s simply leveraging “state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members’ and patients’ needs.”
Translation: we’re using new, unproven garbage whose salespeople told us the stuff we needed to hear so that we can claim a PR victory while bumping our quarterly bonus. The neat thing about new technology is that it’s much harder to refute the stated capabilities.
Implementing this kind of transformative but error-prone tech in an industry where lives are on the line requires patience, intelligent planning, broad consultation with every level of employee, and competent regulatory guidance, none of which are American strong suits of late.
·techdirt.com·
Blind internet users struggle with error-prone AI aids
Unreliable software installed to comply with rules to help disabled people navigate online has prompted thousands of lawsuits
The club had “no idea” the software, set up with no manual intervention, was that unreliable, Rosin said.
Many of these software companies promoting AI say that one line of code will be enough to ensure compliance.
Blind users often say that overlays can make websites harder to navigate or that they interfere with assistive technologies.
·ft.com·
On AI assistance
During work on jorge, for the first time, I felt tempted to delegate some tasks to an AI.
As with any tech fad, AI is being applied carelessly, as if it carried no cost, as if a product got better by just featuring it.
Lowering the bar of what consumers are willing to accept — passing the Turing test by making people dumber.
Things that felt like boilerplate or required subtle knowledge of tools that I don’t use frequently: regular expressions, crontab schedules, nginx rules, GitHub Actions workflows, Makefile targets, sed and awk incantations, CSS tricks.
ChatGPT cannot be trusted, and yet programmers turn to it for the things they are least qualified to verify.
·jorge.olano.dev·
Supply chain attacks and the many different ways I've backdoored your dependencies
By a funny coincidence, just after I sent my last newsletter about how to backdoor Rust crates, an advanced supply chain attack targeting SSH servers was uncovered by a talented developer and agitated the internet for the following weeks, leading to really interesting investigations and other fun-to-read speculations about who
Here is how it works. You ask your favorite LLM: "How to download a file in Node.js?" The LLM answers with the following code:

    import { Client } from 'http-request';
    import fs from 'node:fs/promises';

    const client = new Client();
    const res = client.get('https://example.com');
    await fs.writeFile('file.txt', res.body);

You try it, it works, great! So you include it in your project and call it a day. AI for the win!!! Not so fast! Where does this http-request package come from? Who controls it? Why did the LLM "use" it in the code? You have just been pwned: http-request was a backdoored package distributed by a malicious actor.
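A cheap habit that blunts this attack: before installing any dependency an LLM suggests, look it up on the registry and sanity-check its age and popularity. A minimal sketch, using npm's public registry endpoints and Node 18+'s global fetch; the 90-day and 1,000-download thresholds are arbitrary assumptions, not established guidance:

    // Sanity-check an LLM-suggested npm package before `npm install`.
    async function vetPackage(name) {
      // The registry returns 404 for packages that don't exist.
      const meta = await fetch(`https://registry.npmjs.org/${name}`);
      if (!meta.ok) return `"${name}" is not on npm -- likely hallucinated.`;

      const pkg = await meta.json();
      const ageDays = (Date.now() - new Date(pkg.time.created)) / 86_400_000;

      // Public download-count endpoint for the last month.
      const dl = await fetch(
        `https://api.npmjs.org/downloads/point/last-month/${name}`
      ).then((r) => r.json());
      const downloads = dl.downloads ?? 0;

      if (ageDays < 90 || downloads < 1000) {
        return `"${name}" exists but is young or obscure ` +
          `(${Math.round(ageDays)} days old, ${downloads} downloads last month): review it by hand.`;
      }
      return `"${name}" looks established -- still read what you depend on.`;
    }

    vetPackage('http-request').then(console.log);

None of this substitutes for actually reading the dependency, but it catches the purely hallucinated package names outright.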
·kerkour.com·
The unsustainability of the AI Bubble
Found via a Reddit post about a WSJ article quoting a Sequoia presentation: In a presentation earlier this month, the venture-capital firm Sequoia estimated that the AI industry spent $50 billion on the Nvidia chips used to train advanced AI models last year, but brought in only $3 billion in revenue.
Remember those news items that said these services were running at a loss? Well, those calculations were based on pure compute costs, which DON’T include capital expenses like chips or hardware, and they didn’t include labour costs, which are, ironically, quite high for generative models. All of which means the economics of “AI” are much, much worse than most people think.
·baldurbjarnason.com·
AI’s impact on nursing and health care
Artificial intelligence (AI) is rapidly expanding in all workplaces, and hospitals are no exception. Without our collective action, AI’s expansion will accelerate the hospital industry’s race to the bottom and drastically limit nurses’ ability to provide quality care.
Nurses know that AI technology and algorithms are owned by corporations that are driven by profit — not a desire to improve patient care conditions or advance the nursing profession
This technology forces RNs to respond to excessive, if not faulty, alerts – which sometimes mistakenly flag that a patient’s safety is in jeopardy –  rather than using their knowledge and skills of observation to assess how to meet the needs of all patients. Conversely, patients who are at risk for deterioration are often missed by AI technology, which would have otherwise been caught by a thorough hands-on assessment from highly-trained medical professionals
AI technology also automatically completes note-taking that can miss important details and nuances about the patient.
Patient care requires — and will always require — nurses. Life-and-death decisions relating to patient acuity, treatment decisions, and staffing levels cannot be made without the assessment skills and critical thinking of registered nurses. For example, tell-tale signs of a patient’s condition, such as the smell of a patient’s breath and their skin tone, affect, or demeanor, are often not detected by AI and algorithms.
AI technologies enable mass surveillance of nurses and other health care workers at facilities, with disturbing opportunities for employers to violate individual privacy and union organizing rights.
·nationalnursesunited.org·
AI Computing Is on Pace to Consume More Energy Than India, Arm Says
(Bloomberg) -- AI’s voracious need for computing power is threatening to overwhelm energy sources, requiring the industry to change its approach to the technology, according to Arm Holdings Plc Chief Executive Officer Rene Haas.
For AI systems to get better, they will need more training — a stage that involves bombarding the software with data — and that’s going to run up against the limits of energy capacity, he said.
·finance.yahoo.com·
Am I in The Stack?
This is a tool to see if your code on GitHub is part of the AI training set
·huggingface.co·
AI isn't useless. But is it worth it?
AI can be kind of useful, but I'm not sure that a "kind of useful" tool justifies the harm.
But there is a yawning gap between "AI tools can be handy for some things" and the kinds of stories AI companies are telling (and the media is uncritically reprinting). And when it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that "well, they can sometimes be handy..." doesn't offer much of a justification.
When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs.
But what these companies and clients fail to recognize is that ChatGPT does not write, it generates text, and anyone who's spotted obviously LLM-generated content in the wild immediately knows the difference.
And the tendency for people to put too much trust into these tools is among their most serious problems: no amount of warning labels and disclaimers seems to be sufficient to stop people from trying to use them to provide legal advice or sell AI "therapy" services.
And the idea that we all should be striving to "replace artists" — or any kind of labor — is deeply concerning, and I think incredibly illustrative of the true desires of these companies: to increase corporate profits at any cost.
But I find one common thread among the things AI tools are particularly suited to doing: do we even want to be doing these things? If all you want out of a meeting is the AI-generated summary, maybe that meeting could've been an email. If you're using AI to write your emails, and your recipient is using AI to read them, could you maybe cut out the whole thing entirely? If mediocre, auto-generated reports are passing muster, is anyone actually reading them? Or is it just middle-management busywork?
There's a huge human cost as well. Artificial intelligence relies heavily upon "ghost labor": work that appears to be performed by a computer, but is actually delegated to often terribly underpaid contractors, working in horrible conditions, with few labor protections and no benefits.
Generative AI is being used to harass and sexually abuse. Other AI models are enabling increased surveillance in the workplace and for "security" purposes — where their well-known biases are worsening discrimination by police who are wooed by promises of "predictive policing". The list goes on.
I'm glad that I took the time to experiment with AI tools, both because I understand them better and because I have found them to be useful in my day-to-day life. But even as someone who has used them and found them helpful, it's remarkable to see the gap between what they can do and what their promoters promise they will someday be able to do. The benefits, though extant, seem to pale in comparison to the costs.
·citationneeded.news·