An eating disorders chatbot offered dieting advice, raising fears about AI in health
The National Eating Disorders Association took down a controversial chatbot after users showed how its newest version could dispense potentially harmful advice about dieting and calorie counting.
The Trump administration’s plan, in targeting “ideological bias” and “social engineering agendas” in AI, ultimately enforces them, writes Eryk Salvaggio.
No ‘woke AI’ in Washington, Trump says as he launches American AI action plan
Trump has vowed to push back against "woke" AI models and to turn the U.S. into an "AI export powerhouse," signing three AI-focused executive orders Wednesday.
After a much-needed break, I’m back to “kind regards,” and I want to share something that’s been gnawing at me since before my vacation.
I attended a conference on Digital Sovereignty in Africa in Stellenbosch, South Africa, organised by colleagues Andrew Crawford and Mohammad Amir Anwar, where we toured a data center, my first time inside one. And no, it wasn’t the sci‑fi fantasy of endless glowing servers you see in movies. The reality was more mundane, and more troubling.
The place was beautiful, aesthetically, but since I am not here to talk about pageantry or interior decor, I won’t dwell on that. We asked how much energy they were pulling from the grid, and there was a collective gasp when they mentioned the thousands of megawatts. We had seen solar panels on the building and were disappointed to learn that they only powered the lighting. They then showed us towering generators that, when running, guzzle almost 500 litres of diesel per hour. We also asked about their water use, and perhaps to forestall another round of shocked gasps, the staff proudly told us they ran on rainwater and their own boreholes, before launching into a technical rundown of how the rainwater is collected and recycled.
From the rooftop, where giant cooling towers and rainwater tanks loomed, I looked across the road. A shanty town stretched opposite: corrugated iron shacks packed tight, electrical wires dangling overhead. Cape Town’s inequality laid bare in a single glance: high-tech servers drinking rainwater to stay cool, while people next door queue for buckets.
People love to point at the sky when they talk about “the cloud.” I recall a conversation about data centers with a friend who asked: “Why do we even need data centers now that everything’s on the cloud?” I chuckled, but that misunderstanding isn’t just hers. It’s global. Entire tech narratives have trained us to imagine “the cloud” as something ethereal, weightless, almost holy. But the cloud has nothing to do with cumulus or nimbus formations in the sky. The cloud is concrete, glass, and steel. It’s thousands of megawatts pulled from the grid, diesel generators guzzling 500 litres per hour, and communities going without water so these digital “clouds” can stay online.
Every photo you upload. Every Netflix binge. Every ChatGPT query. It all runs through buildings like this. AI evangelists call it weightless intelligence. But there’s nothing weightless here: lithium mines, power grids, drought‑stricken lands turned into server farms.
Tech Bros have sold us the lie that AI will solve the climate crisis, but it's actually AI that's accelerating it. Just Google how much energy and water your favorite AI model consumes and then come argue with me.
The next time someone tells you to "upload it to the cloud," don't look up. Look around.
Do people click on links in Google AI summaries? | Pew Research Center
In a March 2025 analysis, Google users who encountered an AI summary were less likely to click on links to other websites than users who did not see one.
Plus, OpenAI's absurd listening tour, top AI scientists say AI is evolving beyond our control, Facebook is putting data centers in tents, and the AI bubble question — answered?
Spotify Publishes AI-Generated Songs From Dead Artists Without Permission
"They could fix this problem. One of their talented software engineers could stop this fraudulent practice in its tracks, if they had the will to do so."
Driven by national digitalisation strategies, rapid advances in artificial intelligence (AI) and booming cloud computing, South-east Asia is accelerating its data infrastructure build-out. These facilities – critical for AI training, Big Data processing and digital services – are now central to the region’s economic competitiveness and technological growth. (The Business Times)
Can ChatGPT Diagnose this Car? | Chevy Trax P0171, P1101, P0420
Let's see if an "intelligent" large language model can correctly diagnose a broken 2015 Chevy Trax with a 1.4L turbo.
Tesla’s former head of AI warns against believing that self-driving is solved
Tesla’s former head of artificial intelligence, Andrej Karpathy, who worked on the automaker’s self-driving effort until 2022, warns against believing...
As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.
This June I attended a summer school on AI governance. Throughout the lectures, it was painfully clear and frustrating that the lecturers, despite their different backgrounds (law, international relations, AI governance, etc.), pushed only the narrative of Big Tech companies about superintelligence and its dangers, referring only to research done by Big Tech.
It was equally surprising, frustrating and disappointing that the lecturers had never heard of Timnit Gebru, Emily M. Bender or Alex Hanna, Ph.D. In fact, they did not seem to be familiar with the research on critical AI studies at all. They looked at me as if I were crazy when I asked how it could be that they didn’t know these researchers and didn’t include their work in their understanding of AI and its capabilities. I could not understand it. How come? Why?
Today, as I was reading chapter 36 of the Handbook of Critical Studies of Artificial Intelligence, “Barriers to regulating AI: critical observations from a fractured field”, by Ashlin Lee, Will Orr, Walter G. Johnson, Jenna Imad Harb and Kathryn Henne, I finally understood why.
The authors argue that because nation states want to support the growth of AI, they have chosen to defer regulatory responsibilities to external stakeholder groups, including think tanks and corporations (my summer school was organised by a think tank). This process is called hybridising governance. Under this type of governance, these groups are allowed to define the state’s formal and informal regulations with little direction. The authors explain that “This [type of governance] creates a disorderly regulatory environment that cements power among those already invested in AI while making it difficult for those outside these privileged groups [researchers on critical AI and people harmed by AI] to contribute their knowledge and experience.” They add that “External stakeholders stand to benefit from hybridising regulation of AI, with the public potentially less well served by this arrangement.”
This explains why AI governance, in its current form, leans so heavily on ethical AI guidelines as a mechanism of self-regulation rather than on enforceable regulation. It also explains why the heads of the AI governance school were pushing the same narrative that Big Tech companies keep repeating: that we need to regulate a scary, futuristic superintelligence rather than the AI systems causing harm right now.
I am uploading this section of the chapter here for you to read; it is very interesting.
The mainstream view of AI for science says AI will rapidly accelerate science, and that we're on track to cure cancer, double the human lifespan, colonize space, and achieve a century of progress in the next decade.
In a new essay, Arvind Narayanan and I argue that AI's impact could be precisely the opposite: AI could *slow* rather than hasten science. Link to essay: https://lnkd.in/e_sD7dzg
1) The production-progress paradox
Scientific papers have increased 500-fold since 1900, with funding and researchers growing exponentially. Yet genuine progress—measured by disruptive discoveries, new scientific terminology, Nobel-worthy breakthroughs, and research productivity—has remained constant or declined. Multiple metascience studies confirm this troubling disconnect between production and progress.
AI could worsen this by making it even easier to chase productivity metrics while homogenizing research approaches.
2) Science is not ready for software, let alone AI
Scientists are notoriously poor software engineers, lacking basic practices like testing and version control. Papers rarely share code, and when they do, it's often error-riddled. AI has already led to widespread errors across 600+ papers in 30 fields, with many COVID-19 diagnosis papers proving clinically useless. Science needs to catch up to 50 years of software engineering—fast.
3) AI might prolong the reliance on flawed theories
AI excels at prediction without understanding, like adding epicycles to the geocentric model (which improved predictive accuracy) rather than discovering heliocentrism.
Scientific progress requires theoretical advances, not just predictive accuracy. AI might trap fields in intellectual ruts by making flawed theories more useful without revealing their fundamental errors.
4) Human understanding remains essential
Science isn't just about finding solutions—it's about building human understanding. AI risks short-circuiting this process, like using a forklift at the gym. Evidence shows AI-adopting papers focus on known problems rather than generating new ones.
5) Implications for the future of science
Individual researchers should develop software skills and avoid using AI as a crutch. More importantly, institutions must invest in meta-science research, reform publish-or-perish incentives, and rethink AI tools to target actual bottlenecks like error detection rather than flashy discoveries. Evaluation should consider collective impacts, not just individual efficiency.
6) Final thoughts
While we ourselves use AI enthusiastically in our workflows, we warn against conflating individual benefits with institutional impacts. Science lacks the market mechanisms that provide quality control in industry, making rapid AI adoption particularly risky. We're optimistic that scientific norms will eventually adapt, but expect a bumpy ride ahead.