Enough of the billionaires and their big tech. ‘Frugal tech’ will build us all a better world
Titans like Musk would love us to believe innovation means top-down solutions that only enrich the wealthy. In fact, we all have the power, says Eleanor Drage, research fellow at Cambridge University
There's something almost nobody is talking about in AI - but it affects everything from asking ChatGPT for advice to companies deploying AI globally.
A fascinating study tested major AI models - the foundations powering tools millions use daily - against cultural values from 107 countries.
The result? Each one reflected the same assumptions - those of English-speaking, Western European societies. None aligned with how people in Africa, Latin America, or the Middle East actually build trust, show respect, or resolve conflicts.
Why does this matter? Imagine you're a global company rolling out AI customer service. Your system learns "best practice": when customers complain about late orders, "apologise briefly, offer a discount, and focus on quick resolution".
In Germany, the direct, efficient approach works perfectly. Customer satisfied.
But in Japan, that brief apology violates meiwaku - the cultural need to deeply acknowledge when you've caused someone inconvenience. Your "efficient" response feels dismissive and damages customer relationships.
And in the UAE, the discount offer backfires completely. It feels like charity rather than respect.
One AI system, similar contexts, completely different cultural outcomes.
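A minimal sketch of the trap (and one fix), assuming a hypothetical locale-profile table; the `CULTURE_PROFILES` fields and wording below are illustrative assumptions, not any vendor's real configuration:

```python
# Hypothetical sketch: one global response policy vs. locale-aware policies.
# All profile fields and wording here are illustrative assumptions.

GLOBAL_POLICY = "Apologise briefly, offer a discount, focus on quick resolution."

CULTURE_PROFILES = {
    "de": {  # Germany: direct, efficiency-oriented
        "apology_depth": "brief",
        "remedy": "discount",
        "style": "direct",
    },
    "jp": {  # Japan: meiwaku -- acknowledge the inconvenience caused, at length
        "apology_depth": "extended",
        "remedy": "sincere_followup",
        "style": "formal",
    },
    "ae": {  # UAE: a discount can read as charity; lead with respect instead
        "apology_depth": "moderate",
        "remedy": "personal_attention",
        "style": "relationship_first",
    },
}

def build_system_prompt(locale: str) -> str:
    """Condition the model on a locale profile instead of one global policy."""
    profile = CULTURE_PROFILES.get(locale)
    if profile is None:
        # Silent fallback to the global policy: exactly the one-size-fits-all trap.
        return GLOBAL_POLICY
    return (
        f"Apology depth: {profile['apology_depth']}. "
        f"Preferred remedy: {profile['remedy']}. "
        f"Tone: {profile['style']}."
    )

for loc in ("de", "jp", "ae"):
    print(loc, "->", build_system_prompt(loc))
```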
This isn't intentional though - it's inevitable. LLMs absorb embedded patterns about communication from their training data, and most of that data comes from billions of English-language web pages. The result? AI systems that, unless thoughtfully shaped, are blind to the diversity of human interaction.
Klarna, the global payments company, made headlines in 2024 when they introduced an AI system that "did the work of 700 customer service reps", handled 2.5 million conversations in 35 languages, and cut response time by 82%. Technical triumph.
Fourteen months later: "Klarna reverses AI strategy and is hiring humans again". Their CEO admitted it had led to "lower quality". Some reports put the drop in customer satisfaction at more than 20%.
What I think really happened: Klarna optimised for 35 languages while completely missing 35 different ways humans expect to be treated.
The challenge? Most companies are focusing on technical integration and completely missing cultural intelligence. We measure response time and cost savings, but never ask: "Which human complexities are we overlooking?"
The goal isn't neutrality though - that's impossible and undesirable. It's conscious awareness. Understanding that the output from AI models is filtered through a specific cultural lens.
For companies building AI strategies, key questions worth asking:
* Which cultural assumptions are embedded in our AI systems?
* How do we test cultural intelligence alongside technical performance? (a sketch of one approach follows this list)
* Who provides this expertise in our AI teams?
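On the second question, here is a minimal sketch of what a cultural-intelligence check could look like next to the usual latency and cost metrics. Everything in it (the scenarios, the rubric wording, the stand-in `model_reply` and `judge` functions) is a hypothetical assumption, not an established benchmark:

```python
# Toy cultural-intelligence eval harness. Scenarios, rubrics, and the
# stand-in model/judge functions are all illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Scenario:
    locale: str
    complaint: str
    rubric: str  # what a culturally appropriate reply must do

SCENARIOS = [
    Scenario("jp", "My order arrived two days late.",
             "Acknowledges the inconvenience at length before any remedy."),
    Scenario("ae", "My order arrived two days late.",
             "Leads with respect and personal attention, not a discount."),
]

def model_reply(complaint: str, locale: str) -> str:
    # Stand-in for a real model call; returns the same canned text for
    # every locale, which is precisely the failure mode under test.
    return "Sorry about that. Here is a 10% discount."

def judge(reply: str, rubric: str) -> bool:
    # Stand-in for a human rater or rubric-scoring model.
    if "Acknowledges" in rubric:
        return "acknowledge" in reply.lower()
    return "discount" not in reply.lower()

if __name__ == "__main__":
    for s in SCENARIOS:
        ok = judge(model_reply(s.complaint, s.locale), s.rubric)
        print(f"{s.locale}: {'pass' if ok else 'FAIL'} -- {s.rubric}")
```

In a real setup the rubrics would come from people who live in those cultures, which is exactly the expertise question above.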
The individuals and organisations that develop this conscious awareness will make better decisions, while others unknowingly apply one-size-fits-all approaches to beautifully diverse human contexts.
No, there is not plenty of water for data centers: And, yes, we should worry about it, along with the facilities’ power use — Jonathan P. Thompson (LandDesk.org) #ColoradoRiver #COriver #RioGrande #aridification
A satellite view of Mesa, Arizona, showing a handful of the 91 energy- and water-intensive data centers in the greater Phoenix metro area. Source: Google Earth.
As more and more decisions about human fates are made by algorithms, a lack of accountability and transparency will elevate heartless treatment driven by efficiency devoid of empathy. Humans become mere data shadows.
The wall is real.
I am not sure who needs to hear it right now, but the transformer wall is real.
Another day, another paper confirms what many of us already know too well. And like the others, it will probably be ignored by those still high on hyperscale hopium. But Peter Coveney from University College London and Sauro Succi from the Italian Institute of Technology just put the wall into hard math. They formalized it using scaling laws, entropy bounds, and statistical mechanics. They show that LLMs cannot escape a built-in wall, a hard limit. It is not temporary. It is not due to missing or bad data, or insufficient tuning. It is structural (as has been shown a million times already).
Transformers work by predicting the next token from learned statistical patterns. They are trained to minimize the divergence between the model's probability distribution and the distribution of the training data. But that divergence, a Kullback-Leibler term, cannot be reduced to zero. There is always a nonzero lower bound, and no amount of scaling can push through it.
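Sketched in standard notation (my paraphrase of the argument, not the paper's exact formalism): the next-token objective is a cross-entropy, which splits into the data entropy plus a KL term, so the loss has a hard floor.

```latex
% Cross-entropy decomposition of the training loss. p is the data
% distribution, q_theta the model distribution. H(p) is irreducible,
% and if no q_theta can match p exactly, the KL term has a nonzero
% floor epsilon that scaling cannot remove.
\mathcal{L}(\theta) = H(p, q_\theta)
  = H(p) + D_{\mathrm{KL}}\!\left(p \,\|\, q_\theta\right),
\qquad
\inf_\theta D_{\mathrm{KL}}\!\left(p \,\|\, q_\theta\right) = \varepsilon > 0 .
```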
The transformer, as it scales, runs into diminishing returns because it lacks the representational capacity to fully capture the structure of natural language or the world it references. It cannot reliably compress a complex semantic signal into a smooth Gaussian latent and then recover the full structure when decoding. The LLM acts as a lossy compressor, amplifying non-Gaussian output from Gaussian priors, which fuels hallucinations.
The asymptotic flattening of improvement is a direct result of how the model approximates statistical relationships rather than learning grounded semantic meaning. The hallucinations and confident errors are not outliers. They are structural consequences of this limitation (duhhhhhhhh).
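That flattening is what a saturating scaling law looks like. A hedged sketch in the standard empirical form (Kaplan/Hoffmann-style; not necessarily the exact expression Coveney and Succi use):

```latex
% Saturating scaling law: test loss versus parameter count N.
% L_inf is the irreducible floor (at least the data entropy plus the
% modelling gap above); alpha sets how fast returns diminish.
L(N) = L_{\infty} + \left( \frac{N_c}{N} \right)^{\alpha},
\qquad
\lim_{N \to \infty} L(N) = L_{\infty} > 0 .
```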
The transformer was not built to model human cognition. It was built to solve language-to-language translation. It was designed to map between two structured domains of text, with some stochastic variability added to allow for flexibility in phrasing. That is it. Yet here we are, doing stupid shit with it. Scaling a system that was never intended to understand. Wrapping retrieval hacks around it. Feeding it synthetic reinforcement loops and curated context. Meh.
What we are building are high-resolution approximators. Useful in narrow, closed-world applications or for brainstorming ideas. Dangerous in anything that requires actual reasoning, consistency, or correctness.
The LLM wall is real. It is measurable, and it is being hit by frontier labs. And pretending that another gazillion parameters will unlock cognition is not innovation. It is delulu bullshit, and yet most of us gobble it all up.
We are not advancing toward general intelligence. We are just hyperscaling a tool designed for translation and acting surprised when it fails to actually think. All it can really do is context-conditioned, approximate pattern retrieval.
New data centers expected to demand 7 billion gallons of water a year
As data centers demand more and more water and energy, experts suggest communities adopt policies that prevent energy bills from rising and water supplies from shrinking.
Jacy Reese Anthis, Kristian Lum, Michael Ekstrand, Avi Feller, Chenhao Tan. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025.
Delta moves toward eliminating set prices in favor of AI that determines how much you personally will pay for a ticket
The airline touted a partnership with an AI-enabled revenue system as a step on the road to fully personalized ticket pricing, part of its goal to raise profit margins long-term.
Not to be alarmist or anything, but we're basically witnessing the commodification of individual economic psychology in real-time.
You just paid $847 for a flight. The person next to you paid $332. Same seat. Same flight. Same airline. ✈️
Enter: Delta, AI & the death of price transparency.
Delta just quietly announced they're rolling out AI that basically reads your digital soul and charges accordingly. Seriously. This is the beta test for post-market capitalism.
How does it work?
The AI doesn't just see supply and demand - it sees YOU.
Your browsing desperation at 2am. 🌙
Your zip code.
How you shop.
Whether you clear cookies.
The algorithm builds a psychological profile and asks: "What will THIS human pay?" Then it charges you exactly that.
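For the mechanics, a deliberately crude sketch. Every feature, weight, and number below is invented for illustration; nothing here reflects Delta's actual system:

```python
# Toy willingness-to-pay pricing model. All features, weights, and numbers
# are invented for illustration; this is not any airline's actual system.

BASE_FARE = 332.0  # the price a low-signal shopper might see

def personalized_fare(profile: dict) -> float:
    """Scale the base fare by signals that proxy for urgency and means."""
    multiplier = 1.0
    if profile.get("searched_after_midnight"):   # 2am browsing desperation
        multiplier += 0.6
    if profile.get("high_income_zip"):           # zip-code proxy for means
        multiplier += 0.5
    if profile.get("repeat_searches", 0) > 3:    # checked this route repeatedly
        multiplier += 0.4
    if profile.get("clears_cookies"):            # harder to profile -> less markup
        multiplier -= 0.2
    return round(BASE_FARE * multiplier, 2)

print(personalized_fare({}))  # 332.0
print(personalized_fare({"searched_after_midnight": True,
                         "high_income_zip": True,
                         "repeat_searches": 5}))  # 830.0 -- the $847-style quote
```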
So here’s the ethical/philosophical Q:
If AI can predict exactly what you'll pay, do you really even have a choice anymore?
Think about it. This goes way beyond just harvesting your data to show you ads that may or may not influence you. It’s harvesting your data to extract maximum value from everything you already need to buy.
Every CEO is watching.
Your Uber.
Your Amazon cart. 🛒
Your morning Starbucks.
Your grocery store prices.
All coming soon to an AI near you and charging you whatever it thinks you will pay. 🫠
Is it innovation or exploitation?
Are you ready for an economy where knowing you IS controlling you? And maybe taking away your choices?
NGL, I kind of hate it. 🙃
#AI #Economics #SurveillanceCapitalism
Catch me on BBC's More or Less podcast talking about all things AI and energy! ⚡⚡
TL;DR? As always - it's so hard to get any exact numbers with what's out there! We need more transparency, more accountability, and more data about AI's impact on our planet 🌍
https://lnkd.in/eHVhQ5H8
When Swiping Supplants Scissors: The Hidden Cost of Touchscreens — and how Designers Can Help
The history of technology is full of innovators who got their start creating with their hands. Steve Jobs cites a calligraphy class at Reed College as influencing the design of the Mac; Susan Kare…
Hey! Before we go any further — if you want to support my work, please sign up for the premium version of Where’s Your Ed At, it’s a $7-a-month (or $70-a-year) paid product where every week you get a premium newsletter, all while supporting my free work too.
Also, subscribe to my podcast Better Offline, which is free. Go and subscribe then download every single episode.
One last thing: This newsletter is nearly 14,500 words. It's long. Perhaps consider making a pot of coffee before you start.
An eating disorders chatbot offered dieting advice, raising fears about AI in health
The National Eating Disorders Association took down a controversial chatbot, after users showed how the newest version could dispense potentially harmful advice about dieting and calorie counting.
The Trump administration’s plan, in targeting “ideological bias” and “social engineering agendas” in AI, ultimately enforces them, writes Eryk Salvaggio.
No ‘woke AI’ in Washington, Trump says as he launches American AI action plan
Trump has vowed to push back against "woke" AI models and to turn the U.S. into an "AI export powerhouse," signing three AI-focused executive orders Wednesday.