Tesla’s former head of AI warns against believing that self-driving is solved
Tesla’s former head of artificial intelligence, Andrej Karpathy, who worked on the automaker’s self-driving effort until 2022, warns against believing...
As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.
This June I attended a summer school on at AI governance.
This June I attended a summer school on at AI governance. Throughout the lectures, It was super clear and frustrating that the lecturers with their different backgrounds (law, international relations, AI governance, etc.) push only the narrative of Big tech companies about superintelligence and its dangers referring only to research done by Big tech.
It was equally surprising, frustrating and disappointing, the the lecturers never heard of Timnit Gebru, Emily M. Bender or Alex Hanna, Ph.D. In fact they did not seem to be familiar with the research on critical AI studies. I was looked crazy as I asked them how come they don't know these researchers and they don't include their research in their understanding of AI and AI capabilities. I could not understand how come? why?.
Today as I was reading chapter 36 In the "Handbook of critical studies of Artificial intelligence" under the title "Barriers to regulating AI: critical observations from a fractured field", by Ashlin Lee, Will Orr, Walter G.Johnson, Jenna Imad Hard and Kathryn Henne, I finally understood why.
The authors argue that since nation states want to support the growth of AI, they decided to defer regulatory responsibilities to external stakeholder groups including think tanks and corporations (My summer school was organised by a think tank). This process is called hybridising governance. With this type of governance, these groups are allowed to define the formal and informal regulations for the state with little direction. The authors go on to explain that "This [type of governance] creates a disorderly regulatory environment that cement power among those already invested in AI while making it difficult for those outside these privileges groups [researchers on critical AI and people harmed by AI] to contribute their knowledge and experience." The authors then go on and explain that "External stakeholders stand to benefit from hybridising regulation of AI, with the public potentially less well served by this arrangement."
This explains why AI governance, in its current format, ultimately overly focuses on Ethical AI guidelines as a mechanism of self-regulation over enforceable regulations. This also explains why the heads of the AI governance school were pushing for the same narrative that Big tech companies keeps repeating that we need to regulate the scary futuristic super intelligence rather than regulating the currently harmful AI systems.
I upload here this section of the chapter for you to read, it is very interesting.
The mainstream view of AI for science says AI will rapidly accelerate science, and that we're on track to cure cancer, double the human lifespan, colonize space, and achieve a century of progress in the next decade.
The mainstream view of AI for science says AI will rapidly accelerate science, and that we're on track to cure cancer, double the human lifespan, colonize space, and achieve a century of progress in the next decade.
In a new essay, Arvind Narayanan and I argue that AI's impact could be precisely the opposite: AI could *slow* rather than hasten science. Link to essay: https://lnkd.in/e_sD7dzg
1) The production-progress paradox
Scientific papers have increased 500-fold since 1900, with funding and researchers growing exponentially. Yet genuine progress—measured by disruptive discoveries, new scientific terminology, Nobel-worthy breakthroughs, and research productivity—has remained constant or declined. Multiple metascience studies confirm this troubling disconnect between production and progress.
AI could worsen this by making it even easier to chase productivity metrics while homogenizing research approaches.
2) Science is not ready for software, let alone AI
Scientists are notoriously poor software engineers, lacking basic practices like testing and version control. Papers rarely share code, and when they do, it's often error-riddled. AI has already led to widespread errors across 600+ papers in 30 fields, with many COVID-19 diagnosis papers proving clinically useless. Science needs to catch up to 50 years of software engineering—fast.
3) AI might prolong the reliance on flawed theories
AI excels at prediction without understanding, like adding epicycles to the geocentric model (which improved predictive accuracy) rather than discovering heliocentrism.
Scientific progress requires theoretical advances, not just predictive accuracy. AI might trap fields in intellectual ruts by making flawed theories more useful without revealing their fundamental errors.
4) Human understanding remains essential
Science isn't just about finding solutions—it's about building human understanding. AI risks short-circuiting this process, like using a forklift at the gym. Evidence shows AI-adopting papers focus on known problems rather than generating new ones.
5) Implications for the future of science
Individual researchers should develop software skills and avoid using AI as a crutch. More importantly, institutions must invest in meta-science research, reform publish-or-perish incentives, and rethink AI tools to target actual bottlenecks like error detection rather than flashy discoveries. Evaluation should consider collective impacts, not just individual efficiency.
6) Final thoughts
While we ourselves use AI enthusiastically in our workflows, we warn against conflating individual benefits with institutional impacts. Science lacks the market mechanisms that provide quality control in industry, making rapid AI adoption particularly risky. We're optimistic that scientific norms will eventually adapt, but expect a bumpy ride ahead. | 75 comments on LinkedIn
Lessons from a Chimp: AI "Scheming" and the Quest for Ape Language
We examine recent research that asks whether current AI systems may be developing a capacity for "scheming" (covertly and strategically pursuing misaligned goals). We compare current research practices in this field to those adopted in the 1970s to test whether non-human primates could master natural language. We argue that there are lessons to be learned from that historical research endeavour, which was characterised by an overattribution of human traits to other agents, an excessive reliance on anecdote and descriptive analysis, and a failure to articulate a strong theoretical framework for the research. We recommend that research into AI scheming actively seeks to avoid these pitfalls. We outline some concrete steps that can be taken for this research programme to advance in a productive and scientifically rigorous fashion.
Gartner: Over 40% of Agentic AI Projects Will Be Canceled by End 2027
Over 40% of agentic AI projects will be canceled by the end of 2027, due to escalating costs, unclear business value or inadequate risk controls, according to Gartner.
The party trick called LLM - blowing away smoke and break some mirrors - De Staat van het Web!
Large Language Models fool you. They don't produce language, but place words in a row. But it's understandable that you think you are dealing with a clever computer. One that occasionally says something that resembles the truth and sounds nice and reliable. You are excused to believe in this magic of ‘AI’ but not after I tell you the trick.
Fake QR codes are popping up on meters — don’t scan them, says Montreal parking agency
The agency in charge of parking in the city hung signs on meters to encourage people to download their new parking app, Mobicité. Some of the signs were vandalized with fake QR codes, which might direct people to a fraudulent website.
I’m Is there a point at which the sane part of the world just goes “maybe, just maybe we should stop, take a breath and ask ourselves ‘Is this a direction we want to be travelling’” before enacting first principles based regulation to alter the default trajectory?
I’m Is there a point at which the sane part of the world just goes “maybe, just maybe we should stop, take a breath and ask ourselves ‘Is this a direction we want to be travelling’” before enacting first principles based regulation to alter the default trajectory?
How a leader in the field unilaterally ran a test to see if there’s any pushback to him amplifying his voice via an AI trained to defer to his opinions? What possible additional warning would you need? How can we have binders of law and regulation on TV and Radio and Print, for good reasons and … fail for decades to regulate platforms?
How long can the cognitive dissonance between “there will be growth and this is all teething problems on the way to tech utopia” and the clear and present trajectory to civilization collapse before model collapse can be maintained?
How much longer will we be forced to endure performative AI ethics summits talking about aligning a technology whose makers have taken control of societies control mechanism and are long beyond alignment?
Are we so far down the drain that no global government dares to pull the off switch on X out of fear of the oligarch controlling it or the US government and the platform having zero economic upsides, barely any jobs anymore?
Can we get serious now?
Break the whole AI and Agent ketamine haze for a moment and actually talk about how we deal with this all before it’s too late?
Can we talk about how fundamental flaws like prompt injection destroy any chance for the technology to be made useful in the far marjority of hinted usecases in the next few years?
How it forces us to abandon cybersecurity and corporate sovereignty to reap the “benefits”, how the “AI Layoffs” are not because the technology is working out but because it’s expensive. - before talking about “AI adoption”.
How it’s essentially outsourcing, with all warts like knowledge transfer and disintermediation risk, but you pay for failed results too, per token, and the vendor forces you to do all the QA and shoulders no responsibility?
Can we have leaders who lead instead
of endlessly trying to just keep things running? Because we’re going to need that. We need bold leaders with understanding and vision. Globally. Not managers who just try to keep the box from falling apart, because the box is on the train tracks. | 17 comments on LinkedIn
Jevons Paradox in action! 📈
NVIDIA likes talking about how each generation of AI hardware is getting more efficient -- with Blackwell using 50x less energy per token than Hopper in 2022 -- but LLM usage (conservatively) went up from 100B tokens per month to 2T tokens per month 𝗷𝘂𝘀𝘁 𝗶𝗻 𝘁𝗵𝗲 𝗹𝗮𝘀𝘁 𝘆𝗲𝗮𝗿, according to Open Router (https://lnkd.in/evAWESXX).
This means that efficiency gains are being outpaced by the growth in usage, by far! Focusing on efficiency alone is missing the forest for the trees... | 13 comments on LinkedIn
Morally corrupt innovations are the easiest innovations to create – It’s the lazy approach with dangerous consequences - The CEO Retort with Tim El-Sheikh
We live in an era where moral corruption is the current economic priority – an economy built on shortcuts, cutting corners, and breaking the law. All, of course…
AI coders think they’re 20% faster — but they’re actually 19% slower
Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit AI doomsday cult, and they take funding from the AI doomsday cult organisat…
The NYT published a fascinating article last month on the conundrum of AI accuracy and reliability. They found that even as AI models were getting more powerful, they generated more errors, not fewer. In OpenAI’s own tests, their newest models hallucinated at higher rates than their previous models. One of their benchmarks is called a
Technologies of control and our right of refusal | Seeta Peña Gangadharan | TEDxLondon
Most of us don’t realise how much digital systems govern access to our basic public services, like education, health and housing. Even more terrifying is how...