The AI Hype is a Dead Man Walking.
The AI Hype is a Dead Man Walking. The Math Finally Proves It.

For the past two years, the AI industry has been operating on a single, seductive promise: that if we just keep scaling our current models, we'll eventually arrive at AGI. A wave of new research, brilliantly summarized in a recent video analysis, has finally provided the mathematical proof that this promise is a lie. This isn't just another opinion; it's a brutal, two-pronged assault on the very foundations of the current AI paradigm:

1. The Wall of Physics: The first paper reveals a terrifying reality about the economics of reliability. To reduce the error rate of today's LLMs by even a few orders of magnitude, enough to make them truly trustworthy for enterprise use, would require 10^20 times more computing power. This isn't just a challenge; it's a physical impossibility. We have hit a hard wall where the cost of squeezing out the last few percentage points of reliability is computationally insane. The era of brute-force scaling is over.

2. The Wall of Reason: The second paper is even more damning. It argues that "Chain-of-Thought," the supposed evidence of emergent reasoning in LLMs, is a "brittle mirage". The models aren't reasoning; they are performing a sophisticated pattern-match against their training data. The moment a problem deviates even slightly from that data, the "reasoning" collapses entirely. This confirms what skeptics have been saying all along: we have built a world-class "statistical parrot," not a thinking machine.

This is the end of the "Blueprint Battle." The LLM-only blueprint has failed. The path forward is not to build a bigger parrot, but to invest in the hard, foundational research for a new architecture. The future belongs to "world models," like those being pursued by Yann LeCun and others: systems that learn from interacting with a real or virtual world, not just from a library of text.
The "disappointing" GPT-5 launch wasn't a stumble; it was the first, visible tremor of this entire architectural paradigm hitting a dead end. The hype is over. Now the real, foundational work of inventing the next paradigm begins.
·linkedin.com·
In a now viral study, researchers examined how using ChatGPT for essay writing affects our brains and cognitive abilities.
In a now viral study, researchers examined how using ChatGPT for essay writing affects our brains and cognitive abilities. They divided participants into three groups: one using ChatGPT, one using search engines, and one using just their brains. Through EEG monitoring, interviews, and analysis of the essays, they discovered some not surprising results about how AI use impacts learning and cognitive engagement. There were five key takeaways for me (although this is not an exhaustive list), within the context of this particular study:

1. The Cognitive Debt Issue. The study indicates that participants who used ChatGPT exhibited the weakest neural connectivity patterns when compared to those relying on search engines or unaided cognition. This suggests that defaulting to generative AI may function as an intellectual shortcut, diminishing rather than strengthening cognitive engagement. Researchers are increasingly describing the tradeoff between short-term ease and productivity and long-term erosion of independent thinking and critical skills as "cognitive debt." This parallels the concept of technical debt, where developers prioritise quick solutions over robust design, leading to hidden costs, inefficiencies, and increased complexity downstream.

2. The Memory Problem. Strikingly, users of ChatGPT had difficulty recalling or quoting from essays they had composed only minutes earlier. This undermines the notion of augmentation; rather than supporting cognitive function, the tool appears to offload essential processes, impairing retention and deep processing of information.

3. The Ownership Gap. Participants who used ChatGPT reported a reduced sense of ownership over their work. If we normalise over-reliance on AI tools, we risk cultivating passive knowledge consumers rather than active knowledge creators.

4. The Homogenisation Effect. Analysis showed that essays from the LLM group were highly uniform, with repeated phrases and limited variation, suggesting reduced cognitive and expressive diversity. In contrast, the Brain-only group produced more varied and original responses. The Search group fell in between.

5. The Potential for Constructive Re-engagement. There is, however, promising evidence for meaningful integration of AI when used in conjunction with prior unaided effort: "Those who had previously written without tools (Brain-only group), the so-called Brain-to-LLM group, exhibited significant increase in brain connectivity across all EEG frequency bands when allowed to use an LLM on a familiar topic. This suggests that AI-supported re-engagement invoked high levels of cognitive integration, memory reactivation, and top-down control." This points to the potential for AI to enhance cognitive function when it is used as a complement to, rather than a substitute for, initial human effort.

At over 200 pages, expect multiple paper submissions out of this extensive body of work. https://lnkd.in/gzicDHp2
·linkedin.com·
Spot the Bias.
This summer semester, I tried out a new game with my students: Spot the Bias. 1. Open Midjourney or another image AI. 2. Enter a…
·linkedin.com·