Reinforcement Learning Explained with Code | The AI Breakthrough Behind the Turing Award
Andrew G. Barto and Richard S. Sutton have just won the Turing Award for their groundbreaking work on Reinforcement Learning! In this video, I’ll break down ...
This study was linked to by The Verge today, and as someone who keeps looking at the energy costs of AI wondering why he’s missing something, one of the opening lines in the study’s executive summary really caught my eye:
Electricity consumption from data centres, artificial intelligence (AI)
Microsoft Cancels Leases for AI Data Centers, Analyst Says
Microsoft Corp. has canceled some leases for US data center capacity, according to TD Cowen, raising broader concerns over whether it’s securing more AI computing capacity than it needs in the long term.
Die KI-Revolution geht weiter: Nach OpenAI und Google präsentiert Elon Musk mit Grok 3 die neue Version seines KI-Chatbots. Das Versprechen: Mehr Leistung, neue Features - und weniger politische Korrektheit.
KI und Rechenzentren: Woher die ganze Energie nehmen? - AlgorithmWatch
Hilft Künstliche Intelligenz dabei, die Klimakrise in den Griff zu bekommen, oder verschlimmert sie sie nur? So oder so werden durch die Flut von KI-Anwendungen immer mehr Rechenzentren gebraucht, für die derzeit Energieressourcen fehlen.
AI chatbots are still hopelessly terrible at summarizing news
BBC News has run as much content-free AI puffery as any other media outlet. But they had their noses rubbed hard in what trash LLMs were when Apple Intelligence ran mangled summaries of BBC stories…
I can't believe I have to write this, but people keep demanding it. Here are my reasons as to why this kind of #LLM-usage is bad, wrong and needs to be stopped: 1) It's a kind of grave-digging and incredibly disrespectful to the real Anne Frank and her family. She, her memory and the things she wrote get abused for our enjoyment, with no regard or care for the real person. How anyone thinks this is even remotely appropriate is beyond me. https://fedihum.org/@lavaeolus/113842459724961937 #AI #LudditeAI 🧵1/3
Playing GPT-x at chess reveals a major limitation of Large Language Models: a complete lack of dynamic reasoning. So many tasks require the ability to plan ahead and evaluate potential outcomes (e.g., is this a good chess move or a bad chess move?) It's arguably the whole point of intelligence. This is coupled with a complete lack of temporal reasoning. LLMs have no sense of history or ordering of events.
Angehängt: 1 Bild Es wurde hier schon ein paar mal geteilt, aber die Studie zur Korrektur mit KI ist schon der Hammer. Aufgabe: Argumentation zum Thema Wahlalter ab 14. Ein nonsense Aufsatz, der vom Schwimmbadbesuch mit der Oma erzählt, wird von der #Fobizz Korrektur-KI (#ChatGPT) mit 1 bis 14 Punkte (Note 1) bewertet 🤣 Unbedingt GANZ lesen!!! https://arxiv.org/pdf/2412.06651 #FediLZ
Zwei Jahre Chat-GPT: Vom Hype bleibt ein mässig nützliches Werkzeug und jede Menge Kosten
KI sei für die Menschheit so revolutionär wie die Nutzbarmachung der Elektrizität – das war das Versprechen. Doch daraus wird nichts, weil das Hauptproblem dieser Technologie ungelöst bleibt.
@ronaldtootall@tech.lgbt @hannu_ikonen@zeroes.ca LLM are not reliable enough to "check facts" this isn't what they are even designed to do well. What they are designed to do is generate plausible seeming streams of text similar to existing sets of text. That is all. There is no logic behind that, no verification. It's pure chance. Do not use them to check facts, please.
Join Mark Russinovich & Scott Hanselman to explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: Hallucination, indirect prompt injection and jailbreaks (or direct prompt injection). We'll explore each of these three key risks in depth, examining their origins, potential impacts, and strategies for mitigation and how to work towards harnessing the immense potential of LLMs while responsibly managing their inherent risks.
Generative AI Doesn't Have a Coherent Understanding of the World, MIT Researchers Find - Slashdot
Long-time Slashdot reader Geoffrey.landis writes: Despite its impressive output, a recent study from MIT suggests generative AI doesn't have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem like the models are impli…
Prof. Emily M. Bender(she/her) (@emilymbender@dair-community.social)
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access. A thread, with links: Chirag Shah and I wrote about this in two academic papers: 2022: https://dl.acm.org/doi/10.1145/3498366.3505816 2024: https://dl.acm.org/doi/10.1145/3649468 We also have an op-ed from Dec 2022: https://iai.tv/articles/all-knowing-machines-are-a-fantasy-auid-2334