KI und Rechenzentren: Woher die ganze Energie nehmen? - AlgorithmWatch
Hilft Künstliche Intelligenz dabei, die Klimakrise in den Griff zu bekommen, oder verschlimmert sie sie nur? So oder so werden durch die Flut von KI-Anwendungen immer mehr Rechenzentren gebraucht, für die derzeit Energieressourcen fehlen.
AI chatbots are still hopelessly terrible at summarizing news
BBC News has run as much content-free AI puffery as any other media outlet. But they had their noses rubbed hard in what trash LLMs were when Apple Intelligence ran mangled summaries of BBC stories…
I can't believe I have to write this, but people keep demanding it. Here are my reasons as to why this kind of #LLM-usage is bad, wrong and needs to be stopped: 1) It's a kind of grave-digging and incredibly disrespectful to the real Anne Frank and her family. She, her memory and the things she wrote get abused for our enjoyment, with no regard or care for the real person. How anyone thinks this is even remotely appropriate is beyond me. https://fedihum.org/@lavaeolus/113842459724961937 #AI #LudditeAI 🧵1/3
Playing GPT-x at chess reveals a major limitation of Large Language Models: a complete lack of dynamic reasoning. So many tasks require the ability to plan ahead and evaluate potential outcomes (e.g., is this a good chess move or a bad chess move?) It's arguably the whole point of intelligence. This is coupled with a complete lack of temporal reasoning. LLMs have no sense of history or ordering of events.
Angehängt: 1 Bild Es wurde hier schon ein paar mal geteilt, aber die Studie zur Korrektur mit KI ist schon der Hammer. Aufgabe: Argumentation zum Thema Wahlalter ab 14. Ein nonsense Aufsatz, der vom Schwimmbadbesuch mit der Oma erzählt, wird von der #Fobizz Korrektur-KI (#ChatGPT) mit 1 bis 14 Punkte (Note 1) bewertet 🤣 Unbedingt GANZ lesen!!! https://arxiv.org/pdf/2412.06651 #FediLZ
Zwei Jahre Chat-GPT: Vom Hype bleibt ein mässig nützliches Werkzeug und jede Menge Kosten
KI sei für die Menschheit so revolutionär wie die Nutzbarmachung der Elektrizität – das war das Versprechen. Doch daraus wird nichts, weil das Hauptproblem dieser Technologie ungelöst bleibt.
@ronaldtootall@tech.lgbt @hannu_ikonen@zeroes.ca LLM are not reliable enough to "check facts" this isn't what they are even designed to do well. What they are designed to do is generate plausible seeming streams of text similar to existing sets of text. That is all. There is no logic behind that, no verification. It's pure chance. Do not use them to check facts, please.
Join Mark Russinovich & Scott Hanselman to explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: Hallucination, indirect prompt injection and jailbreaks (or direct prompt injection). We'll explore each of these three key risks in depth, examining their origins, potential impacts, and strategies for mitigation and how to work towards harnessing the immense potential of LLMs while responsibly managing their inherent risks.
Generative AI Doesn't Have a Coherent Understanding of the World, MIT Researchers Find - Slashdot
Long-time Slashdot reader Geoffrey.landis writes: Despite its impressive output, a recent study from MIT suggests generative AI doesn't have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem like the models are impli…
Prof. Emily M. Bender(she/her) (@emilymbender@dair-community.social)
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access. A thread, with links: Chirag Shah and I wrote about this in two academic papers: 2022: https://dl.acm.org/doi/10.1145/3498366.3505816 2024: https://dl.acm.org/doi/10.1145/3649468 We also have an op-ed from Dec 2022: https://iai.tv/articles/all-knowing-machines-are-a-fantasy-auid-2334
Introduction to the special issue on AI systems for the public interest | Internet Policy Review
As the debate on public interest AI is still a young and emerging one, we see this special issue as a way to help establish this field and its community by bringing together interdisciplinary positions and approaches.
Es gibt viel Unsicherheit über Datenschutz und Datensicherheit rund um KI-Textgeneratoren wie ChatGPT oder Gemini. Was darf man ihnen anvertrauen? Was soll
Nein! Doch! Oooh! #KI ist teuer und bringt keinen ROI, sagt Gartner??? Gartner sounds alarm on AI cost, data challenges | CX Dive https://www.customerexperiencedive.com/news/gartner-symposium-keynote-AI/731122/
🚑Crazy case yesterday in the ER: fulminant Glianorex infection with REALLY high Neurostabilin levels. Figured I'd ask ChatGPT for help and it honestly would… | 35 comments on LinkedIn
Wobei KI besonders gut helfen kann, ist Muster aus Daten, Texten oder in Bildern oder Videos zu erkennen. Was wiederholt sich, was ergänzt sich, wo ist ein „Bruch“ in einer Folge… Heute 3 Experimente dazu: Experiment 1 – KI Profilbewertung – Was ich nicht weiß Nadja Schwind hat vor ein paar Tagen folgendes Experiment geteilt: […]