ai

225 bookmarks
So gelingt ein guter Prompt
To coax useful answers out of an artificial intelligence, you need well-aimed questions. Mihaela Bozukova and Daniel Hölle show how a simple formula can help you develop effective prompts for working with AI tools. AI tools such as ChatGPT, Perplexity, or DeepL can make everyday work in science communication considerably easier. A precise, well-formulated prompt determines whether […]
·wissenschaftskommunikation.de·
Another study on LLM energy use
This study was linked to by The Verge today, and as someone who keeps looking at the energy costs of AI wondering why he’s missing something, one of the opening lines in the study’s executive summary really caught my eye: Electricity consumption from data centres, artificial intelligence (AI)
·birchtree.me·
Musks Super-KI: Was Grok 3 wirklich kann
The AI revolution continues: following OpenAI and Google, Elon Musk presents Grok 3, the new version of his AI chatbot. The promise: more performance, new features, and less political correctness.
·br.de·
INTRODUCTION
MODERN-DAY ORACLES or BULLSHIT MACHINES? How to thrive in a ChatGPT world
·thebullshitmachines.com·
KI und Rechenzentren: Woher die ganze Energie nehmen? - AlgorithmWatch
Does artificial intelligence help bring the climate crisis under control, or does it only make it worse? Either way, the flood of AI applications calls for ever more data centres, for which energy resources are currently lacking.
·algorithmwatch.org·
AI chatbots are still hopelessly terrible at summarizing news
BBC News has run as much content-free AI puffery as any other media outlet. But they had their noses rubbed hard in what trash LLMs were when Apple Intelligence ran mangled summaries of BBC stories…
·pivot-to-ai.com·
Henrik Schönemann (@lavaeolus@fedihum.org)
I can't believe I have to write this, but people keep demanding it. Here are my reasons as to why this kind of #LLM-usage is bad, wrong and needs to be stopped: 1) It's a kind of grave-digging and incredibly disrespectful to the real Anne Frank and her family. She, her memory and the things she wrote get abused for our enjoyment, with no regard or care for the real person. How anyone thinks this is even remotely appropriate is beyond me. https://fedihum.org/@lavaeolus/113842459724961937 #AI #LudditeAI 🧵1/3
·fedihum.org·
Jason Gorman (@jasongorman@mastodon.cloud)
Playing GPT-x at chess reveals a major limitation of Large Language Models: a complete lack of dynamic reasoning. So many tasks require the ability to plan ahead and evaluate potential outcomes (e.g., is this a good chess move or a bad chess move?) It's arguably the whole point of intelligence. This is coupled with a complete lack of temporal reasoning. LLMs have no sense of history or ordering of events.
·mastodon.cloud·
Drei Fragen und Antworten: Angriffe auf KI-Systeme
Companies that deploy AI systems expand their attack surface. What makes attacks on AI special, and how you can protect yourself.
·heise.de·
2412
·arxiv.org·
HJB (@HJB@bildung.social)
Attached: 1 image. It has been shared here a few times already, but the study on AI-assisted grading is really something. The task: an argumentative essay on lowering the voting age to 14. A nonsense essay recounting a trip to the swimming pool with grandma is scored by the #Fobizz correction AI (#ChatGPT) with 1 to 14 points (grade 1) 🤣 Be sure to read the WHOLE thing!!! https://arxiv.org/pdf/2412.06651 #FediLZ
·bildung.social·
myrmepropagandist (@futurebird@sauropods.win)
@ronaldtootall@tech.lgbt @hannu_ikonen@zeroes.ca LLMs are not reliable enough to "check facts"; that isn't even what they are designed to do well. What they are designed to do is generate plausible-seeming streams of text similar to existing sets of text. That is all. There is no logic behind it, no verification. It's pure chance. Do not use them to check facts, please.
·sauropods.win·
Scott and Mark learn responsible AI
Join Mark Russinovich and Scott Hanselman to explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: hallucination, indirect prompt injection, and jailbreaks (direct prompt injection). The session examines each risk in depth, including its origins, potential impacts, and mitigation strategies, and looks at how to harness the immense potential of LLMs while responsibly managing their inherent risks.
·ignite.microsoft.com·
Generative AI Doesn't Have a Coherent Understanding of the World, MIT Researchers Find - Slashdot
Long-time Slashdot reader Geoffrey.landis writes: Despite its impressive output, a recent study from MIT suggests generative AI doesn't have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem like the models are impli…
·m.slashdot.org·
Prof. Emily M. Bender(she/her) (@emilymbender@dair-community.social)
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access. A thread, with links: Chirag Shah and I wrote about this in two academic papers: 2022: https://dl.acm.org/doi/10.1145/3498366.3505816 2024: https://dl.acm.org/doi/10.1145/3649468 We also have an op-ed from Dec 2022: https://iai.tv/articles/all-knowing-machines-are-a-fantasy-auid-2334
·dair-community.social·