Prof. Emily M. Bender (she/her) (@emilymbender@dair-community.social)
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access. A thread, with links:
Chirag Shah and I wrote about this in two academic papers:
2022: https://dl.acm.org/doi/10.1145/3498366.3505816
2024: https://dl.acm.org/doi/10.1145/3649468
We also have an op-ed from Dec 2022: https://iai.tv/articles/all-knowing-machines-are-a-fantasy-auid-2334
Introduction to the special issue on AI systems for the public interest | Internet Policy Review
As the debate on public interest AI is still a young and emerging one, we see this special issue as a way to help establish this field and its community by bringing together interdisciplinary positions and approaches.
There is a lot of uncertainty about data protection and data security around AI text generators like ChatGPT or Gemini. What can you safely entrust to them? What should…
No! Yes! Oooh! #AI is expensive and delivers no ROI, says Gartner??? Gartner sounds alarm on AI cost, data challenges | CX Dive https://www.customerexperiencedive.com/news/gartner-symposium-keynote-AI/731122/
🚑Crazy case yesterday in the ER: fulminant Glianorex infection with REALLY high Neurostabilin levels. Figured I'd ask ChatGPT for help and it honestly would…
Where AI can help particularly well is in recognizing patterns in data, texts, images, or videos. What repeats, what complements what, where is there a "break" in a sequence… Today, three experiments on this: Experiment 1 – AI profile assessment – What I don't know. Nadja Schwind shared the following experiment a few days ago: […]
‘Thirsty’ ChatGPT uses four times more water than previously thought
The massive computer clusters powering artificial intelligence consume vast quantities of water to answer the world’s queries, but how is Big Tech redressing the balance?
Cash incinerator OpenAI secures its $6.6 billion lifeline — ‘in the spirit of a donation’
In the largest venture-capital-backed investment round of all time, OpenAI has successfully raised $6.6 billion from its most gullible brilliant and handsome friends. This gives OpenAI an imaginary…
Papers-Literature-ML-DL-RL-AI/General-Machine-Learning/The Hundred-Page Machine Learning Book by Andriy Burkov/Links to read the chapters online.md at master · tirthajyoti/Papers-Literature-ML-DL-RL-AI · GitHub
Highly cited and useful papers related to machine learning, deep learning, AI, game theory, reinforcement learning - tirthajyoti/Papers-Literature-ML-DL-RL-AI
Visualizing attention, the heart of a transformer | Chapter 6, Deep Learning - YouTube
Demystifying attention, the key mechanism inside transformers and LLMs.
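As a rough illustration of the mechanism the video demystifies, here is a minimal NumPy sketch of single-head scaled dot-product attention with a causal mask. All names and sizes (d_model, d_head, W_Q, W_K, W_V) are illustrative assumptions, not code from the video.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def single_head_attention(X, W_Q, W_K, W_V, causal=True):
    """X: (seq_len, d_model) token embeddings; W_Q, W_K, W_V: (d_model, d_head)."""
    Q = X @ W_Q                            # queries, (seq_len, d_head)
    K = X @ W_K                            # keys,    (seq_len, d_head)
    V = X @ W_V                            # values,  (seq_len, d_head)
    d_head = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_head)     # attention pattern before softmax
    if causal:
        # Mask out attention to future positions (upper triangle -> -inf).
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    A = softmax(scores, axis=-1)           # each row sums to 1
    return A @ V                           # weighted sum of value vectors

# Toy usage with random weights (illustrative only).
rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 16, 4
X = rng.normal(size=(seq_len, d_model))
W_Q, W_K, W_V = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(single_head_attention(X, W_Q, W_K, W_V).shape)  # (5, 4)
```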
Instead of sponsored ad reads, these lessons are funded directly by viewers: https://3b1b.co/support
Special thanks to these supporters: https://www.3blue1brown.com/lessons/attention#thanks
An equally valuable form of support is to simply share the videos.
Demystifying self-attention, multiple heads, and cross-attention.
Instead of sponsored ad reads, these lessons are funded directly by viewers: https://3b1b.co/support
The first pass for the translated subtitles here is machine-generated, and therefore notably imperfect. To contribute edits or fixes, visit https://translate.3blue1brown.com/
And yes, at 22:00 (and elsewhere), "breaks" is a typo.
------------------
Here are a few other relevant resources
Build a GPT from scratch, by Andrej Karpathy
https://youtu.be/kCc8FmEb1nY
If you want a conceptual understanding of language models from the ground up, @vcubingx just started a short series of videos on the topic:
https://youtu.be/1il-s4mgNdI?si=XaVxj6bsdy3VkgEX
If you're interested in the herculean task of interpreting what these large networks might actually be doing, the Transformer Circuits posts by Anthropic are great. In particular, it was only after reading one of these that I started thinking of the combination of the value and output matrices as being a combined low-rank map from the embedding space to itself, which, at least in my mind, made things much clearer than other sources.
https://transformer-circuits.pub/2021/framework/index.html
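To make that low-rank observation concrete, here is a small NumPy check with arbitrary toy dimensions (not taken from any real model): composing a per-head value matrix with the corresponding slice of the output matrix gives a single d_model × d_model map whose rank is capped at d_head.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head = 64, 8   # toy sizes, purely for illustration

# Per-head value matrix: embedding space -> the head's value space.
W_V = rng.normal(size=(d_head, d_model))
# The head's slice of the output matrix: value space -> embedding space.
W_O = rng.normal(size=(d_model, d_head))

# Their composition is one map from the embedding space to itself...
value_output_map = W_O @ W_V                       # shape (d_model, d_model)
# ...but its rank is limited by the head dimension it was factored through.
print(value_output_map.shape)                      # (64, 64)
print(np.linalg.matrix_rank(value_output_map))     # at most 8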
Site with exercises related to ML programming and GPTs
https://www.gptandchill.ai/codingproblems
History of language models by Brit Cruise, @ArtOfTheProblem
https://youtu.be/OFS90-FX6pg
An early paper on how directions in embedding spaces have meaning:
https://arxiv.org/pdf/1301.3781.pdf
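A toy sketch of the idea from that paper, using hand-made three-dimensional vectors (real learned embeddings have hundreds of dimensions; the numbers here are invented purely for illustration): a direction such as "man" → "woman" can carry meaning, so adding it to "king" should land nearest to "queen".

```python
import numpy as np

# Hypothetical toy "embeddings" for illustration only.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should be closest to queen with these toy vectors.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)  # queen
```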
------------------
Timestamps:
0:00 - Recap on embeddings
1:39 - Motivating examples
4:29 - The attention pattern
11:08 - Masking
12:42 - Context size
13:10 - Values
15:44 - Counting parameters
18:21 - Cross-attention
19:19 - Multiple heads
22:16 - The output matrix
23:19 - Going deeper
24:54 - Ending
------------------
These animations are largely made using a custom Python library, manim. See the FAQ comments here:
https://3b1b.co/faq#manim
https://github.com/3b1b/manim
https://github.com/ManimCommunity/manim/
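For a sense of what a manim scene looks like, here is a minimal sketch targeting the community edition linked above; note that 3blue1brown's own videos use his personal fork (the second link), whose API differs slightly (e.g. ShowCreation instead of Create).

```python
from manim import Scene, Circle, Create, BLUE

class MinimalScene(Scene):
    def construct(self):
        circle = Circle(color=BLUE)   # a simple shape to animate
        self.play(Create(circle))     # draw it on screen
        self.wait()
```

Rendered with something like: manim -pql minimal_scene.py MinimalScene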
All code for specific videos is visible here:
https://github.com/3b1b/videos/
The music is by Vincent Rubinetti.
https://www.vincentrubinetti.com
https://vincerubinetti.bandcamp.com/album/the-music-of-3blue1brown
https://open.spotify.com/album/1dVyjwS8FBqXhRunaG5W5u
------------------
3blue1brown is a channel about animating math, in all senses of the word animate. If you're reading the bottom of a video description, I'm guessing you're more interested than the average viewer in lessons here. It would mean a lot to me if you chose to stay up to date on new ones, either by subscribing here on YouTube or otherwise following on whichever platform below you check most regularly.
Mailing list: https://3blue1brown.substack.com
Twitter: https://twitter.com/3blue1brown
Instagram: https://www.instagram.com/3blue1brown
Reddit: https://www.reddit.com/r/3blue1brown
Facebook: https://www.facebook.com/3blue1brown
Patreon: https://patreon.com/3blue1brown
Website: https://www.3blue1brown.com
Google’s GameNGen AI Doom video game generator: dissecting a rigged demo
Did you know you can play Doom on a diffusion model now? It’s true, Google just announced it! Just don’t read the paper too closely. In their paper “Diffusion models are real-time game engines,” Go…
Le "nuove" tre leggi della robotica nell'era della AI - Gravita Zero: comunicazione scientifica e istituzionale
Negli ultimi decenni, il mondo della robotica ha visto enormi cambiamenti, portando alla nascita di nuove tecnologie e, soprattutto, di nuove sfide etiche e legali. Isaac Asimov, celebre scrittore di fantascienza, aveva anticipato queste problematiche negli anni ’40, quando formulò le sue tre leggi della robotica. Queste leggi immaginate da Asimov, nonostante siano state sviluppate …
AI generates covertly racist decisions about people based on their dialect
Nature - Despite efforts to remove overt racial prejudice, language models using artificial intelligence still show covert racism against speakers of African American English that is triggered by...
Productivity gains in Software Development through AI
Especially in IT and software development, numbers keep popping up about “savings” through AI. Amazon, for example, claims to have “saved” 4500 person years of work. These numbers have to be taken with a grain of salt and shouldn’t be interpreted as “oh, we will save massive amounts of work by using AI, let’s fire […]
A schism lies at the heart of the field of artificial intelligence. Since its inception, the field has been defined by an intellectual tug-of-war between two opposing philosophies: connectionism and symbolism. These two camps have deeply divergent visions as to how to "solve" intelligence, with differing research agendas and sometimes bitter relations.