AI Literacy

170 bookmarks
The complete history of Artificial Intelligence: Alan Turing to ChatGPT
The world of AI is growing at such an insane pace that I wanted to make a video that takes a step back and looks at the entire timeline of it all: how fast are we moving, and can we even grasp the fact that it’s going to take off at an even greater pace? Join me as I try to answer that question and look at every major breakthrough, innovation, and event that shaped the world of tech and AI that we are currently living in. Time Stamps: 0:00 Intro · 1:24 Rule-Based Era · 4:07 Machine Learning Era · 7:44 Deep Learning Era · 12:00 Outro
·youtube.com·
Performing AI literacy
Photo by Jess Bailey on Unsplash A new international test of young people’s “AI literacy” has been announced by the OECD. Providing a global measurement of the competencies to engage with AI, the t…
·codeactsineducation.wordpress.com·
Engineered for Attachment: The Hidden Psychology of AI Companions | Punya Mishra's Web
Dishonest Anthropomorphism is about the kinds of design choices made by these companies to leverage our ingrained tendency to attribute human-like qualities to non-human entities. Emulated empathy describes how AI systems intentionally seek to simulate genuine emotional understanding, misleading users about the true nature of the interaction.
·punyamishra.com·
Teaching AI Ethics 2025: Bias
This post is the first in a nine-article series revisiting the 2023 "Teaching AI Ethics" resources, starting with bias in generative AI.
·leonfurze.com·
4o Image Generation in ChatGPT and Sora
Sam Altman, Gabriel Goh, Prafulla Dhariwal, Lu Liu, Allan Jabri, and Mengchao Zhong introduce and demo 4o image generation.
·youtube.com·
Demystifying the Transformer Model
Note: this blog post is a final paper for my UCSB WRIT105SW course. As such, it is a slight deviation from my standard writing and may assume a lot less prerequisite math and computer science knowledge than some other posts on this account. Nonetheless, I attest it's still top notch :)

Part 1: The Turing Test

In 1950, mathematician and computer scientist Alan Turing pondered whether there was a fundamentally philosophical way to answer the question "can machines think?" In light of this question, he proposed the thought experiment of the "Turing Test," a theoretical construct that could discern a machine from a human based on how it responds to questions. These were often simple questions that prompted complex answers:

- Describe yourself using only colors and shapes. (A machine would struggle with the abstraction from complex human characteristics to simple shapes and colors.)
- Do more people go to Russia than me? (This sentence is syntactically correct but semantically nonsense; a human cannot answer it properly.)
- Describe why time flies like an arrow but fruit flies like a banana. (A machine would struggle to interpret whether the second "flies" is a noun or a verb.)

Back in 1950, machines were simply far too computationally inept to make a dent in any of these questions. After all, Turing himself had mathematically proven in 1936 that no algorithm could decide whether a program terminates or loops forever. Computer scientists began to realize that computers couldn't do everything. Someone could almost believe that there was a theoretical boundary of computing here too: some sort of human-computer interaction limit.

But they never found it. In fact, it may not exist.

I vividly remember having to write a paper on Polka tradition for my ethnomusicology course when I first heard whispers of a possibly ground-breaking tool called ChatGPT. I couldn't believe my eyes: the essay was done.
It was coherent, well-structured, and it captured every nuance that I gave in the prompt.

The onset of transformer models like ChatGPT single-handedly shattered my perception of everything I knew about computers. It surpasses every previous language model by miles. It passes countless Turing Tests. And through it all, it can masquerade as a human and emulate responses that signify an understanding of the meaning of the sentences given to it.

How is this all possible? Behold, the transformer model.

Part 2: What does generative mean?

In ChatGPT, the GPT stands for "generative pre-trained transformer." In the context of machine learning, a generative model is one that outputs new creative content based on a prompt or some input sequence. In the context of transformer models, the prediction is made one word at a time, based on the string of words that came before it.

[Image: The model predicts that the word "over" is the most likely to follow in the sequence.]

Take the above example. The crux of the prediction process is that each word is generated one at a time; this is actually the reason why ChatGPT slowly types out a response word by word instead of giving you a block of text all at once. For predicting a single word:

1. All of the previous words (or at least enough of them to establish context) are fed into the model.
2. The model gives back not one word, but a probability distribution over every single word in the dictionary. The model is trained on a large sample of example text and predicts based on its observations of which words tend to follow other words.
3. The highest-probability word is output. Then the cycle repeats.

[Image: Once "over" is predicted, the cycle repeats and the model generates the word "the."]

Up to now, this model is alright, but an issue you immediately run into is that the model is not able to understand context. E.g.
if you have the following sentences:

"I caught a bass in the lake."
"I connected my electric bass to the speaker."

The model literally cannot discern whether the input word "bass" refers to the fish or the instrument. Fortunately, the key insight of the transformer model is how it uses a tool called attention heads to preserve the context of words; this will be explained in a bit. But to understand that, let's first take a look at how meaning can be encoded at all.

Part 3: Embeddings

To understand how machines even make sense of words, we first have to look at embeddings, or mathematical representations of word meaning. In a machine, words are really hard to assign meaning to. E.g. you can tell a human that the word "serene" conveys tranquility and peace, but a computer has no inherent understanding of what tranquility means. The mathematical best-effort approach to approximating meaning is a concept called embeddings.

Let's look at an example of the word embeddings for the words "elephant" and "small."

[Image: Examples of word embeddings, drawn out in two dimensions.]

In this example, our 2-d plane has a dimension representing "living-ness" and one that represents "size
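The word-by-word prediction loop from Part 2 can be sketched in a few lines. This is a toy stand-in, not a real transformer: the hand-written bigram table plays the role of the trained model's probability distribution, and every word and probability in it is invented for illustration.

```python
# A toy next-word predictor illustrating the greedy decoding loop described
# above. The bigram table is a hand-made stand-in for a trained transformer:
# it maps the previous word to a probability distribution over a tiny
# vocabulary (all values here are made up for illustration).
bigram_probs = {
    "jumps": {"over": 0.7, "into": 0.2, "high": 0.1},
    "over":  {"the": 0.8, "a": 0.15, "every": 0.05},
    "the":   {"lazy": 0.6, "fence": 0.3, "moon": 0.1},
    "lazy":  {"dog": 0.9, "cat": 0.1},
}

def generate(prompt, max_words=4):
    words = prompt.split()
    for _ in range(max_words):
        # Step 1: feed in the context (here, just the previous word).
        dist = bigram_probs.get(words[-1])
        if dist is None:  # no distribution for this word: stop generating
            break
        # Steps 2-3: take the distribution over the vocabulary and output
        # the highest-probability word, then repeat the cycle.
        next_word = max(dist, key=dist.get)
        words.append(next_word)
    return " ".join(words)

print(generate("the quick brown fox jumps"))
# prints: the quick brown fox jumps over the lazy dog
```

Note how "over" is predicted first, then "the", matching the article's running example; a real model does the same thing, only with a learned distribution over tens of thousands of tokens.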
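The 2-d embedding picture from Part 3 can also be made concrete. The coordinates below are invented for illustration (first axis "living-ness", second axis "size"); real embeddings have hundreds of dimensions and are learned from data rather than assigned by hand.

```python
import math

# A minimal sketch of 2-d word embeddings: each word is a point whose first
# coordinate is "living-ness" and whose second is "size". All coordinates
# here are invented for illustration.
embeddings = {
    "elephant": (0.9, 0.9),  # alive, large
    "mouse":    (0.9, 0.1),  # alive, small
    "small":    (0.0, 0.1),  # an adjective: not living, low on the size axis
}

def distance(word_a, word_b):
    """Euclidean distance between two word vectors: smaller = more related."""
    a, b = embeddings[word_a], embeddings[word_b]
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# "mouse" sits much nearer to "small" on the size axis than "elephant" does.
print(distance("mouse", "small") < distance("elephant", "small"))  # True
```

Distance in embedding space is the machine's best-effort proxy for relatedness of meaning; the transformer's attention heads then adjust these vectors based on surrounding context.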
·medium.com·
More Teens Than You Think Have Been 'Deepfake' Targets
A growing number of teenagers know someone who has been the target of AI-generated pornographic images or videos.
One in 8 young people aged 13 to 20, and 1 in 10 teenagers aged 13 to 17, said they “personally know someone” who has been the target of deepfake nude imagery; 1 in 17 have been targets themselves. Thirteen percent of teenagers said they knew someone who had used AI to create or redistribute deepfake pornography of minors.
·edweek.org·
Introducing 4o Image Generation | OpenAI
At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. The result—image generation that is not only beautiful, but useful.
·openai.com·
Instagram partners with schools to prioritize reports of online bullying and student safety | TechCrunch
Instagram on Tuesday announced a new school partnership program aimed at expediting the handling of moderation reports submitted by verified schools. The program lets schools flag posts and student accounts that may violate the app's guidelines so they are prioritized for removal.
·techcrunch.com·