AI Literacy

162 bookmarks
Engineered for Attachment: The Hidden Psychology of AI Companions | Punya Mishra's Web
Dishonest anthropomorphism describes the design choices these companies make to leverage our ingrained tendency to attribute human-like qualities to non-human entities. Emulated empathy describes how AI systems are designed to simulate genuine emotional understanding, misleading users about the true nature of the interaction.
·punyamishra.com·
Teaching AI Ethics 2025: Bias
This post opens a nine-article series revisiting the 2023 "Teaching AI Ethics" resources, beginning with bias in generative AI.
·leonfurze.com·
4o Image Generation in ChatGPT and Sora
Sam Altman, Gabriel Goh, Prafulla Dhariwal, Lu Liu, Allan Jabri, and Mengchao Zhong introduce and demo 4o image generation.
·youtube.com·
Demystifying the Transformer Model
Note: this blog post is a final paper for my UCSB WRIT105SW course. As such, it is a slight deviation from my standard writing and may assume a lot less prerequisite math and computer science knowledge than some other posts on this account. Nonetheless, I attest it's still top notch :)

Part 1: The Turing Test

In 1950, mathematician and computer scientist Alan Turing pondered whether there was a fundamentally philosophical way to answer the question "can machines think?" In light of this question, he proposed the thought experiment of the "Turing Test," a theoretical construct that could discern a machine from a human based on how it responds to questions. These were often simple questions that prompted complex answers:

- Describe yourself using only colors and shapes. (A machine would struggle with the abstraction from complex human characteristics to simple shapes and colors.)
- Do more people go to Russia than me? (This sentence is syntactically correct but semantically nonsense; a human cannot answer it properly.)
- Describe why time flies like an arrow but fruit flies like a banana. (A machine would struggle to interpret whether the second "flies" is a noun or a verb.)

Back in 1950, machines were simply far too computationally inept to make a dent in any of these questions. After all, Turing himself had already proven mathematically in 1936 that no algorithm can decide whether an arbitrary program terminates or loops forever (the halting problem). Computer scientists began to realize that computers couldn't do everything. Someone could almost believe there was a theoretical boundary of computing here too: some sort of human-computer interaction limit.

But they never found it. In fact, it may not exist.

I vividly remember having to write a paper on polka tradition for my ethnomusicology course when I first heard whispers of a possibly ground-breaking tool called ChatGPT. I couldn't believe my eyes: the essay was done. It was coherent, well-structured, and it captured every nuance that I gave in the prompt.

The onset of transformer models like ChatGPT single-handedly shattered my perception of everything I knew about computers. It surpasses every previous language model by miles. It passes countless Turing Tests. And through it all, it can masquerade as a human and emulate responses that signify an understanding of the meaning of sentences given to it.

How is this all possible? Behold, the transformer model.

Part 2: What does generative mean?

In ChatGPT, the GPT stands for "generative pre-trained transformer." In the context of machine learning, a generative model is one that outputs new creative content based on a prompt or some input sequence. In the context of transformer models, the prediction is made one word at a time, based on the string of words that came before it.

[Figure: The model predicts that the word "over" is the most likely to follow in the sequence.]

Take the above example. The crux of the prediction process is that each word is generated one at a time; this is actually the reason why ChatGPT slowly types out a response word by word instead of giving you a block of text all at once. Predicting a single word works like this:

1. All of the previous words (or rather, at least enough to establish context) are fed into the model.
2. The model gives back not one word, but a probability distribution over every single word in the dictionary. The model is trained on a large sample of example text and predicts based on its observations of which words tend to follow other words.
3. Finally, the highest-probability word is output. Then the cycle repeats.

[Figure: Once "over" is predicted, the cycle repeats and the model generates the word "the".]
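The word-at-a-time loop just described can be sketched in a few lines of Python. The toy bigram table below is only a stand-in for a trained model, with invented probabilities purely for illustration; a real transformer produces a distribution conditioned on the entire preceding sequence, not just the last word.

```python
# Minimal sketch of the generate-one-word-at-a-time loop described above.
# TOY_MODEL is a stand-in for a trained model: it maps the previous word
# to a probability distribution over a tiny vocabulary (invented numbers).
TOY_MODEL = {
    "jumps": {"over": 0.7, "the": 0.1, "into": 0.2},
    "over":  {"the": 0.8, "a": 0.2},
    "the":   {"lazy": 0.5, "dog": 0.3, "fence": 0.2},
    "lazy":  {"dog": 0.9, "fence": 0.1},
}

def generate(prompt: list[str], max_new_words: int) -> list[str]:
    words = list(prompt)
    for _ in range(max_new_words):
        distribution = TOY_MODEL.get(words[-1])
        if distribution is None:  # the model knows no continuation: stop
            break
        # Greedy decoding: output the single highest-probability word,
        # then feed the extended sequence back in and repeat.
        words.append(max(distribution, key=distribution.get))
    return words

print(" ".join(generate(["fox", "jumps"], 4)))  # fox jumps over the lazy dog
```

Always taking the highest-probability word (greedy decoding) is just one strategy; production systems often sample from the distribution instead, which is why the same prompt can yield different responses.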
Up to now, this model is alright, but an issue that you immediately run into is that it cannot understand context. For example, take the following sentences:

- "I caught a bass in the lake."
- "I connected my electric bass to the speaker."

The model literally cannot discern whether the input word bass refers to the fish or the instrument. Fortunately, the key insight of the transformer model is how it utilizes a tool called attention heads to preserve the context of words; this will be explained in a bit.

But to understand that, let's first take a look at how meaning can be encoded at all.

Part 3: Embeddings

To understand how machines even make sense of words, we first have to take a look at embeddings, or mathematical representations of word meaning.

In a machine, words are really hard to assign meaning to. You can tell a human that the word serene conveys tranquility and peace, but a computer has no inherent understanding of what tranquility means. Embeddings are the mathematical best-effort approach to approximating meaning.

Let's look at an example of the word embeddings for the words elephant and small.

[Figure: Examples of word embeddings, drawn out in two dimensions.]

In this example, our 2-D plane has one dimension representing "living-ness" and one that represents "size."
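To make the two-dimensional picture concrete, here is a small sketch with hand-picked coordinates on the "living-ness" and "size" axes (the numbers are invented for illustration; real models learn vectors with hundreds or thousands of dimensions), along with cosine similarity, a standard way to compare embeddings.

```python
import math

# Hand-picked 2-D "embeddings": (living-ness, size). Invented numbers,
# purely for illustration; real embeddings are learned, not hand-picked.
EMBEDDINGS = {
    "elephant": (0.9, 0.9),  # alive and large
    "mouse":    (0.9, 0.1),  # alive and small
    "boulder":  (0.0, 0.8),  # not alive, large
}

def cosine_similarity(a: tuple[float, float], b: tuple[float, float]) -> float:
    # Cosine of the angle between the two vectors: a value near 1.0
    # means the words point in a similar direction in meaning-space.
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine_similarity(EMBEDDINGS["elephant"], EMBEDDINGS["mouse"]))    # ~0.78
print(cosine_similarity(EMBEDDINGS["elephant"], EMBEDDINGS["boulder"]))  # ~0.71
```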
·medium.com·
More Teens Than You Think Have Been 'Deepfake' Targets
A growing number of teenagers know someone who has been the target of AI-generated pornographic images or videos.
One in 8 young people aged 13 to 20—and 1 in 10 teenagers aged 13 to 17—said they “personally know someone” who has been the target of deepfake nude imagery, and 1 in 17 have been targets themselves. Thirteen percent of teenagers said they knew someone who had used AI to create or redistribute deepfake pornography of minors.
·edweek.org·
Introducing 4o Image Generation | OpenAI
At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. The result—image generation that is not only beautiful, but useful.
·openai.com·
Instagram partners with schools to prioritize reports of online bullying and student safety | TechCrunch
Instagram on Tuesday announced a new school partnership program aimed at expediting the handling of moderation reports submitted by verified schools. The program lets schools flag posts and student accounts that may violate the app's guidelines so they are prioritized for removal.
·techcrunch.com·
Why ChatGPT Can’t Help Schools
Plagiarism, the definition of the word ‘help,’ and technology training
·medium.com·
Post | LinkedIn
🧩 What do children think about AI? Hear their views from the Children’s AI Summit. Yesterday our film, captured at last week’s Children’s AI Summit, had its debut screening in Paris, ahead of the Paris AI Action Summit. It explores key thoughts and ideas from some of the 150 children who attended the event. It was shown at the Global AI Forum on Children, Education, Youth and Wellbeing with Common Sense Media, the LEGO Group and the NSPCC. Mhairi Aitken from the Turing’s Children and AI team joined a panel with NSPCC’s CEO Chris Sherwood and young people from the Voice of the Youth Online to discuss how generative AI is reshaping youth experiences and development. The Children’s AI Summit was hosted by the Alan Turing Institute and Queen Mary University of London, and supported by the LEGO Group, Elevate Great and EY. #AI #AIActionSummit #AIforGood #ChildrensRights #ResponsibleAI
·linkedin.com·
Code.org
Anyone can learn computer science. Make games, apps and art with code.
·studio.code.org·
(Re)Introduction to GenAI for Schools 2025 – Leon Furze
It has been a couple of years since the release of ChatGPT threw the education sector into a tailspin, with OpenAI's chatbot racking up hundreds of millions of users worldwide, many of them students. But the passage of time doesn't necessarily mean we have more of a handle on the technology now than we…
·leonfurze.com·