google_teaching_responsible_ai.pdf
AI Literacy
Why Toronto Police Didn’t Lay Charges Against a Teen Who Created AI-Generated Porn – The White Hatter
Intimate Images Protection Act,
Meta’s new AI chatbot is yet another tool for harvesting data to potentially sell you stuff
We’re just as likely to share intimate info with a chatbot as we are with a fellow human. It’s a privacy risk.
Engineered for Attachment: The Hidden Psychology of AI Companions | Punya Mishra's Web
Dishonest Anthropomorphism is about the kinds of design choices made by these companies to leverage our ingrained tendency to attribute human-like qualities to non-human entities. Emulated empathy describes how AI systems intentionally seek to simulate genuine emotional understanding, misleading users about the true nature of the interaction.
Teaching AI Ethics 2025: Bias
This post opens a nine-article series revisiting the 2023 "Teaching AI Ethics" resources, beginning with bias in generative AI.
Google will put Gemini AI in the hands of kids under 13
The move comes as experts warn about the dangers of AI for minors.
Introducing the Meta AI App: A New Way to Access Your AI Assistant
We're launching the Meta AI app, our first step in building a more personal AI.
You can now fact check anybody’s post in WhatsApp – here’s how
Perplexity to remove your perplexity
Misinformation Susceptibility Test (MIST) · Streamlit
Test how resilient you are to misinformation! Find out if you recognize fake news when you see it...
What Trump’s Draft Order Really Teaches: Obey, Submit, Don’t Ask
Trump’s draft executive order would force AI into every classroom. This is not innovation. This is indoctrination.
North GA man indicted on dozens of counts of using AI to make nude images of kids
Ronald Richardson refilled the vending machines at Gilmer High School.
FPF-Deep-Fake_illo03-FPF-AI.pdf
Teens are embracing AI — but largely not for cheating, survey finds
While teens are more eager and tech-savvy than their parents at using AI tools, experts encourage both to explore together how best to use them.
AI Agents and Agentic AI in Python: Powered by Generative AI
Offered by Vanderbilt University. Learn AI Agent ... Enroll for free.
New in NotebookLM: Discover sources from around the web
NotebookLM has launched Discover Sources, which lets you add sources from the web to your notebook.
4o Image Generation in ChatGPT and Sora
Sam Altman, Gabriel Goh, Prafulla Dhariwal, Lu Liu, Allan Jabri, and Mengchao Zhong introduce and demo 4o image generation.
The Hidden AI Poisoning That Shapes Our Children’s Knowledge. | LinkedIn
Every day, children turn to the internet for homework, research, and general curiosity. Wikipedia pages, Google searches, and AI chatbots such as ChatGPT or Gemini provide fast answers.
Demystifying the Transformer Model
Note: this blog post is a final paper for my UCSB WRIT105SW course. As such, it is a slight deviation from my standard writing and may assume a lot less prerequisite math and computer science knowledge than some other posts on this account. Nonetheless, I attest it's still top notch :)

Part 1: The Turing Test

In 1950, mathematician and computer scientist Alan Turing pondered whether there was a fundamentally philosophical way to answer the question "can machines think?" In light of this question, he proposed the thought experiment of the "Turing Test," a theoretical construct that could discern a machine from a human based on how it responds to questions. The questions were often simple but prompted complex answers:

- Describe yourself using only colors and shapes. (A machine would struggle with the abstraction from complex human characteristics to simple shapes and colors.)
- Do more people go to Russia than me? (This sentence is syntactically correct but semantically nonsense; even a human cannot answer it properly.)
- Describe why time flies like an arrow but fruit flies like a banana. (A machine would struggle to interpret whether the second "flies" is a noun or a verb.)

Back in 1950, machines were simply far too computationally inept to make a dent in any of these questions. After all, Turing himself had proven mathematically in 1936 that no algorithm can decide, for every program, whether it terminates or loops forever. Computer scientists began to realize that computers couldn't do everything. Someone could almost believe there was a theoretical boundary of computing here too, some sort of human-computer interaction limit.

But they never found it. In fact, it may not exist.

I vividly remember having to write a paper on polka tradition for my ethnomusicology course when I first heard the whispers of a possibly ground-breaking tool called ChatGPT. I couldn't believe my eyes: the essay was done. It was coherent, well-structured, and it captured every nuance that I gave in the prompt.

The onset of transformer models like ChatGPT single-handedly shattered my perception of everything I knew about computers. It surpasses every previous language model by miles. It passes countless Turing Tests. And through it all, it can masquerade as a human and emulate responses that signify an understanding of the meaning of the sentences given to it.

How is this all possible? Behold, the transformer model.

Part 2: What does generative mean?

In ChatGPT, the GPT stands for "generative pre-trained transformer." In the context of machine learning, a generative model is one that outputs new creative content based on a prompt or some input sequence. In transformer models, the prediction is made one word at a time, based on the string of words that came before it.

[Figure: The model predicts that the word "over" is the most likely to follow in the sequence.]

Take the above example. The crux of the prediction process is that each word is generated one at a time; this is actually the reason why ChatGPT slowly types out a response word by word instead of giving you a block of text all at once. Predicting a single word works like this:

1. All of the previous words (or rather, at least enough to establish context) are fed into the model.
2. The model gives back not one word, but a probability distribution over every word in its vocabulary. The model is trained on a large sample of example text and predicts based on its observations of which words tend to follow other words.
3. The highest-probability word is output. Then, the cycle repeats.

[Figure: Once "over" is predicted, the cycle repeats and the model generates the word "the".]

Up to now, this model is alright, but an issue you immediately run into is that it is not able to understand context. For example,
if you have the following sentences:

- "I caught a bass in the lake."
- "I connected my electric bass to the speaker."

The model literally cannot discern whether the input word "bass" refers to the fish or the instrument. Fortunately, the key insight of the transformer model is how it uses a tool called attention heads to preserve the context of words; this will be explained in a bit. But to understand that, let's first take a look at how meaning can be encoded at all.

Part 3: Embeddings

To understand how machines make sense of words at all, we first have to look at embeddings, mathematical representations of word meaning. In a machine, words are really hard to assign meaning to. E.g. you can tell a human that the word "serene" conveys tranquility and peace, but a computer has no inherent understanding of what tranquility means. The mathematical best-effort approach to approximating meaning is the concept of embeddings.

Let's look at an example of the word embeddings for the words elephant and small.

[Figure: Examples of word embeddings, drawn out in two dimensions.]

In this example, our 2-d plane has a dimension representing "living-ness" and one that represents "size
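The prediction loop described in Part 2 (feed in the context, get a probability distribution over the vocabulary, output the most likely word, repeat) can be sketched in a few lines of Python. The bigram probabilities below are invented for illustration; a real transformer computes this distribution from the full context using attention, not a lookup table.

```python
# Toy next-word predictor using greedy decoding over hand-made bigram
# probabilities (illustrative only; real models learn a distribution
# conditioned on the entire preceding context).
BIGRAM_PROBS = {
    "jumped": {"over": 0.7, "on": 0.2, "away": 0.1},
    "over":   {"the": 0.8, "a": 0.2},
    "the":    {"dog": 0.5, "fence": 0.3, "moon": 0.2},
}

def generate(context, n_words):
    """Repeatedly append the highest-probability next word."""
    words = context.split()
    for _ in range(n_words):
        dist = BIGRAM_PROBS.get(words[-1])
        if dist is None:                     # no known continuation: stop
            break
        words.append(max(dist, key=dist.get))  # greedy choice
    return " ".join(words)

print(generate("the quick brown fox jumped", 3))
# → "the quick brown fox jumped over the dog"
```

Picking the single most likely word each step is called greedy decoding; real chatbots usually sample from the distribution instead, which is why the same prompt can yield different answers.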
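The two-dimensional picture of embeddings can be made concrete: represent each word as a vector and compare meanings with cosine similarity. The coordinates below are made up to match the "living-ness"/"size" axes in the figure; real embeddings have hundreds of learned dimensions, and these words and values are purely hypothetical.

```python
import math

# Hypothetical 2-d embeddings: (living-ness, size). Real models learn
# vectors with hundreds of dimensions; these values are illustrative.
EMBEDDINGS = {
    "elephant": (0.90, 0.90),  # alive, large
    "whale":    (0.85, 0.95),  # alive, large
    "rock":     (0.05, 0.90),  # not alive, large
}

def cosine_similarity(a, b):
    """Similarity of direction between two vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Words closer in meaning point in more similar directions,
# so "elephant" scores higher against "whale" than against "rock".
print(cosine_similarity(EMBEDDINGS["elephant"], EMBEDDINGS["whale"]))
print(cosine_similarity(EMBEDDINGS["elephant"], EMBEDDINGS["rock"]))
```

This directional comparison is the basic trick behind "similar words have similar vectors": the model never understands tranquility, but it can place "serene" near "calm" in the space.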
More Teens Than You Think Have Been 'Deepfake' Targets
A growing number of teenagers know someone who has been the target of AI-generated pornographic images or videos.
1 in 8 young people aged 13 to 20, and 1 in 10 teenagers aged 13 to 17, said they "personally know someone" who has been the target of deepfake nude imagery, and 1 in 17 have been targets themselves. Thirteen percent of teenagers said they knew someone who had used AI to create or redistribute deepfake pornography of minors.
Google Gemini will now watch YouTube videos for you — here's how it works
Why sit through a whole video if Gemini can do it for you?
Introducing 4o Image Generation | OpenAI
At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. The result—image generation that is not only beautiful, but useful.
Instagram partners with schools to prioritize reports of online bullying and student safety | TechCrunch
Instagram on Tuesday announced a new school partnership program aimed at expediting the handling of moderation reports submitted by verified schools. The program lets schools flag posts and student accounts that may violate the app's guidelines so they are prioritized for removal.
Opinion | Don’t Throw Our Boys to the Wolves Online
We underestimate the manosphere at our peril.
Search LibGen, the Pirated-Books Database That Meta Used to Train AI
Millions of books and scientific papers are captured in the collection’s current iteration.
LibraryReady.AI
New ways to collaborate and get creative with Gemini
Check out the Gemini app’s latest features, like Canvas and Audio Overview.
Why ChatGPT Can’t Help Schools
Plagiarism, the definition of the word ‘help,’ and technology training
🧩 What do children think about AI? Hear their views from the Children’s AI Summit
Yesterday our film, captured at last week’s Children’s AI Summit, had its debut screening in Paris ahead of the Paris AI Action Summit. It explores key thoughts and ideas from some of the 150 children who attended the event.
It was shown at the Global AI Forum on Children, Education, Youth and Wellbeing with Common Sense Media, the LEGO Group and the NSPCC.
Mhairi Aitken from the Turing’s Children and AI team joined a panel with NSPCC’s CEO Chris Sherwood and young people from the Voice of the Youth Online to discuss how generative AI is reshaping youth experiences and development.
The Children’s AI Summit was hosted by the Alan Turing Institute and Queen Mary University of London, and supported by the LEGO Group, Elevate Great and EY.
#AI #AIActionSummit #AIforGood #ChildrensRights #ResponsibleAI
Undress Apps & Deepfake Videos Are Evolving Quickly - Here’s What You Need to Know. | LinkedIn
We have covered "deepfakes" before in a previous newsletter, focusing on AI-generated scams and voice cloning (read it here). However, deepfake technology is advancing at an alarming pace, and the risks for young people are growing.