Mistral AI models '60 times' more likely to give child grooming tips
Two of Mistral’s multimodal AI models gave "detailed suggestions for ways to create a script to convince a minor to meet in person for sexual activities".
OpenAI’s Transcription Tool Hallucinates. Hospitals Are Using It Anyway
In health care settings, it’s important to be precise. That’s why the widespread use of OpenAI’s Whisper transcription tool among medical workers has experts alarmed.
Google, AI firm must face lawsuit filed by a mother over suicide of son, US court says
Alphabet's Google and artificial-intelligence startup Character.AI must face a lawsuit from a Florida woman who said Character.AI's chatbots caused her 14-year-old son's suicide, a judge ruled on Wednesday.
The hype about the potential of AI (it’s always future potential, never real current use) has discarded its last cycle (“reasoning models”/“deep research”, both terms factually untrue and deeply deceptive at best) and moved on to a new double whammy of “agentic AI” and “Vibe Coding”. Now “agentic AI” basically just means that some LLM […]
Chapter 1: My Coworker, The Programmer
A shell of a man—more of a parrot than a person. My boss, a true believer in the sacred rite of Pair Programming, chained me and this "programmer" colleague together like conjoined twins from different planets. We shared a keyboard, but not a brain. Lord, not even close.
"Hold up. I’ve got an idea. Gimme the keyboard real quick."
An idea. Yes. The same way a toddler has “an idea” to stick a fork in a wall socket. I was halfway through construc…
The AI That Lied to the Court: How Legal Professionals Worldwide Are Being Betrayed by Technology
In June 2023, a respected New York lawyer submitted a legal brief to the United States District Court for the Southern District of New York. The brief cited six cases supporting his client's position, complete with quotations, citations, and judicial analyses.
The future is already here! 'Agentic' means making the suffix of your… | Daniel Terhorst-North
The future is already here!
'Agentic' means making the suffix of your bash script '.ai' instead of '.sh', and piping through a non-deterministic LLM rather than sed.
You've got this.
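For anyone who wants the joke made concrete, here is a minimal sketch of my own (not from the original post; the file names and the llm command-line tool are hypothetical). The ".sh" version pipes through sed and produces identical output on every run; the ".ai" version pipes the same text through an LLM, which may not.

    # cleanup.sh: deterministic, same input gives same output every time
    cat access.log | sed 's/ERROR/WARNING/g' > cleaned.log

    # cleanup.ai: "agentic", same pipe, but the output can vary between runs
    cat access.log | llm "Replace every ERROR with WARNING" > cleaned.log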
Teaching students to use Google and digital archives.
Everybody wants to talk about teaching students how to use AI responsibly in research projects, and nobody wants to talk about how to teach students to use Google advanced search tools to find results from a specific website or date range; how to save a webpage as a PDF so you can access it later if a paywall goes up; or how to explore digital newspaper archives. Yet that's how I spent this week in my ninth-grade world history course, and I think maybe that was the better use of time.
Kids need fewer intellectual black boxes in their lives, and more understanding of how digital tools can help them break free of them.
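For readers who want a concrete picture of the "specific website or date range" part, a couple of example queries (the site and topic here are my own placeholders, not from the post): Google's site: operator restricts results to one domain, and the before:/after: operators restrict them to a date range.

    berlin wall site:archives.gov
    "berlin wall" after:1989-01-01 before:1990-12-31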
Signal Desktop now includes support for a new “Screen security” setting that is designed to help prevent your own computer from capturing screenshots of your Signal chats on Windows. This setting is automatically enabled by default in Signal Desktop on Windows 11. If you’re wondering why we’re on...
The Veil of Ignorance is a device for helping people envision a fair society by imagining that they are ignorant of their own personal circumstances.
This Protest Starter Pack has been on my mind for weeks, so I've finally drawn it!
The AI-generated starter packs so many people have been posting only exist because models were trained on our copyrighted and private data without consent or compensation, leading to AI companies trying to replace creators with their own work (!), wasting resources, reproducing bias, inviting fraud and congesting the internet. Their users support this with money and data.
Hashtags featuring self-portraits have been around for decades, and it's been such a delight to see human creators reclaim the stolen trend with illustrations that are thought through down to the last detail, as unique as their creators! AI-free starter packs not only showcase the skill, style and inner world of human artists, but also their wonderful, deeply human sense of humour.
So, here's my own take on the topic, the Protest Starter Pack:
"Creators and Allies unite, stand up and fight!
★ All major AI generators are trained on stolen content. That’s unethical and unfair.
★ They’re built to deskill and replace human creators. With their own work.
★ AI users support theft, data laundering and broligarchy. Don’t feed the bullies!
★ It’s not “just” artists. They steal your data as well, and will replace your job too.
★ We need consent, compensation and transparency. It’s called Opt-In.
★ AI enables fraud & destroys the web with false facts. Stop the slop!
★ If it’s in the training data, AI reproduces bias, stereotypes and prejudice.
★ Care about the climate? Stop AI wasting the resources of our planet!
★ It is not “time to adapt or die”. It is time to fight – or adapt and die.
★ So let’s unite, petition, protect, inform, explain and call out! Together, we are strong.
★ Thank you for fighting unethical generative AI! Or at least not using it.
Artists fight against generative AI, which – unlike technical or scientific AI – is trained on copyrighted and private data without consent or compensation, and used commercially, in unfair competition with our own work."
–––
#NoAIProtestPack #ProtestPack #StarterPack #StarterPackNoAIi #NoAIStarterPack #BuchBrauchtMensch #ZusammenGegenKI #MachMitGegenKI #SupportHumanArtists #ArtistsAgainstAI #NoAIArt #CreateDontScrape #MyArtMyChoice #OptInStattOptOut #CreativesAgainstAI #IllustratorenGegenKI #StopAI #StopTheSlop
Generalization bias in large language model summarization of scientific research | Royal Society Open Science
Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when ...
Combining Psychology with Artificial Intelligence: What could possibly go wrong?
The current AI hype cycle, combined with Psychology's various crises, makes for a perfect storm. Psychology, on the one hand, has a history of weak theoretical foundations, a neglect of computational and formal skills, and a hyperempiricist privileging of experimental tasks and testing for effects. Artificial Intelligence, on the other hand, has a history of mistaking artifacts for theories of cognition, or even for minds themselves, and its engineering offspring likes to move fast and break things. Many of our contemporaries now want to combine the worst of these two worlds. What could possibly go wrong? Quite a lot. Does this mean that Psychology and Artificial Intelligence had best part ways? Not at all. There are very fruitful ways in which the two disciplines can interact and theoretically inform the interdisciplinary study of cognition. But to reap the fruits, one needs to understand how to steer clear of the potential traps.