Meet Thaura | Your Ethical AI Companion
Thaura AI is designed as an ethical LLM. It doesn't train models on your private data, is transparent about its business model, and advertises that it uses 94% less energy than ChatGPT.
·thaura.ai·
Something Big Is Happening
I'm not entirely convinced by this article, but I try to read opinions on AI from a variety of sources and perspectives. The author explains that AI is progressing much faster than most people realize, and argues that the disruption it causes will be bigger and arrive sooner than most people expect.
·shumer.dev·
A Deep Dive into Desirable Difficulties
I've seen some misunderstandings about desirable difficulties on social media recently. This article gives a clear explanation of what desirable difficulties are (techniques that may initially cause errors and short-term performance issues but improve learning and task performance in the long run). The techniques include varied practice, spacing, reduced feedback and guidance, retrieval, and interleaving. If you're new to the idea of desirable difficulties, this will give you a solid foundation.
Difficulties are desirable when they boost learning, not performance.
For example, when learning to drive, it would be easier to practice by driving round the same block multiple times, with an instructor sitting beside you and telling you exactly what to do. As a learner under such conditions, you’d make very few errors, if any.
However, once our lessons are over, we have to drive without an instructor telling us what to do, on complex and sometimes unfamiliar roads. The desirable difficulties framework would suggest, therefore, that practice should resemble that realistic situation, with a variety of road conditions to deal with, and reduced guidance or feedback.
·firth.substack.com·
Event Segmentation Theory: Why Some Training Feels Clear and Some Feels Like One Continuous Mistake
This article includes some great research translation by Tom McDowall about Event Segmentation Theory. We talk about "chunking" content to support learning, but we often rely on time or intuition to determine where to break up content. Event Segmentation Theory provides an evidence-informed approach to making more meaningful divisions, so you can improve the effectiveness of your training simply by changing where you add breaks. Tom includes lots of citations for further reading.
Your brain doesn’t process the world as one unbroken stream. It automatically divides ongoing experience into discrete chunks, which researchers call “events,” and does so continuously, without you deciding to do so or being aware that it’s happening.
Information present at a boundary, the moment when one event ends and another begins, gets encoded more strongly than information in the middle of an event. The boundary acts like an attentional gate: it opens briefly to let new information in, and that information gets a better foothold in long-term memory as a result (Kurby and Zacks, 2008).
There’s a trade-off, though. While boundaries improve memory for what happens at the transition point, they impair memory for temporal order across the boundary. Items that span a boundary are harder to sequence correctly and are remembered as being further apart in time than they were (Ezzyat and Davachi, 2014).
Six features of a situation reliably trigger event boundaries: spatial or location changes, character entrances or exits, new object interactions, goal shifts, changes in causal structure, and temporal discontinuities (Speer, Zacks, and Reynolds, 2007). In practical terms, the most reliable triggers for workplace training are changes in what you’re trying to achieve (the goal), changes in where you are or what you’re looking at (the environment), and changes in why the current action matters (the causal structure).
The most direct way to apply EST is to structure process training and standard operating procedures around the natural event structure of the task. Rather than organising steps by convenience or by how they appear in a system, map them to the hierarchical structure of the activity: major phases first (the coarse events), then detailed steps within each phase (the fine events).
·idtips.substack.com·
"Steal My Wins" | Kimberly Scott
"Steal My Wins" | Kimberly Scott
Kim Scott has been sharing lots of details about her job search and the strategies that are working for her. As a consultant, I've been out of the job market for a long time, so it's helpful to have folks like Kim whom I can point people to when they're looking for work in this lousy job market.
·linkedin.com·
"Steal My Wins" | Kimberly Scott
Don’t Panic: When Should AI Coaches and Assistants Request Human Intervention? – Parrotbox
Thinking about a "human in the loop" is a good start, but what does that really look like if you're using chatbots or AI coaches at scale? I really like the example of risk levels for mental health safety and when chats should be reported or escalated to a human for intervention.
·parrotbox.ai·
AI companies will fail. We can salvage something from the wreckage | AI (artificial intelligence) | The Guardian
Cory Doctorow has written a long article on the risks and problems of AI, particularly the way AI companies promote and hype its benefits.
In automation theory, a “centaur” is a person who is assisted by a machine. Driving a car makes you a centaur, and so does using autocomplete. A reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.
“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop’. It’s their signature on the diagnosis.” This is a reverse centaur, and it is a specific kind of reverse centaur: it is what Dan Davies calls an “accountability sink”. The radiologist’s job is not really to oversee the AI’s work, it is to take the blame for the AI’s mistakes.
This is another key to understanding – and thus deflating – the AI bubble. The AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can’t do your job.
For AI to be valuable, it has to replace high-wage workers, and those are precisely the workers who might spot some of those statistically camouflaged AI errors.
After more than 20 years of being consistently wrong and terrible for artists’ rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans.
The fact that every AI-created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them or give them away for nothing. And the only thing those companies hate more than paying creative workers, is having other people take their stuff without permission.
·theguardian.com·
The Hidden Mirror: Why Your AI is Only as Good as Your Thinking

Debbie Richards writes about a critical issue in working with AI: our own human cognitive biases. AI can reflect and amplify our own mental shortcuts. Being aware of our cognitive biases can make us more effective at working with AI.

" The researchers identify three critical stages where our own thinking can steer AI off course:

Before Prompting: Our past experiences create a "halo" or "horns" effect. If you’ve had great results, you might over-trust the tool for tasks it isn't ready for. Conversely, if you've been spooked by headlines about hallucinations, you might avoid it even when it could be genuinely helpful.
During Prompting: How we frame a question matters. "Leading question bias" happens when we bake the answer into the prompt, like asking "Why is product X the best?" This encourages the AI to ignore weaknesses. There is also "expediency bias," where we settle for the first "good enough" answer because we’re under time pressure.
After Prompting: Once we have an output, the "endowment effect" can make us overvalue it simply because of the effort we put into the prompt. We also have to watch the "framing effect." How we present that AI-driven data can completely change how our audience feels about it."
·linkedin.com·
Do AI Voices and Avatars Improve Learning? Here’s What the Data Says
TechSmith conducted a global study to determine how AI voices and avatars affect learning. I was surprised at how well the high-quality AI voices performed. We seem to have crossed the threshold where high-quality AI voices perform comparably to human voice actors. I was also surprised at how well the AI avatars did, although their recommendations for specific use cases do make some sense. I wish they'd also done a separate control with no narrator visible on screen (AI or human). The fact that AI avatars can be comparable to humans in some instances isn't that shocking, I guess, but I really want to see how it compares to just having the slide content and no face on screen.
What really makes learners pay attention? A voice that sounds clear, warm, and polished — not whether it’s human or AI. As voice quality improved in the study, so did professionalism ratings. In fact, 92% of viewers said the high-quality AI voice made the video feel professionally produced.
Results from the “pop quiz” portion of our study make the pattern clear: correct answers increased as voice quality improved. In fact, the high-quality AI voice produced the strongest retention numbers, aside from one low-quality human outlier.
But are AI voices distracting overall? It depends. Low-quality, synthetic voices are unmistakable and draw attention away from the content. When the AI voice sounds natural, many viewers can’t distinguish it from a human voice. The difference is less jarring, and information retention holds steady or even improves.
AI avatars aren’t distracting by default, but size matters. When an avatar fills the screen, viewers are more likely to notice robotic traits like lip sync issues, eye contact, limited facial movement, awkward blinking, or unnatural breathing.
The right format depends on your video’s purpose. Use this quick decision guide:
Screen-heavy, procedural, and frequently updated content: High-quality AI voice with screen recording, plus an optional AI avatar in PiP.
Emotionally sensitive, culture-setting, or leadership-driven content: Human presenter with a human voice.
Long-form, concept-heavy learning: A mix of human-led modules for core ideas, supported by AI-voiced micro-lessons and refreshers.
·techsmith.com·
How to fix your LinkedIn feed in one hour

If you find scrolling on LinkedIn terribly annoying, you may not have trained its algorithm well. Follow these tips to improve the quality of your LinkedIn feed.

" You manage your feed by giving AI the signal.

Signal for what you want. Signal for what you do not want. Then you reinforce it until the algorithm adjusts to your taste. That is it. Not complicated. But most people never do it. "

·linkedin.com·
User Experience Research Techniques for Instructional Design
Rather than guessing which designs will work better for users (which we do a lot of in L&D), borrow techniques from UX. Connie Malamed summarizes multiple UX research techniques. Note that a lot of UX research can be done pretty cheaply and simply; you don't need hundreds of participants for many of these techniques. Small-scale usability testing with 4-6 people can give you useful results.
·theelearningcoach.com·
Fairly Trained certified models
Fairly Trained is a nonprofit that certifies AI models that are trained only on licensed content. The list of certified models currently consists mostly of AI music generation tools, but this is an interesting idea for improving transparency around AI training.
·fairlytrained.org·
Articulate Rise: The Emperor’s Getting Dressed
Zainab Fawzul takes a critical look at Articulate Rise. She argues that even though Articulate has been making more substantive updates to Rise recently, it still lacks some highly requested, useful features. Multiple third-party add-ons have come out to help fill the gaps in Rise's capabilities.
·linkedin.com·
AI and Branding 2026: Copyright Risks for Content Creators
Harriet Moser generates a lot of fantastic AI images; she's one of the people I follow on LinkedIn for inspiration with her delightful visuals. This blog post on her site is much more serious though. Just because you can put celebrities and brands in your AI images and videos doesn't mean you should. Get an overview of the copyright risks for content creators in this post.
My Recommendation:
Invest in properly licensed AI tools and original content creation
Maintain human oversight and creative direction
Develop distinctive brand identities rather than imitating others
Communicate transparently about AI use
Respect intellectual property rights as a fundamental ethical standard
·askharriet.com·
Why is Everyone So Wrong About AI Water Use?
Hank Green explains why it's hard to figure out how much water AI actually uses and why different sources report wildly different results. It depends on how you measure the use (and whether you include training). The quick answer is to be skeptical of any single number for AI water use that doesn't come with an explanation of how it was calculated.
·youtube.com·
The Shape of AI: Jaggedness, Bottlenecks and Salients
Ethan Mollick describes one of the challenges of working with AI: its capabilities are very jagged. AI can be really good at some tasks but terrible at others, and it's not always easy to predict where it's most useful. When weaknesses that create bottlenecks are identified, AI companies focus development in those areas. Just because something is a weakness now doesn't necessarily mean AI will never be able to do that task.
You can see how AI is indeed superhuman in some areas, but in others it is either far below human level or not overlapping at all. If this is true, then AI will create new opportunities working in complement with human beings, since we both bring different abilities to the table.
The exact abilities of AI are often a mystery, so it is no wonder AI is harder to use than it seems.
A system is only as functional as its worst components. We call these problems bottlenecks. Some bottlenecks are because the AI is stubbornly subhuman at some tasks.
Bottlenecks can create the impression that AI will never be able to do something, when, in reality, progress is held back by a single jagged weakness. When that weakness becomes a reverse salient, and AI labs suddenly fix the problem, the entire system can jump forward.
·oneusefulthing.org·
How Can I Capture an Electronic Signature?
In one of my recent projects, we had a question about capturing an electronic signature for an acknowledgement in Storyline. This tutorial from Yukon Learning explains how to set up a short answer survey question where people can type their names as a signature. It's obviously not as secure as something like Docusign, but it's sufficient for some purposes.
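If you also need to capture when the acknowledgement happened, Storyline's JavaScript trigger API (GetPlayer with GetVar/SetVar) can stamp the typed name with a date. This is a rough optional sketch, not part of the Yukon Learning tutorial; the variable names SignatureName and SignatureRecord are hypothetical placeholders for variables you'd create in your own project.

```typescript
// Sketch of an "Execute JavaScript" trigger that appends a timestamp to a
// typed-name acknowledgement. SignatureName and SignatureRecord are
// hypothetical Storyline project variables, not names from the tutorial.

// GetPlayer() is provided globally in Storyline's published output;
// it's declared here only so the sketch stands alone as TypeScript.
declare function GetPlayer(): {
  GetVar(name: string): string | number | boolean;
  SetVar(name: string, value: string | number | boolean): void;
};

const player = GetPlayer();

// Read the name the learner typed into the text-entry field.
const typedName = String(player.GetVar("SignatureName")).trim();

// Store the name plus the date/time the acknowledgement was made.
const signedAt = new Date().toLocaleString();
player.SetVar("SignatureRecord", `${typedName} - signed ${signedAt}`);
```

A results slide (or an LMS-reported text field) could then display SignatureRecord alongside the learner's quiz result.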
·thearticulatetrainer.com·
Beyond Infographics: How to Use Nano Banana to *Actually* Support Learning
While this article misstates the limitations of earlier image generation tools (you can upload reference images and color schemes to several tools; you can get diverse images with better prompting; you can get consistency in visual style and characters), I love the ideas here for generating instructional images. Nano Banana really is much better for creating these instructional images with text. The main focus of the article is sharing use cases to support learning: visualization, analogy, worked examples, contrasting cases, and elaboration. The examples are great and show you how to go beyond the typical busy infographic we see with Nano Banana.
·drphilippahardman.substack.com·
Marketing is Broken! ...and AI is to Blame. - Issuu
I built my network on LinkedIn before the algorithm changed and before AI changed a lot about marketing. LinkedIn is part of my pipeline for how clients find me. However, for people who don't already have a following, it's a lot harder to break through the noise there. This article explains how marketing yourself and building a personal brand on LinkedIn has changed.
·issuu.com·
Do AI avatars teach as well as humans? The results might surprise you! - Media and Learning Association
This research was done in partnership with Synthesia, so some skepticism is warranted. But this study found that people recalled information similarly whether a human or an AI avatar explained it. The research didn't compare these videos to other forms of video or learning, though, and talking head videos in general are often less effective than other instructional methods.
1. Memory performance was similar: it did not really matter whether learners got their information from AI or a human, through video or text; they remembered nearly the same amount at recognition and recall levels.
2. Recall performance depended on visual design: tracing recall back to the video segment corresponding to each question showed that some particular visual designs were easier to memorise.
·media-and-learning.eu·
Design Training That Actually Sticks: A Practical Starter Kit for Workplace Learning
Mike Taylor has provided a summary of five fundamental evidence-based principles of learning. For each principle, he lists what it is, why it matters, what it looks like, and a resource to help people learn more. This is a great place to get started with some basic learning science.
·linkedin.com·
eLearning Authoring Tools Comparison: Interactive Data & Rankings | Articulate Alternatives
Mike Stein (ID Atlas) completed an immensely valuable project to compare ID authoring tools by building the same course in multiple tools. Their site compares and ranks tools based on a rubric that includes development time, usability, responsiveness, and accessibility.
·idatlas.org·
Artificial intelligence and the environment: Putting the numbers into perspective - Artificial intelligence
Using AI does consume energy, water, and other resources. But when you weigh that as part of your decision-making, it's important to put the numbers in perspective. Being on a Zoom call or watching Netflix for an hour uses more electricity and water than prompting ChatGPT numerous times.
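As a rough back-of-the-envelope illustration of that kind of comparison, here's a sketch using illustrative, order-of-magnitude public estimates (these figures are assumptions for the sake of the arithmetic, not numbers from the Jisc article; real values vary widely by model, data centre, and measurement method):

```typescript
// Back-of-the-envelope electricity comparison using illustrative estimates.
// These constants are rough order-of-magnitude assumptions, NOT figures
// from the article; swap in whatever estimates you find credible.
const whPerPrompt = 3;          // one chatbot prompt, pessimistic end of common estimates (Wh)
const whPerStreamingHour = 75;  // one hour of HD video streaming (Wh)

const promptsPerStreamingHour = whPerStreamingHour / whPerPrompt;

console.log(
  `Under these assumptions, about ${Math.round(promptsPerStreamingHour)} prompts ` +
  `use as much electricity as one hour of streaming video.`
);
```

Even with the pessimistic per-prompt figure, dozens of prompts fit within the electricity budget of a single hour of streaming, which is the kind of perspective the article argues for.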
·nationalcentreforai.jiscinvolve.org·