Meet Thaura | Your Ethical AI Companion
Thaura AI is designed as an ethical LLM. It doesn't train models on your private data, is transparent about its business model, and advertises that it uses 94% less energy than ChatGPT.
Debbie Richards writes about a critical issue in working with AI: our own human cognitive biases. AI can reflect and amplify our mental shortcuts, so being aware of those biases makes us more effective when working with it.
" The researchers identify three critical stages where our own thinking can steer AI off course:
Before Prompting: Our past experiences create a "halo" or "horns" effect. If you’ve had great results, you might over-trust the tool for tasks it isn't ready for. Conversely, if you've been spooked by headlines about hallucinations, you might avoid it even when it could be genuinely helpful.
During Prompting: How we frame a question matters. "Leading question bias" happens when we bake the answer into the prompt, like asking "Why is product X the best?" This encourages the AI to ignore weaknesses. There is also "expediency bias," where we settle for the first "good enough" answer because we’re under time pressure.
After Prompting: Once we have an output, the "endowment effect" can make us overvalue it simply because of the effort we put into the prompt. We also have to watch the "framing effect": how we present AI-driven data can completely change how our audience feels about it."

If you find scrolling on LinkedIn terribly annoying, you may not have trained its algorithm well. Follow these tips to improve the quality of your LinkedIn feed.
" You manage your feed by giving AI the signal.
Signal for what you want. Signal for what you do not want. Then you reinforce it until the algorithm adjusts to your taste. That is it. Not complicated. But most people never do it. "