So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least.
So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least. This study shows that WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations are the most aligned with ChatGPT. It is intriguing that some nations, including the Netherlands and Germany, are more GPT-similar than Americans. The paper uses cognitive tasks such as the "triad task," which distinguishes between analytic (category-based) and holistic (relationship-based) thinking: given, say, cow, chicken, and grass, an analytic thinker pairs cow with chicken (both animals), while a holistic thinker pairs cow with grass (cows eat grass). GPT tends toward analytic thinking, which aligns with countries like the Netherlands and Sweden that value rationality; this contrasts with the holistic thinking found in many non-WEIRD cultures. GPT also tends to describe the "average human" in terms aligned with WEIRD norms. In short, given the overwhelmingly skewed data used to train AI, it's "WEIRD in, WEIRD out". The issue is not the size of the models or the training data; it's the diversity and representativeness of the data. All of which underlines the value and importance of sovereign AI, potentially for regions or values-aligned cultures, not just at the national level.
·linkedin.com·
My Ethical AI Principles | LinkedIn
July 25, 2025. [Image: sign posted in the common kitchen at the campground in Selfoss, Iceland, where this article was written.] I understand. Artificial Intelligence (AI) is a technology unlike any we've seen before.
·linkedin.com·
Introduction to AI Safety, Ethics, and Society | Peter Slattery, PhD
📢 Free Book: "Introduction to AI Safety, Ethics, and Society is a free online textbook by Center for AI Safety Executive Director Dan Hendrycks. It is designed to be accessible for a non-technical audience and integrates insights from a range of disciplines to cover how modern AI systems work, technical and societal challenges we face in ensuring that these systems are developed safely, and strategies for effectively managing risks from AI while capturing its benefits. This book has been endorsed by leading AI researchers, including Yoshua Bengio and Boaz Barak, and has already been used to teach over 500 students through our virtual course. It is available at no cost in downloadable text and audiobook formats, as well as in print from Taylor & Francis. We also offer lecture slides and other supplementary resources for educators on our website." Thanks to Connor Smith for sharing this with me. Due to file limit issues, I have only attached the first 17 pages of the much larger textbook. See link in comments.
·linkedin.com·
"Human in the loop". I hear this phrase dozens of times per week. In LinkedIn posts. In board meetings about AI strategy. In product requirements. In compliance documents that tick the "responsible AI" box. It's become the go-to phrase for any situation where humans interact with AI decisions...
"Human in the loop". I hear this phrase dozens of times per week. In LinkedIn posts. In board meetings about AI strategy. In product requirements. In compliance documents that tick the "responsible AI" box. It's become the go-to phrase for any situation where humans interact with AI decisions...
But there's a story I think of when I hear "human in the loop" that makes me think we're grossly over-simplifying things. It's the story of the man who saved the world.

September 26, 1983. The height of the Cold War. Lieutenant Colonel Stanislav Petrov was the duty officer at a secret Soviet bunker, monitoring early warning satellites. His job was simple: if computers detected incoming American missiles, report it immediately so the USSR could launch its counterattack. 12:15 AM... the unthinkable. Every alarm in the facility started screaming. The screens showed five US ballistic missiles, 28 minutes from impact. Confidence level: 100%. Petrov had minutes to decide whether to trigger a chain reaction that would start nuclear war and could very well end civilisation as we knew it. He was the "human in the loop" in the most literal, terrifying sense.

Everything told him to follow protocol. His training. His commanders. The computers. But something felt wrong. His intuition, built from years of intelligence work, whispered that this didn't match what he knew about US strategic thinking. Against every protocol, against the screaming certainty of technology, he pressed the button marked "false alarm". Twenty-three minutes of gripping fear passed before ground radar confirmed: no missiles. The system had mistaken a rare alignment of sunlight on high-altitude clouds for incoming warheads. His decision to break the loop prevented nuclear war.

What made Petrov effective wasn't just being "in the loop" - it was having genuine authority, time to think, and understanding the bigger picture well enough to question the system. Most of today's "human in the loop" implementations have none of these qualities. Instead, we see job applications rejected by algorithms before recruiters ever see promising candidates. Customer service bots that frustrate instead of giving agents the context to actually solve problems. AI systems sold as human replacements when they should be human amplifiers.

The framework I use with organisations building AI systems starts with two practical questions every leader can answer: what are you optimising for, and what's at stake? It then points to the type of intentional human-AI oversight design that works best. Routine processing might only need "spot checking" - periodic human review of AI decisions. Innovation projects might use "collaborative ideation" - AI generating options while humans provide strategic direction. The goal isn't perfect categorisation but moving beyond generic "human in the loop" to build the systems we actually intend, not the ones we accidentally create. Download: https://lnkd.in/eVFAC9gN
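The post only names two of its categories ("spot checking" and "collaborative ideation"), so any implementation can only be suggestive. A minimal sketch of the two-question structure in Python, assuming a hypothetical OversightMode enum and choose_oversight helper (the high-stakes branch is an assumed extension in the spirit of the Petrov story, not part of the downloadable framework):

```python
from enum import Enum

class OversightMode(Enum):
    SPOT_CHECKING = "periodic human review of AI decisions"
    COLLABORATIVE_IDEATION = "AI generates options; humans provide strategic direction"
    FULL_REVIEW = "a human approves each decision before it takes effect"

def choose_oversight(optimising_for: str, stakes: str) -> OversightMode:
    """Map the post's two questions to an oversight design.

    Only routine -> spot checking and innovation -> collaborative
    ideation come from the post; the high-stakes branch is assumed.
    """
    if stakes == "high":
        # Petrov-style situations demand genuine authority and time to think.
        return OversightMode.FULL_REVIEW
    if optimising_for == "innovation":
        return OversightMode.COLLABORATIVE_IDEATION
    return OversightMode.SPOT_CHECKING

# Example: routine, low-stakes processing only needs spot checking.
print(choose_oversight("routine processing", "low").value)
```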
·linkedin.com·
"Human in the loop". I hear this phrase dozens of times per week. In LinkedIn posts. In board meetings about AI strategy. In product requirements. In compliance documents that tick the "responsible AI" box. It's become the go-to phrase for any situation where humans interact with AI decisions...
On Ethical AI Principles
I have commented in my newsletter that what people have been describing as 'ethical AI principles' actually represents a specific political agenda, and not an ethical agenda at all. In this post, I'll outline some ethical principles and work my way through them to make my point.
·linkedin.com·