data privacy/ethics

66 bookmarks
Post | LinkedIn
What does AI really know? Despite the hype, AI doesn't "know" much. It simulates knowledge based on the data it was trained on. And that data is staggeringly narrow.

In fact, less than 10 percent of all human knowledge has ever been digitized. That figure comes from a widely cited study by Hilbert and López published in Science in 2011. Of that digitized content, only a small fraction, roughly 1 to 5 percent, is actually used to train large language models. The best estimate we have is that today's most powerful AI systems are built on just 3 percent of the total knowledge humans have produced.

And that 3 percent isn't random. It comes primarily from the past 50 years. It's drawn disproportionately from English-language, male-authored, and commercially indexed sources. Early data collection practices in computing were often structured around male data entry workers inputting content from Western institutions: government records, legal documents, medical research, newspaper archives. Women's contributions, oral knowledge, Indigenous epistemologies, and non-Western systems of meaning were either excluded or never digitized in the first place.

So when people claim AI is "smart," we should be asking: smart about what? It is good at repeating what it has seen. But what it has seen is a partial, highly filtered sliver of reality. It has seen code and headlines, forum posts and Wikipedia pages. It has not seen the oral histories passed down through generations, the embodied practices of midwives and farmers, or the lived experience of entire continents that remain underrepresented online.

If we want AI to "know" more, and to serve more of humanity, we need to rethink what counts as data, who gets to contribute to it, and what forms of knowledge we prioritize for digitization and inclusion. More than merely fixing bias, we must recognise that AI's current outputs reflect deep gaps in the knowledge pipeline.
We are entrusting decision-making to systems trained on a thin, inherited sample of the world. Until we fix the foundation, intelligence will remain simulated, partial, performative, and skewed.

Martin Hilbert & Priscila López, "The World's Technological Capacity to Store, Communicate, and Compute Information": https://lnkd.in/dYVwjAuu
·linkedin.com·
The Gentle Singularity
We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be. Robots...
·blog.samaltman.com·
AI and the Death of Communication
This article explores the impact of AI and digital platforms on human communication, cautioning against the misinformation and superficial truths proliferating online.
·leonfurze.com·
Teaching AI Ethics 2025: Environment
This article, part of a series updating "Teaching AI Ethics," explores the environmental impact of artificial intelligence, particularly generative AI. It emphasizes the need for transparency in AI's energy usage, highlights the resource-intensive nature of training and using AI models, and prompts educational discussions on sustainable technology practices.
English
In the English curriculum, students critically evaluate texts, perspectives, and themes, making it an ideal space to explore environmental narratives and "greenwashing" in media. Students can analyse persuasive or expository texts on digital sustainability, asking "Is this tech company truly 'green'?", "How do tech advertisements use environmental language to mislead?", or "What narratives are missing in the conversation about e-waste?" These lend themselves well to persuasive writing, media analysis, or debates on digital environmental responsibility.

Science (Environmental Science / Earth & Space Science)
Science courses frequently focus on sustainability, ecosystems, and the impact of human activity on the planet. This aligns naturally with questions such as "What is the carbon footprint of cloud computing?", "How do server farms affect local ecosystems?", or "Can digital technology help monitor and mitigate climate change?" Students can conduct research projects comparing environmental costs and benefits of digital solutions or use data sets to model pollution caused by e-waste.

Digital Technologies / Computer Science
Digital Technologies curricula (including the Australian, UK, and US) explicitly include sustainability and environmental impact. Students might examine "How energy-efficient is the code we write?", "What happens to old hardware when we upgrade?", or "Can we design low-impact apps or systems?" Projects could include building sustainable tech prototypes, auditing energy use in computing, or exploring circular design principles in software and hardware development.

Geography
Geography investigates human-environment interactions and is a great context for studying digital technologies' ecological footprints. Students could ask, "Where do raw materials for smartphones come from?", "How does digital infrastructure affect urban and rural land use?", or "What role do satellites and GIS play in environmental monitoring?" Case studies on mining for rare earth elements, digital deserts, or tech-fuelled deforestation would deepen geographical inquiry skills.

Design and Technologies
This subject encourages students to design solutions with awareness of social and environmental sustainability. Key teaching questions include "How can we reduce the lifecycle impact of tech products?", "What is eco-design in the context of digital devices?", or "How can we apply the principles of 'cradle-to-cradle' or closed-loop design to electronics?" Students might engage in sustainable redesign challenges or audit the energy use of different design tools.

Civics and Citizenship
As students examine democratic responsibility, rights, and participation, they can explore digital environmental justice: "Who bears the environmental burden of digital consumption?", "Should governments regulate e-waste exports?", or "What policies support equitable access to green technologies?" These topics are excellent for role-play debates, policy pitches, or mock UN climate tech summits.

Mathematics (Statistics & Data)
In mathematics, students interpret data and trends, opening up questions like "How much CO2 is generated by a Google search?", "What do energy use graphs of tech companies reveal?", or "How can we model the global growth of e-waste?" Students could analyse real-world datasets on energy consumption, digital product lifecycles, or climate projections influenced by digital tools.

Visual Arts / Media Arts
Visual Arts and Media Arts students often explore themes of communication, critique, and message. Prompts might include "How can digital art raise awareness of e-waste or tech pollution?", "What does digital decay or 'digital plastic' look like?", or "Can we create installations from recycled AI output?" Students could design visual campaigns, create artworks from obsolete tech, or critically assess how digital art platforms contribute to or challenge environmental issues.

Theory of Knowledge (IB)
In TOK, students explore how knowledge is constructed and evaluated, making it fertile ground for questions like "How does digital surveillance impact the environment and our understanding of ethical responsibility?", or "Do we have a moral obligation to consider the environmental cost of digital knowledge systems?" These are excellent for essays, presentations, and cross-disciplinary inquiry.
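The Mathematics prompt above about modelling the global growth of e-waste could be sketched in a few lines of Python. This is an illustrative classroom exercise, not sourced data: the baseline tonnage and annual growth rate are placeholder assumptions that students should replace with real figures (for example from the UN Global E-waste Monitor).

```python
# Classroom sketch: project global e-waste under a compound-growth assumption.
# Baseline (~62 Mt) and 3% annual growth are illustrative placeholders,
# not sourced figures -- substitute real data before drawing conclusions.

def project_ewaste(baseline_mt: float, annual_growth: float, years: int) -> list[float]:
    """Return a year-by-year projection: each year multiplies by (1 + rate)."""
    series = [baseline_mt]
    for _ in range(years):
        series.append(series[-1] * (1 + annual_growth))
    return series

projection = project_ewaste(baseline_mt=62.0, annual_growth=0.03, years=8)
for year, mt in zip(range(2022, 2031), projection):
    print(f"{year}: {mt:.1f} Mt")
```

Students could compare this compound-growth model against a linear one fitted to the same data, and discuss which assumption better matches published trends.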
·leonfurze.com·
AI Shows Racial Bias When Grading Essays — and Can’t Tell Good Writing From Bad
Smith: Study finds ChatGPT replicates human prejudices and fails to recognize exceptional work — reinforcing the inequalities it's intended to fix.
That’s why schools and educators must carefully consider when and how to use AI for scoring. Rather than replacing grading, AI tools could provide feedback on grammar or paragraph structure while leaving the final assessment to the teacher.
·the74million.org·
Implicit Bias in Large Language Models: Experimental Proof and Implications for Education
We provide experimental evidence of implicit racial bias in a large language model (specifically ChatGPT) in the context of an authentic educational task and discuss implications for the use of these tools in educational contexts. Specifically, we presented ChatGPT with identical student writing passages alongside various descriptions of student demographics, including race, socioeconomic status, and school type.
·scale.stanford.edu·
AI Is Not Your Friend
How the “opinionated” chatbots destroyed AI’s potential, and how we can fix it
But the technology has evolved rapidly over the past year or so. Today’s systems can incorporate real-time search and use increasingly sophisticated methods for “grounding”—connecting AI outputs to specific, verifiable knowledge and sourced analysis. They can footnote and cite, pulling in sources and perspectives not just as an afterthought but as part of their exploratory process.
·theatlantic.com·
Stop Calling AI a Tool It’s Not a Tool
Generative AI doesn’t extend the artist; it replaces the conditions under which art is possible
I’ve received feedback that my 12–18-minute-long deep dives into the intersection of generative AI and art (specifically music) may be too long. So here’s my argument in under 5 minutes.

Medium has recently featured several prominent articles claiming generative AI is just another tool. Two stand out: The Tools Will Change. Your Craft Doesn’t Have To by Agustin Sanchez (currently in the coveted top-right Staff Picks column) and Stop Pretending You Write Alone: AI, Authorship, and the End of the Solo Genius Myth by 404 (featured in a recent newsletter).

I recommend reading these articles (direct links in my citation section below), but let me reprint my response to Sanchez’s piece, which succinctly summarizes my argument for why generative AI is not a tool:

I have to push back very strongly on this claim that generative AI is just another tool. It is absolutely not. This is a category error that leads to a dangerous misreading of what is actually happening.

We need to look at this through a different lens. The key distinction is between tools that extend the body and systems that are designed to replace it.

A camera is a tool. It extends the eye. A paintbrush extends the hand and arm. A guitar extends the internal temporal rhythms of human experience into sound. Drawing on Susanne K. Langer, music objectifies time; it turns inner, lived temporality into something that can be shared.

What these tools have in common is that they are inert until we act through them. They do nothing on their own. They do not have logic or agency. They sit quietly until we pick them up and use them to express something grounded in experience.

AI is not that. AI does not extend the body. It is built to render it obsolete.

AI markets itself as a tool, but it functions like an agent. Generative AI produces material within predefined parameters using massive datasets, and its outputs are optimized to capture attention. It is not passive like a camera or a paintbrush. It acts on us.

There is a logic built into it. That logic traces directly to the origins of cybernetics. Early cybernetic systems were not designed to enhance human capacity; they were designed to bypass it. During World War II, anti-aircraft systems used predictive feedback loops to automate tracking and firing. The human was treated as a lagging component in a system that prioritized precision and speed. Eventually, the human was cut out of the process entirely.

This same logic drives today’s AI. These systems are not waiting for intention. They are designed to anticipate and override it. They do not follow our input. They predict it. They shape it. Generative AI does not assist human expression. It replaces the conditions under which expression is even necessary. It does not extend the body. It encodes and replaces it.

You’re not shaping who you are in a new context. You’re accelerating your own obsolescence.

To Wrap This Up

Let’s be very clear about the ground of debate here: it isn’t about whether collaboration with generative AI is ‘real’ or whether technology belongs in art. That’s a diversion and builds a strawman argument.

The question that matters, which I’ve explored extensively in my articles on Medium, is what kind of meaning-making we’re talking about. Art is not an output problem.

I wrote this on LinkedIn yesterday:

In 25 years as a professional session musician, playing on hundreds of records, I’ve never once heard someone in the studio say, “I wish I could do this faster.” Not once. That phrase just doesn’t exist in the vocabulary of people who are actually making music.

And yet, especially on LinkedIn, my feed is full of people with music in their titles pushing products designed to speed up and scale the production of music. But speed and scale are not musical values. They are values of content. Or more precisely: content for the sake of content, and that’s not the same thing as music. It never was.

Art is not a volume game, nor about speed or efficiency. Art is a wager — a grand one undergirded by infinite risk. Art is a symbolic act grounded in time, memory, and relation. Tools can express that, but they don’t do the expressing for us.

Generative AI isn’t helping us express more. It’s changing what expression means at the ontological level. It replaces uncertainty — the infinite risk inherent in art — with prediction and a narrowing of future possible paths. It swaps tension and ambiguity for certainty, which can only be manufactured. It strips art of the time it needs to reveal.

Since I personally tarry in the medium of music, I’ll say this plainly: Music is freedom. Content is compliance.

I call AI-generated art content rather than art because it is produced through logics that are antithetical to the creation of art. This is why I see the future of music offline and the future of content on Spotify. The same goes for all media produced through these logics, across every associated medium.
·medium.com·
An Algorithm Deemed This Nearly Blind 70-Year-Old Prisoner a “Moderate Risk.” Now He’s No Longer Eligible for Parole.
A Louisiana law cedes much of the power of the parole board to an algorithm that bars thousands of prisoners from a shot at early release. Civil rights attorneys say it could disproportionately harm Black people — and may even be unconstitutional.
·propublica.org·
Thomson Reuters wins AI copyright 'fair use' ruling against one-time competitor
A federal judge in Delaware on Tuesday said that a former competitor of Thomson Reuters was not permitted by U.S. copyright law to copy the information and technology company's content to build a competing artificial intelligence-based legal platform.
U.S. Circuit Judge Stephanos Bibas' decision against defunct legal-research firm Ross Intelligence marks the first U.S. ruling on the closely watched question of fair use in AI-related copyright litigation.
·reuters.com·
Sora has a bias problem
Sora seems to think all academics are men, and predominantly white men at that. And this is a problem.
·futureofbeinghuman.com·
Suno's mission is to make it possible for everyone to make music. We imagine a future where music is a bigger, more valuable, and more meaningful part of people's lives than it even is today. Technology enables a future where the whole world can explore, create, and be active…
— Suno (@suno_ai_)
Suno's mission is to make it possible for everyone to make music. We imagine a future where music is a bigger, more valuable, and more meaningful part of people's lives than it even is today. Technology enables a future where the whole world can explore, create, and be active participants in an art form most have only ever consumed. From professional musicians seeking inspiration to friends and family writing songs for each other, we are exploring new ways to create, listen to, and experience music. So far, more than 12 million people are engaging with music in new ways that wouldn't be possible without Suno. We see this as early but promising progress.

Major record labels see this vision as a threat to their business. Each and every time there's been innovation in music — from the earliest forms of recorded music, to sampling, to drum machines, to remixing, MP3s, and streaming music — the record labels have attempted to limit progress. They have spent decades attempting to control the terms of how we create and enjoy music, and this time is no different.

So, it is perhaps not a surprise that on June 24th, members of the Recording Industry Association of America, which represents the major record labels, filed a lawsuit against Suno, alleging that the data used in training our music generation technologies infringed on the copyrights of the major record labels that they represent. This lawsuit is fundamentally flawed on both the facts and the law, and is nothing more than yet another instance where they chose litigation over innovation.

For starters, the major record labels clearly hold misconceptions about how our technology works. Suno helps people create music through a similar process to one humans have used forever: by learning styles, patterns, and forms (in essence, the "grammar" of music), and then inventing new music around them. The major record labels are trying to argue that neural networks are mere parrots — copying and repeating — when in reality model training looks a lot more like a kid learning to write new rock songs by listening religiously to rock music. Like that kid, Suno gets better the more our AI learns.

We train our models on medium- and high-quality music we can find on the open internet — just as Google's Gemini, Microsoft's Copilot, Anthropic's Claude, OpenAI's ChatGPT, and even Apple's new Apple Intelligence train their models on the open internet. Much of the open internet indeed contains copyrighted materials, and some of it is owned by major record labels. But, just like the kid writing their own rock songs after listening to the genre — or a teacher or a journalist reviewing existing materials to draw new insights — learning is not infringing. It never has been, and it is not now.

The timing of this lawsuit was somewhat surprising. When this lawsuit landed, Suno was, in fact, having productive discussions with a number of the RIAA's major record label members to find ways of expanding the pie for music together. We did so not because we had to, but because we believe that the music industry could help us lead this expansion of opportunity for everyone, rather than resisting it. Whether this lawsuit is the result of over-eager lawyers throwing their weight around, or a conscious strategy to gain leverage in our commercial discussions, we believe that this lawsuit is an unnecessary impediment to a larger and more valuable future for music.

This is particularly the case because Suno is a new kind of musical instrument, one that enables a new kind of creative process for everyone and opens new business opportunities for the industry. Suno is designed for original music, and we prize originality, both in how we build our product and in how people use it. People who use Suno are using the product to create their own, original music. They are not trying to recreate an existing song that can be heard somewhere else on the internet for free. But, even if they were trying to copy existing music, we have myriad controls in place to encourage originality and prevent duplicative use cases. We do so more aggressively than any other company in the industry, including other startups. Some of our originality-guarding features include checking for and preventing copyrighted content in audio uploads, and disallowing artist-based descriptions in requests to generate music.

Why do we work to encourage originality? We do this because it makes for a more fun and engaging experience to create entirely original compositions on Suno. We do it because we think it makes Suno incredibly valuable to be a place where new musical talent can shine. AI allows anyone to realize the songs in their head, regardless of the money, equipment, or connections that they have. The future is an explosion of new artists that are creating music in new ways, building fan bases, finding new reasons to smile, and getting famous.

We hope that the major record labels realize that we can build a stronger foundation
·x.com·