Applied Statistics - aka A.I./Machine Learning/Large Language Models

42 bookmarks
The brain is a computer is a brain: neuroscience's internal...
The Computational Metaphor, comparing the brain to the computer and vice versa, is the most prominent metaphor in neuroscience and artificial intelligence (AI). Its appropriateness is highly...
·arxiv.org·
“This robot is dictating her next steps in life”: disability justice and relational AI ethics - AI & SOCIETY
As automated technologies, particularly artificial intelligence (AI) and automated decision-making (ADM), become integral to social life, there is growing concern about their ethical implications. While issues of accountability, transparency, and fairness dominate discussions on “ethical” AI, little attention has been given to how socially disadvantaged groups most impacted by ADM systems form ethical judgments about them. Drawing on insights from relational ethics, this study uses dialogue groups with disabled people to explore how people distinguish between ‘more just’ or ‘less just’ uses of technology, and the contextual, situational, and relational factors that shape these judgments. For the dialogue group participants in our study, ethical reasoning was most strongly influenced by concerns about how ADM systems affect self-determination, caring relationships and identity recognition, and about the political–economic drivers of automation. The article contributes to AI ethics by empirically demonstrating that justice and ethics depend on the social relationships valued in different contexts and what is at stake, both personally and politically, in decisions aided by automation.
·link.springer.com·
The harm & hypocrisy of AI art — Matt Corrall
Whilst AI companies claim to be taking us towards the future, generative AI means less human connection and the widespread impoverishment of the visual world. As a creative in the tech industry, I feel compelled to put forward a different narrative - one where creative careers are not turned over…
·corralldesign.com·
Good Enough AI
It’s not a space race, but a race to the bottom.
·jurgengravestein.substack.com·
Judge Rejects Fair-Use Defense in Westlaw AI Copyright Suit (3)
Thomson Reuters Enterprise Centre GmbH convinced a federal judge an AI-powered legal tool’s ingestion of its data isn’t “fair use” under copyright law.
·news.bloomberglaw.com·
Writer Ted Chiang on AI and grappling with big ideas
Ted Chiang was recently awarded the PEN/Faulkner Foundation's prize for short story excellence. He sat down with NPR to talk about AI, making art and grappling with big ideas.
·npr.org·
These Women Tried to Warn Us About AI
Rumman Chowdhury, Timnit Gebru, Safiya Noble, Seeta Peña Gangadharan, and Joy Buolamwini open up about their artificial intelligence fears
·rollingstone.com·
Why A.I. Isn’t Going to Make Art
To create a novel or a painting, an artist makes choices that are fundamentally alien to artificial intelligence.
·newyorker.com·
The Subprime AI Crisis
None of what I write in this newsletter is about sowing doubt or "hating," but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artificial intelligence boom — which would be better described as a generative AI boom…
·wheresyoured.at·
Where Facebook's AI Slop Comes From
Facebook itself is paying creators in India, Vietnam, and the Philippines for bizarre AI spam that they are learning to make from YouTube influencers and guides sold on Telegram.
·404media.co·
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza
The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.
·972mag.com·
Will A.I. Become the New McKinsey?
As it’s currently imagined, the technology promises to concentrate wealth and disempower workers. Is an alternative possible?
·newyorker.com·
All-knowing machines are a fantasy
The idea of an all-knowing computer program comes from science fiction and should stay there. Despite the seductive fluency of ChatGPT and other language models, they remain unsuitable as sources of knowledge. We must fight against the instinct to trust a human-sounding machine, argue Emily M. Bender & Chirag Shah.

Decades of science fiction have taught us that a key feature of a high-tech future is computer systems that give us instant access to seemingly limitless collections of knowledge through an interface that takes the form of a friendly (or sometimes sinisterly detached) voice. The early promise of the World Wide Web was that it might be the start of that collection of knowledge. With Meta’s Galactica, OpenAI’s ChatGPT and, earlier this year, LaMDA from Google, it seems like the friendly language interface is just around the corner, too.

However, we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer needs them to have—for a roadmap to how technology could and should be created in the real world. In fact, large language models like Galactica, ChatGPT and LaMDA are not fit for purpose as information access systems, in two fundamental and independent ways.

First, what they are designed to do is to create coherent-seeming text. They do this by being cleverly built to take in vast quantities of training data and model the ways in which words co-occur across all of that text. The result is systems that can produce text that is very compelling when we as humans make sense of it. But the systems do not have any understanding of what they are producing, any communicative intent, any model of the world, or any ability to be accountable for the truth of what they are saying. This is why, in 2021, one of us (Bender) and her co-authors referred to them as “stochastic parrots.”

Second, the fantasy idea of an all-knowing computer rests on a fundamentally flawed notion of how knowledge works. There will never be an all-inclusive, fully correct set of information that represents everything we could need to know. And even if you might hope that could come to pass, it should be very clear that today’s World Wide Web isn’t it. When people seek information, we might think we have a question and we are looking for the answer, but more often than not, we benefit more from engaging in sense-making: refining our question, looking at possible answers, understanding the sources those answers come from and what perspectives they represent, etc. Consider the difference between the queries “What is 70 degrees Fahrenheit in Celsius?” and “Given current COVID conditions and my own risk factors, what precautions should I be taking?”

Information seeking is more than simply getting answers as quickly as possible. Sure, many of our questions call for simple, fact-based responses, but others require some investigation. For those situations, it is important that we get to see the relevant sources and the provenance of information. While this requires more effort on the user’s end, there are important cognitive and affective processes that happen during that process that allow us to better understand our own needs and the context, as well as provide a better assessment of the information being sought and collected before we use it. We wrote about these issues in our Situating Search paper.

ChatGPT and other conversational systems that provide direct answers to one’s questions have two fundamental issues in this regard. First, these systems generate answers directly, which skips the step of showing users the sources where one could look for answers. Second, these systems provide responses in conversational natural language, something we otherwise only experience with other humans: over both evolutionary time and every individual’s lived experience, natural language to-and-fro has always been with fellow human beings. As we encounter synthetic language output, it is very difficult not to extend trust in the same way as we would with a human. We argue that systems need to be very carefully designed so as not to abuse this trust.

Since the release of ChatGPT, we have seen widespread, breathless reports of what people have been able to use it to do, and we are very concerned about how this technology is presented to the public. Even with non-conversational search engines, we know that it is common to place undue trust in the results: if the search system places something at the top of the list, we tend to believe it is a good or true or representative result, and if it doesn’t find something, it is tempting to believe it does not exist. But, as Safiya Noble warns us in Algorithms of Oppression, these platforms aren’t neutral reflections of either the world as it is or how people talk about the world; rather, they are shaped by various corporate interests. It is urgent that we as a public learn to conceptualize the workings of information access systems and, in this moment especially, that we recognize that an overlay of apparent fluency does not, despite appearances, entail accuracy, informational value, or trustworthiness.
·iai.tv·
Crime Prediction Keeps Society Stuck in the Past
So long as algorithms are trained on racist historical data and outdated values, there will be no opportunities for change.
·wired.com·
Diritti Digitali per la Comunità Queer
Through an analysis of three different kinds of platforms - ride-sharing apps, social networks, and dating apps - we try to show that the precondition for implementing inclusive design is dismantling the algorithmic binarism underlying AI systems.
·web.archive.org·