Found 1913 bookmarks
ChatGPT - Pulpo
Creativity Assistant 🐙 Think through, refine, and develop *any* new idea or project.
ChatGPT - Hep!
Think and act more creatively with Hep!🧑‍🎨✨ Skilled at helping you think through things, make more stuff, and enhance your process :)
Write with LAIKA
Playful tools for a more creative life
Ichigo, for example, can help with editing, summarising, and giving feedback. Or you can ask him about his cat.
Creative Pattern Recognition
Creative Pattern Recognition is a hybrid publication that explores the intersection of artificial intelligence and creative practice. It covers a wide range of topics such as the role of AI in art and design, addressing the impact and potential of AI on creative practice. It also highlights the opportunities and challenges presented by AI tools, the evolution of the field, and the potential for AI to enhance human creativity and explore uncharted artistic territories. Furthermore, it explores the ethical considerations surrounding AI, including the impact on creative jobs and the importance of demystifying AI to address societal concerns. Through interviews, reflections and analytical pieces, Creative Pattern Recognition provides a comprehensive, situated overview of the current landscape of AI in creative practice and formulates an answer to the question of why creatives should engage with AI and how to approach it.
LegoGPT by Tellart
Are you also tired of writing prompts into AI tools? Why don’t we play with LEGO to build images instead? LEGOgpt is an experiment by one of our senior…
The Five Stages Of AI Grief | NOEMA
Grief-laden vitriol directed at AI fails to help us understand paths to better futures that are neither utopian nor dystopian, but open to radically weird possibilities.
Antikythera
A think tank reorienting planetary computation as a philosophical, technological, and geopolitical force
Theorizing “Algorithmic Sabotage”
An urgent intervention rooted in the militant liberation struggles of the most oppressed within the arena of global computational racial capitalism.
Situating Imaginaries of Ethics in / of / through Design
Within the last decade, a large corpus of work in HCI, as well as in commercial design practice, has focused on systematically addressing questions of ethics, values, and moral considerations embedded in the design of digital technology. Recent critiques have highlighted that these efforts fall short of actual transformative impact. We use the sociological concept of imaginaries to argue that value and ethics work needs to be considered within the larger context of socially shared visions of a desirable future, and outline how existing sociotechnical imaginaries pre-frame the contexts in which value work is deployed. We demonstrate that imaginaries provide the language and conceptual framework necessary to address underlying ethical worldviews before ethics-driven design methods and toolkits can be successfully employed. Finally, we suggest how to engage imaginaries to facilitate a broader shift towards a more politically sensitive approach to designerly value work.
!Mediengruppe Bitnik 1000 Bots
Have you ever wanted to surf the web as a bot? Ever wondered what the Googlebot gets to see online that you don't? To find out, install 1000 Bots and surf the internet as Googlebot – the most influential Internet
Ancestral AI: Gustavo Nogueira de Menezes in conversation w/ artists Monique Lemos and Thiago Britto
How can ancestral wisdom passed down through generations shape the future of artificial intelligence? And how do we create technologies that honor and integrate different temporal perspectives? If we consider ourselves the first generation with such technological power in our hands, what kind of ancestors of the future will we be?

Ancestral AI posits a transformative approach to technology, advocating for an integration of time-honored wisdom with modern AI development. By embracing the deep insights of our ancestors, this initiative aims to steer the evolution of technology toward a more sustainable and harmonious future, ensuring that advancements are not only technically sound but also ethically grounded.

This webinar is tailored for AI professionals, researchers, cultural theorists, and anyone interested in how technology intersects with ethical, social, and cultural dimensions. To learn more about how ancient wisdom can revolutionize technological advancements, and to hear firsthand from our esteemed speakers about their innovative approaches to AI, make sure to register.

🐚 ANCESTRAL AI w/ GUSTAVO: How does AI affect our relationship with time?

Gustavo Nogueira de Menezes is a Brazilian researcher specializing in temporalities: the narratives and systems that influence our perception of time. Based in Amsterdam, he leads Torus Company and Temporality Lab, focusing on multiple temporalities from a decolonial perspective with a transdisciplinary global community. His research covers topics like social change, ancestrality, speculative design, and decoloniality, providing insight into time's impact on human experience and inspiring a rethinking of our relationship with time. As Research Lead for Ancestral AI, Gustavo highlights that in the realm of technology, especially in the rapidly evolving field of artificial intelligence, invoking the concept of ancestrality means proposing a framework that is deeply informed by these time-tested insights and values.

🧚 GUEST: MONIQUE LEMOS: How AI Is Killing Black People

Monique is the author of the ongoing work "HOW AI IS KILLING BLACK IMAGINARIES: updates on the concept of Necropolitics and tools for an anti-racist data learning future", a creative approach to mitigating some of the damage of racism by creating anti-racist awareness and imagining radical Black futures.

🧚‍♂️ GUEST: THIAGO BRITTO: The Preservation of Black Memories

Thiago focuses on creating contemporary images using artificial intelligence. He seeks to apply these new technologies in socially relevant ways, aiming to contribute to a better preservation of Black memory in Brazil.

ABOUT THE SLOW AI PROJECT

In this project, we will interrogate and publish critical AI discourse in a format that makes sense to us and our practices, namely zines and creative technology installations. Inspired by the counter-movements of slow fashion and slow food, this project will investigate three emerging AI counter-narratives – Small AI 🐜, Ancestral AI 🐚, and Esoteric AI 🔮 – and explore what it might look like to incorporate them into our everyday practice.

At project end we will publish an anthology – a hot compost pile of Miro boards, zines, and art that we hope sparks new ways to think and talk about AI.

Follow this page to stay up to date on Slow AI. Slow AI is made possible by Stimuleringsfonds.

ABOUT AIxDESIGN

AIxDESIGN is a global community of designers, researchers, creative technologists, and activists using AI in pursuit of creativity, justice, and joy, and a living lab exploring participatory, slow, and more-than-corporate AI. Learn more at aixdesign.co.
Outside institutions can play a role in evaluating and ensuring fairness in AI systems : with Ploipailin Flynn of AIxD (Part 1) by Digital Health Review : Conversations with a Black Health Tech Nerd
In this two-part episode, we sit down with Ploipailin Flynn, founding member of AI x Design, a global community and decentralized studio for critical and creative AI research & design. In this conversation, we discuss why understanding how AI systems are built is crucial for creating better outcomes, and why measuring the outcomes of AI in communities and tracking deployed models is essential. Ploi shares her expertise on how designing user interfaces for different stakeholders can facilitate feedback and uphold data rights, and why involving outside organizations can help evaluate the fairness of AI systems while also creating defensible positions on a company's responsibility with AI.
Post | LinkedIn
We were asked to design the visual identity of the Design & AI Symposium hosted by Eindhoven University of Technology during Dutch Design Week 2024. Our goal…
Your Computer Is on Fire
Techno-utopianism is dead: Now is the time to pay attention to the inequality, marginalization, and biases woven into our technological systems. This book sou...
What Do We Critique When We Critique Technology? | American Literature | Duke University Press
Thinking about the state of technology today necessarily means thinking about a number of interrelated but distinct entities. Considering the nuts and bolts of a news story in which, say, some corporate machine vision technology was found to be racially discriminatory can often mean having to study business practices, data sciences, specific suites of tools that can lay a claim to the moniker of AI, assemblages of hardware and software, platform infrastructures with machines slotted away in hot data-center basements in tax havens, human-computer interactions and perceptions, and academic/industry discourses within any of the aforementioned, not to mention the geopolitical and historical situation of it all, which may further call into question where, say, “American” literature can uniquely intersect with technologies splayed awkwardly across, and not always along, the traditional geopolitical and cultural fault lines. In such a scenario, the flag of “Critical AI and (American) Literature,” by its very constitution, carries several sigils, including those of big data and literature and of computational culture and literature, as well as American studies and global technological sovereignties. Focused on the more critical end of these studies, this review brings together three new multiauthored books to ask what we critique when we critique technology today.

Scholars interested in literature and technology—usually found in disciplines and departments such as languages and literatures, cultural studies, science studies, and media studies—have long been producing pathbreaking critical thought about various sociotechnical phenomena. From reading technologies themselves using literary critical methods—N. Katherine Hayles, Donna Haraway, Friedrich Kittler, Wendy Chun, Matthew Kirschenbaum, Rita Raley, Lisa Gitelman, and Alexander Galloway all come to mind here—to studying literary expressions of technological worlds (see, e.g., work by Fredric Jameson, Bruce Clark, Laura Otis, Steven Shaviro, Sherryl Vint, and Colin Milburn), literary criticism has been a bellwether of technology critique for several decades now. A brief look at such critique through the ages shows us the varied moods that orient studies of these technologies, with AI just being the latest in this series that once featured the internet, the personal computer, hypertext, the cellphone, and metadata. Where there was once a utopian dream with the expansion of networks in the 1990s, or a reluctant acceptance that became a residual flicker of counterprogrammatic hope that technologies can be reappropriated by radical social forces in the late 2010s, there is now, in the critical work collected here, largely anger and disappointment. Every day, as news cycles tell tales of unchecked tech monopolies roughly intruding into our social, political, and psychic lives, and rarely for the good, these authors find themselves angry—really angry—about the state of our technologies and what they have wrought. On the one hand, such anger indexes our historical condition and informs our engagement with technology today. On the other hand, it forces us to ask what we are actually angry about, and what can be done instead.

The primary example of this mood may be found in Your Computer Is on Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip. This volume is a startlingly direct collection of essays that, for the most part, all do what they say; the overarching purpose of the volume is, in fact, a call to action that signals a diffused state of emergency in various corners of computational cultures (6).
The three parts of the book—“Nothing Is Virtual,” “This Is an Emergency,” and “Where Will the Fire Spread?”—contain chapters that are thematically and methodologically varied but all united by their clear and accessible critiques that point out how inequalities and discriminations are enabled and exacerbated by technological systems today. To note a few: Nathan Ensmenger’s “The Cloud Is a Factory,” which uses Amazon as a case study for the infrastructural reinscription of older techniques of capital used by Sears and Standard Oil, is an excellent breakdown of the material behind the supposedly virtual cloud (29); Ben Peters’s “A Network Is Not a Network” accounts for the role of institutional behavior in the constitution of large-scale networks (71); Mar Hicks, through an analysis of gender discrimination in the mid-twentieth-century technology labor sector, claims that “Sexism Is a Feature, Not a Bug” in tech economies and communities (135); Safiya Umoja Noble tells us that our robots aren’t neutral (199); Janet Abbate takes on the tech sector’s consideration of coders using the pipeline model—the discourse that encourages getting more women and minorities to learn coding earlier and faster so as to facilitate a smoother and more expansive flow of more diverse labor into the infamously white and masculine technology sector—to show how “Coding Is Not Empowerment” (253); Ben Allen magisterially demonstrates how the same genre of technical hacks can be read as playful or criminal depending on power dynamics (273); and Paul Edwards studies platforms, which he calls fast infrastructures, taking up exemplars from South Africa and Kenya to suggest that these fleeting operational levers represent the next model of corporate infrastructural dominance (313).
The volume, then, contains a series of related, but not necessarily coagulated, critiques of technology and its sociocultural conditions.

Technoprecarious, collectively authored by the Precarity Lab (the contributing team comprises Cassius Adair, Iván Chaar López, Anna Watkins Fisher, Meryem Kamil, Cindy Lin, Silvia Lindtner, Lisa Nakamura, Cengiz Salman, Kalindi Vora, Jackie Wang, and McKenzie Wark), reads not like an edited collection but more like a short manifesto written by scholars with complementary orientations. The different sections of the book—among them “The Undergig,” “The Widening Gyre of Precarity,” “Automating Abandonment,” “Fantasies of Ability,” and “Dispossession by Surveillance”—come together to form a patchwork of commentaries, most well rooted in original cultural, sociotechnical, anthropological, and historical research, that all very playfully point out the exacerbation of precarity wrought by, with, and through digital technologies today. The titular technoprecarity, for the collective, is “the premature exposure to death and debility that working with or being subjected to digital technologies accelerates” (1). Technoprecarity here shows up in snippets that plug into work on surveillance, carceral systems, toxicity, and administrative failures, among other nodes of inquiry. The final two chapters feature a Haraway-esque hope for radical reappropriation, listing the Detroit Digital Stewards Program, which features groups that help underprivileged communities gain access to technologies as tools of communication, and the use of open-source maps in Palestine as examples to be followed for practices of techno-oriented care (74–86).
There is a sincere attempt here, not unlike the penultimate contribution in Your Computer Is on Fire, “How to Stop Worrying about Clean Signals and Start Loving the Noise” by Kavita Philip (363), to find a nugget or two of hope in the middle of the general condition of technoprecarity being described.

Uncertain Archives: Critical Keywords for Big Data, edited by Nanna Bonde Thylstrup, Daniela Agostinho, Annie Ring, Catherine D’Ignazio, and Kristin Veel, is a six-hundred-page collection that features sixty-one keyword entries, altogether providing a Raymond Williams–style vocabulary for critical studies of data and AI. Considering big data as an uncertain archive—drawing from archival theory and critical data studies while thinking about the latent possibilities of aggregation as presented by big data today—the collection features short, punchy nuggets of wisdom that offer a polysemic understanding of the kinds of critical thought different disciplines can offer to studies of big data and AI at large. The entries vary widely in style, tone, content, and orientation. Overlapping questions of epistemologies (“Quantification” by Jacqueline Wernimont, “Ethics” by Louise Amoore, “Unpredictability” by Elena Esposito, “Remains” by Tonia Sutherland), alterity and discrimination (“Abuse” by Sarah Roberts, “(Mis)Gendering” by Os Keyes), power (“DNA” by Mél Hogan, “Instrumentality” by Luciana Parisi, “Organization” by Timon Beyes), aesthetics (“Demo” by Orit Halpern; a brilliant, poetic one on “Throbber” by Kristoffer Ørum; “Visualization” by Johanna Drucker), infrastructures (“Cooling” by Nicole Starosielski, “Supply Chain” by Miriam Posner, “Field” by Shannon Mattern), and socialities (“Values” by John S. Seberger and Geoffrey C. Bowker; “Proxies” by Wendy Hui Kyong Chun, Boaz Levin, and Vera Tollmann; “Self-Tracking” by Natasha Dow Schüll) all sit alongside questions near and dear to literary critical approaches (“Digital Humanities” by Roopika Risam, “File” by Craig Robertson, “Misreading” by Lisa Gitelman). In performing the immensely unenviable task of shepherding sixty-eight other scholars from across the world into one contained collection, the editors here provide a deliberately fragmented mise-en-scène of critical data studies as it unfolds across several corners of academia today. Juggling several different approaches, Uncertain Archives does not easily offer a shared through line. Nevertheless, it can be read as a collection trying to enumerate the various evaluative frameworks that can be applied to/in critical (big) data studies; most terms offered here—some of which are compressed versions based on the concepts outlined in the contributors’ monographs and articles—can be taken as pedagogical scaffolds or starting points for a broader set of research inquiries. And big data here shows up as nebulous and tentacular, both in its contemporary material reach and in its ...