Book: You Look Like A Thing — Janelle Shane

read
Parables of AI in/from the Majority World: An Anthology
This anthology was curated from stories of living with data and AI in/from the majority world, narrated at a storytelling workshop organized by the Data & Society Research Institute in October 2021.
From “Explainable AI” to “Graspable AI” | Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction
EKILA: Synthetic Media Provenance and Attribution for Generative Art
We present EKILA, a decentralized framework that enables creatives to receive recognition and reward for their contributions to generative AI (GenAI). EKILA proposes a robust visual attribution technique and combines this with an emerging content provenance standard (C2PA) to address the problem of synthetic image provenance: determining the generative model and training data responsible for an AI-generated image. Furthermore, EKILA extends the non-fungible token (NFT) ecosystem to introduce a tokenized representation for rights, enabling a triangular relationship between the asset’s Ownership, Rights, and Attribution (ORA). Leveraging the ORA relationship enables creators to express agency over training consent and, through our attribution model, to receive apportioned credit, including royalty payments for the use of their assets in GenAI.
A Theory of Vibe, by Peli Grietzer — Glass Bead
Towards a Poetics of Artificial Superintelligence
Symbolic language can help us grasp the nature and power of what is coming
HOLO 3: Mirror Stage
Nora N. Khan assembles a cast of luminaries to consider the far-reaching implications of AI and computational culture.
Hallucinations as a feature, not a bug | Union Square Ventures
Co-authored with Grace Carney. A few months ago Fred kicked off a conversation about what the “native” applications of AI technology will be. What are the…
day dreaming
Generative AI Systems Aren't Just Open or Closed Source
Conversation around generative AI tends to focus on whether its development is open or closed. It's more responsible to envision releases along a gradient.
A Virtue-Based Framework to Support Putting AI Ethics into Practice | Montreal AI Ethics Institute
justice, honesty, responsibility, and care <3
Studying up Machine Learning Data: Why Talk About Bias When We Mean Power? | Montreal AI Ethics Institute
🔬 Research Summary by Shreyasha Paudel, a Ph.D. student at the University of Toronto with an interdisciplinary research focus that combines Human-Computer Interaction with critical theories from…
The Curse of Recursion: Training on Generated Data Makes Models Forget
Stable Diffusion revolutionised image creation from descriptive text. GPT-2,
GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of
language tasks. ChatGPT introduced such language models to the general public.
It is now clear that large language models (LLMs) are here to stay, and will
bring about drastic change in the whole ecosystem of online text and images. In
this paper we consider what the future might hold. What will happen to GPT-{n}
once LLMs contribute much of the language found online? We find that use of
model-generated content in training causes irreversible defects in the
resulting models, where tails of the original content distribution disappear.
We refer to this effect as Model Collapse and show that it can occur in
Variational Autoencoders, Gaussian Mixture Models and LLMs. We build
theoretical intuition behind the phenomenon and portray its ubiquity amongst
all learned generative models. We demonstrate that it has to be taken seriously
if we are to sustain the benefits of training from large-scale data scraped
from the web. Indeed, the value of data collected about genuine human
interactions with systems will be increasingly valuable in the presence of
content generated by LLMs in data crawled from the Internet.
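The collapse dynamic the abstract describes can be seen in the simplest learned generative model it mentions, a single Gaussian: repeatedly fit a Gaussian to samples drawn from the previous generation's fit, and the estimated spread drifts toward zero, erasing the distribution's tails. A toy sketch (standard library only; the sample size and generation count are arbitrary illustrative choices, not values from the paper):

```python
import random
import statistics

def fit_gaussian(samples):
    # Maximum-likelihood fit: sample mean and population std dev.
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples, mu=mu)
    return mu, sigma

random.seed(0)
mu, sigma = 0.0, 1.0        # generation 0: the "real" data distribution
history = [sigma]
for generation in range(500):
    # Each generation trains only on data sampled from the previous model.
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu, sigma = fit_gaussian(samples)
    history.append(sigma)

# The fitted spread collapses toward zero: the tails have disappeared.
print(history[0], history[-1])
```

With only 10 samples per generation the estimation error compounds multiplicatively, so the spread shrinks generation over generation; larger sample sizes slow the collapse but do not prevent it.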
AI Is a Lot of Work
How many humans does it take to make tech seem human? Millions.
[PDF] Artificial Intelligence and Post-Capitalism by Thanasis Apostolakoudis · 10.1201/9780429446726-9 · OA.mg
Read and download Artificial Intelligence and Post-Capitalism by Thanasis Apostolakoudis on OA.mg
Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence
This paper explores the important role of critical science, and in particular
of post-colonial and decolonial theories, in understanding and shaping the
ongoing advances in artificial intelligence. Artificial Intelligence (AI) is
viewed as amongst the technological advances that will reshape modern societies
and their relations. Whilst the design and deployment of systems that
continually adapt holds the promise of far-reaching positive change, they
simultaneously pose significant risks, especially to already vulnerable
peoples. Values and power are central to this discussion. Decolonial theories
use historical hindsight to explain patterns of power that shape our
intellectual, political, economic, and social world. By embedding a decolonial
critical approach within its technical practice, AI communities can develop
foresight and tactics that can better align research and technology development
with established ethical principles, centring vulnerable peoples who continue
to bear the brunt of negative impacts of innovation and scientific progress. We
highlight problematic applications that are instances of coloniality, and using
a decolonial lens, submit three tactics that can form a decolonial field of
artificial intelligence: creating a critical technical practice of AI, seeking
reverse tutelage and reverse pedagogies, and the renewal of affective and
political communities. The years ahead will usher in a wave of new scientific
breakthroughs and technologies driven by AI research, making it incumbent upon
AI communities to strengthen the social contract through ethical foresight and
the multiplicity of intellectual perspectives available to us; ultimately
supporting future technologies that enable greater well-being, with the goal of
beneficence and justice for all.
Positive AI: Key Challenges for Designing Wellbeing-aligned Artificial Intelligence
Artificial Intelligence (AI) is transforming the world as we know it, implying that it is up to the current generation to use the technology for “good.” We argue that making good use of AI constitutes aligning it with the wellbeing of conscious creatures. However, designing wellbeing-aligned AI systems is difficult. In this article, we investigate a total of twelve challenges that can be categorized as related to a lack of knowledge (how to contextualize, operationalize, optimize, and design AI for wellbeing) and a lack of motivation (designing AI for wellbeing is seen as risky and unrewarding). Our discussion can be summarized into three key takeaways: 1) our understanding of the impact of systems on wellbeing should be advanced, 2) systems should be designed to promote and sustain wellbeing intentionally, and 3), above all, Positive AI starts with believing that we can change the world for the better and that it is profitable.
Instagram effect in ChatGPT — Abhishek Gupta | Responsible AI | ACI
A lot is required to achieve those shareable trophies of taming ChatGPT into producing what you want: tinkering, rejected drafts, invocations of the right spells (I mean prompts!), and learning from observing Twitter threads and Reddit forums. But those early efforts remain hidden, a kind of survivorship bias.
Democratising AI: Multiple Meanings, Goals, and Methods | Montreal AI Ethics Institute
🔬 Research Summary by Elizabeth Seger, PhD, a researcher at the Centre for the Governance of AI (GovAI) in Oxford, UK, investigating beneficial AI model-sharing norms and practices.
Mechanisms of Techno-Moral Change: A Taxonomy and Overview
Ethical Theory and Moral Practice - The idea that technologies can change moral beliefs and practices is an old one. But how, exactly, does this happen? This paper builds on an emerging field of...
Paradoxical Intelligence | Vera List Center
The Vera List Center for Art and Politics is a research center and a public forum for art, culture, and politics.
The Wide Angle: Understanding TESCREAL — Silicon Valley’s Rightward Turn | Washington Spectator
For decades, the conventional wisdom about Silicon Valley was that it leaned progressive. And by many measures (like donations by Big Tech employees to political candidates), the industry has been…
Is Gen Z Ready to Embrace AI? It’s Complicated.
Lessons from young people on AI’s thin line between productivity and interference.
Design and AI (Artificial Intelligence)
Jennifer Moosbrugger
A feminist guide to the past, present and future of computing
My first computer was gifted to me by my mother. A talented computer programmer herself, she preferred that I learn programming rather than memorise multiplication tables at school. Almost two decades later, as I researched and wrote my undergraduate thesis on the history of information systems and the effective role of computing, I realised something. All the literature that I was reading around the advent of AI, communication theory and cybernetics came from a limited demographic, most of whom were privileged…
What Have Language Models Learned?
By asking language models to fill in the blank, we can probe their understanding of the world.
Piercing the Algorithmic Veil
“Piercing the corporate veil” is when there is a legal decision made to hold a company’s shareholders or directors responsible for the…
WePresent | Writer James Bridle explores creativity and AI
Writer James Bridle explores advancements in AI and how humankind's collaboration with machines could make for a fascinating creative future
AI Takeoff - LessWrong
AI Takeoff refers to the process of an artificial general intelligence going from a certain threshold of capability (often discussed as "human level") to being super-intelligent and capable enough to control the fate of civilization.
There has been much debate about whether AI takeoff is more likely to be "slow" or "fast".
AI takeoff is sometimes casually referred to as AI FOOM.
LoRA: Low-Rank Adaptation of Large Language Models
An important paradigm of natural language processing consists of large-scale
pre-training on general domain data and adaptation to particular tasks or
domains. As we pre-train larger models, full fine-tuning, which retrains all
model parameters, becomes less feasible. Using GPT-3 175B as an example --
deploying independent instances of fine-tuned models, each with 175B
parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or
LoRA, which freezes the pre-trained model weights and injects trainable rank
decomposition matrices into each layer of the Transformer architecture, greatly
reducing the number of trainable parameters for downstream tasks. Compared to
GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable
parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA
performs on-par or better than fine-tuning in model quality on RoBERTa,
DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher
training throughput, and, unlike adapters, no additional inference latency. We
also provide an empirical investigation into rank-deficiency in language model
adaptation, which sheds light on the efficacy of LoRA. We release a package
that facilitates the integration of LoRA with PyTorch models and provide our
implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at
https://github.com/microsoft/LoRA.
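The core mechanism in the abstract fits in a few lines: keep the pre-trained weight W0 frozen and learn only a low-rank update B·A added on top, with B initialised to zero so training starts exactly at the pre-trained model. A minimal NumPy sketch (not the released PyTorch package; the layer size, rank, and alpha values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    """Frozen linear layer plus a trainable low-rank update.

    Effective weight: W0 + (alpha / r) * B @ A, where only A (r x d_in)
    and B (d_out x r) would be trained; W0 stays frozen.
    """
    def __init__(self, w0, r=4, alpha=8):
        d_out, d_in = w0.shape
        self.w0 = w0                                # frozen pre-trained weights
        self.A = rng.normal(0, 0.01, (r, d_in))     # trainable, small random init
        self.B = np.zeros((d_out, r))               # trainable, zero init: update starts at 0
        self.scale = alpha / r

    def forward(self, x):
        # x: (batch, d_in) -> (batch, d_out)
        return x @ (self.w0 + self.scale * self.B @ self.A).T

    def trainable_parameters(self):
        return self.A.size + self.B.size

d_in = d_out = 1024
layer = LoRALinear(rng.normal(size=(d_out, d_in)), r=4)
full = d_in * d_out
print(layer.trainable_parameters(), full)  # 8192 vs 1048576: ~128x fewer
```

Because B starts at zero, the adapted layer initially reproduces the frozen layer exactly, and at deployment the product B·A can be merged into W0, which is why LoRA adds no inference latency.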