Design and AI (artificial intelligence)

A feminist guide to the past, present and future of computing
My first computer was gifted to me by my mother. A talented computer programmer herself, she preferred that I learn programming rather than memorise multiplication tables at school. Almost two decades later, as I researched and wrote my undergraduate thesis on the history of information systems and the effective role of computing, I realised something. All the literature that I was reading around the advent of AI, communication theory and cybernetics came from a limited demographic; most of whom were priv…
What Have Language Models Learned?
By asking language models to fill in the blank, we can probe their understanding of the world.
Piercing the Algorithmic Veil
“Piercing the corporate veil” is when there is a legal decision made to hold a company’s shareholders or directors responsible for the…
WePresent | Writer James Bridle explores creativity and AI
Writer James Bridle explores advancements in AI and how humankind's collaboration with machines could make for a fascinating creative future
AI Takeoff - LessWrong
AI Takeoff refers to the process of an artificial general intelligence going from a certain threshold of capability (often discussed as "human level") to being super-intelligent and capable enough to control the fate of civilization.
There has been much debate about whether AI takeoff is more likely to be "slow" or "fast".
AI takeoff is sometimes casually referred to as AI FOOM.
LoRA: Low-Rank Adaptation of Large Language Models
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example: deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.
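The parameter savings are easy to see in a toy sketch. The idea: instead of updating a frozen d×k weight matrix W, LoRA learns two small factors B (d×r) and A (r×k) and computes W + (α/r)·BA in the forward pass. This is a minimal illustration in plain NumPy with arbitrary dimensions, not the paper's implementation (which applies the trick to Transformer attention weights):

```python
import numpy as np

# Toy LoRA sketch: a frozen weight matrix plus a trainable low-rank update.
# Dimensions are chosen arbitrarily for illustration.
d, k, r = 512, 512, 8              # output dim, input dim, LoRA rank (r << min(d, k))
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))    # pre-trained weight, frozen
B = np.zeros((d, r))               # trainable; initialised to zero so BA = 0 at start
A = rng.standard_normal((r, k))    # trainable
alpha = 16                         # scaling hyperparameter

def lora_forward(x):
    # Equivalent to (W + (alpha / r) * B @ A) @ x, but computed without
    # ever materialising the full d x k update matrix.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(k)
y = lora_forward(x)                # equals W @ x at init, since B is zero

full_params = d * k                # parameters a full fine-tune would train
lora_params = r * (d + k)          # parameters LoRA trains instead
print(full_params, lora_params)    # 262144 vs 8192: ~32x fewer in this toy case
```

Because only B and A are trained, each downstream task needs r(d+k) extra parameters rather than a full copy of the model, which is where the abstract's 10,000× reduction comes from at GPT-3 scale.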
Leaked Google document: “We Have No Moat, And Neither Does OpenAI”
The premise of the paper is that while OpenAI and Google continue to race to build the most powerful language models, their efforts are rapidly being eclipsed by the work happening in the open source community.
Concepts Portal around AI - LessWrong
A community blog devoted to refining the art of rationality
The Public Stack: a Model to Incorporate Public Values in Technology - Amsterdam Smart City
*Public administrators, public tech developers, and public service providers face the same challenge: how to develop and use technology in accordance with public values like openness, fairness, and inclusivity? The question is urgent as we continue to rely upon proprietary technology that is developed within a surveillance-capitalist context and is incompatible with the goals and missions of our democratic institutions. This problem has been a driving force behind the development of the [public stack](https://publicstack.net/), a conceptual model developed by [Waag](https://waag.org/en/) through ACROSS and other projects, which roots technical development in public values.*

The idea behind the public stack is simple: there are unseen [layers](https://publicstack.net/layers/) behind the technology we use, including hardware, software, design processes, and business models. All of these layers affect the relationship between people and technology – as consumers, subjects, or (as the public stack model advocates) citizens and human beings in a democratic society. The public stack challenges developers, funders, and other stakeholders to develop technology based on shared public values by utilising participatory design processes and open technology. The goal is to position people and the planet as democratic agents, and as more equal stakeholders in deciding how technology is developed and implemented.

ACROSS is a Horizon 2020 European project that develops open-source resources to protect digital identity and personal data across European borders. In this context, Waag is developing the public stack model into a service design approach – a resource to help others reflect upon and improve the extent to which their own ‘stack’ reflects public values. In late 2022, Waag developed a method using the public stack as a lens to prompt reflection amongst developers.

A more extensive public stack reflection process is now underway in ACROSS; resources to guide other developers through this same process will be made available later in 2023. The public stack is a useful model for anyone involved in technology, whether as a developer, funder, active, or even passive user. In the case of ACROSS, its adoption helped project partners to implement decentralised privacy-by-design technology based on values like privacy and user control. The model lends itself just as well to other use cases:

* Municipalities can use the public stack to maintain democratic approaches to technology development and adoption in cities.
* Developers of both public and private tech can use the public stack to reflect on which values are embedded in their technology.
* Researchers can use the public stack as a way to ethically assess technology.
* Policymakers can use the public stack as a way to understand, communicate, and shape the context in which technology development and implementation occurs.

***Are you interested in using the public stack in your own project, initiative, or development process? We’d love to hear about it. Let us know more by emailing us at publicstack@waag.org.***
Rethinking Design Tools in the Age of Machine Learning
The creative reach of the individual is expanding.
Homogeneity vs. heterogeneity in AI takeoff scenarios - AI Alignment Forum
Special thanks to Kate Woolverton for comments and feedback. …
2022 Expert Survey on Progress in AI
Collected data and analysis from a large survey of machine learning researchers.
Let’s think about slowing down AI - LessWrong
AVERTING DOOM BY NOT BUILDING THE DOOM MACHINE
If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build…
Creative Strategies for Algorithmic Resistance! — Cybernetic Forests.
The machines that nudge us can be nudged back: Here are some ideas on how to steer them.
Algorithmic Modernity: Mechanizing Thought and Action, 1500-2000
Abstract. Algorithmic Modernity explores key moments in the historical emergence of algorithmic practices and in the constitution of their credibility and authority.
Fear of AI is Profitable
Welcome to Butopia
AI and the American Smile
How AI misrepresents culture through a facial expression. "For the diversity of human expression to survive algorithmic hegemony."
Timnit Gebru Is Building a Slow AI Movement
Her new organization, DAIR, will raise the alarm about how AI is being deployed today and use AI to speak truth to power.
The stupidity of AI
The long read: Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous
@timnitGebru@dair-community.social on Mastodon on Twitter
The very first citation in this stupid letter is to our #StochasticParrots paper: "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]" EXCEPT… — @timnitGebru@dair-community.social on Mastodon (@timnitGebru) March 30, 2023
@emilymbender@dair-community.social on Mastodon on Twitter
Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with #Aihype. Here's a quick rundown. — @emilymbender@dair-community.social on Mastodon (@emilymbender) March 29, 2023
1,100+ notable signatories just signed an open letter asking 'all AI labs to immediately pause for at least 6 months'
Signatories including Elon Musk, Steve Wozniak, and Tristan Harris are asking AI labs to pause building AI more powerful than GPT-4.
The Hypermarket of Information
Happy New Year… Sorry for being away for two months. I have been working hard and absorbing so much stuff that I’m excited to...
The Problem With AI Is the Problem With Capitalism
Workers’ fear of new artificial intelligence technology makes sense: that technology has the potential to eliminate their jobs. But if we didn’t live under capitalism, AI could be used to liberate us from drudgery rather than hurl us into poverty.
ChatGPT and the Magnet of Mediocrity
It has only been a little under four months since ChatGPT came out, and yet I already feel super behind in publishing this story. I guess…
The Meaning of “Vision” and “Image” in the Age of AI - Feral File - Close-Ups
On the occasion of For Your Eyes Only, curator Domenico Quaranta discusses machine vision, operational images, and the future of human visual culture with Antonio Somaini, professor in film, media, and visual culture theory at the Université Sorbonne Nouvelle – Paris 3.
Machine Bias — MODEM
Machine Bias explores the unconscious bias built into AI systems and maps potential strategies for a more inclusive tech landscape.
Participatory Design of AI Systems: Opportunities and Challenges Across Diverse Users, Relationships, and Application Domains | Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems
Power to the People? Opportunities and Challenges for Participatory AI | Equity and Access in Algorithms, Mechanisms, and Optimization