Dario Amodei — Machines of Loving Grace
I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.
I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides.
the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.
Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires.
The five categories I am most excited about are:
- Biology and physical health
- Neuroscience and mental health
- Economic development and poverty
- Peace and governance
- Work and meaning
We could summarize this as a “country of geniuses in a datacenter”.
you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.
I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.
in science many experiments are often needed in sequence, each learning from or building on the last. All of this means that the speed at which a major project—for example developing a cancer cure—can be completed may have an irreducible minimum that cannot be decreased further even as intelligence continues to increase.
Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.
Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks). Many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people’s willingness to change their habits, or the behavior of governments. Examples of advances that work well in a technical sense, but whose impact has been substantially reduced by regulations or misplaced fears, include nuclear power, supersonic flight, and even elevators.
Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute). The key question is how fast it all happens and in what order.
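One toy way to make the "marginal returns to intelligence" question concrete is a production function in which intelligence and a complementary factor (experiment throughput, hardware, regulatory approval) are both inputs. The sketch below is purely illustrative; the Leontief (min) form and all numbers are my assumptions, not the essay's:

```python
# Illustrative sketch of "marginal returns to intelligence" when a
# complementary factor is the bottleneck. The functional form and the
# numbers are assumptions for illustration only.

def output(intelligence: float, bottleneck: float) -> float:
    """Progress is capped by the scarcer input (e.g. experiment
    throughput), not by intelligence alone."""
    return min(intelligence, bottleneck)

def marginal_return(intelligence: float, bottleneck: float,
                    eps: float = 1e-6) -> float:
    """Finite-difference marginal product of intelligence."""
    return (output(intelligence + eps, bottleneck)
            - output(intelligence, bottleneck)) / eps

print(marginal_return(1.0, 5.0))   # ~1.0: below the bottleneck, smarter helps
print(marginal_return(10.0, 5.0))  # ~0.0: past it, extra intelligence sits idle
# "Routing around" a constraint means spending intelligence to raise the
# bottleneck itself (e.g. automating lab work), restoring positive returns.
```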
I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.
CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the 80’s, but it took another 25 years for people to realize it could be repurposed for general gene editing. They also are often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity.
there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example). The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward.
·darioamodei.com·
AI and problems of scale — Benedict Evans
Scaling technological abilities can itself represent a qualitative change, where a difference in degree becomes a difference in kind, requiring new ways of thinking about ethical and regulatory implications. These are usually a matter of social, cultural, and political considerations rather than purely technical ones
what if every police patrol car had a bank of cameras that scan not just every number plate but every face within a hundred yards against a national database of outstanding warrants? What if the cameras in the subway do that? All the connected cameras in the city? China is already trying to do this, and we seem to be pretty sure we don’t like that, but why? One could argue that there’s no difference in principle, only in scale, but a change in scale can itself be a change in principle.
As technology advances, things that were previously possible only on a small scale can become practically feasible at a massive scale, which can change the nature and implications of those capabilities
Generative AI is now creating a lot of new examples of scale itself as a difference in principle. You could look at the emergent abuse of AI image generators, shrug, and talk about Photoshop: there have been fake nudes on the web for as long as there’s been a web. But when high-school boys can load photos of 50 or 500 classmates into an ML model and generate thousands of such images (let’s not even think about video) on a home PC (or their phone), that does seem like an important change. Faking people’s voices has been possible for a long time, but it’s new and different that any idiot can do it themselves. People have always cheated at homework and exams, but the internet made it easy and now ChatGPT makes it (almost) free. Again, something that has always been theoretically possible on a small scale becomes practically possible on a massive scale, and that changes what it means.
This might be a genuinely new and bad thing that we don’t like at all; or, it may be new and we decide we don’t care; we may decide that it’s just a new (worse?) expression of an old thing we don’t worry about; and, it may be that this was indeed being done before, even at scale, but somehow doing it like this makes it different, or just makes us more aware that it’s being done at all. Cambridge Analytica was a hoax, but it catalysed awareness of issues that were real.
As new technologies emerge, there is often a period of ambivalence and uncertainty about how to view and regulate them, as they may represent new expressions of old problems or genuinely novel issues.
·ben-evans.com·
How we use generative AI tools | Communications | University of Cambridge
The ability of generative AI tools to analyse huge datasets can also be used to help spark creative inspiration. This can help us if we’re struggling for time or battling writer’s block. For example, if a social media manager is looking for ideas on how to engage alumni on Instagram, they could ask ChatGPT for suggestions based on recent popular content. They could then pick the best ideas from ChatGPT’s response and adapt them. We may use these tools in a similar way to how we ask a colleague for an idea on how to approach a creative task.
We may use these tools in a similar way to how we use search engines for researching topics and will always carefully fact-check before publication.
we will not publish any press releases, articles, social media posts, blog posts, internal emails or other written content that is 100% produced by generative AI. We will always apply brand guidelines, fact-check responses, and re-write in our own words.
We may use these tools to make minor changes to a photo to make it more usable without changing the subject matter or original essence. For example, if a website manager needs a photo in a landscape ratio but only has one in a portrait ratio, they could use Photoshop’s inbuilt AI tools to extend the background of the photo to create an image with the correct dimensions for the website.
·communications.cam.ac.uk·
Announcing iA Writer 7
New features in iA Writer discern authorship between human and AI writing, and encourage making human changes to text pasted from AI.
With iA Writer 7 you can manually mark ChatGPT’s contributions as AI text. AI text is greyed out. This allows you to separate and control what you borrow and what you type. By splitting what you type and what you pasted, you can make sure that you speak your mind with your voice, rhythm and tone.
As a dialog partner AI makes you think more and write better. As a ghost writer it takes over and you lose your voice. Yet, sometimes it helps to paste its replies and notes. And if you want to use that information, you rewrite it to make it your own. So far, in traditional apps we are not able to easily see what we wrote and what we pasted from AI. iA Writer lets you discern your words from what you borrowed as you write on top of it. As you type over the AI-generated text you can see it becoming your own. We found that, with the exception of some generic pronouns and common verbs like “to have” and “to be”, most texts profit from a full rewrite.
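One plausible way to picture the bookkeeping behind such a feature is tracking authorship per span of text and reclassifying a span when the writer types over it. This is a speculative sketch, not iA Writer's actual implementation; every name and data structure here is invented for illustration:

```python
# Speculative sketch of span-based authorship tracking, in the spirit of
# iA Writer 7's feature. Not the app's actual implementation.
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    author: str  # "human" or "ai"

class Draft:
    def __init__(self):
        self.spans: list[Span] = []

    def type_text(self, text: str) -> None:
        """Text the writer types is marked as human-authored."""
        self.spans.append(Span(text, "human"))

    def paste_ai(self, text: str) -> None:
        """Pasted AI output is marked so the editor can grey it out."""
        self.spans.append(Span(text, "ai"))

    def rewrite(self, index: int, new_text: str) -> None:
        """Typing over an AI span reclassifies it as the writer's own words."""
        self.spans[index] = Span(new_text, "human")

    def render(self) -> str:
        # A real editor would grey out AI spans; here we just bracket them.
        return "".join(s.text if s.author == "human" else f"[AI: {s.text}]"
                       for s in self.spans)

draft = Draft()
draft.type_text("My take: ")
draft.paste_ai("writing with AI will become commonplace.")
print(draft.render())  # the pasted sentence shows up bracketed (greyed out)
draft.rewrite(1, "writing with AI is becoming as everyday as spellcheck.")
print(draft.render())  # after the rewrite, it reads as the writer's own
```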
we believe that using AI for writing will likely become as common as using dishwashers, spellcheckers, and pocket calculators. The question is: How will it be used? Like spell checkers, dishwashers, chess computers and pocket calculators, writing with AI will be tied to varying rules in different settings.
We suggest using AI not to replace our thinking but for writing in dialogue. Don’t use it as a ghost writer, because why should anyone bother to read what you didn’t write? Use it as a writing companion. It comes with a ChatUI, so ask it questions and let it ask you questions about what you write. Use it to think better; don’t become a vegetable.
·ia.net·
AI Alignment in the Design of Interactive AI: Specification Alignment, Process Alignment, and Evaluation Support
This paper maps concepts from AI alignment onto a basic three-step interaction cycle, yielding a corresponding set of alignment objectives: 1) specification alignment: ensuring the user can efficiently and reliably communicate objectives to the AI, 2) process alignment: providing the ability to verify and optionally control the AI's execution process, and 3) evaluation support: ensuring the user can verify and understand the AI's output.
the notion of a Process Gulf, which highlights how differences between human and AI processes can lead to challenges in AI control.
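As a thumbnail of how the three objectives sit in one interaction loop, here is a schematic sketch; the class and method names are invented for illustration (the paper proposes design objectives, not an API):

```python
# Schematic sketch mapping the paper's three alignment objectives onto a
# basic three-step interaction cycle. Interfaces are invented, not the
# paper's; a real system would implement each step with real UI and models.

class StubAssistant:
    """Placeholder standing in for any interactive AI system."""
    def elicit_specification(self, goal: str) -> str:
        return f"spec({goal})"                    # restated goal, clarifying questions
    def propose_plan(self, spec: str) -> list[str]:
        return [f"step 1 for {spec}", "step 2"]   # inspectable intermediate steps
    def execute(self, plan: list[str]) -> str:
        return f"result of {len(plan)} steps"
    def explain(self, result: str) -> str:
        return f"evidence and provenance for {result}"

def interaction_cycle(user_goal: str, ai: StubAssistant):
    # 1) Specification alignment: the user can efficiently and reliably
    #    communicate objectives to the AI.
    spec = ai.elicit_specification(user_goal)
    # 2) Process alignment: the user can verify, and optionally control, how
    #    the AI executes; the "Process Gulf" is the mismatch between human
    #    and AI ways of carrying out the same task.
    plan = ai.propose_plan(spec)
    result = ai.execute(plan)
    # 3) Evaluation support: the user can verify and understand the output.
    return result, ai.explain(result)

print(interaction_cycle("summarize my notes", StubAssistant()))
```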
·arxiv.org·
Grammy Chief Harvey Mason Clarifies New AI Rule: We’re Not Giving an Award to a Computer
The full wording of the ruling follows: The GRAMMY Award recognizes creative excellence. Only human creators are eligible to be submitted for consideration for, nominated for, or win a GRAMMY Award. A work that contains no human authorship is not eligible in any Categories. A work that features elements of A.I. material (i.e., material generated by the use of artificial intelligence technology) is eligible in applicable Categories; however: (1) the human authorship component of the work submitted must be meaningful and more than de minimis; (2) such human authorship component must be relevant to the Category in which such work is entered (e.g., if the work is submitted in a songwriting Category, there must be meaningful and more than de minimis human authorship in respect of the music and/or lyrics; if the work is submitted in a performance Category, there must be meaningful and more than de minimis human authorship in respect of the performance); and (3) the author(s) of any A.I. material incorporated into the work are not eligible to be nominees or GRAMMY recipients insofar as their contribution to the portion of the work that consists of such A.I. material is concerned. De minimis is defined as lacking significance or importance; so minor as to merit disregard.
the human portion of the composition, or the performance, is the only portion that can be awarded or considered for a Grammy Award. So if an AI modeling system or app built a track — ‘wrote’ lyrics and a melody — that would not be eligible for a composition award. But if a human writes a track and AI is used to voice-model, or create a new voice, or use somebody else’s voice, the performance would not be eligible, but the writing of the track and the lyric or top line would be absolutely eligible for an award.
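Read as logic, the rule is a per-category predicate: the work needs some human authorship overall, and the human contribution must be more than de minimis in the specific component the category judges. A hypothetical encoding, purely to make that structure explicit (the function and its inputs are invented; the real test is human judgment):

```python
# Hypothetical encoding of the eligibility rule quoted above. Invented for
# illustration; in practice "meaningful and more than de minimis" is a
# human judgment call, not a boolean input.

def grammy_eligible(has_any_human_authorship: bool,
                    human_part_meaningful_in_category: bool) -> bool:
    # Clauses (1) and (2): the work must have human authorship at all, and
    # that authorship must be meaningful in the component this category
    # judges (music/lyrics for songwriting, the performance for performance).
    return has_any_human_authorship and human_part_meaningful_in_category

# Mason's example: a human-written track whose vocal is AI voice-modeled.
print(grammy_eligible(True, True))   # songwriting category: eligible
print(grammy_eligible(True, False))  # performance category: not eligible
```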
·variety.com·
Inside the AI Factory
Over the past six months, I spoke with more than two dozen annotators from around the world, and while many of them were training cutting-edge chatbots, just as many were doing the mundane manual labor required to keep AI running. There are people classifying the emotional content of TikTok videos, new variants of email spam, and the precise sexual provocativeness of online ads. Others are looking at credit-card transactions and figuring out what sort of purchase they relate to or checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt. Humans are correcting customer-service chatbots, listening to Alexa requests, and categorizing the emotions of people on video calls. They are labeling food so that smart refrigerators don’t get confused by new packaging, checking automated security cameras before sounding alarms, and identifying corn for baffled autonomous tractors.
·nymag.com·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence to the creation and post-production of images, it seems questionable whether the resulting visualisations can still be considered ‘photographs’ in a classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even amateurs with only basic knowledge of the craft could understand themselves as the authors of their images. We identify a legitimation crisis in the current usage of the term. This paper is an invitation to consider Synthography as a term for a new genre of image production based on AI, observing its current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·
AI Is Tearing Wikipedia Apart
While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, which, left unchecked, creates a feedback loop of potentially biased information.
·vice.com·
The $2 Per Hour Workers Who Made ChatGPT Safer
The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”
This reminds me of [[On the Social Media Ideology - Journal 75 September 2016 - e-flux]]:
> Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.
In the day-to-day work of data labeling in Kenya, sometimes edge cases would pop up that showed the difficulty of teaching a machine to understand nuance. One day in early March last year, a Sama employee was at work reading an explicit story about Batman’s sidekick, Robin, being raped in a villain’s lair. (An online search for the text reveals that it originated from an online erotica site, where it is accompanied by explicit sexual imagery.) The beginning of the story makes clear that the sex is nonconsensual. But later—after a graphically detailed description of penetration—Robin begins to reciprocate. The Sama employee tasked with labeling the text appeared confused by Robin’s ambiguous consent, and asked OpenAI researchers for clarification about how to label the text, according to documents seen by TIME. Should the passage be labeled as sexual violence, she asked, or not? OpenAI’s reply, if it ever came, is not logged in the document; the company declined to comment. The Sama employee did not respond to a request for an interview.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document.
I haven't finished watching [[Severance]] yet, but this labeling system reminds me of the way they have to process and filter data that is obfuscated as meaningless numbers. In the show, employees have to "sense" whether the numbers are "bad," which they somehow can, and sort them into the trash bin.
But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”
·time.com·