Effective altruism as the most exciting cause in the world - EA Forum
I feel that one thing that effective altruists haven't sufficiently capitalized on in their marketing is just how amazingly exciting the whole thing is. There's Holden Karnofsky's post on excited alt…
Here's an extra bonus. At the moment, the core of effective altruism is formed of smart, driven, and caring people from all around the world. When you become an effective altruist and start participating, you are joining a community of some of the most interesting people on Earth.
Best of all? This isn't just some fuzzy feelgood thing where you're taking things on faith. People in the community are constantly debating these things, looking for ways to improve and to become even better at doing good.
If you spot a crucial flaw in someone else's argument, or suggest a critical improvement, it may impact the effectiveness of all the other effective altruists who are doing or thinking about doing something related.
The average person working in an ordinary job can potentially save several lives a year, just by donating a measly 10% of his income and doing literally nothing else altruistic! That would already be amazing by itself.
·forum.effectivealtruism.org·
The highest-impact career paths our research has identified so far
The highest-impact career for you is the one that allows you to make the biggest contribution to solving one of the world's most pressing problems. On this page, we list some broad categories of impactful careers, followed by about 30 more specific and unusual career paths we think are especially impactful, such as long-term AI policy research. The lists are based on 10 years of research and experience advising people, and represent the careers it seems to us will be most impactful over the long run if you get started on them now — though of course we can't be sure what the future holds. You can use the lists on this page to get new ideas for impactful careers and make sure you haven't missed a great option.
·80000hours.org·
S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017) – Center on Long-Term Risk
This post is based on notes for a talk I gave at EAG Boston 2017. I talk about risks of severe suffering in the far future, or s-risks. Reducing these risks is the main focus of the Foundational Research Institute, the EA research group that I represent.
“S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.”
I’ll argue that s-risks are not much more unlikely than AI-related extinction risk.
You may think “this is absurd”: we can’t even send humans to Mars, so why worry about suffering on cosmic scales? This was certainly my immediate, intuitive reaction when I first encountered related concepts. But as EAs, we should be wary of taking such intuitive, ‘system 1’ reactions at face value. We are aware that a large body of psychological research in the “heuristics and biases” approach suggests that our intuitive probability estimates are often driven by how easily we can recall a prototypical example of the event we’re considering. For types of events that have no precedent in history, we can’t recall any prototypical example, and so we will systematically underestimate the probability of such events if we aren’t careful.
Artificial sentience refers to the idea that the capacity to have subjective experience – and in particular, the capacity to suffer – is not limited to biological animals. While there is no universal agreement on this, most contemporary views in the philosophy of mind imply that artificial sentience is possible in principle. And for the particular case of brain emulations, researchers have outlined a concrete roadmap, identifying milestones and remaining uncertainties.
s-risks involving artificial sentience and “AI gone wrong” have been discussed by Bostrom under the term mindcrime.
To conclude: to be worried about s-risk, we don’t need to posit any new technology or any qualitatively new feature above what is already being considered by the AI risk community. So I’d argue that s-risks are not much more unlikely than AI-related x-risks. Or at the very least, if someone is worried about AI-related x-risk but not s-risk, the burden of proof is on them.
·longtermrisk.org·
Preventing an AI-related catastrophe - Problem profile - EA Forum
We (80,000 Hours) have just released our longest and most in-depth problem profile — on reducing existential risks from AI. …
Our overall view: Recommended (highest priority). This is among the most pressing problems to work on.
Overall, our current take is that AI development poses a bigger threat to humanity’s long-term flourishing than any other issue we know of.
Around $50 million was spent on reducing the worst risks from AI in 2020 – billions were spent advancing AI capabilities.[3] [4] While we are seeing increasing concern from AI experts, there are still only around 300 people working directly on reducing the chances of an AI-related existential catastrophe.[2] Of these, it seems like about two-thirds are working on technical AI safety research, with the rest split between strategy (and policy) research and advocacy.
·forum.effectivealtruism.org·
The case for taking AI seriously as a threat to humanity
Why some people fear AI, explained.
Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”
Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.
For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.
It’s rare, though, to find a top researcher in AI who thinks that general AI is impossible. Instead, the field’s luminaries tend to say that it will happen someday — but probably a day that’s a long way off.
I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
In his 2009 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”
In 2014, Nick Bostrom wrote a book, Superintelligence, explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”
But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.
Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).
When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.
·vox.com·
Why I am probably not a longtermist - EA Forum
tl;dr: I am much more interested in making the future good, as opposed to long or big, as I neither think the world is great now nor am convinced it will be in the future. I am uncertain whether ther…
What I value less than a total utilitarian is bringing happy people into existence who would not have existed otherwise. This means I am not too fussed about humanity’s failure to become much bigger and spread to the stars. While creating happy people is valuable, I view it as much less valuable than making sure people are not in misery. Therefore I am not extremely concerned about the lost potential from extinction risks
Also, I expect many world improvements to peter out before they become negative. But I am worried that some will not. For example, I think increased hedonism and individualism have both been forces for good, but if overdone I would consider them to make the world worse, and it seems to me we are either almost or already there.
On a related note, while this is not an argument which deters me from longtermism, some longtermists looking forward to futures which I consider to be worthless (e.g. the hedonium shockwave) puts me off. Culturally many longtermists seem to favour more hedonism, individualism and techno-utopianism than I would like.
I am unconvinced that people can reliably have a positive impact which persists further into the future than 100 years, maybe within a factor of 3. But there is one important exception: if we have the ability to prevent or shape a “lock-in” scenario within this timeframe. By lock-in I mean anything which humanity can never escape from. Extinction risks are an obvious example; another is permanent civilisational collapse.
·forum.effectivealtruism.org·
Utilitronium - LessWrong
Utilitronium is relatively homogeneous matter optimized for maximum utility (like computronium is optimized for maximum computing power). For a paperclip maximizer, utilitronium is paperclips. For more complex values, no homogeneous organization of matter will have optimal utility. Utilitronium shockwave is a process of converting all matter in the universe into utilitronium as quickly as possible, which would look like a shockwave of utilitronium spreading outwards from the point of origin, presumably nearly at the speed of light. See also: Utility function; Hedon, Utils; Paperclip maximizer, Complexity of value; Hedonium, the hedonistic utilitarian version of utilitronium.
·lesswrong.com·
What We Owe the Future, Chapter 1 - EA Forum
What We Owe The Future, by William MacAskill. Chapter One: The Silent Billions …
Future people count. There could be a lot of them. We can make their lives  go better.
By abandoning the tyranny  of the present over the future, we can act as trustees—helping to create a  flourishing world for generations to come.
These ideas are common sense. A popular proverb says, “A society grows great when old men plant trees under whose shade they will never sit.” When we dispose of radioactive waste, we don’t say, “Who cares if this poisons people centuries from now?” Similarly, few of us who care about climate change or pollution do so solely for the sake of people alive today.
Oren Lyons, a faithkeeper for the Onondaga and Seneca nations of the Iroquois Confederacy, phrases this in terms of a “seventh-generation” principle, saying, “We . . . make every decision that we make relate to the welfare and well-being of the seventh generation to come. . . . We consider: will this be to the benefit of the seventh generation?”
Suppose that we only last as long as the typical mammalian species—that is, around one million years. Also assume that our population continues at its current size. In that case, there would be eighty trillion people yet to come; future people would outnumber us ten thousand to one.
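A rough check of that arithmetic (assuming roughly one lifetime per century at a constant population of about eight billion; these assumptions are inferred from the quoted figures, not taken verbatim from the book):

$$8\times10^{9}\ \text{people per century}\times\frac{10^{6}\ \text{years}}{100\ \text{years per lifetime}}=8\times10^{13}\approx 80\ \text{trillion},\qquad\frac{8\times10^{13}}{8\times10^{9}}=10^{4}.$$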
Future populations might be much smaller or much larger than they are today. But if the future population is smaller, it can be smaller by eight billion at most—the size of today’s population. In contrast, if the future population is bigger, it could be much bigger.
Decarbonisation is a proof of concept for longtermism.
·forum.effectivealtruism.org·
All Possible Views About Humanity's Future Are Wild - EA Forum
Audio version is here. Summary: In a series of posts starting with this one, I'm going to argue that the 21st century could see our civilization develop technologies allowing rapid expansion…
I'm going to argue that the 21st century could see our civilization develop technologies allowing rapid expansion throughout our currently-empty galaxy. And thus, that this century could determine the entire future of the galaxy for tens of billions of years, or more.
The ~1.4 billion years to complete space expansion is based on the distance to the outer edge of the Milky Way, divided by the speed of a fast existing human-made spaceship
Maybe humanity is destined to destroy itself before it reaches this stage. But note that if the way we destroy ourselves is via misaligned AI,[7] it would be possible for AI to build its own technology and spread throughout the galaxy, which still seems in line with the spirit of the above sections. In fact, it highlights that how we handle AI this century could have ramifications for many billions of years. So humanity would have to go extinct in some way that leaves no other intelligent life (or intelligent machines) behind.
But space expansion seems feasible, and our galaxy is empty. These two things seem in tension. A similar tension - the question of why we see no signs of extraterrestrials, despite the galaxy having so many possible stars they could emerge from - is often discussed under the heading of the Fermi Paradox.
·forum.effectivealtruism.org·
This Can't Go On - EA Forum
Audio version available at Cold Takes (or search Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio") …
The world can't just keep growing at this rate indefinitely. We should be ready for other possibilities: stagnation (growth slows or ends), explosion (growth accelerates even more, before hitting its limits), and collapse (some disaster levels the economy).
This growth has gone on for longer than any of us can remember, but that isn't very long in the scheme of things - just a couple hundred years, out of thousands of years of human civilization. It's a huge acceleration, and it can't go on all that much longer.
If this holds up, then 8200 years from now, the economy would be about 3×10^70 times its current size.
So if the economy were 3×10^70 times as big as today's, and could only make use of 10^70 (or fewer) atoms, we'd need to be sustaining multiple economies as big as today's entire world economy per atom.
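A quick sanity check on those two figures (assuming the roughly 2% annual growth rate the post extrapolates; the rate is inferred from the quoted numbers, not stated in this excerpt):

$$1.02^{8200}=e^{8200\ln 1.02}\approx e^{162}\approx 3\times10^{70},\qquad\frac{3\times10^{70}\ \text{world economies}}{10^{70}\ \text{atoms}}\approx 3\ \text{present-day world economies per atom}.$$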
·forum.effectivealtruism.org·
Why 'Mutual Aid'? – social solidarity, not charity
Peter Kropotkin's most famous work advancing a belief in the depth of our connection to each other is titled 'Mutual Aid: A Factor in Evolution'.
'Mutual aid' has suddenly entered the collective consciousness as we seek ways to support our friends and neighbours amidst a global pandemic. Alexandria Ocasio-Cortez has tweeted about it, The New York Times has discussed "so called mutual-aid" networks in major cities, and mutual aid workshops have spread throughout the United States.
"Social solidarity - not charity," might be the slogan response, but conceptualizing the difference is not easy.
Fundamentally, mutual aid is about building "bottom-up" structures of cooperation, rather than relying on the state or wealthy philanthropists to address our needs. It emphasizes horizontal networks of solidarity rather than "top down" solutions, networks that flow in both directions and sustain the life of a community.
Mutual aid is a concept born from a curious hybrid of Russian evolutionary theory and anarchist thought. It is, specifically, an idea associated with Peter Kropotkin, a well-known anarchist-socialist thinker who was also a naturalist, geographer, ethnographer and advocate of scientific thought. Kropotkin, along with other Russian scientists, developed mutual aid in response to the profound impact of Darwin's evolutionary theory and the focus on competition among his adherents.
Spencer believed in the progressive evolution of not only organisms but also human societies and helped to popularize evolutionary theory as a social, and not only biological, phenomenon. Humans are, after all, an element of nature.
Kropotkin, however, was deeply concerned about an interpretation of evolutionary theory that emphasized hostility and competition, especially when extended, as it still often is, to the social and political lives of human beings. He saw that "survival of the fittest" would inevitably be used to justify poverty, colonialism, gender inequality, racism and war as "natural" processes – innate and immutable expressions of our very genetic being.
Instead of this relentless competition, Kropotkin saw cooperation everywhere he looked: in colonies of ants, in the symbiotic behaviors of plants and animals, and in the practices of peasants in his own travels.
“the fittest are not the physically strongest, nor the cunningest, but those who learn to combine so as mutually to support each other, strong and weak alike, for the welfare of the community.”
·opendemocracy.net·
Sam Bankman-Fried Said He Would Give Away Billions. Broken Promises Are All That’s Left.
The FTX founder pledged to donate billions. His firm’s swift collapse wiped out his wealth and ambitious philanthropic endeavors.
“This is something of a tragedy for people who were hoping philanthropy could step up and fill the gap in addressing the catastrophic risks that government isn’t nimble enough to deal with,” Dr. Esvelt said.
Future Fund’s open call for ideas in February drew thousands of submissions. The application process was part of the appeal. The organization promised fast responses and encouraged risky projects. It attracted scientists pursuing interdisciplinary work outside their fields of expertise who were frustrated by the laborious nature of government funding. In a frequently-asked-questions page on its website, Future Fund offered this advice: “We tend to find that people think too small, rather than think too big.”
Its two largest public grants, of $15 million and $13.9 million, were awarded to effective altruism groups where Mr. MacAskill held roles.
Several grant recipients, including one affiliated with Mr. MacAskill, were still owed funds when FTX failed, according to people familiar with the matter.
·wsj.com·
Amia Srinivasan · Stop the Robot Apocalypse: The New Utilitarians · LRB 23 September 2015
Philosophers may talk about justice or rights, but they don’t often try to reshape the world according to their ideals...
Philosophers have a tendency to slip from sense into seeming absurdity: a defence of abortion ends up defending infanticide; an argument for vegetarianism turns into a call for the extermination of wild carnivores.
Their leader is William MacAskill, a 28-year-old lecturer at Oxford.
In 2011, MacAskill set up 80,000 Hours (the name refers to the number of hours the average person works over a lifetime), a charity that helps people make career choices with the aim of maximising social benefit; it raised eyebrows early on by advising graduates to become philanthropic bankers rather than NGO workers.
groups such as GiveWell (founded by two hedge-fund managers at around the same time as MacAskill and Ord started their work), The Life You Can Save (founded by the philosopher Peter Singer), Good Ventures (founded by the Facebook cofounder Dustin Moskovitz and his wife, Cari Tuna, who have pledged to give away most of their money), Animal Charity Evaluators (an 80,000 Hours spin-off) and the Open Philanthropy Project (a collaboration between GiveWell and Good Ventures).
To do that we need empirical research – research his organisations provide – into the amount of good created by various different charities, types of consumption, careers and so on. MacAskill proposes that ‘good’, here, can be understood roughly in terms of quality-adjusted life-years (Qalys), a unit that allows welfare economists to compare benefits of very different sorts. One Qaly is a single year of life lived at 100 per cent health. According to a standardised scale, a year as an Aids patient not on antiretrovirals is worth 0.5 Qalys; a year with Aids lived on antiretrovirals is worth 0.9 Qalys. A year of life for a blind person is worth 0.4 Qalys; a year of life as a non-blind, otherwise healthy person is worth 1 Qaly. (These numbers are based on self-reporting by Aids patients and blind people, which raises some obvious worries. For example, dialysis patients rate their lives at 0.56 Qalys – significantly higher than the 0.39 Qalys predicted by people who don’t need dialysis. Maybe this is because dialysis isn’t as bad as we think. Or maybe it’s because dialysis is so awful that you forget just how much better your life was without it.)
Thinking in terms of Qalys makes it possible to compare that which seemingly cannot be compared: blindness with Aids; increases in life expectancy with increases in life quality. Qalys free us from the specificity of people’s lives, giving us a universal currency for misery.
We must also think both marginally and counterfactually. The idea that value should be measured on the margin is familiar from economics; it’s what explains the fact that, say, heating repairmen make more money than childcare workers. Presumably childcare workers produce more total value than heating repairmen, but because the supply of childcare workers is greater than the supply of good repairmen, we will pay more for an additional repairman than an additional childcare worker. The average value of a childcare worker might be higher than the average value of a repairman, but the repairman has the greater marginal value. (Another way of putting this: coffee might be really important to you, but if you’ve already had three cups you’re probably not going to care as much about a fourth.)
The average doctor in the developed world helps save a lot of lives, but the marginal doctor – because the supply of doctors is large, and most of the life-saving work is already covered – doesn’t. The marginal doctor in the developing world has greater value, since the supply of doctors is lower there. MacAskill estimates that a doctor practising in a very poor country adds about a hundred times as much marginal value (measured in Qalys) as a doctor practising in the UK. (In general, MacAskill says, a pound spent in a poor country can do one hundred times more good than it can in a rich one, a heuristic he calls the ‘100x Multiplier’.)
Yet if you didn’t take that job, someone else probably would; they may not save quite as many lives as you, but they would save most of them. Meanwhile you could quit medicine, take a high-paying finance job and donate most of your salary each year to the most effective charities.
But don’t many lucrative careers have bad social effects? Up until recently MacAskill argued that such effects were morally irrelevant, again by counterfactual reasoning: if you didn’t take the banking job someone else would, so the harm would be done anyway. (In an academic paper published last year, he compares a philanthropic banker to Oskar Schindler, who provided munitions to the Nazis as a means of saving the lives of 1200 Jews; if Schindler hadn’t manufactured the arms, some other Nazi would have, without saving any Jewish lives.) More recently MacAskill and his team at 80,000 Hours have backed away from this ‘replaceability thesis’, conceding that it’s harder than they initially thought to evaluate the counterfactuals. For example, there’s good economic reason to think that going into banking really does increase the total number of bankers, and doesn’t simply change who does the banking. MacAskill says he no longer recommends that people go into banking, or at least not the parts of it that he thinks cause direct harm: creating risks that will be borne by unsuspecting taxpayers, or selling products that no properly informed person would buy. Instead 80,000 Hours now encourages people to take what it sees as morally neutral or positive jobs: quantitative hedge-fund trading, management consulting, technology start-ups.
If you want to improve animal welfare, it’s better to stop eating eggs than beef, since caged layer hens live worse lives than farmed cows, and because eating eggs consumes more animals than eating beef: the average American consumes 0.8 layer hens but only 0.1 beef cows per year.
The results of all this number-crunching are sometimes satisfyingly counterintuitive.
(I’m not saying it doesn’t work. Halfway through reading the book I set up a regular donation to GiveDirectly, one of the charities MacAskill endorses for its proven efficacy. It gives unconditional direct cash transfers to poor households in Uganda and Kenya.)
MacAskill is evidently comfortable with ways of talking that are familiar from the exponents of global capitalism: the will to quantify, the essential comparability of all goods and all evils, the obsession with productivity and efficiency, the conviction that there is a happy convergence between self-interest and morality, the seeming confidence that there is no crisis whose solution is beyond the ingenuity of man.
That he speaks in the proprietary language of the illness – global inequality – whose symptoms he proposes to mop up is an irony on which he doesn’t comment. Perhaps he senses that his potential followers – privileged, ambitious millennials – don’t want to hear about the iniquities of the system that has shaped their worldview. Or perhaps he thinks there’s no irony here at all: capitalism, as always, produces the means of its own correction, and effective altruism is just the latest instance.
Since effective altruism is committed to whatever would maximise the social good, it might for example turn out to support anti-capitalist revolution.
MacAskill describes how he helped an Oxford PPE student work out whether or not she should get into electoral politics. He calculates that historically, the odds of a politically ambitious Oxford PPE student becoming an MP have been one in thirty (he notes that this reflects ‘some disappointing facts about political mobility and equal representation in the UK’). Applying some conservative estimates of the resources an average MP gets to control, he prices the marginal expected value of the student’s running for Parliament at £8 million, which turns out to be high enough, compared with the expected value of other careers she might pursue, to justify the move into politics.
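The implicit structure is just expected value = probability × payoff; inverting the two quoted numbers gives a sense of the resource estimate being assumed (the figure below is an inference for illustration, not from the article):

$$\text{EV}\approx\frac{1}{30}\times V\approx £8\ \text{million}\;\Rightarrow\; V\approx £240\ \text{million of resources influenced over a parliamentary career}.$$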
What’s the expected marginal value of becoming an anti-capitalist revolutionary? To answer that you’d need to put a value and probability measure on achieving an unrecognisably different world – even, perhaps, on our becoming unrecognisably different sorts of people. It’s hard enough to quantify the value of a philanthropic intervention: how would we go about quantifying the consequences of radically reorganising society?
MacAskill seems to think there is no moral calculation that can’t be made to fit on the back of his envelope; any uncertainty we might have about precise values or probabilities can be priced into the model.
Do we really need a sophisticated model to tell us that we shouldn’t deal in subprime mortgages, or that the American prison system needs fixing, or that it might be worthwhile going into electoral politics if you can be confident you aren’t doing it solely out of self-interest? The more complex the problem effective altruism tries to address – that is, the more deeply it engages with the world as a political entity – the less distinctive its contribution becomes. Effective altruists, like everyone else, come up against the fact that the world is messy, and like everyone else who wants to make it better they must do what strikes them as best, without any final sense of what that might be or any guarantee that they’re getting it right.
A three-day conference, ‘Effective Altruism Global’, was held this summer at Google’s headquarters in Mountain View, California. While some of the sessions focused on the issues closest to MacAskill’s heart – cost-effective philanthropy, global poverty, career choice – much of it was dominated, according to Dylan Matthews, who was there and wrote about it for Vox, by talk of existential risks (or x-risks, as the community calls them).
Even if Bostrom’s 10^52 estimate has only a 1 per cent chance of being correct, the expected value of reducing an x-risk by one billionth of one billionth of a percentage point (that’s 0.0000000000000000001 per cent) is still a hundred billion times greater than the value of saving the lives of a billion people living now. So it turns out to be better to try to prevent some hypothetical x-risk, even with an extremely remote chance of being able to do so, than to help actual living people.
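Spelled out, the expected-value structure behind this kind of claim is (a sketch of the reasoning being described, not the article’s own arithmetic):

$$\text{EV of x-risk reduction}=N_{\text{future lives}}\times P(\text{estimate correct})\times\Delta p_{\text{extinction avoided}},$$

which stays astronomically large for $N=10^{52}$ even when $P$ and $\Delta p$ are both tiny, and that is what lets it swamp the value of saving $10^{9}$ lives directly.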
the one that effective altruists like to worry about most is the ‘intelligence explosion’: artificial intelligence taking over the world and destroying humanity. Their favoured solution is to invest more money in AI research. Thus the humanitarian logic of effective altruism leads to the conclusion that more money needs to be spent on computers: why invest in anti-malarial nets when there’s a robot apocalypse to halt?
one of the organisers of the Googleplex conference declared that ‘effective altruism could be the last social movement we ever need.’
MacAskill does not address the deep sources of global misery – international trade and finance, debt, nationalism, imperialism, racial and gender-based subordination, war, environmental degradation, corruption, exploitation of labour – or the forces that ensure its reproduction.
Effective altruism doesn’t try to understand how power works, except to better align itself with it. In this sense it leaves everything just as it is. This is no doubt comforting to those who enjoy the status quo – and may in part account for the movement’s success.
In 1972 Peter Singer published his paper ‘Famine, Affluence and Morality’, a classic of contemporary utilitarianism, in which he compares a Westerner who spends money on luxuries rather than donating it to the developing world to someone who walks by a drowning child rather than get his clothes muddy.
Effective altruism takes up the spirit of Singer’s argument but shields us from the full blast of its conclusion; moral indictment is transformed into an empowering investment opportunity.
MacAskill tells us that effective altruists – like utilitarians – are committed to doing the most good possible, but he also tells us that it’s OK to enjoy a ‘cushy lifestyle’, so long as you’re donating a lot to charity. Either effective altruism, like utilitarianism, demands that we do the most good possible, or it asks merely that we try to make things better. The first thought is genuinely radical, requiring us to overhaul our daily lives in ways unimaginable to most.
If effective altruism is simply in the business of getting us to be more effective when we try to help others, then it’s hard to object to it. But in that case it’s also hard to see what it’s offering in the way of fresh moral insight, still less how it could be the last social movement we’ll ever need.
How far should the effective altruist go with this logic? If you’re faced with the choice between spending a few hours consoling a bereaved friend, or earning some money to donate to an effective charity, the utilitarian calculus will tell you to do the latter.
You should stay and console your friend not because you’ve already met your do-gooding quota, but because it’s your friend that is in distress. This is also the reason you shouldn’t deal in subprime mortgages or make money from the exploitation of labour, even if the good effects would outweigh the bad: it’s your life, and it matters, morally speaking, what you do with it, and not just – as MacAskill suggests – what is done because of it.
That emphasis on ‘your’ is something that utilitarians often find conceptually mystifying, or at least a moral distraction.
If I were to give to the Fistula Foundation rather than to charities I thought were more effective, I would be privileging the needs of some people over others for emotional rather than moral reasons. That would be unfair to those I could have helped more. If I’d visited some other shelter in Ethiopia, or in any other country, I would have had a different set of personal connections. It was arbitrary that I’d seen this particular problem at close quarters.
But doesn’t such arbitrariness come to mean something else, ethically speaking, when it is constitutive of our personal experience: when it becomes embedded in the complex structure of commitments, affinities and understandings that comprise social life?
When MacAskill says that helping the Ethiopian women he met would be ‘arbitrary’ and ‘unfair’, he means to speak from what the 19th-century utilitarian Henry Sidgwick called ‘the point of view of the universe’.
The tacit assumption is that the individual, not the community, class or state, is the proper object of moral theorising. There are benefits to thinking this way. If everything comes down to the marginal individual, then our ethical ambitions can be safely circumscribed; the philosopher is freed from the burden of trying to understand the mess we’re in, or of proposing an alternative vision of how things could be. The philosopher is left to theorise only the autonomous man, the world a mere background for his righteous choices. You wouldn’t be blamed for hoping that philosophy has more to give
·lrb.co.uk·
Philosophical Critiques of Effective Altruism by Prof Jeff McMahan - EA Forum
Prof Jeff McMahan is White's Professor of Moral Philosophy at Oxford University - this article of his is from The Philosopher's Magazine. …
This is also part of the answer to a similar charge by Srinivasan that effective altruism is ‘profoundly individualistic. Its utilitarian calculations presuppose that everyone else will continue to conduct business as usual; the world is a given, in which one can make careful, piecemeal interventions. … The philosopher is left to theorise only the autonomous man, the world a mere background for his righteous choices’. Although it is presented as an objection, this seems to me exactly right: individuals must decide what to do against the background of what others will in fact do.
‘The tacit assumption’ of the effective altruists, she writes, is that the individual, not the community, class or state, is the proper object of moral theorising. There are benefits to thinking this way. If everything comes down to the marginal individual, then our ethical ambitions can be safely circumscribed; the philosopher is freed from the burden of trying to understand the mess we’re in, or of proposing an alternative vision of how things could be.
·forum.effectivealtruism.org·
Pascal's mugging - Wikipedia
Pascal's mugging is a thought-experiment demonstrating a problem in expected utility maximization. A rational agent should choose actions whose outcomes, when weighed by their probability, have higher utility. But some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes. Hence the agent should focus more on vastly improbable cases with implausibly high rewards; this leads first to counter-intuitive choices, and then to incoherence as the utility of every choice becomes unbounded.
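A toy illustration of that failure mode (my own example, not from the article): suppose the agent assigns probability at least $2^{-n}$ to the claim that paying the mugger yields $3^{n}$ units of utility, for each $n$. The expected utility of paying is then bounded below by

$$\sum_{n}2^{-n}\cdot 3^{n}=\sum_{n}\left(\tfrac{3}{2}\right)^{n},$$

which diverges: the promised reward grows faster than its probability shrinks, so arbitrarily implausible offers come to dominate the decision.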
·en.wikipedia.org·
The Repugnant Conclusion
the Repugnant Conclusion is stated as follows: “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living” (Parfit 1984).
The Repugnant Conclusion highlights a problem in an area of ethics which has become known as population ethics.
Parfit finds the Repugnant Conclusion unacceptable and many philosophers agree. However, it has been surprisingly difficult to find a theory that avoids the Repugnant Conclusion without implying other equally counterintuitive conclusions.
A straightforward way of capturing the No-Difference View is total utilitarianism, according to which the best outcome is the one in which there would be the greatest quantity of whatever makes life worth living (Parfit 1984 p. 387). However, this view implies that any loss in the quality of lives in a population can be compensated for by a sufficient gain in the quantity of a population; that is, it leads to the Repugnant Conclusion.
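To make the implication concrete (an illustrative calculation with made-up welfare units, not from the encyclopedia entry): under total utilitarianism a population of $10^{10}$ people each at welfare level $100$ has total value $10^{12}$, while a population of $10^{13}$ people each at welfare level $1$ (lives barely worth living) has total value $10^{13}$, so the total view ranks the second, vastly larger but far worse-off, population higher.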
2.1.1 The average principle
One proposal that easily comes to mind when faced with the Repugnant Conclusion is to reject total utilitarianism in favor of a principle prescribing that the average welfare per life in a population is maximized.
2.1.2 Variable value principles
An attempt to produce a compromise between a total principle and an average principle is provided by a variable value principle. The idea behind this view is that the value of adding worthwhile lives to a population varies with the number of already existing lives in such a way that it has more value when the number of these lives is small than when it is large (Hurka 1983, Ng 1989; Sider 1991).
More exactly, Ng’s theory implies the “Sadistic Conclusion” (Arrhenius 2000a): For any number of lives with any negative welfare (e.g. tormented lives), there are situations in which it would be better to add these lives rather than some number of lives with positive welfare.
Roughly, the arguments share the following structure: in comparison between three population scenarios the second scenario is considered better than the first and the third better than the second, leading to the conclusion that the third scenario is better than the first. By iterated application of this line of reasoning one ends up with the Repugnant Conclusion (that Z is better than A). One of the more radical approaches to such arguments has been to reject the transitivity of the relation “better than” (or “equally as good as”). According to transitivity, if p is better than q, and q is better than r, then p is better than r. However, if transitivity of “better than” is denied then the reasoning leading the Repugnant Conclusion is blocked (Temkin 1987, 2012; Persson 2004; Rachels 2004).
Fred Feldman has proposed a desert-adjusted version of utilitarianism, ‘justicism’ which he claims avoids the Repugnant Conclusion (Feldman 1997). In justicism, the value of an episode of pleasure is determined not only by the hedonic level but also by the recipient’s desert level. Feldman’s proposal is that there is some level of happiness that people deserve merely in virtue of being people. He assumes that this level corresponds to 100 units of pleasure and that people with very low welfare enjoy only one unit of pleasure. He suggests that the life of a person who deserves 100 units of pleasure and receives exactly that amount of pleasure has an intrinsic value of 200, whereas the life of a person deserving 100 units but who only receives one unit of pleasure has an intrinsic value of −49. It follows that any population consisting of people with very low welfare and desert level 100 has negative value, whereas any population with very high welfare has positive value.
It has been held that it can be proven that there is no population ethics that satisfies a set of apparently plausible adequacy conditions on such a theory. In fact, several such proofs have been suggested (Arrhenius 2000b, 2011). What such a theorem would show about ethics is not quite clear. Arrhenius has suggested that an impossibility theorem leaves us with three options: (1) to abandon some of the conditions on which the theorem is based; (2) to become moral sceptics; or (3) to try to explain away the significance of the proofs—alternatives which do not invite an easy choice (cf. Temkin 2012).
When confronted with the Repugnant Conclusion, many share the view that the conclusion is highly counterintuitive. However, it has been suggested that it is premature to conclude from this that the conclusion is morally unacceptable.
·plato.stanford.edu·
OK, WTF Is 'Longtermism', the Tech Elite Ideology That Led to the FTX Collapse?
Longtermism and effective altruism are shaky moral frameworks with giant blindspots that have proven useful for cynics, opportunists, and plutocrats.
Effective altruism basically espouses the idea that the best way to do the most good for the future is to become as rich and powerful as possible right now.
Longtermism takes it a step further by saying the greatest good you can do is allocating resources to maximize the potential happiness of trillions of humans in the far future that will only be born if we minimize the risk of humanity’s extinction.
Australian philosopher Peter Singer, an early articulator of animal rights and liberation, is regarded as the intellectual father of effective altruism thanks to his 1972 essay “Famine, Affluence, and Morality.” In the essay, Singer argued people were morally obligated to maximize their impact through charity focused on causes that offered the greatest quality of life improvements, such as those aimed at reducing poverty and mortality in the Global South.
Others, including MacAskill, took it further by insisting that we must maximize our positive impact on people yet to be born thousands, millions, and even billions of years in the future.
Longtermism, MacAskill writes in his recent manifesto What We Owe the Future, is "the idea that positively influencing the long-term future is a key moral priority of our time" as "future people" yet to be born but sure to exist "count for no less, morally, than the present generation."
But longtermism, for all of its apparent focus on people, is not simply a vision that prioritizes the well-being of future individuals above more immediate concerns like profits or political coalitions.
For example, FHI research assistant and former FTX Future Fund member Nick Beckstead argued at length in his 2013 dissertation for conclusions that seem to go against the origins of effective altruism. “Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries,” Beckstead wrote. “It now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.”
Utilitarianism—which effective altruism springs from—was developed by Jeremy Bentham and John Stuart Mill more than two hundred years ago. Its promoters have argued since then that we have a moral imperative to maximize humanity's sum well-being, with happiness as a positive, suffering a negative, and probability requiring that we try to average or hedge our estimates.
It can, however, lend itself to horrific conclusions: when faced with the choice to save millions today or shave percentage points off the probability of some existential risk which would preclude trillions of humans in the coming trillions of years, we're required to do the latter. Worse still, if shaving those percentage points can be achieved by those millions of people not being saved, then it's permitted so long as the effects are limited.
Or in other words, it may indeed be tragic if people today live horrible lives because we allocate scarce resources to improving future generations’ well-being, but if future people are equivalent to present people, and if there are more people in the future than today, then it is a moral crime to not ensure those generations have the greatest possible lives.
An illustrative example of how this thinking can go off the rails is Earning to Give, which MacAskill introduced to effective altruism. It was an attempt to convince people to take up high-paying jobs (even if they were harmful) to maximize their ability to contribute to charitable causes.
As Tyler Cowen wrote in a succinct blog post: “Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated).”
In the end, longtermism has proven itself to be the least effective form of altruism and, ironically, an existential risk to the effective altruism movement itself.
It’s important to distinguish effective altruism from longtermism because the latter’s embrace of repugnant imperatives and conclusions has more profound consequences for the movement’s tens of billions in capital, and because it is part of an ongoing attempt to capture that capital. At the same time, longtermism is being advanced by the co-founder of effective altruism in an active attempt by him and other members of the movement to reorient its goals and ambitions.
As Eric Levitz writes in the Intelligencer, effective altruism as a whole wasn’t complicit in Sam Bankman-Fried’s spectacular collapse but some effective altruists were—specifically those who subscribed to longtermism.
One has to wonder why so many people missed the warning signs. How did a movement that prides itself on quantifying risk, impact, and harm rationalize the ascent of and support for a crypto billionaire when the crypto industry itself is so fundamentally odious and awash in scammers and grifters using the language of community and empowerment? An ideology that sees entering harmful industries as acceptable, with vaguely defined moral limits, certainly played a role.
These comments seem clear cut: these people believed their iteration of effective altruism—longtermism—morally compelled them to make risky bets because the potential upside (increased wealth to increase contributions to help realize humanity's long term potential) massively outweighed the downside (losing all their money, and everyone else's too).
When asked via DMs if that response was simply PR spin, Bankman-Fried admitted that it was.
It may be tempting to interpret this as Bankman-Fried admitting he didn’t believe in effective altruism all along, but it’s really a doubling down of longtermism and its moral imperatives that are distinct and far more dangerous than effective altruism.
Bankman-Fried’s moral philosophy was one that prioritized the far future and its massively larger population over the present because of the exponentially larger aggregate happiness it would potentially have. In that sort of worldview, what does it matter if you build a crypto empire that may expose millions more people to fraud and volatile speculation that could wipe out their life savings—you’re doing this to raise enough money to save humanity, after all.
If longtermism is morally repugnant, it’s only because effective altruism is so morally vacuous.
Both were spearheaded by MacAskill, both are utilitarian logics that pave beautiful roads to hell with envelope math and slippery arguments about harm and impact, and both have led us to this current moment.
On some level, maybe it makes sense to ensure that your actions have the greatest possible positive impact—that your money is donated effectively to causes that improve people's lives to the greatest degree possible, or that scarce resources should be mobilized to tackle the roots of problems as much as possible. But it's not clear why this top-down, from-first-principles approach is the right one. It's a fundamentally anti-democratic impulse that can lead to paralysis when uncertainty looms, massive blindspots for existential risks and moral hazards, or capture by opportunistic plutocrats and cynical adherents who simply want power, influence, and unblemished reputations.
These sorts of moral frameworks are the real existential threats to humanity today. Not rogue superintelligent AIs, but human beings who help saboteurs of today’s world be recast as the saviors of a distant tomorrow.
·vice.com·
'Simulations'
I felt a tap at my wrist, snatching restlessly for the button to answer. “Yeah?” “It’s not good,” came the voice of Sidney, our VP of Special Projects, through my earpiece. “Every time he comes out of it, he’s irritable within half an hour, demands to go back.
·redeem-tomorrow.com·
Concrete Biosecurity Projects (some of which could be big) - EA Forum
Andrew Snyder-Beattie and Ethan Alley • This is a list of longtermist biosecurity projects. We think most of them could reduce catastrophic biorisk by more than 1% or so on the current margin (in rel…
One concrete commercial goal would be to produce a suit (and accompanying system) that is designed for severely immunocompromised people to lead relatively normal lives, at a cost low enough to convince the US government to acquire 100 million units for the Strategic National Stockpile.[4]  Another goal would be for the suit to simultaneously meet military-grade specifications, e.g. protecting against a direct hit of anthrax.
Existing bunkers provide a fair amount of protection, but we think there could be room for specially designed refuges that safeguard against catastrophic pandemics (e.g. cycling teams of people in and out with extensive pathogen agnostic testing, adding a ‘civilization-reboot package’, and possibly even having the capability to develop and deploy biological countermeasures from the protected space).
Collectively, these projects can potentially absorb a lot of aligned engineering and executive talent, and a lot of money.  Executive talent might be the biggest constraint, as it’s needed for effective deployment of other talent and resources.
·forum.effectivealtruism.org·
CRDTs for Mortals - James Long at dotJS 2019
What do CRDTs and frontends have to do with each other? James talks about how CRDTs finally deliver on the promise of local-first apps, which provide superior user experience, and explains how simple CRDTs can be and how to leverage them to create robust local-first apps.
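The talk itself isn't excerpted here, so as a reading aid, here is a minimal sketch of what "how simple CRDTs can be" means in practice: a grow-only counter, my own illustration rather than code from the talk. Each replica increments only its own slot, and merging takes the per-replica maximum, so devices that sync in any order (or repeatedly) converge to the same value.

```typescript
// A grow-only counter CRDT (G-Counter), sketched for illustration.
// Each replica increments only its own entry; merge takes the per-replica max.
type GCounter = Record<string, number>; // replicaId -> count

function increment(counter: GCounter, replicaId: string): GCounter {
  return { ...counter, [replicaId]: (counter[replicaId] ?? 0) + 1 };
}

// Merge is commutative, associative, and idempotent, so replicas can sync in
// any order (or repeatedly) and still converge to the same state.
function merge(a: GCounter, b: GCounter): GCounter {
  const merged: GCounter = { ...a };
  for (const [id, count] of Object.entries(b)) {
    merged[id] = Math.max(merged[id] ?? 0, count);
  }
  return merged;
}

function value(counter: GCounter): number {
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}

// Two devices edit offline, then exchange states:
let laptop: GCounter = {};
let phone: GCounter = {};
laptop = increment(laptop, "laptop"); // +1 on the laptop
phone = increment(phone, "phone");    // +1 on the phone
phone = increment(phone, "phone");    // +1 again on the phone
console.log(value(merge(laptop, phone))); // 3, regardless of merge order
```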
·dotconferences.com·