read
395 bookmarks
Pascal's mugging - Wikipedia
Pascal's mugging is a thought experiment demonstrating a problem in expected utility maximization. A rational agent should choose actions whose outcomes, when weighed by their probability, have higher utility. But some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes. Hence the agent should focus more on vastly improbable cases with implausibly high rewards; this leads first to counter-intuitive choices, and then to incoherence as the utility of every choice becomes unbounded.
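To make the arithmetic concrete, here is a minimal sketch of the expected-utility comparison the excerpt describes. All probabilities and utilities are hypothetical illustrations, not figures from the article.

```python
# Minimal sketch of the expected-utility problem behind Pascal's mugging.
# All numbers are hypothetical illustrations.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one action."""
    return sum(p * u for p, u in outcomes)

# Keep your wallet: a certain, modest payoff.
keep_wallet = [(1.0, 10.0)]

# Pay the mugger, who promises an astronomically large reward. The promised
# utility can always be made large enough to outgrow the shrinking
# probability, which is the incoherence the excerpt points to.
pay_mugger = [(1e-12, 1e15), (1.0 - 1e-12, -5.0)]

print(expected_utility(keep_wallet))  # 10.0
print(expected_utility(pay_mugger))   # ~995.0: the "rational" agent pays up
```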
·en.wikipedia.org·
The Repugnant Conclusion
The Repugnant Conclusion is stated as follows: “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living” (Parfit 1984).
The Repugnant Conclusion highlights a problem in an area of ethics which has become known as population ethics.
Parfit finds the Repugnant Conclusion unacceptable and many philosophers agree. However, it has been surprisingly difficult to find a theory that avoids the Repugnant Conclusion without implying other equally counterintuitive conclusions.
A straightforward way of capturing the No-Difference View is total utilitarianism, according to which the best outcome is the one in which there would be the greatest quantity of whatever makes life worth living (Parfit 1984, p. 387). However, this view implies that any loss in the quality of lives in a population can be compensated for by a sufficient gain in the quantity of a population; that is, it leads to the Repugnant Conclusion.
2.1.1 The average principle
One proposal that easily comes to mind when faced with the Repugnant Conclusion is to reject total utilitarianism in favor of a principle prescribing that the average welfare per life in a population is maximized.
2.1.2 Variable value principles
An attempt to produce a compromise between a total principle and an average principle is provided by a variable value principle. The idea behind this view is that the value of adding worthwhile lives to a population varies with the number of already existing lives in such a way that it has more value when the number of these lives is small than when it is large (Hurka 1983; Ng 1989; Sider 1991).
More exactly, Ng’s theory implies the “Sadistic Conclusion” (Arrhenius 2000a): For any number of lives with any negative welfare (e.g. tormented lives), there are situations in which it would be better to add these lives rather than some number of lives with positive welfare.
Roughly, the arguments share the following structure: in a comparison between three population scenarios, the second scenario is considered better than the first and the third better than the second, leading to the conclusion that the third scenario is better than the first. By iterated application of this line of reasoning one ends up with the Repugnant Conclusion (that Z is better than A). One of the more radical approaches to such arguments has been to reject the transitivity of the relation “better than” (or “equally as good as”). According to transitivity, if p is better than q, and q is better than r, then p is better than r. However, if the transitivity of “better than” is denied, then the reasoning leading to the Repugnant Conclusion is blocked (Temkin 1987, 2012; Persson 2004; Rachels 2004).
Fred Feldman has proposed a desert-adjusted version of utilitarianism, ‘justicism’ which he claims avoids the Repugnant Conclusion (Feldman 1997). In justicism, the value of an episode of pleasure is determined not only by the hedonic level but also by the recipient’s desert level. Feldman’s proposal is that there is some level of happiness that people deserve merely in virtue of being people. He assumes that this level corresponds to 100 units of pleasure and that people with very low welfare enjoy only one unit of pleasure. He suggests that the life of a person who deserves 100 units of pleasure and receives exactly that amount of pleasure has an intrinsic value of 200, whereas the life of a person deserving 100 units but who only receives one unit of pleasure has an intrinsic value of −49. It follows that any population consisting of people with very low welfare and desert level 100 has negative value, whereas any population with very high welfare has positive value.
It has been held that it can be proven that there is no population ethics that satisfies a set of apparently plausible adequacy conditions on such a theory. In fact, several such proofs have been suggested (Arrhenius 2000b, 2011). What such a theorem would show about ethics is not quite clear. Arrhenius has suggested that an impossibility theorem leaves us with three options: (1) to abandon some of the conditions on which the theorem is based; (2) to become moral sceptics; or (3) to try to explain away the significance of the proofs—alternatives which do not invite an easy choice (cf. Temkin 2012).
When confronted with the Repugnant Conclusion, many share the view that the conclusion is highly counterintuitive. However, it has been suggested that it is premature to conclude from this that the conclusion is morally unacceptable.
·plato.stanford.edu·
Sam Bankman-Fried Said He Would Give Away Billions. Broken Promises Are All That’s Left.
The FTX founder pledged to donate billions. His firm’s swift collapse wiped out his wealth and ambitious philanthropic endeavors.
“This is something of a tragedy for people who were hoping philanthropy could step up and fill the gap in addressing the catastrophic risks that government isn’t nimble enough to deal with,” Dr. Esvelt said.
Future Fund’s open call for ideas in February drew thousands of submissions. The application process was part of the appeal. The organization promised fast responses and encouraged risky projects. It attracted scientists pursuing interdisciplinary work outside their fields of expertise who were frustrated by the laborious nature of government funding. On a frequently-asked-questions page on its website, Future Fund offered this advice: “We tend to find that people think too small, rather than think too big.”
Its two largest public grants, of $15 million and $13.9 million, were awarded to effective altruism groups where Mr. MacAskill held roles.
Several grant recipients, including one affiliated with Mr. MacAskill, were still owed funds when FTX failed, according to people familiar with the matter.
·wsj.com·
OK, WTF Is 'Longtermism', the Tech Elite Ideology That Led to the FTX Collapse?
Longtermism and effective altruism are shaky moral frameworks with giant blindspots that have proven useful for cynics, opportunists, and plutocrats.
Effective altruism basically espouses the idea that the best way to do the most good for the future is to become as rich and powerful as possible right now.
Longtermism takes it a step further by saying the greatest good you can do is allocating resources to maximize the potential happiness of trillions of humans in the far future that will only be born if we minimize the risk of humanity’s extinction.
Australian philosopher Peter Singer, an early articulator of animal rights and liberation, is regarded as the intellectual father of effective altruism thanks to his 1972 essay “Famine, Affluence, and Morality.” In the essay, Singer argued people were morally obligated to maximize their impact through charity focused on causes that offered the greatest quality of life improvements, such as those aimed at reducing poverty and mortality in the Global South.
Others, including MacAskill, took it further by insisting that we must maximize our positive impact on people yet to be born thousands, millions, and even billions of years in the future.
Longtermism, MacAskill writes in his recent manifesto What We Owe the Future, is "the idea that positively influencing the long-term future is a key moral priority of our time" as "future people" yet to be born but sure to exist "count for no less, morally, than the present generation."
But longtermism, for all of its apparent focus on people, is not simply a vision that prioritizes the well-being of future individuals above more immediate concerns like profits or political coalitions.
For example, FHI research assistant and former FTX Future Fund member Nick Beckstead argued at length in his 2013 dissertation for conclusions that seem to go against the origins of effective altruism. “Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries,” Beckstead wrote. “It now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal."
Utilitarianism—which effective altruism springs from—was developed by Jeremy Bentham and John Stuart Mill over two hundred years ago. Its promoters have argued since then that we have a moral imperative to maximize humanity's total well-being, counting happiness as a positive and suffering as a negative, and weighing uncertain outcomes by their probabilities.
It can, however, lend itself to horrific conclusions: when faced with the choice to save millions today or shave percentage points off the probability of some existential risk which would preclude trillions of humans in the coming trillions of years, we're required to do the latter. Worse still, if shaving those percentage points can be achieved by those millions of people not being saved, then it's permitted so long as the effects are limited.
Or in other words, it may indeed be tragic if people today live horrible lives because we allocate scarce resources to improving future generations’ well-being, but if future people are equivalent to present people, and if there are more people in the future than today, then it is a moral crime to not ensure those generations have the greatest possible lives.
An illustrative example of how this thinking can go off the rails is Earning to Give, which MacAskill introduced to effective altruism. It was an attempt to convince people to take up high-paying jobs (even if they were harmful) to maximize their ability to contribute to charitable causes.
As Tyler Cowen wrote in a succinct blog post: “Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated).”
In the end, longtermism has proven itself to be the least effective form of altruism and, ironically, an existential risk to the effective altruism movement itself.
It’s important to distinguish effective altruism from longtermism because the latter's embrace of repugnant imperatives and conclusions has profound consequences for the movement and the tens of billions in capital at its disposal, and is part of an ongoing attempt to capture it. At the same time, longtermism is being advanced by the co-founder of effective altruism in an active attempt by him and other members of the movement to reorient its goals and ambitions.
As Eric Levitz writes in the Intelligencer, effective altruism as a whole wasn’t complicit in Sam Bankman-Fried’s spectacular collapse but some effective altruists were—specifically those who subscribed to longtermism.
One has to wonder why so many people missed the warning signs. How did a movement that prides itself on quantifying risk, impact, and harm rationalize the ascent of and support for a crypto billionaire when the crypto industry itself is so fundamentally odious and awash in scammers and grifters using the language of community and empowerment? An ideology that sees entering harmful industries as acceptable, with vaguely defined moral limits, certainly played a role.
These comments seem clear cut: these people believed their iteration of effective altruism—longtermism—morally compelled them to make risky bets because the potential upside (increased wealth to increase contributions to help realize humanity's long term potential) massively outweighed the downside (losing all their money, and everyone else's too).
When asked via DMs if that response was simply PR spin, he admitted that it was.
It may be tempting to interpret this as Bankman-Fried admitting he didn’t believe in effective altruism all along, but it’s really a doubling down on longtermism and its moral imperatives, which are distinct from and far more dangerous than effective altruism's.
Bankman-Fried’s moral philosophy was one that prioritized the far future and its massively larger population over the present because of the exponentially larger aggregate happiness it would potentially have. In that sort of worldview, what does it matter if you build a crypto empire that may expose millions more people to fraud and volatile speculation that could wipe out their life savings—you’re doing this to raise enough money to save humanity, after all.
If longtermism is morally repugnant, it’s only because effective altruism is so morally vacuous.
Both were spearheaded by MacAskill, both are utilitarian logics that pave beautiful roads to hell with envelope math and slippery arguments about harm and impact, and both have led us to this current moment.
On some level, maybe it makes sense to ensure that your actions have the greatest possible positive impact—that your money is donated effectively to causes that improve people’s lives to the greatest degree possible, or that scarce resources should be mobilized to tackle the roots of problems as much as possible. But it's not clear why this top-down, from-first-principles approach is the right one. It's a fundamentally anti-democratic impulse that can lead to paralysis when uncertainty looms, massive blindspots for existential risks and moral hazards, or capture by opportunistic plutocrats and cynical adherents who simply want power, influence, and unblemished reputations.
These sorts of moral frameworks are the real existential threats to humanity today. Not rogue superintelligent AIs, but human beings who help saboteurs of today’s world be recast as the saviors of a distant tomorrow.
·vice.com·
'Simulations'
I felt a tap at my wrist, snatching restlessly for the button to answer. “Yeah?” “It’s not good,” came the voice of Sidney, our VP of Special Projects, through my earpiece. “Every time he comes out of it, he’s irritable within half an hour, demands to go back.
·redeem-tomorrow.com·
Concrete Biosecurity Projects (some of which could be big) - EA Forum
Andrew Snyder-Beattie and Ethan Alley • This is a list of longtermist biosecurity projects. We think most of them could reduce catastrophic biorisk by more than 1% or so on the current margin (in rel…
One concrete commercial goal would be to produce a suit (and accompanying system) that is designed for severely immunocompromised people to lead relatively normal lives, at a cost low enough to convince the US government to acquire 100 million units for the Strategic National Stockpile.[4]  Another goal would be for the suit to simultaneously meet military-grade specifications, e.g. protecting against a direct hit of anthrax.
Existing bunkers provide a fair amount of protection, but we think there could be room for specially designed refuges that safeguard against catastrophic pandemics (e.g. cycling teams of people in and out with extensive pathogen agnostic testing, adding a ‘civilization-reboot package’, and possibly even having the capability to develop and deploy biological countermeasures from the protected space).
Collectively, these projects can potentially absorb a lot of aligned engineering and executive talent, and a lot of money.  Executive talent might be the biggest constraint, as it’s needed for effective deployment of other talent and resources.
·forum.effectivealtruism.org·
CRDTs for Mortals - James Long at dotJS 2019
What do CRDTs and frontends have to do with each other? James talks about how CRDTs finally deliver on the promise of local-first apps, which provide superior user experience, and explains how simple CRDTs can be and how to leverage them to create robust local-first apps.
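The talk's code isn't excerpted here, but as a hedged illustration of the claim that CRDTs can be simple, here is a generic last-writer-wins register, one of the simplest CRDT designs (a sketch, not code from the talk):

```python
# Generic sketch of a last-writer-wins (LWW) register, one of the simplest
# CRDT designs. Illustrative only; not code from the talk.

class LWWRegister:
    def __init__(self, node_id):
        self.node_id = node_id          # tie-breaker for concurrent writes
        self.timestamp = (0, node_id)   # (logical clock, node id)
        self.value = None

    def set(self, value):
        clock, _ = self.timestamp
        self.timestamp = (clock + 1, self.node_id)
        self.value = value

    def merge(self, other):
        # Merge is commutative, associative, and idempotent, so replicas
        # converge regardless of the order in which updates arrive.
        if other.timestamp > self.timestamp:
            self.timestamp = other.timestamp
            self.value = other.value

a, b = LWWRegister("a"), LWWRegister("b")
a.set("draft 1")
b.set("draft 2")
a.merge(b)
b.merge(a)
assert a.value == b.value  # both replicas converge on the same value
```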
·dotconferences.com·
Why 'Mutual Aid'? – social solidarity, not charity
Peter Kropotkin's most famous work advancing a belief in the depth of our connection to each other is titled 'Mutual Aid: A Factor in Evolution'.
'Mutual aid' has suddenly entered the collective consciousness as we seek ways to support our friends and neighbours amidst a global pandemic. Alexandria Ocasio-Cortez has tweeted about it, The New York Times has discussed "so called mutual-aid" networks in major cities, and mutual aid workshops have spread throughout the United States.
"Social solidarity - not charity," might be the slogan response, but conceptualizing the difference is not easy.
Fundamentally, mutual aid is about building "bottom-up" structures of cooperation, rather than relying on the state or wealthy philanthropists to address our needs. It emphasizes horizontal networks of solidarity rather than "top down" solutions, networks that flow in both directions and sustain the life of a community.
Mutual aid is a concept born from a curious hybrid of Russian evolutionary theory and anarchist thought. It is, specifically, an idea associated with Peter Kropotkin, a well-known anarchist-socialist thinker who was also a naturalist, geographer, ethnographer and advocate of scientific thought. Kropotkin, along with other Russian scientists, developed mutual aid in response to the profound impact of Darwin's evolutionary theory and the focus on competition among his adherents.
Spencer believed in the progressive evolution of not only organisms but also human societies and helped to popularize evolutionary theory as a social, and not only biological, phenomenon. Humans are, after all, an element of nature.
Kropotkin, however, was deeply concerned about an interpretation of evolutionary theory that emphasized hostility and competition, especially when extended, as it still often is, to the social and political lives of human beings. He saw that "survival of the fittest" would inevitably be used to justify poverty, colonialism, gender inequality, racism and war as "natural" processes – innate and immutable expressions of our very genetic being.
Instead of this relentless competition, Kropotkin saw cooperation everywhere he looked: in colonies of ants, in the symbiotic behaviors of plants and animals, and in the practices of peasants in his own travels.
“the fittest are not the physically strongest, nor the cunningest, but those who learn to combine so as mutually to support each other, strong and weak alike, for the welfare of the community.”
·opendemocracy.net·
Climate change - 80,000 Hours - EA Forum
Could climate change lead to the end of civilisation? …
If climate change could lead to the end of civilisation, then that would mean future generations might never get to exist – or they could live in a permanently worse world. If so, then preventing it, and adapting to its effects, might be more important than working on almost any other issue.
But even when we try to account for unknown unknowns, nothing in the IPCC’s report suggests that civilisation will be destroyed.
Looking at the worst possible scenarios, it could be an important factor that increases existential threats from other sources, like great power conflicts, nuclear war, or pandemics.
We think your personal carbon footprint is much less important than what you do for work, and that some ways of making a difference on climate change are likely to be much more effective than others. In particular, you could use your career to help develop technology or advocate for policy that would reduce our current emissions, or research technology that could remove carbon from the atmosphere in the future.
We’d love to see more people working on this issue, but — given our general worldview — all else equal we’d be even more excited for someone to work on one of our top priority problem areas.
Overall, climate change is far less neglected than other issues we prioritise. Current spending is likely over $640 billion per year. Climate change has also received high levels of funding for decades, meaning lots of high-impact work has already occurred. It also seems likely that as climate change worsens, even more attention will be paid to it, allowing us to do more to combat its worst effects. However, there are likely specific areas that don’t get as much attention as they should.
Climate change seems more tractable than many other global catastrophic risks. This is because there is a clear measure of our success (how much greenhouse gas we are emitting), plus lots of experience seeing what works — so there is clear evidence on how to move ahead. That said, climate change is a tricky global coordination problem, which makes it harder to solve.
We’re going to review the three most common ways people say climate change might directly cause human extinction: high temperatures, rising water, and disruption to agriculture.
Worst case climate scenarios look very bad in terms of lives disrupted and lost. We’re focusing on extinction because, for reasons we discuss here, we think reducing existential threats should be among humanity’s biggest priorities – in part due to their significance for all future generations.
In short, most scientists think it’s pretty close to impossible for climate change to directly cause the extinction of humanity.
If there were 12°C of warming, a majority of land where humans currently live would be too hot for humans to survive at least a few days a year. An increase of 13°C would make working outdoors impossible for most of the year in the tropics, and around half the year in currently temperate regions.
But even with the cloud feedback loop, it would take decades for global temperatures to reach this level, and while this worst-case scenario would cause extraordinary suffering and death, it seems very likely that we could adapt to avoid extinction (for example, by building better buildings and widespread air conditioning, as well as building more in the cooler areas of the Earth).
As an upper bound, we can consider what would happen if the polar ice caps melted completely. The highest estimate we’ve seen is that this would produce sea level rise of around 80 metres. Fifty of the world’s major cities would flood, but the vast majority of land would remain above water.
It seems that a one-metre sea level rise would, without adaptation, displace around half a billion people from their homes. But with adaptation (like building flood defences), the number of people displaced would be much smaller: the IPCC estimates that hundreds of thousands of people would in reality be displaced due to a two-metre sea level rise, far fewer than half a billion.
But, as with heat stress, sea level rise does not pose an extinction risk.
There may also be some positive effects of climate change on agriculture — for example, we’ll be able to grow crops in areas that are currently too cold. It’s possible that these effects would be enough to completely mitigate the negative effects on agriculture.
But even with all these likely disruptions, we should still be able to adapt — due to increasing agricultural productivity. Over the past few centuries, food prices have fallen as technology makes it cheaper and cheaper to produce large quantities of food.
So it is against this backdrop of rapidly improving productivity that climate change will act — and even if temperatures rise a lot, it’ll take some time (decades or maybe centuries) for that to happen. As a result, the IPCC expects (with high confidence) that we’ll be able to adapt to climate change in such a way that risks to food security will be mitigated.
One expert we spoke to did say that their best guess is that a 13°C warmer world would lead — through droughts and the disruption of agriculture — to the deaths of hundreds of millions of people. But even this horrific scenario is a long way from human extinction or the kind of catastrophic event that could directly lead to humanity being unable to ever recover.
It’s possible that climate change could lead to ecosystem collapse. Many ethical views put intrinsic value on biodiversity — and even if you don’t, ecosystem collapse could affect people and nonhuman animals in other ways.
There are, of course, many other benefits to biodiversity, like the development of new medicines. But overall, biodiversity loss seems like it won’t cause the collapse of civilisation.
Though this would be a humanitarian disaster of unprecedented proportions, humanity would still have land cool enough to live on; the land won’t all be submerged in the ocean, and we will still be able to grow food in many places, though not all. In other words, humanity would survive.
The IPCC’s Sixth Assessment Report, building on Sherwood et al.’s assessment of the Earth’s climate sensitivity, attempts to account for structural uncertainty and unknown unknowns. Roughly, they find it’s unlikely that all the various lines of evidence are biased in just one direction — for every consideration that could increase warming, there are also considerations that could decrease it. This means we should expect unknowns mostly to cancel out, and be surprised if they point in one direction or the other.
As a result, it’s extremely unlikely (we’d guess less than a 1 in 1,000,000 chance) that we’ll see the temperature changes necessary for climate change to have the kinds of effects that would directly lead to extinction.
It’s often claimed that displaced populations can increase resource scarcity and the risk of conflict in countries that they move to. Forced displacement also arguably increases the spread of infectious diseases and general political tensions. But it’s very difficult to estimate the size of these effects — and from there, to estimate the implications of these effects for the rest of society.
There’s also the possibility of much larger wars. If climate change significantly affects the fortunes of Russia, China, India, Pakistan, the EU, or the US, this could cause a great power war. Migration crises, heat stress, sea level rise, changes to agriculture, or broader economic effects on these countries could all contribute to the chances of conflict.
We haven’t thought about this possibility as much, but the same reasons we think climate change won’t lead to extinction suggest it won’t lead to a catastrophic event of this size. In short: even in the worst-case warming scenarios, a lot of humans will still be able to live on the land and grow food.
Even in the top 1% of worst scenarios, our guess is that it is extremely unlikely for premature deaths due to climate change to exceed a billion people, and this loss would likely be gradual (e.g. over a century) and due to things like declining economic productivity, rather than an all-at-once catastrophic collapse.
Moreover, if climate change gets very bad, that probably means we burned through our fossil fuel reserves. This isn’t an effect of climate change per se, but rather an effect of us not doing enough to prevent it by reducing fossil fuel use. Besides causing climate change and everything that that entails, using up our fossil fuel reserves would mean that if humanity does suffer a (different) global catastrophe that leads to a civilisational collapse, it might be harder to rebuild.
There are lots of global issues that deserve more attention than they currently get. This includes climate change, but also others that seem to pose a more material risk of extinction — like catastrophic pandemics or nuclear war.
Climate change seems unusually solvable for a global issue: there is a clear measure of our success (how much greenhouse gas we are emitting), plus lots of experience seeing what works — so there is clear evidence on how to move ahead.
And working on clean energy tech also seems neglected relative to its importance for solving the problem, though it still gets a lot of resources.
Other existential threats seem considerably greater
Experts studying risks of human extinction usually think nuclear war, great power conflict in general, and certain dangerous advances in machine learning or biotechnology all have a higher likelihood of causing human extinction than climate change.
Second, solutions that require coordination are difficult to achieve. This is true on both an individual level and a country level.
For this reason, focusing on developing and deploying new technology seems more likely to succeed (and has fewer downsides, and faces fewer coordination issues) than seeking to encourage individuals to voluntarily reduce their energy consumption. This is because it doesn’t cost the innovator much; they can benefit from selling their inventions.
For example, emissions from cars are only about four times higher than emissions from cement, but there’s much more than four times the focus on electric cars. That means there could be better opportunities to move the needle by greening cement production, so working on the latter could plausibly be better.
There’s also value in technology that increases energy efficiency, for example by reducing the costs of building better-insulated buildings.
The other primary form of geoengineering is solar geoengineering (deliberately deflecting sunlight away from Earth to cool the planet down). Solar geoengineering poses potential risks to humanity in itself, given the unprecedented scale of the intervention and the fact that, once in use, solar geoengineering can’t be left untended without disastrous effects.
·forum.effectivealtruism.org·
A framework for comparing global problems in terms of expected impact - EA Forum
Suppose you’re trying to figure out whether to learn about health in developing countries; or whether to become a researcher in solar energy; or whether to campaign for criminal justice reform in the…
After a large amount of resources have been dedicated to a problem, you’ll hit diminishing returns. This is because people take the best opportunities for impact first, so as more and more resources get invested, it becomes harder and harder to make a difference. It’s therefore often better to focus on problems that have been neglected by others.
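A small sketch of that mechanism, with hypothetical numbers: if funders take the best opportunities first, each additional dollar buys less impact.

```python
# Sketch of diminishing returns when the best opportunities are taken first.
# The per-dollar impact figures below are hypothetical.

opportunities = sorted([9.0, 7.5, 4.0, 2.0, 1.2, 0.5], reverse=True)

cumulative = 0.0
for dollars, impact in enumerate(opportunities, start=1):
    cumulative += impact
    print(f"dollar {dollars}: marginal impact {impact}, cumulative {cumulative}")

# Each new dollar funds the best remaining opportunity, so the marginal
# impact falls even while the cumulative total keeps rising.
```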
To make more wide ranging comparisons between problems, you need to turn to “yardsticks” for scale. These are more measurable ways of comparing scale that we hope will correlate with long-run social impact. For instance, economists often use GDP growth as a convenient yardstick for economic progress (although it has many weaknesses). Nick Bostrom has argued that the key yardstick for long run welfare should be whether an action increases or decreases the risk of the end of civilization – what he called existential risk.
However, we think that society’s mechanisms for doing good are far from efficient, so all else equal, neglectedness is a good sign.
In other cases – where solving a problem requires innovative techniques – the scores are usually assigned based on judgement calls, ideally based on a survey of expert opinion.
For scoring we use the ‘expected value’ approach. That is, a 10% chance of solving all of a problem is scored the same as a project that would definitely reduce it by 10%.
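A minimal sketch of that scoring rule, with illustrative numbers:

```python
# Sketch of the 'expected value' scoring rule: probability of success
# times the fraction of the problem solved if it succeeds.

def expected_problem_reduction(p_success, fraction_solved_on_success):
    return p_success * fraction_solved_on_success

risky = expected_problem_reduction(0.10, 1.0)  # 10% chance of solving it all
safe = expected_problem_reduction(1.0, 0.10)   # certainty of solving 10%

assert risky == safe == 0.10  # the rule scores both projects identically
```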
We prefer to use the scores to make relative comparisons rather than absolute estimates.
While personal fit is not assessed in our problem profiles, it is relevant to your personal decisions.
Within a field, the top performers often have 10 to 100 times as much impact as the median.
A great entrepreneur or researcher has far more impact than an average one, so if you’re planning to contribute in either of those ways, personal fit matters a lot. However, if you’re earning to give, personal fit is less relevant because you’re sending money rather than your unique skills.
So to assess personal fit in more depth, you could estimate your percentile in the field, then multiply by a factor that depends on the variation of performance in the field.
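A hedged sketch of that calculation. The linear interpolation and the spread multiplier are assumptions made here for illustration, not 80,000 Hours' actual model.

```python
# Hedged sketch of the personal-fit estimate: your percentile in the field,
# scaled by how much performance varies there. The linear interpolation and
# the spread multiplier are illustrative assumptions, not 80,000 Hours' model.

def personal_fit(percentile, spread_multiplier):
    """Rough impact relative to the median performer in the field."""
    if percentile <= 50:
        return percentile / 50  # at or below the median: scale down to 0
    # Above the median, interpolate up to the top performer's multiplier
    # (quoted above as 10 to 100 times the median in some fields).
    return 1 + (percentile - 50) / 50 * (spread_multiplier - 1)

print(personal_fit(50, 30))  # 1.0, a median performer
print(personal_fit(99, 30))  # ~29.4, near the top of a high-variance field
```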
If you’ve used our rubric above, you can add the scores together to get a rough answer of which problem will be more effective to work on.
Bear in mind that these scores are imprecise, and adding them increases the uncertainty even further, because we only measure each one imprecisely. This means you need to take your final summed score with a grain of salt – or rather a lot of salt.
Within 80,000 Hours, if the difference in score between two problems is 4 or larger, we have a reasonable level of confidence that it’s a more effective problem to work on. If the difference is 3 or smaller it looks more like a close call.
For one, our scores have to be tempered by common sense judgements about the world.
The scores we get when using this framework suggest that some problems are 10,000x more effective to work on than others. However, we don’t believe that the differences really are that large.
Some other reasons for being modest about what such prioritisation research can show us are discussed here.
Explicitly quantifying outcomes can enable you to notice large, robust differences in effectiveness that might be difficult to notice qualitatively, and help you to avoid scope neglect.
Going through the process of making these estimates is a great way to test your understanding of a problem, since it forces you to be explicit about your assumptions and how they fit together.
In practice, these types of estimates usually involve very high levels of uncertainty. This means their results are not robust: different assumptions can greatly alter the conclusion of the analysis. As a result, there is a danger of being misled by an incomplete model, when it would have been better to go with a broader qualitative analysis, or simple common sense.
An individual can only focus on one or two areas at a time, but a large group of people working together should most likely spread out over several.
·forum.effectivealtruism.org·
Marginal Impact
Is supporting impactful projects always the best way to achieve impact? And how do you know how much impact you're generating?
The marginal impact of an investment of time or money is the additional impact that this specific investment created. The term is usually used to emphasize that when you make decisions, you should take into account only the impact that was actually generated by your choice, rather than counting the impact of already existing efforts. For example, joining a huge movement with lots of impact isn’t inherently better than joining a small movement, if your own impact isn’t greater as a part of that movement.
If you’re a toaster manufacturer considering whether to manufacture one more toaster to sell, for example, the question you need to ask yourself is not whether the toaster business is profitable overall, but rather how much profit you’ll make on this next toaster. It may be the case that selling toasters is a lucrative business overall, but the market is already flooded with your previously-sold products and you’ll fail to sell another one. In this case, your total returns from selling toasters might remain large even if you manufacture another one, but your marginal returns (income minus expenses of this next unit) will be negative - so manufacturing it in the first place is a bad idea; you’re losing money.
As a result, it’s unclear that additional donations to Wikimedia lead to improvement in the content provided by Wikipedia. This is an example where the total impact (or even total cost-effectiveness) is a pretty terrible proxy for the marginal impact of additional donations. The first few millions of dollars that Wikipedia receives are incredibly valuable and important, but those are already a done deal - you can only control the marginal impact of the 100 millionth dollar or above.
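A small sketch of the toaster example with hypothetical numbers, showing how total returns can stay positive while the marginal return on the next unit goes negative:

```python
# Sketch of the toaster example above, with hypothetical numbers.
# Each extra unit is harder to sell because the market is flooded.

price, unit_cost = 100.0, 20.0

def sale_probability(units_already_sold):
    # Demand dries up as more toasters are already out there.
    return max(0.0, 1.0 - 0.02 * units_already_sold)

# Expected profit on each unit: chance of selling it times the price,
# minus the cost of manufacturing it (incurred either way).
total_profit = sum(sale_probability(n) * price - unit_cost for n in range(60))
marginal_profit = sale_probability(60) * price - unit_cost

print(total_profit)     # ~1260: the business is profitable overall
print(marginal_profit)  # -20.0: the 61st toaster loses money
```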
·probablygood.org·
Get a free chapter of "The Precipice"
Toby Ord's new book on existential risk was released in March 2020. Get a free copy now.
Join the 80,000 Hours newsletter and our partners at Impact Books will send you a free copy of the book.
·80000hours.org·
The case for reducing existential risk - EA Forum
In 1939, Einstein wrote to Roosevelt:[1] …
Here’s a suggestion that’s not so often discussed: our first priority should be to survive. So long as civilisation continues to exist, we’ll have the chance to solve all our other problems, and have a far better future. But if we go extinct, that’s it.
These concerns have started a new movement working to safeguard civilisation, which has been joined by Stephen Hawking, Max Tegmark, and new institutes founded by researchers at Cambridge, MIT, Oxford, and elsewhere.
We used to think the risks were extremely low as well, but when we looked into it, we changed our minds. As we’ll see, researchers who study these issues think the risks are over one thousand times higher, and are probably increasing.
We then make the case that reducing these risks could be the most important thing you do with your life, and explain exactly what you can do to help.
·forum.effectivealtruism.org·
Our final century? - EA Forum
“So if we drop the baton, succumbing to an existential catastrophe, we would fail our ancestors in a multitude of ways. We would fail to achieve the dreams they hoped for; we would betray the trust t…
In this chapter we’ll focus on existential risks: risks that threaten the destruction of humanity’s long-term potential.
The importance, neglectedness, tractability framework: The most important problems generally affect a lot of people, are relatively under-invested in, and can be meaningfully improved with a small amount of work.
Thinking on the margin: If you're donating $1, you should give that extra $1 to the intervention that can most cost-effectively improve the world.
Crucial considerations: It can be extremely hard to figure out whether some action helps your goal or causes harm, particularly if you’re trying to influence complex social systems or the long-term. This is part of why it can make sense to do a lot of analysis of interventions you’re considering.
·forum.effectivealtruism.org·
How not to be a “white in shining armor”
This post was inspired by the upcoming Day Without Dignity online event. GiveWell’s current top-rated charities focus on proven, cost-effective health [...]
We fundamentally believe that progress on most problems must be locally driven. So we seek to improve people’s abilities to make progress on their own, rather than taking personal responsibility for each of their challenges.
One more approach to “putting locals in the driver’s seat”: give to GiveDirectly to support unconditional cash transfers. We feel that global health and nutrition interventions are superior because they reach so many more people (per dollar), but for those who are even more concerned than we are about the trap of “whites in shining armor,” this option has some promise.
·blog.givewell.org·
Duolingo Streak Goal
This is one of the 🤯🤯🤯 experiments we ran on the Duolingo Retention team: pic.twitter.com/Dv9Wp377vT — Ali Abouelatta (@abouelatta_ali) November 21, 2022
·twitter.com·
Blog - Towards the next generation of XNU memory safety: kalloc_type - Apple Security Research
Improving software memory safety is a key security objective for engineering teams across the industry. Here we begin a journey into the XNU kernel at the core of iOS and explore the intricate work our engineering teams have done to harden the memory allocator and make our software much more difficult to exploit.
·security.apple.com·
Mike Davis’s Specificities | Gabriel Winant
The US working class was forged, for Davis, through its compounded historical defeat, which gave it a distinctive contradictory, battered, and lumpy form that could not be evened out through appeals to abstraction. Most importantly, the cycle of defeat and accommodation had separated the official labor movement from the Black working class, which he saw as the only possible “cutting edge” for socialist politics.
·nplusonemag.com·
Digital Rocks | Will Tavlin
Eventually DCI scrubbed celluloid film almost entirely from the film industry, ushering in the most significant technological shift since the introduction of sound. The digital revolution transformed nearly every aspect of filmmaking for Hollywood and independent filmmakers. This revolution was invisible, and it was designed to be that way. Its success depended on audiences never noticing at all.
·nplusonemag.com·
BONELAB - Release Date Trailer
Wishlist now! Quest2: https://www.oculus.com/experiences/quest/4215734068529064/ Steam: https://store.steampowered.com/app/1592190/BONELAB/ Oculus: https://www.oculus.com/experiences/rift/5088709007839657/
Suspected of séancing with an unknown power, you are on trial. During your execution you are called to action. Escaping death you descend into an unknown underworld lab. A series of preparatory challenges await you, but for what? Will you transcend them and discover your calling?
Discord: https://discord.gg/stresslevelzero
·youtube.com·