S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017) – Center on Long-Term Risk
This post is based on notes for a talk I gave at EAG Boston 2017. I talk about risks of severe suffering in the far future, or s-risks. Reducing these risks is the main focus of the Foundational Research Institute, the EA research group that I represent.
“S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.”
I’ll argue that s-risks are not much more unlikely than AI-related extinction risk.
You may think this is absurd: we can’t even send humans to Mars, so why worry about suffering on cosmic scales? That was certainly my immediate, intuitive reaction when I first encountered these concepts. But as EAs, we should be cautious about taking such intuitive, ‘system 1’ reactions at face value. A large body of psychological research in the “heuristics and biases” tradition suggests that our intuitive probability estimates are often driven by how easily we can recall a prototypical example of the event in question. For types of events that have no precedent in history, we can’t recall any prototypical example, so unless we are careful we will systematically underestimate their probability.
Artificial sentience refers to the idea that the capacity for subjective experience – and in particular, the capacity to suffer – is not limited to biological animals. While there is no universal agreement on this, most contemporary views in the philosophy of mind imply that artificial sentience is possible in principle. And for the particular case of brain emulations, researchers have outlined a concrete roadmap, identifying key milestones and remaining uncertainties.
S-risks involving artificial sentience and “AI gone wrong” have been discussed by Bostrom under the term mindcrime.
To conclude: to be worried about s-risks, we don’t need to posit any new technology or any qualitatively new feature beyond what the AI risk community is already considering. So I’d argue that s-risks are not much more unlikely than AI-related x-risks. At the very least, if someone is worried about AI-related x-risk but not about s-risks, the burden of proof is on them.