Windows NT
Windows NT is a proprietary graphical operating system produced by Microsoft, the first version of which was released on July 27, 1993. It is a processor-independent, multiprocessing and multi-user operating system.
·en.wikipedia.org·
Layoff Brain
Back in the summer of 2020, I quit what was ostensibly a very good job in media to write this newsletter full-time. I didn’t do it for money. I didn’t do it because I didn’t like editors (I had excellent ones, whom I appreciated deeply). I did it because I had Layoff Brain.
·annehelen.substack.com·
The Instrumentalist | Zadie Smith
During the first ten minutes of Tár, it is possible to feel that the critic Adam Gopnik is a better actor than Cate Blanchett. They sit together on a New…
·nybooks.com·
How vx-underground is building a hacker's dream library
When malware repository vx-underground launched in 2019, it hardly made a splash in the hacking world. "I had no success really," said its founder, who goes by the online moniker smelly_vx.
·therecord.media·
Max Berger on Twitter
I spent some time looking into SBF's political giving and what I found was pretty shocking. SBF was collaborating with AIPAC and Trump-supporting billionaires to stop the growth of the squad and the electoral left. https://t.co/S5gra6NsRv — Max Berger (@maxberger) January 3, 2023
·twitter.com·
The Wife Left, but They’re Still Together
After a pandemic dip, the number of married couples “living apart together” has started to rise again. And women, in search of their own space, are driving the increase.
·nytimes.com·
#95: Are you a baby? A litmus test
Good morning! The other day some friends and I were reminiscing about an app idea we had years ago that would allow you to “blind cancel” on your friends. That is—flag if you were open to canceling a plan, which would only be revealed if the other person flagged it too. Basically, it was Tinder for bailing. This was our ultimate dream: an official, guilt-free conduit for that quiet hope that your friend wants to cancel, too.
·haleynahman.substack.com·
Effective altruism as the most exciting cause in the world - EA Forum
I feel that one thing that effective altruists haven't sufficiently capitalized on in their marketing is just how amazingly exciting the whole thing is. There's Holden Karnofsky's post on excited alt…
Here's an extra bonus. At the moment, the core of effective altruism is formed of smart, driven, and caring people from all around the world. When you become an effective altruist and start participating, you are joining a community of some of the most interesting people on Earth.
Best of all? This isn't just some fuzzy feelgood thing where you're taking things on faith. People in the community are constantly debating these things, looking for ways to improve and to become even better at doing good.
If you spot a crucial flaw in someone else's argument, or suggest a critical improvement, it may impact the effectiveness of all the other effective altruists who are doing or thinking about doing something related.
The average person working in an ordinary job can potentially save several lives a year, just by donating a measly 10% of his income and doing literally nothing else altruistic! That would already be amazing by itself.
·forum.effectivealtruism.org·
The highest-impact career paths our research has identified so far
The highest-impact career for you is the one that allows you to make the biggest contribution to solving one of the world's most pressing problems. On this page, we list some broad categories of impactful careers, followed by about 30 more specific and unusual career paths we think are especially impactful, such as long-term AI policy research. The lists are based on 10 years of research and experience advising people, and represent the careers it seems to us will be most impactful over the long run if you get started on them now — though of course we can't be sure what the future holds. You can use the lists on this page to get new ideas for impactful careers and make sure you haven't missed a great option.
·80000hours.org·
S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017) – Center on Long-Term Risk
This post is based on notes for a talk I gave at EAG Boston 2017. I talk about risks of severe suffering in the far future, or s-risks. Reducing these risks is the main focus of the Foundational Research Institute, the EA research group that I represent.
“S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.”
I’ll argue that s-risks are not much more unlikely than AI-related extinction risk.
You may think, “This is absurd: we can’t even send humans to Mars, so why worry about suffering on cosmic scales?” This was certainly my immediate, intuitive reaction when I first encountered related concepts. But as EAs, we should be wary of taking such intuitive, ‘System 1’ reactions at face value. A large body of psychological research in the “heuristics and biases” tradition suggests that our intuitive probability estimates are often driven by how easily we can recall a prototypical example of the event we’re considering. For types of events that have no precedent in history, we can’t recall any prototypical example, and so we will systematically underestimate their probability if we aren’t careful.
Artificial sentience refers to the idea that the capacity to have subjective experience – and in particular, the capacity to suffer – is not limited to biological animals. While there is no universal agreement on this, most contemporary views in the philosophy of mind imply that artificial sentience is possible in principle. And for the particular case of brain emulations, researchers have outlined a concrete roadmap, identifying milestones and remaining uncertainties.
S-risks involving artificial sentience and “AI gone wrong” have been discussed by Bostrom under the term mindcrime.
To conclude: to be worried about s-risk, we don’t need to posit any new technology or any qualitatively new feature above what is already being considered by the AI risk community. So I’d argue that s-risks are not much more unlikely than AI-related x-risks. Or at the very least, if someone is worried about AI-related x-risk but not s-risk, the burden of proof is on them.
·longtermrisk.org·
Preventing an AI-related catastrophe - Problem profile - EA Forum
We (80,000 Hours) have just released our longest and most in-depth problem profile — on reducing existential risks from AI. …
Our overall view: Recommended (highest priority). This is among the most pressing problems to work on.
Overall, our current take is that AI development poses a bigger threat to humanity’s long-term flourishing than any other issue we know of.
Around $50 million was spent on reducing the worst risks from AI in 2020 – billions were spent advancing AI capabilities.[3] [4] While we are seeing increasing concern from AI experts, there are still only around 300 people working directly on reducing the chances of an AI-related existential catastrophe.[2] Of these, it seems like about two-thirds are working on technical AI safety research, with the rest split between strategy (and policy) research and advocacy.
·forum.effectivealtruism.org·
The case for taking AI seriously as a threat to humanity
Why some people fear AI, explained.
Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”
Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.
For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.
It’s rare, though, to find a top researcher in AI who thinks that general AI is impossible. Instead, the field’s luminaries tend to say that it will happen someday — but probably a day that’s a long way off.
I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
In his 2009 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”
In 2014, Nick Bostrom wrote a book, Superintelligence, explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”
But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.
Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).
When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.
·vox.com·
Why I am probably not a longtermist - EA Forum
tl;dr: I am much more interested in making the future good, as opposed to long or big, as I neither think the world is great now nor am convinced it will be in the future. I am uncertain whether ther…
What I value less than a total utilitarian does is bringing happy people into existence who would not have existed otherwise. This means I am not too fussed about humanity’s failure to become much bigger and spread to the stars. While creating happy people is valuable, I view it as much less valuable than making sure people are not in misery. Therefore I am not extremely concerned about the lost potential from extinction risks.
Also, I expect many world improvements to peter out before they become negative. But I am worried that some will not. For example, I think increased hedonism and individualism have both been good forces, but if overdone I would consider them to make the world worse, and it seems to me we are either almost there or already there.
On a related note, while this is not an argument which deters me from longtermism, the fact that some longtermists look forward to futures I consider to be worthless (e.g. the hedonium shockwave) puts me off. Culturally, many longtermists seem to favour more hedonism, individualism and techno-utopianism than I would like.
I am unconvinced that people can reliably have a positive impact that reaches further than 100 years into the future, give or take a factor of 3. But there is one important exception: if we have the ability to prevent or shape a “lock-in” scenario within this timeframe. By lock-in I mean anything which humanity can never escape from. Extinction is an obvious example; permanent civilisational collapse is another.
·forum.effectivealtruism.org·
Utilitronium - LessWrong
Utilitronium is relatively homogeneous matter optimized for maximum utility (as computronium is optimized for maximum computing power). For a paperclip maximizer, utilitronium is paperclips. For more complex values, no homogeneous organization of matter will have optimal utility. A utilitronium shockwave is a process of converting all matter in the universe into utilitronium as quickly as possible, which would look like a shockwave of utilitronium spreading outwards from the point of origin, presumably at nearly the speed of light. See also: Utility function; Hedon, Utils; Paperclip maximizer, Complexity of value; Hedonium, the hedonistic utilitarian version of utilitronium.
·lesswrong.com·