The Only Reason to Explore Space

Claude summary: This article argues that the only enduring justification for space exploration is its potential to fundamentally transform human civilization and our understanding of ourselves. The author traces the history of space exploration, from the mystical beliefs of early rocket pioneers to the geopolitical motivations of the Space Race, highlighting how current economic, scientific, and military rationales fall short of sustaining long-term commitment. The author contends that achieving interstellar civilization will require unprecedented organizational efforts and societal commitment, likely necessitating institutions akin to governments or religions. Ultimately, the piece suggests that only a society that embraces the pursuit of interstellar civilization as its central legitimating project may succeed in this monumental endeavor, framing space exploration not as an inevitable outcome of progress, but as a deliberate choice to follow a "golden path to a destiny among the stars."

·palladiummag.com·
The Only Reason to Explore Space
Dario Amodei — Machines of Loving Grace
I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.
I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides.
the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.
Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires.
The five categories I am most excited about are: biology and physical health; neuroscience and mental health; economic development and poverty; peace and governance; and work and meaning.
We could summarize this as a “country of geniuses in a datacenter”.
you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.
I believe that in the AI age, we should be talking about the marginal returns to intelligence7, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.
in science many experiments are often needed in sequence, each learning from or building on the last. All of this means that the speed at which a major project—for example developing a cancer cure—can be completed may have an irreducible minimum that cannot be decreased further even as intelligence continues to increase.
Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.
Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks). Many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people’s willingness to change their habits, or the behavior of governments. Examples of advances that work well in a technical sense, but whose impact has been substantially reduced by regulations or misplaced fears, include nuclear power, supersonic flight, and even elevators
Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute)10. The key question is how fast it all happens and in what order.
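A toy way to see the "marginal returns to intelligence" point, and why the bottleneck framing above matters: treat project progress as gated by whichever complementary factor is scarcest. This is only a sketch; the function and numbers below are illustrative assumptions, not anything from the essay.

```python
# Toy sketch (assumed for illustration): progress is limited by the scarcest
# complementary input, so extra intelligence stops helping once experiments,
# data, or regulatory capacity become the binding constraint.

def project_rate(intelligence, experiment_throughput, data, regulatory_capacity):
    """A 'weakest link' production function: output is gated by the scarcest input."""
    return min(intelligence, experiment_throughput, data, regulatory_capacity)

baseline = project_rate(intelligence=1.0, experiment_throughput=2.0, data=3.0, regulatory_capacity=2.5)
smarter = project_rate(intelligence=100.0, experiment_throughput=2.0, data=3.0, regulatory_capacity=2.5)
print(baseline, smarter)  # 1.0 -> 2.0: a 100x gain in intelligence buys only 2x progress here
```

In this framing, "routing around" a bottleneck means raising one of the other inputs (faster experiments, more data, looser constraints) rather than adding still more intelligence.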
I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.
CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the ’80s, but it took another 25 years for people to realize it could be repurposed for general gene editing. Discoveries are also often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity.
there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example). The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward.
·darioamodei.com·
Dario Amodei — Machines of Loving Grace
Bernard Stiegler’s philosophy on how technology shapes our world | Aeon Essays
technics – the making and use of technology, in the broadest sense – is what makes us human. Our unique way of existing in the world, as distinct from other species, is defined by the experiences and knowledge our tools make possible
The essence of technology, then, is not found in a device, such as the one you are using to read this essay. It is an open-ended creative process, a relationship with our tools and the world.
the more ubiquitous that digital technologies become in our lives, the easier it is to forget that these tools are social products that have been constructed by our fellow humans.
By forgetting, we lose our all-important capacity to imagine alternative ways of living. The future appears limited, even predetermined, by new technology.
·aeon.co·
Bernard Stiegler’s philosophy on how technology shapes our world | Aeon Essays
How Perplexity builds product
inside look at how Perplexity builds product—which to me feels like what the future of product development will look like for many companies:
AI-first: They’ve been asking AI questions about every step of the company-building process, including “How do I launch a product?” Employees are encouraged to ask AI before bothering colleagues.
Organized like slime mold: They optimize for minimizing coordination costs by parallelizing as much of each project as possible.
Small teams: Their typical team is two to three people. Their AI-generated (highly rated) podcast was built and is run by just one person.
Few managers: They hire self-driven ICs and actively avoid hiring people who are strongest at guiding other people’s work.
A prediction for the future: Johnny said, “If I had to guess, technical PMs or engineers with product taste will become the most valuable people at a company over time.”
Typical projects we work on only have one or two people on them. The hardest projects have three or four people, max. For example, our podcast is built by one person end to end. He’s a brand designer, but he does audio engineering and he’s doing all kinds of research to figure out how to build the most interactive and interesting podcast. I don’t think a PM has stepped into that process at any point.
We leverage product management most when there’s a really difficult decision that branches into many directions, and for more involved projects.
The hardest, and most important, part of the PM’s job is having taste around use cases. With AI, there are way too many possible use cases that you could work on. So the PM has to step in and make a branching qualitative decision based on the data, user research, and so on.
a big problem with AI is how you prioritize between more productivity-based use cases and the engaging chatbot-type use cases.
we look foremost for flexibility and initiative. The ability to build constructively in a limited-resource environment (potentially having to wear several hats) is the most important to us.
We look for strong ICs with clear quantitative impacts on users rather than within their company. If I see the terms “Agile expert” or “scrum master” in the resume, it’s probably not going to be a great fit.
My goal is to structure teams around minimizing “coordination headwind,” as described by Alex Komoroske in this deck on seeing organizations as slime mold. The rough idea is that coordination costs (caused by uncertainty and disagreements) increase with scale, and adding managers doesn’t improve things. People’s incentives become misaligned. People tend to lie to their manager, who lies to their manager. And if you want to talk to someone in another part of the org, you have to go up two levels and down two levels, asking everyone along the way.
Instead, what you want to do is keep the overall goals aligned, and parallelize projects that point toward this goal by sharing reusable guides and processes.
Perplexity has existed for less than two years, and things are changing so quickly in AI that it’s hard to commit beyond that. We create quarterly plans. Within quarters, we try to keep plans stable within a product roadmap. The roadmap has a few large projects that everyone is aware of, along with small tasks that we shift around as priorities change.
Each week we have a kickoff meeting where everyone sets high-level expectations for their week. We have a culture of setting 75% weekly goals: everyone identifies their top priority for the week and tries to hit 75% of that by the end of the week. Just a few bullet points to make sure priorities are clear during the week.
All objectives are measurable, either in terms of quantifiable thresholds or Boolean “was X completed or not.” Our objectives are very aggressive, and often at the end of the quarter we only end up completing 70% in one direction or another. The remaining 30% helps identify gaps in prioritization and staffing.
At the beginning of each project, there is a quick kickoff for alignment, and afterward, iteration occurs in an asynchronous fashion, without constraints or review processes. When individuals feel ready for feedback on designs, implementation, or final product, they share it in Slack, and other members of the team give honest and constructive feedback. Iteration happens organically as needed, and the product doesn’t get launched until it gains internal traction via dogfooding.
all teams share common top-level metrics while A/B testing within their layer of the stack. Because the product can shift so quickly, we want to avoid political issues where anyone’s identity is bound to any given component of the product.
We’ve found that when teams don’t have a PM, team members take on the PM responsibilities, like adjusting scope, making user-facing decisions, and trusting their own taste.
What’s your primary tool for task management and bug tracking?
Linear. For AI products, the line between tasks, bugs, and projects becomes blurred, but we’ve found many concepts in Linear, like Leads, Triage, Sizing, etc., to be extremely important. A favorite feature of mine is auto-archiving—if a task hasn’t been mentioned in a while, chances are it’s not actually important.
The primary tool we use to store sources of truth like roadmaps and milestone planning is Notion. We use Notion during development for design docs and RFCs, and afterward for documentation, postmortems, and historical records. Putting thoughts on paper (documenting chain-of-thought) leads to much clearer decision-making, and makes it easier to align async and avoid meetings.
Unwrap.ai is a tool we’ve also recently introduced to consolidate, document, and quantify qualitative feedback. Because of the nature of AI, many issues are not always deterministic enough to classify as bugs. Unwrap groups individual pieces of feedback into more concrete themes and areas of improvement.
High-level objectives and directions come top-down, but a large number of new ideas are floated bottom-up. We believe strongly that engineering and design should have ownership over ideas and details, especially for an AI product where the constraints are not known until ideas are turned into code and mock-ups.
Big challenges today revolve around scaling from our current size to the next level, both on the hiring side and in execution and planning. We don’t want to lose our core identity of working in a very flat and collaborative environment. Even small decisions, like how to organize Slack and Linear, can be tough to scale. Trying to stay transparent and scale the number of channels and projects without causing notifications to explode is something we’re currently trying to figure out.
·lennysnewsletter.com·
How Perplexity builds product
Strong and weak technologies - cdixon
Strong technologies capture the imaginations of technology enthusiasts. That is why many important technologies start out as weekend hobbies. Enthusiasts vote with their time, and, unlike most of the business world, have long-term horizons. They build from first principles, making full use of the available resources to design technologies as they ought to exist.
·cdixon.org·
Strong and weak technologies - cdixon
A bicycle for the senses
We can take nature’s superpowers and expand them across many more vectors that are interesting to humans:
Across scale — far and near, binoculars, zoom, telescope, microscope
Across wavelength — UV, IR, heatmaps, nightvision, wifi, magnetic fields, electrical and water currents
Across time — view historical imagery, architectural, terrain, geological, and climate changes
Across culture — experience the relevance of a place in books, movies, photography, paintings, and language
Across space — travel immersively to other locations for tourism, business, and personal connections
Across perspective — upside down, inside out, around corners, top down, wider, narrower, out of body
Across interpretation — alter the visual and artistic interpretation of your environment, color-shifting, saturation, contrast, sharpness
Headset displays connect sensory extensions directly to your vision. Equipped with sensors that perceive beyond human capabilities, and access to the internet, they can provide information about your surroundings wherever you are. Until now, visual augmentation has been constrained by the tiny display on our phone. By virtue of being integrated with your eyesight, headsets can open up new kinds of apps that feel more natural. Every app is a superpower. Sensory computing opens up new superpowers that we can borrow from nature. Animals, plants and other organisms can sense things that humans can’t.
The first mass-market bicycle for the senses was Apple’s AirPods. Its noise cancellation and transparency mode replace and enhance your hearing. Earbuds are turning into ear computers that will become more easily programmable. This can enable many more kinds of hearing. For example, instantaneous translation may soon be a reality
For the past seven decades, computers have been designed to enhance what your brain can do — think and remember. New kinds of computers will enhance what your senses can do — see, hear, touch, smell, taste. The term spatial computing is emerging to encompass both augmented and virtual reality. I believe we are exploring an even broader paradigm: sensory computing. The phone was a keyhole for peering into this world, and now we’re opening the door.
What happens when you put on a headset and open the “Math” app? How could seeing the world through math help you understand both better?
Advances in haptics may open up new kinds of tactile sensations. A kind of second skin, or softwear, if you will. Consider that Apple shipped a feature to help you find lost items that vibrates more strongly as you get closer. What other kinds of data could be translated into haptic feedback?
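As a concrete reading of the Find My-style feature mentioned above, here is a minimal sketch of mapping a distance reading to haptic intensity; the function, range, and scaling are assumptions for illustration, not Apple’s actual API.

```python
# Illustrative sketch (assumed, not Apple's implementation): vibration grows
# stronger as the tracked item gets closer, clamped to the 0..1 motor range.

def haptic_intensity(distance_m: float, max_range_m: float = 10.0) -> float:
    """Return 0.0 (out of range) .. 1.0 (right on top of the item)."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - (distance_m / max_range_m)

for d in (12.0, 8.0, 4.0, 0.5):
    print(f"{d:>4} m -> intensity {haptic_intensity(d):.2f}")  # 0.00, 0.20, 0.60, 0.95
```

The same scalar-to-vibration mapping could in principle be fed by any data stream — heart rate, air quality, a stock price — which is exactly the “what other kinds of data” question the passage raises.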
It may sound far-fetched, but converting olfactory patterns into visual patterns could open up some interesting applications. Perhaps a new kind of cooking experience? Or new medical applications that convert imperceptible scents into visible patterns?
·stephango.com·
A bicycle for the senses
Elon Musk’s Shadow Rule
There is little precedent for a civilian’s becoming the arbiter of a war between nations in such a granular way, or for the degree of dependency that the U.S. now has on Musk in a variety of fields, from the future of energy and transportation to the exploration of space. SpaceX is currently the sole means by which NASA transports crew from U.S. soil into space, a situation that will persist for at least another year. The government’s plan to move the auto industry toward electric cars requires increasing access to charging stations along America’s highways. But this rests on the actions of another Musk enterprise, Tesla. The automaker has seeded so much of the country with its proprietary charging stations that the Biden Administration relaxed an early push for a universal charging standard disliked by Musk. His stations are eligible for billions of dollars in subsidies, so long as Tesla makes them compatible with the other charging standard.
In the past twenty years, against a backdrop of crumbling infrastructure and declining trust in institutions, Musk has sought out business opportunities in crucial areas where, after decades of privatization, the state has receded. The government is now reliant on him, but struggles to respond to his risk-taking, brinkmanship, and caprice
Current and former officials from NASA, the Department of Defense, the Department of Transportation, the Federal Aviation Administration, and the Occupational Safety and Health Administration told me that Musk’s influence had become inescapable in their work, and several of them said that they now treat him like a sort of unelected official
Sam Altman, the C.E.O. of OpenAI, with whom Musk has both worked and sparred, told me, “Elon desperately wants the world to be saved. But only if he can be the one to save it.”
“He had grown up in the male-dominated culture of South Africa,” Justine wrote. “The will to compete and dominate that made him so successful in business did not magically shut off when he came home.”
There are competitors in the field, including Jeff Bezos’s Blue Origin and Richard Branson’s Virgin Galactic, but none yet rival SpaceX. The new space race has the potential to shape the global balance of power. Satellites enable the navigation of drones and missiles and generate imagery used for intelligence, and they are mostly under the control of private companies.
A number of officials suggested to me that, despite the tensions related to the company, it has made government bureaucracies nimbler. “When SpaceX and NASA work together, we work closer to optimal speed,” Kenneth Bowersox, NASA’s associate administrator for space operations, told me. Still, some figures in the aerospace world, even ones who think that Musk’s rockets are basically safe, fear that concentrating so much power in private companies, with so few restraints, invites tragedy.
Tesla for a time included in its vehicles the ability to replace the humming noises that electric cars must emit—since their engines make little sound—with goat bleats, farting, or a sound of the owner’s choice. “We’re, like, ‘No, that’s not compliant with the regulations, don’t be stupid,’ ” Cliff told me. Tesla argued with regulators for more than a year, according to an N.H.T.S.A. safety report
Musk’s personal wealth dwarfs the entire budget of OSHA, which is tasked with monitoring the conditions in his workplaces. “You add on the fact that he considers himself to be a master of the universe and these rules just don’t apply to people like him,” Jordan Barab, a former Deputy Assistant Secretary of Labor at OSHA, told me. “There’s a lot of underreporting in industry in general. And Elon Musk kind of seems to raise that to an art form.”
Some people who know Musk well still struggle to make sense of his political shift. “There was nothing political about him ever,” a close associate told me. “I’ve been around him for a long time, and had lots of deep conversations with the man, at all hours of the day—never heard a fucking word about this.”
the cuts that Musk had instituted quickly took a toll on the company. Employees had been informed of their termination via brusque, impersonal e-mails—Musk is now being sued for hundreds of millions of dollars by employees who say that they are owed additional severance pay—and the remaining staffers were abruptly ordered to return to work in person. Twitter’s business model was also in question, since Musk had alienated advertisers and invited a flood of fake accounts by reinventing the platform’s verification process
Musk’s trolling has increasingly taken on the vernacular of hard-right social media, in which grooming, pedophilia, and human trafficking are associated with liberalism
It is difficult to say whether Musk’s interest in A.I. is driven by scientific wonder and altruism or by a desire to dominate a new and potentially powerful industry.
·newyorker.com·
Elon Musk’s Shadow Rule
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence to the creation and post-production of images, it seems questionable whether the resulting visualisations can still be considered ‘photographs’ in a classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even amateurs with only basic knowledge of the craft could understand themselves as the authors of their images. We identify a legitimation crisis in the current usage of the term. This paper is an invitation to consider Synthography as a term for a new genre of image production based on AI, observing its current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
The VR winter — Benedict Evans
When I started my career 3G was the hot topic, and every investor kept asking ‘what’s the killer app for 3G?’ It turned out that the killer app for having the internet in your pocket was, well, having the internet in your pocket. But with each of those, we knew what to build next, and with VR we don’t. That tells me that VR has a place in the future. It just doesn’t tell me what kind of place.
The successor to the smartphone will be something that doesn’t just merge AR and VR but makes the distinction irrelevant - something that you can wear all day every day, and that can seamlessly both occlude and supplement the real world and generate indistinguishable volumetric space.
·ben-evans.com·
The VR winter — Benedict Evans
Isn’t That Spatial? | No Mercy / No Malice
Betting against a first-generation Apple product is a bad trade — from infamous dismissals of the iPhone to disappointment with the original iPad. In fact, this is a reflection of Apple’s strategy: Start with a product that’s more an elegant proof-of-concept than a prime-time hit; rely on early adopters to provide enough runway for its engineers to keep iterating; and trust in unmatched capital, talent, brand equity, and staying power to morph a first-gen toy into a third-gen triumph
We are a long way from making three screens, a glass shield, and an array of supporting hardware light enough to wear for an extended period. Reviewers were (purposefully) allowed to wear the Vision Pro for less than half an hour, and nearly every one said comfort was declining even then. Avatar: The Way of Water is 3 hours and 12 minutes.
Meta’s singular strategic objective is to escape second-tier status and, like Apple and Alphabet, control its distribution. And its path to independence runs through Apple Park. Zuckerberg is spending the GDP of a small country to invent a new world, the metaverse, where Apple doesn’t own the roads or power stations. Vision Pro is insurance against the metaverse evolving into anything more than an incel panic room.
The only product category where VR makes a difference is good VR games. Price is not the limiting factor; the quality of the VR experience is. Beat Saber is good and fun and physical exercise. Half-Life: Alyx is amazing. VR completely supercharges horror games and scary stalking shooters. Want to fear for your life and get PTSD in the comfort of your home? You can do it. Games can connect people and provide physical exercise. If the 3rd iteration of Vision Pro is good for 2 hours of playing for $2,000, Apple will kill the console market. PlayStations no more. Apple is not a gaming company, but if Vision Pro becomes better and slightly cheaper, Apple becomes a gaming company against its will.
·profgalloway.com·
Isn’t That Spatial? | No Mercy / No Malice
Apple Vision
Apple Vision is technically a VR device that experientially is an AR device, and it’s one of those solutions that, once you have experienced it, is so obviously the correct implementation that it’s hard to believe there was ever any other possible approach to the general concept of computerized glasses.
the Vision is taking that captured image, processing it, and displaying it in front of your eyes in around 4 milliseconds.
Real-time operating systems are used in embedded systems for applications with critical functionality, like a car, for example: it’s ok to have an infotainment system that sometimes hangs or even crashes, in exchange for more flexibility and capability, but the software that actually operates the vehicle has to be reliable and unfailingly fast. This is, in broad strokes, one way to think about how visionOS works: while the user experience is a time-sharing operating system that is indeed a variation of iOS, and runs on the M2 chip, there is a subsystem that primarily operates the R1 chip that is real-time; this means that even if visionOS hangs or crashes, the outside world is still rendered under that magic 12 milliseconds.
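To make the time-sharing vs. real-time split described above more concrete, here is a minimal sketch of the idea: a passthrough loop with a fixed per-frame budget keeps rendering the outside world even if the app layer stalls. The structure, timing, and names are assumptions for illustration — on the actual device this isolation comes from a dedicated R1 subsystem, not a thread — and this is not visionOS code.

```python
# Illustrative sketch of the architectural idea only (not actual visionOS code):
# a passthrough loop with a hard per-frame budget runs independently of the
# app/UI layer, so a hung app never blocks rendering of the outside world.
import threading
import time

FRAME_BUDGET_S = 0.012  # the ~12 ms target mentioned above

def passthrough_loop(stop: threading.Event) -> None:
    while not stop.is_set():
        start = time.monotonic()
        # capture cameras -> reproject for head pose -> present to displays
        # (each stage must fit within the fixed frame budget)
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, FRAME_BUDGET_S - elapsed))  # hold the frame cadence

def app_layer() -> None:
    time.sleep(5)  # simulate a hung or slow app; passthrough keeps running regardless

stop = threading.Event()
threading.Thread(target=passthrough_loop, args=(stop,), daemon=True).start()
app_layer()
stop.set()
```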
I’ll be honest: what this looked like to me was a divorced dad, alone at home with his Vision Pro, perhaps because his wife was irritated at the extent to which he got lost in his own virtual experience.
·stratechery.com·
Apple Vision
Interview with Kevin Kelly, editor, author, and futurist
To write about something hard to explain, write a detailed letter to a friend about why it is so hard to explain, and then remove the initial “Dear Friend” part and you’ll have a great first draft.
To be interesting just tell your story with uncommon honesty.
Most articles and stories are improved significantly if you delete the first page of the manuscript draft. Immediately start with the action.
Each technology cannot stand alone. It takes a saw to make a hammer and it takes a hammer to make a saw. And it takes both tools to make a computer, and in today’s factory it takes a computer to make saws and hammers. This co-dependency creates an ecosystem of highly interdependent technologies that support each other
On the other hand, I see this technium as an extension of the same self-organizing system responsible for the evolution of life on this planet. The technium is evolution accelerated. A lot of the same dynamics that propel evolution are also at work in the technium
Our technologies are ultimately not contrary to life, but are in fact an extension of life, enabling it to develop yet more options and possibilities at a faster rate. Increasing options and possibilities is also known as progress, so in the end, what the technium brings us humans is progress.
Libraries, journals, communication networks, and the accumulation of other technologies help create the next idea, beyond the efforts of a single individual
We also see near-identical parallel inventions of tricky contraptions like slingshots and blowguns. However, because it was so ancient, we don’t have a lot of data for this behavior. What we would really like is to have a N=100 study of hundreds of other technological civilizations in our galaxy. From that analysis we’d be able to measure, outline, and predict the development of technologies. That is a key reason to seek extraterrestrial life.
When information is processed in a computer, it is being ceaselessly replicated and re-copied while it computes. Information wants to be copied. Therefore, when certain people get upset about the ubiquitous copying happening in the technium, their misguided impulse is to stop the copies. They want to stamp out rampant copying in the name of "copy protection,” whether it be music, science journals, or art for AI training. But the emergent behavior of the technium is to copy promiscuously. To ban, outlaw, or impede the superconductivity of copies is to work against the grain of the system.
the worry of some environmentalists is that technology can only contribute more to the problem and none to the solution. They believe that tech is incapable of being green because it is the source of relentless consumerism at the expense of diminishing nature, and that our technological civilization requires endless growth to keep the system going. I disagree.
Over time evolution arranges the same number of atoms in more complex patterns to yield more complex organisms, for instance producing an agile lemur the same size and weight as a jellyfish. We seek the same shift in the technium. Standard economic growth aims to get consumers to drink more wine. Type 2 growth aims to get them to not drink more wine, but better wine.
[[An optimistic view of capitalism]]
to measure (and thus increase) productivity we count up the number of refrigerators manufactured and sold each year. More is generally better. But this counting tends to overlook the fact that refrigerators have gotten better over time. In addition to making cold, they now dispense ice cubes, or self-defrost, and use less energy. And they may cost less in real dollars. This betterment is truly real value, but is not accounted for in the “more” column
it is imperative that we figure out how to shift more of our type 1 growth to type 2 growth, because we won’t be able to keep expanding the usual “more.”  We will have to perfect a system that can keep improving and getting better with fewer customers each year, smaller markets and audiences, and fewer workers. That is a huge shift from the past few centuries where every year there has been more of everything.
“degrowthers” are correct in that there are limits to bulk growth — and running out of humans may be one of them. But they don’t seem to understand that evolutionary growth, which includes the expansion of intangibles such as freedom, wisdom, and complexity, doesn’t have similar limits. We can always figure out a way to improve things, even without using more stuff — especially without using more stuff!
the technium is not inherently contrary to nature; it is inherently derived from evolution and thus inherently capable of being compatible with nature. We can choose to create versions of the technium that are aligned with the natural world.
Social media can transmit false information at great range at great speed. But compared to what? Social media's influence on elections from transmitting false information was far less than the influence of the existing media of cable news and talk radio, where false information was rampant. Did anyone seriously suggest we should regulate what cable news hosts or call-in radio listeners could say? Bullying middle schoolers on social media? Compared to what? Does it even register when compared to the bullying done in school hallways? Radicalization on YouTube? Compared to talk radio? To googling?
Kids are inherently obsessive about new things, and can become deeply infatuated with stuff that they outgrow and abandon a few years later. So the fact they may be infatuated with social media right now should not in itself be alarming. Yes, we should indeed understand how it affects children and how to enhance its benefits, but it is dangerous to construct national policies for a technology based on the behavior of children using it.
Since it is the same technology, inspecting how it is used in other parts of the world would help us isolate what is being caused by the technology and what is being caused by the peculiar culture of the US.
You don’t notice what difference you make because of the platform's humongous billions-scale. In aggregate your choices make a difference which direction it — or any technology — goes. People prefer to watch things on demand, so little by little, we have steered the technology to let us binge watch. Streaming happened without much regulation or even enthusiasm of the media companies. Street usage is the fastest and most direct way to steer tech.
Vibrators instead of the cacophony of ringing bells on cell phones is one example of a marketplace technological solution
The long-term effects of AI will affect our society to a greater degree than electricity and fire, but its full effects will take centuries to play out. That means that we’ll be arguing, discussing, and wrangling with the changes brought about by AI for the next 10 decades. Because AI operates so close to our own inner self and identity, we are headed into a century-long identity crisis.
What we tend to call AI, will not be considered AI years from now
What we are discovering is that many of the cognitive tasks we have been doing as humans are dumber than they seem. Playing chess was more mechanical than we thought. Playing the game Go is more mechanical than we thought. Painting a picture and being creative was more mechanical than we thought. And even writing a paragraph with words turns out to be more mechanical than we thought
out of the perhaps dozen of cognitive modes operating in our minds, we have managed to synthesize two of them: perception and pattern matching. Everything we’ve seen so far in AI is because we can produce those two modes. We have not made any real progress in synthesizing symbolic logic and deductive reasoning and other modes of thinking
we are slowly realizing we still have NO IDEA how our own intelligences really work, or even what intelligence is. A major byproduct of AI is that it will tell us more about our minds than centuries of psychology and neuroscience have
There is no monolithic AI. Instead there will be thousands of species of AIs, each engineered to optimize different ways of thinking, doing different jobs
Now from the get-go we assume there will be significant costs and harms of anything new, which was not the norm in my parents' generation
The astronomical volume of money and greed flowing through this frontier overwhelmed and disguised whatever value it may have had
The sweet elegance of blockchain enables decentralization, which is a perpetually powerful force. This tech just has to be matched up to the tasks — currently not visible — where it is worth paying the huge cost that decentralization entails. That is a big ask, but taking the long-view, this moment may not be a failure
My generic career advice for young people is that if at all possible, you should aim to work on something that no one has a word for. Spend your energies where we don’t have a name for what you are doing, where it takes a while to explain to your mother what it is you do. When you are ahead of language, that means you are in a spot where it is more likely you are working on things that only you can do. It also means you won’t have much competition.
Your 20s are the perfect time to do a few things that are unusual, weird, bold, risky, unexplainable, crazy, unprofitable, and looks nothing like “success.” The less this time looks like success, the better it will be as a foundation
·noahpinion.substack.com·
Interview with Kevin Kelly, editor, author, and futurist
What comes after Zoom? — Benedict Evans
If you’d looked at Skype in 2004 and argued that it would own ‘voice’ on ‘computers’, that would not have been the right mental model. I think this is where we’ll go with video - there will continue to be hard engineering, but video itself will be a commodity and the question will be how you wrap it. There will be video in everything, just as there is voice in everything, and there will be a great deal of proliferation into industry verticals on one hand and into unbundling pieces of the tech stack on the other. On one hand video in healthcare, education or insurance is about the workflow, the data model and the route to market, and lots more interesting companies will be created, and on the other hand Slack is deploying video on top of Amazon’s building blocks, and lots of interesting companies will be created here as well. There’s lots of bundling and unbundling coming, as always. Everything will be ‘video’ and then it will disappear inside.
the calendar is often the aggregation layer - you don’t need to know what service the next call uses, just when it is. Skype needed both an account and an app, so had a network effect (and lost even so). WhatsApp uses the telephone numbering system as an address and so piggybacked on your phone’s contact list - effectively, it used the PSTN as the social graph rather than having to build its own. But a group video call is a URL and a calendar invitation - it has no graph of its own.
one of the ways that this all feels very 1.0 is the rather artificial distinction between calls that are based on a ‘room’, where the addressing system is a URL and anyone can join without an account, and calls that are based on ‘people’, where everyone joining needs their own address, whether it’s a phone number, an account or something else. Hence Google has both Meet (URLs) and Duo (people) - Apple’s FaceTime is only people (no URLs).
When Snap launched, there were already infinite ways to share images, but Snap asked a bunch of weird questions that no-one had really asked before. Why do you have to press the camera button - why doesn’t the app open in the camera? Why are you saving your messages - isn’t that like saving all your phone calls? Fundamentally, Snap asked ‘why, exactly, are you sending a picture? What is the underlying social purpose?’ You’re not really sending someone a sheet of pixels - you’re communicating.
That’s the question Zoom and all its competitors haven’t really asked. Zoom has done a good job of asking why it was hard to get into a call, but it hasn’t asked why you’re in the call in the first place. Why, exactly, are you sending someone a video stream and watching another one? Why am I looking at a grid of little thumbnails of faces? Is that the purpose of this moment? What is the ‘mute’ button for - background noise, or so I can talk to someone else, or is it so I can turn it off to raise my hand? What social purpose is ‘mute’ actually serving? What is screen-sharing for? What other questions could one ask? And so if Zoom is the Dropbox or Skype of video, we are waiting for the Snap, Clubhouse and Yo.
·ben-evans.com·
What comes after Zoom? — Benedict Evans