Found 104 bookmarks
Panic Among the Streamers
Panic Among the Streamers
Netflix could buy 10 top-quality screenplays per year with the cash they’ll spend on that one job. They must have big plans for AI. There are also a half dozen AI job openings at Disney. And the tech-based streamers (Apple, Amazon) have already made big investments in AI. Sony launched an AI business unit in April 2020—in order to “enhance human imagination and creativity, particularly in the realm of entertainment.”
When Spotify launched on the stock exchange in 2018, it was losing around $30 million per month. Now it’s much larger, and is losing money at the pace of more than $100 million per month.
But the real problem at Spotify isn’t just convincing people to pay more. It runs much deeper. Spotify finds itself in the awkward position of asking people to pay more for a lousy interface that degrades the entire user experience.
Boredom is built into the platform, because they lose money if you get too excited about music—you’re like the person at the all-you-can-eat buffet who goes back for a third helping. They make the most money from indifferent, lukewarm fans, and they created their interface with them in mind. In other words, Spotify’s highest aspiration is to be the Applebee’s of music.
They need to prepare for a possible royalty war against record labels and musicians—yes, that could actually happen—and they do that by creating a zombie world of brain dead listeners who don’t even know what artist they’re hearing. I know that sounds extreme, but spend some time on the platform and draw your own conclusions.
·honest-broker.com·
Panic Among the Streamers
Hollywood on Strike
Hollywood on Strike
The broader issue is that the video industry finally seems to be facing what happened to the print and music industries before it: the Internet comes bearing gifts like infinite capacity and free distribution, but those gifts are a poisoned chalice for industries predicated on scarcity. When anyone could publish text, most text-based businesses went from massive profitability to terminal decline; when anyone could distribute music, the music industry could only be saved by tech companies like Spotify helping them sell convenience in place of plastic discs.
thanks to COVID a lot of people fell out of the habit of going to the movie theater, and it appears around 25% of the audience permanently found something better to do with their time; that same reality applies to TV. Just as newspapers once thought the Internet was a boon because it increased their addressable market, only to find out that it also drastically increased competition for readers’ attention, Hollywood has to face the reality that the ability to make far more shows extends not only to studios but also to literally anyone.
·stratechery.com·
Hollywood on Strike
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence to the creation and post-production of images, it seems questionable if the resulting visualisations can still be considered ‘photographs’ in a classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even amateurs with only basic knowledge of the craft could understand themselves as authors of their images. We state a legitimation crisis for the current usage of the term. This paper is an invitation to consider Synthography as a term for a new genre for image production based on AI, observing the current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
What Is AI Doing To Art? | NOEMA
What Is AI Doing To Art? | NOEMA
The proliferation of AI-generated images in online environments won’t eradicate human art wholesale, but it does represent a reshuffling of the market incentives that help creative economies flourish. Like the college essay, another genre of human creativity threatened by AI usurpation, creative “products” might become more about process than about art as a commodity.
Are artists using computer software on iPads to make seemingly hand-painted images engaged in a less creative process than those who produce the image by hand? We can certainly judge one as more meritorious than the other but claiming that one is more original is harder to defend.
An understanding of the technology as one that separates human from machine into distinct categories leaves little room for the messier ways we often fit together with our tools. AI-generated images will have a big impact on copyright law, but the cultural backlash against the “computers making art” overlooks the ways computation has already been incorporated into the arts.
The problem with debates around AI-generated images that demonize the tool is that the displacement of human-made art doesn’t have to be an inevitability. Markets can be adjusted to mitigate unemployment in changing economic landscapes. As legal scholar Ewan McGaughey points out, 42% of English workers were redundant after WWII — and yet the U.K. managed to maintain full employment.
Contemporary critics claim that prompt engineering and synthography aren’t emergent professions but euphemisms necessary to equate AI-generated artwork with the work of human artists. As with the development of photography as a medium, today’s debates about AI often overlook how conceptions of human creativity are themselves shaped by commercialization and labor.
Others looking to elevate AI art’s status alongside other forms of digital art are opting for an even loftier rebrand: “synthography.” This categorization suggests a process more complex than the mechanical operation of a picture-making tool, invoking the active synthesis of disparate aesthetic elements. Like Fox Talbot and his contemporaries in the nineteenth century, “synthographers” maintain that AI art simply automates the most time-consuming parts of drawing and painting, freeing up human cognition for higher-order creativity.
Separating human from camera was a necessary part of preserving the myth of the camera as an impartial form of vision. To incorporate photography into an economic landscape of creativity, however, human agency needed to be ascribed to all parts of the process.
Consciously or not, proponents of AI-generated images stamp the tool with rhetoric that mirrors the democratic aspirations of the twenty-first century.
Stability AI took a similar tack, billing itself as “AI by the people, for the people,” despite turning Stable Diffusion, their text-to-image model, into a profitable asset. That the program is easy to use is another selling point. Would-be digital artists no longer need to use expensive specialized software to produce visually interesting material.
Meanwhile, communities of digital artists and their supporters claim that the reason AI-generated images are compelling at all is because they were trained with data sets that contained copyrighted material. They reject the claim that AI-generated art produces anything original and suggest it instead be thought of as a form of “twenty-first century collage.”
Erasing human influence from the photographic process was good for underscoring arguments about objectivity, but it complicated commercial viability. Ownership would need to be determined if photographs were to circulate as a new form of property. Was the true author of a photograph the camera or its human operator?
By reframing photographs as les dessins photographiques, or photographic drawings, the plaintiffs successfully established that the development of photographs in a darkroom was part of an operator’s creative process. In addition to setting up a shot, the photographer needed to coax the image from the camera’s film in a process resembling the creative output of drawing. The camera was a pencil capable of drawing with light and photosensitive surfaces, but held and directed by a human author.
Establishing photography’s dual function as both artwork and document may not have been philosophically straightforward, but it staved off a surge of harder questions.
Human intervention in the photographic process still appeared to happen only on the ends — in setup and then development — instead of continuously throughout the image-making process.
·noemamag.com·
What Is AI Doing To Art? | NOEMA
The Gap
The Gap
Designers move from idea to a wireframe, a prototype, a logo, or even just a drawing. Developers move from a problem or feature to a coded solution that is solved and released. Both are creative, both are in aid of the end-user. The Design Engineer role is also creative and authors code, but systematically translates a design towards implementation in a structured way.

I have never worked anywhere where there wasn't someone trying to close the gap. This role is often filled accidentally, and companies are totally unaware of the need. Recruiters have never heard of it, and IT consultancies don't have the capability in their roster. We now name the role "Design Engineer" because the gap is widening, and the role has become too complex to not exist.
·linkedin.com·
The Gap
Vision Pro — Benedict Evans
Vision Pro — Benedict Evans
Meta, today, has roughly the right price and is working forward to the right device: Apple has started with the right device and will work back to the right price. Meta is trying to catalyse an ecosystem while we wait for the right hardware - Apple is trying to catalyse an ecosystem while we wait for the right price.
one of the things I wondered before the event was how Apple would show a 3D experience in 2D. Meta shows either screenshots from within the system (with the low visual quality inherent in the spec you can make and sell for $500) or shots of someone wearing the headset and grinning - neither are satisfactory. Apple shows the person in the room, with the virtual stuff as though it was really there, because it looks as though it is.
For Meta, the device places you in ‘the metaverse’ and there could be many experiences within that. For Apple, this device itself doesn’t take you anywhere - it’s a screen and there could be five different ‘metaverse’ apps. The iPhone was a piece of glass that could be anything - this is trying to be a piece of glass that can show anything.
A lot of what Apple shows is possibility and experiment - it could be this, this or that, just as when Apple launched the watch it suggested it as fitness, social or fashion, and it turned out to work best for fitness (and is now a huge business).
Mark Zuckerberg, speaking to a Meta all-hands after Apple’s event, made the perfectly reasonable point that Apple hasn’t shown much that no-one had thought of before - there’s no ‘magic’ invention. Everyone already knows we need better screens, eye-tracking and hand-tracking, in a thin and light device.
It’s worth remembering that Meta isn’t in this to make a games device, nor really to sell devices per se - rather, the thesis is that if VR is the next platform, Meta has to make sure it isn’t controlled by a platform owner who can screw them, as Apple did with IDFA in 2021.
On the other hand, the Vision Pro is an argument that current devices just aren’t good enough to break out of the enthusiast and gaming market, incremental improvement isn’t good enough either, and you need a step change in capability.
Apple’s privacy positioning, of course, has new strategic value now that it’s selling a device you wear that’s covered in cameras
the genesis of the current wave of VR was the realisation a decade ago that the VR concepts of the 1990s would work now, and with nothing more than off-the-shelf smartphone components and gaming PCs, plus a bit more work. But ‘a bit more work’ turned out to be thirty or forty billion dollars from Meta and God only knows how much more from Apple - something over $100bn combined, almost certainly.
So it might be that a wearable screen of any kind, no matter how good, is just a staging post - the summit of a foothill on the way to the top of Everest. Maybe the real Reality device is glasses, or contact lenses projecting onto your retina, or some kind of neural connection, all of which might be a decade or decades away again, and the piece of glass in our pocket remains the right device all the way through.
I think the price and the challenge of category creation are tightly connected. Apple has decided that the capabilities of the Vision Pro are the minimum viable product - that it just isn’t worth making or selling a device without a screen so good you can’t see the pixels, pass-through where you can’t see any lag, perfect eye-tracking and perfect hand-tracking. Of course the rest of the industry would like to do that, and will in due course, but Apple has decided you must do that.
For VR, better screens are merely better, but for AR Apple thinks this level of display system is a base below which you don’t have a product at all.
This reminds me a little of when Meta tried to make a phone, and then a Home Screen for a phone, and Mark Zuckerberg said “your phone should be about people.” I thought “no, this is a computer, and there are many apps, some of which are about people and some of which are not.” Indeed there’s also an echo of telco thinking: on a feature phone, ‘internet stuff’ was one or two icons on your portable telephone, but on the iPhone the entire telephone was just one icon on your computer. On a Vision Pro, the ‘Meta Metaverse’ is one app amongst many. You have many apps and panels, which could be 2D or 3D, or could be spaces.
·ben-evans.com·
Vision Pro — Benedict Evans
Isn’t That Spatial? | No Mercy / No Malice
Isn’t That Spatial? | No Mercy / No Malice
Betting against a first-generation Apple product is a bad trade — from infamous dismissals of the iPhone to disappointment with the original iPad. In fact, this is a reflection of Apple’s strategy: Start with a product that’s more an elegant proof-of-concept than a prime-time hit; rely on early adopters to provide enough runway for its engineers to keep iterating; and trust in unmatched capital, talent, brand equity, and staying power to morph a first-gen toy into a third-gen triumph
We are a long way from making three screens, a glass shield, and an array of supporting hardware light enough to wear for an extended period. Reviewers were (purposefully) allowed to wear the Vision Pro for less than half an hour, and nearly every one said comfort was declining even then. Avatar: The Way of Water is 3 hours and 12 minutes.
Meta’s singular strategic objective is to escape second-tier status and, like Apple and Alphabet, control its distribution. And its path to independence runs through Apple Park. Zuckerberg is spending the GDP of a small country to invent a new world, the metaverse, where Apple doesn’t own the roads or power stations. Vision Pro is insurance against the metaverse evolving into anything more than an incel panic room.
The only product category where VR makes a difference is good VR games. Price is not the limiting factor; the quality of the VR experience is. Beat Saber is good and fun and physical exercise. Half-Life: Alyx is amazing. VR completely supercharges horror games and scary stalking shooters. Want to fear for your life and get PTSD in the comfort of your home? You can do it. Games can connect people and provide physical exercise. If the third iteration of Vision Pro is good for two hours of playing at $2,000, Apple will kill the console market. PlayStations no more. Apple is not a gaming company, but if Vision Pro becomes better and slightly cheaper, Apple becomes a gaming company against its will.
·profgalloway.com·
Isn’t That Spatial? | No Mercy / No Malice
Apple Vision
Apple Vision
Apple Vision is technically a VR device that experientially is an AR device, and it’s one of those solutions that, once you have experienced it, is so obviously the correct implementation that it’s hard to believe there was ever any other possible approach to the general concept of computerized glasses.
the Vision is taking that captured image, processing it, and displaying it in front of your eyes in around 4 milliseconds.
Real-time operating systems are used in embedded systems for applications with critical functionality, like a car, for example: it’s ok to have an infotainment system that sometimes hangs or even crashes, in exchange for more flexibility and capability, but the software that actually operates the vehicle has to be reliable and unfailingly fast. This is, in broad strokes, one way to think about how visionOS works: while the user experience is a time-sharing operating system that is indeed a variation of iOS, and runs on the M2 chip, there is a subsystem that primarily operates the R1 chip that is real-time; this means that even if visionOS hangs or crashes, the outside world is still rendered under that magic 12 milliseconds.
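A toy sketch of that split, in Python (illustrative only; visionOS internals are not public and every name and number here is invented): a best-effort "app" thread stalls at random, while a separate compositor loop keeps hitting its frame budget, which is the property that keeps the outside world rendered even if the app layer hangs.

```python
# Toy simulation of a real-time path running beside a best-effort path.
# Not visionOS code: names and numbers are invented for illustration.
import random
import threading
import time

FRAME_BUDGET_S = 0.012  # the "magic 12 milliseconds" cited above

def compositor(stop: threading.Event, stats: dict) -> None:
    # Real-time path (the R1 analogue): present a frame every budget,
    # no matter what the app layer is doing.
    deadline = time.monotonic()
    while not stop.is_set():
        stats["frames"] += 1  # stand-in for capture -> process -> display
        deadline += FRAME_BUDGET_S
        time.sleep(max(0.0, deadline - time.monotonic()))

def app_layer(stop: threading.Event) -> None:
    # Best-effort path (the M2/visionOS analogue): sometimes stalls.
    while not stop.is_set():
        time.sleep(random.uniform(0.0, 0.5))  # simulated hang; pass-through unaffected

if __name__ == "__main__":
    stop, stats = threading.Event(), {"frames": 0}
    threading.Thread(target=compositor, args=(stop, stats), daemon=True).start()
    threading.Thread(target=app_layer, args=(stop,), daemon=True).start()
    time.sleep(1.0)
    stop.set()
    print(f"frames presented in 1s: {stats['frames']}")  # ~83 at a 12 ms budget
```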
I’ll be honest: what this looked like to me was a divorced dad, alone at home with his Vision Pro, perhaps because his wife was irritated at the extent to which he got lost in his own virtual experience.
·stratechery.com·
Apple Vision
This time, it feels different
This time, it feels different
In the past several months, I have come across people who do programming, legal work, business, accountancy and finance, fashion design, architecture, graphic design, research, teaching, cooking, travel planning, event management etc., all of whom have started using the same tool, ChatGPT, to solve use cases specific to their domains and problems specific to their personal workflows. This is unlike everyone using the same messaging tool or the same document editor. This is one tool, a single class of technology (LLM), whose multi-dimensionality has achieved widespread adoption across demographics where people are discovering how to solve a multitude of problems with no technical training, in the one way that is most natural to humans—via language and conversations.
I cannot recall the last time a single tool gained such widespread acceptance so swiftly, for so many use cases, across entire demographics.
there is significant substance beneath the hype. And that is what is worrying; the prospect of us starting to depend indiscriminately on poorly understood black boxes, currently offered by megacorps, that actually work shockingly well.
If a single dumb, stochastic, probabilistic, hallucinating, snake oil LLM with a chat UI offered by one organisation can have such a viral, organic, and widespread adoption—where large disparate populations, people, corporations, and governments are integrating it into their daily lives for use cases that they are discovering themselves—imagine what better, faster, more “intelligent” systems to follow in the wake of what exists today would be capable of doing.
A policy for “AI anxiety”: We ended up codifying this into an actual AI policy to bring clarity to the organisation.[10] It states that no one at Zerodha will lose their job if a technology implementation (AI or non-AI) directly renders their existing responsibilities and tasks obsolete. The goal is to prevent unexpected rug-pulls from underneath the feet of humans. Instead, there will be efforts to create avenues and opportunities for people to upskill and switch between roles and responsibilities.
To those who believe that new jobs will emerge at meaningful rates to absorb the losses and shocks, what exactly are those new jobs? To those who think that governments will wave magic wands to regulate AI technologies, one just has to look at how well governments have managed to regulate, and how well humanity has managed to self-regulate, human-made climate change and planetary destruction. It is not then a stretch to think that the unraveling of our civilisation and its socio-politico-economic systems that are built on extracting, mass producing, and mass consuming garbage, might be exacerbated. Ted Chiang’s recent essay is a grim, but fascinating exploration of this. Speaking of grim, we can always count on us to ruin nice things! Along the lines of Murphy’s Law,[11] I present: Anything that can be ruined, will be ruined — Grumphy’s law
I asked GPT-4 to summarise this post and write five haikus on it. I have always operated a piece of software, but never asked it anything—that is, until now. Anyway, here is the fifth one.

Future’s tangled web,
Offloading choices to black boxes,
Humanity’s voice fades
·nadh.in·
This time, it feels different
Society's Technical Debt and Software's Gutenberg Moment
Society's Technical Debt and Software's Gutenberg Moment
Past innovations have made costly things cheap enough to proliferate widely across society. He suggests LLMs will make software development vastly more accessible and productive, alleviating the "technical debt" caused by underproduction of software over decades.
Software is misunderstood. It can feel like a discrete thing, something with which we interact. But, really, it is the intrusion into our world of something very alien. It is the strange interaction of electricity, semiconductors, and instructions, all of which somehow magically control objects that range from screens to robots to phones, to medical devices, laptops, and a bewildering multitude of other things. It is almost infinitely malleable, able to slide and twist and contort itself such that, in its pliability, it pries open doorways as yet unseen.
the clearing price for software production will change. But not just because it becomes cheaper to produce software. In the limit, we think about this moment as being analogous to how previous waves of technological change took the price of underlying technologies—from CPUs, to storage and bandwidth—to a reasonable approximation of zero, unleashing a flood of speciation and innovation. In software evolutionary terms, we just went from human cycle times to that of the drosophila: everything evolves and mutates faster.
A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment. It is an exaggeration, but only a modest one, to say that it is a kind of Gutenberg moment, one where previous barriers to creation—scholarly, creative, economic, etc—are going to fall away, as people are freed to do things only limited by their imagination, or, more practically, by the old costs of producing software.
We have almost certainly been producing far less software than we need. The size of this technical debt is not knowable, but it cannot be small, so subsequent growth may be geometric. This would mean that as the cost of software drops to an approximate zero, the creation of software predictably explodes in ways that have barely been previously imagined.
Entrepreneur and publisher Tim O’Reilly has a nice phrase that is applicable at this point. He argues investors and entrepreneurs should “create more value than you capture.” The technology industry started out that way, but in recent years it has too often gone for the quick win, usually by running gambits from the financial services playbook. We think that for the first time in decades, the technology industry could return to its roots, and, by unleashing a wave of software production, truly create more value than it captures.
Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.
technology has a habit of confounding economics. When it comes to technology, how do we know those supply and demand lines are right? The answer is that we don’t. And that’s where interesting things start happening. Sometimes, for example, an increased supply of something leads to more demand, shifting the curves around. This has happened many times in technology, as various core components of technology tumbled down curves of decreasing cost for increasing power (or storage, or bandwidth, etc.).
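A toy constant-elasticity demand curve makes the point concrete (the curve and all numbers are invented for illustration, not from the essay): when demand is elastic enough, cutting the price raises total spending rather than shrinking it.

```python
# Toy constant-elasticity demand curve: Q = Q_ref * (P / P_ref) ** elasticity.
# Invented numbers; with elasticity below -1, cutting the price 10x grows the
# quantity demanded more than 10x, so total spending rises as price falls.
def quantity_demanded(price: float, elasticity: float = -1.5,
                      p_ref: float = 100.0, q_ref: float = 1_000.0) -> float:
    return q_ref * (price / p_ref) ** elasticity

for price in (100.0, 10.0, 1.0):
    q = quantity_demanded(price)
    print(f"price {price:>6.2f}  quantity {q:>12,.0f}  total spend {price * q:>12,.0f}")
# price 100.00  quantity        1,000  total spend      100,000
# price  10.00  quantity       31,623  total spend      316,228
# price   1.00  quantity    1,000,000  total spend    1,000,000
```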
Suddenly AI has become cheap, to the point where people are “wasting” it via “do my essay” prompts to chatbots, getting help with microservice code, and so on. You could argue that the price/performance of intelligence itself is now tumbling down a curve, much like as has happened with prior generations of technology.
it’s worth reminding oneself that waves of AI enthusiasm have hit the beach of awareness once every decade or two, only to recede again as the hyperbole outpaces what can actually be done.
·skventures.substack.com·
Society's Technical Debt and Software's Gutenberg Moment
A Student's Guide to Startups
A Student's Guide to Startups
Most startups end up doing something different than they planned. The way the successful ones find something that works is by trying things that don't. So the worst thing you can do in a startup is to have a rigid, pre-ordained plan and then start spending a lot of money to implement it. Better to operate cheaply and give your ideas time to evolve.
Successful startups are almost never started by one person. Usually they begin with a conversation in which someone mentions that something would be a good idea for a company, and his friend says, "Yeah, that is a good idea, let's try it." If you're missing that second person who says "let's try it," the startup never happens. And that is another area where undergrads have an edge. They're surrounded by people willing to say that.
Look for the people who keep starting projects, and finish at least some of them. That's what we look for. Above all else, above academic credentials and even the idea you apply with, we look for people who build things.
You need a certain activation energy to start a startup. So an employer who's fairly pleasant to work for can lull you into staying indefinitely, even if it would be a net win for you to leave.
Most people look at a company like Apple and think, how could I ever make such a thing? Apple is an institution, and I'm just a person. But every institution was at one point just a handful of people in a room deciding to start something. Institutions are made up, and made up by people no different from you.
What goes wrong with young founders is that they build stuff that looks like class projects. It was only recently that we figured this out ourselves. We noticed a lot of similarities between the startups that seemed to be falling behind, but we couldn't figure out how to put it into words. Then finally we realized what it was: they were building class projects.
Class projects will inevitably solve fake problems. For one thing, real problems are rare and valuable. If a professor wanted to have students solve real problems, he'd face the same paradox as someone trying to give an example of whatever "paradigm" might succeed the Standard Model of physics. There may well be something that does, but if you could think of an example you'd be entitled to the Nobel Prize. Similarly, good new problems are not to be had for the asking.
real startups tend to discover the problem they're solving by a process of evolution. Someone has an idea for something; they build it; and in doing so (and probably only by doing so) they realize the problem they should be solving is another one.
Professors will tend to judge you by the distance between the starting point and where you are now. If someone has achieved a lot, they should get a good grade. But customers will judge you from the other direction: the distance remaining between where you are now and the features they need. The market doesn't give a shit how hard you worked. Users just want your software to do what they need, and you get a zero otherwise. That is one of the most distinctive differences between school and the real world: there is no reward for putting in a good effort. In fact, the whole concept of a "good effort" is a fake idea adults invented to encourage kids. It is not found in nature.
unfortunately when you graduate they don't give you a list of all the lies they told you during your education. You have to get them beaten out of you by contact with the real world.
really what work experience refers to is not some specific expertise, but the elimination of certain habits left over from childhood.
One of the defining qualities of kids is that they flake. When you're a kid and you face some hard test, you can cry and say "I can't" and they won't make you do it. Of course, no one can make you do anything in the grownup world either. What they do instead is fire you. And when motivated by that you find you can do a lot more than you realized. So one of the things employers expect from someone with "work experience" is the elimination of the flake reflex—the ability to get things done, with no excuses.
Fundamentally the equation is a brutal one: you have to spend most of your waking hours doing stuff someone else wants, or starve. There are a few places where the work is so interesting that this is concealed, because what other people want done happens to coincide with what you want to work on.
So the most important advantage 24 year old founders have over 20 year old founders is that they know what they're trying to avoid. To the average undergrad the idea of getting rich translates into buying Ferraris, or being admired. To someone who has learned from experience about the relationship between money and work, it translates to something way more important: it means you get to opt out of the brutal equation that governs the lives of 99.9% of people. Getting rich means you can stop treading water.
You don't get money just for working, but for doing things other people want. Someone who's figured that out will automatically focus more on the user. And that cures the other half of the class-project syndrome. After you've been working for a while, you yourself tend to measure what you've done the same way the market does.
the most important skill for a startup founder isn't a programming technique. It's a knack for understanding users and figuring out how to give them what they want. I know I repeat this, but that's because it's so important. And it's a skill you can learn, though perhaps habit might be a better word. Get into the habit of thinking of software as having users. What do those users want? What would make them say wow?
·paulgraham.com·
A Student's Guide to Startups
Interview with Kevin Kelly, editor, author, and futurist
Interview with Kevin Kelly, editor, author, and futurist
To write about something hard to explain, write a detailed letter to a friend about why it is so hard to explain, and then remove the initial “Dear Friend” part and you’ll have a great first draft.
To be interesting just tell your story with uncommon honesty.
Most articles and stories are improved significantly if you delete the first page of the manuscript draft. Immediately start with the action.
No technology can stand alone. It takes a saw to make a hammer and it takes a hammer to make a saw. And it takes both tools to make a computer, and in today’s factory it takes a computer to make saws and hammers. This co-dependency creates an ecosystem of highly interdependent technologies that support each other.
On the other hand, I see this technium as an extension of the same self-organizing system responsible for the evolution of life on this planet. The technium is evolution accelerated. A lot of the same dynamics that propel evolution are also at work in the technium
Our technologies are ultimately not contrary to life, but are in fact an extension of life, enabling it to develop yet more options and possibilities at a faster rate. Increasing options and possibilities is also known as progress, so in the end, what the technium brings us humans is progress.
Libraries, journals, communication networks, and the accumulation of other technologies help create the next idea, beyond the efforts of a single individual
We also see near-identical parallel inventions of tricky contraptions like slingshots and blowguns. However, because it was so ancient, we don’t have a lot of data for this behavior. What we would really like is to have an N=100 study of hundreds of other technological civilizations in our galaxy. From that analysis we’d be able to measure, outline, and predict the development of technologies. That is a key reason to seek extraterrestrial life.
When information is processed in a computer, it is being ceaselessly replicated and re-copied while it computes. Information wants to be copied. Therefore, when certain people get upset about the ubiquitous copying happening in the technium, their misguided impulse is to stop the copies. They want to stamp out rampant copying in the name of "copy protection,” whether it be music, science journals, or art for AI training. But the emergent behavior of the technium is to copy promiscuously. To ban, outlaw, or impede the superconductivity of copies is to work against the grain of the system.
the worry of some environmentalists is that technology can only contribute more to the problem and none to the solution. They believe that tech is incapable of being green because it is the source of relentless consumerism at the expense of diminishing nature, and that our technological civilization requires endless growth to keep the system going. I disagree.
Over time evolution arranges the same number of atoms in more complex patterns to yield more complex organisms, for instance producing an agile lemur the same size and weight as a jellyfish. We seek the same shift in the technium. Standard economic growth aims to get consumers to drink more wine. Type 2 growth aims to get them to not drink more wine, but better wine.
[[An optimistic view of capitalism]]
to measure (and thus increase) productivity we count up the number of refrigerators manufactured and sold each year. More is generally better. But this counting tends to overlook the fact that refrigerators have gotten better over time. In addition to making cold, they now dispense ice cubes, or self-defrost, and use less energy. And they may cost less in real dollars. This betterment is truly real value, but is not accounted for in the “more” column
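A minimal sketch of that measurement gap, with invented numbers: the "more" column (units sold) is flat decade over decade, yet a quality-adjusted index still shows real growth.

```python
# Invented numbers: unit counts are flat, but a quality-adjusted index
# captures the betterment the excerpt describes.
sales = [(2000, 1_000_000, 1.00), (2010, 1_000_000, 1.35), (2020, 1_000_000, 1.80)]
for year, units, quality_index in sales:
    print(year, "units:", units, "quality-adjusted output:", int(units * quality_index))
```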
it is imperative that we figure out how to shift more of our type 1 growth to type 2 growth, because we won’t be able to keep expanding the usual “more.”  We will have to perfect a system that can keep improving and getting better with fewer customers each year, smaller markets and audiences, and fewer workers. That is a huge shift from the past few centuries where every year there has been more of everything.
“degrowthers” are correct in that there are limits to bulk growth — and running out of humans may be one of them. But they don’t seem to understand that evolutionary growth, which includes the expansion of intangibles such as freedom, wisdom, and complexity, doesn’t have similar limits. We can always figure out a way to improve things, even without using more stuff — especially without using more stuff!
the technium is not inherently contrary to nature; it is inherently derived from evolution and thus inherently capable of being compatible with nature. We can choose to create versions of the technium that are aligned with the natural world.
Social media can transmit false information at great range at great speed. But compared to what? Social media's influence on elections from transmitting false information was far less than the influence of the existing media of cable news and talk radio, where false information was rampant. Did anyone seriously suggest we should regulate what cable news hosts or call-in radio listeners could say? Bullying middle schoolers on social media? Compared to what? Does it even register when compared to the bullying done in school hallways? Radicalization on YouTube? Compared to talk radio? To googling?
Kids are inherently obsessive about new things, and can become deeply infatuated with stuff that they outgrow and abandon a few years later. So the fact they may be infatuated with social media right now should not in itself be alarming. Yes, we should indeed understand how it affects children and how to enhance its benefits, but it is dangerous to construct national policies for a technology based on the behavior of children using it.
Since it is the same technology, inspecting how it is used in other parts of the world would help us isolate what is being caused by the technology and what is being caused by the peculiar culture of the US.
You don’t notice what difference you make because of the platform's humongous billions-scale. In aggregate your choices make a difference in which direction it — or any technology — goes. People prefer to watch things on demand, so little by little, we have steered the technology to let us binge watch. Streaming happened without much regulation or even enthusiasm from the media companies. Street usage is the fastest and most direct way to steer tech.
Vibrators instead of the cacophony of ringing bells on cell phones is one example of a marketplace technological solution
The long-term effects of AI will affect our society to a greater degree than electricity and fire, but its full effects will take centuries to play out. That means that we’ll be arguing, discussing, and wrangling with the changes brought about by AI for the next 10 decades. Because AI operates so close to our own inner self and identity, we are headed into a century-long identity crisis.
What we tend to call AI will not be considered AI years from now
What we are discovering is that many of the cognitive tasks we have been doing as humans are dumber than they seem. Playing chess was more mechanical than we thought. Playing the game Go is more mechanical than we thought. Painting a picture and being creative was more mechanical than we thought. And even writing a paragraph with words turns out to be more mechanical than we thought
out of the perhaps dozen cognitive modes operating in our minds, we have managed to synthesize two of them: perception and pattern matching. Everything we’ve seen so far in AI is because we can produce those two modes. We have not made any real progress in synthesizing symbolic logic and deductive reasoning and other modes of thinking
we are slowly realizing we still have NO IDEA how our own intelligences really work, or even what intelligence is. A major byproduct of AI is that it will tell us more about our minds than centuries of psychology and neuroscience have
There is no monolithic AI. Instead there will be thousands of species of AIs, each engineered to optimize different ways of thinking, doing different jobs
Now from the get-go we assume there will be significant costs and harms of anything new, which was not the norm in my parents' generation
The astronomical volume of money and greed flowing through this frontier overwhelmed and disguised whatever value it may have had
The sweet elegance of blockchain enables decentralization, which is a perpetually powerful force. This tech just has to be matched up to the tasks — currently not visible — where it is worth paying the huge cost that decentralization entails. That is a big ask, but taking the long-view, this moment may not be a failure
My generic career advice for young people is that if at all possible, you should aim to work on something that no one has a word for. Spend your energies where we don’t have a name for what you are doing, where it takes a while to explain to your mother what it is you do. When you are ahead of language, that means you are in a spot where it is more likely you are working on things that only you can do. It also means you won’t have much competition.
Your 20s are the perfect time to do a few things that are unusual, weird, bold, risky, unexplainable, crazy, unprofitable, and look nothing like “success.” The less this time looks like success, the better it will be as a foundation
·noahpinion.substack.com·
Interview with Kevin Kelly, editor, author, and futurist
Thoughts on the software industry - linus.coffee
Thoughts on the software industry - linus.coffee
software gives you its own set of abstractions and basic vocabulary with which to understand every experience. It sort of smells like mathematics in some ways. But software’s way of looking at the world is more about abstractions modeling underlying complexities in systems; signal vs. noise; scale and orders of magnitude; and information — how much there is, what we can do with it, how we can learn from it and model it. Software’s interpretation of reality is particularly important because software drives the world now, and the people who write the software that runs it see the world through this kind of “software’s worldview” — scaling laws, information theory, abstractions and complexity. I think over time I’ve come to believe that understanding this worldview is more interesting than learning to wield programming tools.
·linus.coffee·
Thoughts on the software industry - linus.coffee
The Limits of Computational Photography
The Limits of Computational Photography
How much of that is the actual photo and how much you might consider to be synthesized is a line I think each person draws for themselves. I think it depends on the context; Moon photography makes for a neat demo but it is rarely relevant. A better question is whether these kinds of software enhancements hallucinate errors along the same lines as what happened in Xerox copiers for years.
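For context, the Xerox bug alluded to came from lossy compression that reused one stored glyph for every "similar enough" patch. A toy sketch of that mechanism (not the actual JBIG2 algorithm) shows how a symbol can be swapped silently:

```python
# Toy patch-substitution compressor: reuse a stored patch for any patch
# within `threshold` differing pixels. This is the failure mode in which
# a scanned 6 can silently become an 8.
GLYPH_6 = ["0110", "1000", "1110", "1001", "0110"]
GLYPH_8 = ["0110", "1001", "0110", "1001", "0110"]

def distance(a: list[str], b: list[str]) -> int:
    # Number of differing pixels between two same-sized bitmaps.
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def compress(page: list[list[str]], dictionary: list[list[str]], threshold: int):
    # Substitute each patch with the nearest dictionary patch when "close enough".
    out = []
    for patch in page:
        nearest = min(dictionary, key=lambda d: distance(patch, d))
        out.append(nearest if distance(patch, nearest) <= threshold else patch)
    return out

# The 6 differs from the stored 8 by only 2 pixels, so it is "compressed" into an 8:
print(compress([GLYPH_6], [GLYPH_8], threshold=3) == [GLYPH_8])  # True
```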
·pxlnv.com·
The Limits of Computational Photography
Privacy Fundamentalism
Privacy Fundamentalism
my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other. This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.
I believe the privacy debate needs to be reset around these three assumptions:
- Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.
- Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet, and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.
- Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.
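The "default state" claim is easy to make concrete: even a bare-bones web server is handed each client's address, path, and headers on every request, and it takes deliberate work to not keep that trail. A minimal sketch using Python's stdlib:

```python
# Python's stdlib HTTP server receives and, by default, logs a data trail
# (client address, request line) for every visit; *not* collecting is the
# part that takes explicit work.
from http.server import BaseHTTPRequestHandler, HTTPServer

class QuietHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Handed to us whether we asked for it or not:
        # self.client_address, self.path, self.headers["User-Agent"], ...
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, format, *args):
        pass  # the deliberate work of NOT keeping the default access log

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), QuietHandler).serve_forever()
```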
·stratechery.com·
Privacy Fundamentalism
Making Our Hearts Sing
Making Our Hearts Sing
One thing I learned long ago is that people who prioritize design, UI, and UX in the software they prefer can empathize with and understand the choices made by people who prioritize other factors (e.g. raw feature count, or the ability to tinker with their software at the system level, or software being free-of-charge). But it doesn’t work the other way: most people who prioritize other things can’t fathom why anyone cares deeply about design/UI/UX because they don’t perceive it. Thus they chalk up iOS and native Mac-app enthusiasm to being hypnotized by marketing, Pied Piper style.
Those who see and value the artistic value in software and interface design have overwhelmingly wound up on iOS; those who don’t have wound up on Android. Of course there are exceptions. Of course there are iOS users and developers who are envious of Android’s more open nature. Of course there are Android users and developers who do see how crude the UIs are for that platform’s best-of-breed apps. But we’re left with two entirely different ecosystems with entirely different cultural values — nothing like (to re-use my example from yesterday) the Coke-vs.-Pepsi state of affairs in console gaming platforms.
·daringfireball.net·
Making Our Hearts Sing
How Panic got into video games with Campo Santo
How Panic got into video games with Campo Santo
So when ex-Telltale Games designer and writer Sean Vanaman announced last month that the first game from Campo Santo, his new video game development studio, was "being both backed by and made in collaboration with the stupendous, stupidly-successful Mac utility software-cum-design studio slash app/t-shirt/engineering company Panic Inc. from Portland, Oregon," it wasn't expected, but it wasn't exactly surprising, either. It was, instead, the logical conclusion of years-long friendships and suddenly aligning desires.
"There's a weird confluence of things that have crisscrossed," he said. "One is that we're lucky in that Panic is the kind of company that has never been defined by a limited mission statement, or 'We're the network tool guys' or anything like that. I mean, we made a really popular mp3 player. Then we kind of fell into network tools and utilities, but we've always done goofy stuff like our icon changer and these shirts and all that other stuff. "I kind of love that we can build stuff, and the best reaction that we can get when we do a curveball like this is, 'That's totally weird, but also that totally makes sense for Panic.'"
"To me," Sasser said, "when you have actually good people who are more interested in making awesome things than obsessing over the business side of things or trying to squeeze every ounce of everything from everybody, then that stuff just goes easy. It's just fun. The feeling that you're left with is just excitement.
·polygon.com·
How Panic got into video games with Campo Santo
AI-generated code helps me learn and makes experimenting faster
AI-generated code helps me learn and makes experimenting faster
here are five large language model applications that I find intriguing:
- Intelligent automation, starting with browsers, but this feels like a step towards phenotropics
- Text generation, when this unlocks new UIs, like Word turning into Photoshop or something
- Human-machine interfaces, because you can parse intent instead of nouns
- When meaning can be interfaced with programmatically and at ludicrous scale
- Anything that exploits the inhuman breadth of knowledge embedded in the model, because new knowledge is often the collision of previously separated old knowledge, and this has not been possible before
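The "parse intent instead of nouns" item lends itself to a sketch: free-form language in, a structured, programmable command out. This assumes the openai Python client (v1+) and an API key in the environment; the JSON schema, prompt, and model name are my own illustration, not from the post.

```python
# Sketch of intent parsing with an LLM. Schema, prompt, and model name are
# illustrative; assumes openai>=1.0 and OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

def parse_intent(utterance: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                'Reply with JSON only, shaped as '
                '{"action": string, "target": string, "when": string|null}.')},
            {"role": "user", "content": utterance},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# parse_intent("remind me to water the plants tomorrow morning") might return:
# {"action": "create_reminder", "target": "water the plants", "when": "tomorrow morning"}
```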
·interconnected.org·
AI-generated code helps me learn and makes experimenting faster
Why Google Missed ChatGPT
Why Google Missed ChatGPT
Even if chatbots were to fix their accuracy issues, Google would still have a business model problem to contend with. The company makes money when people click ads next to search results, and it’s awkward to fit ads into conversational replies. Imagine receiving a response and then immediately getting pitched to go somewhere else — it feels slimy, and unhelpful. Google thus has little incentive to move us beyond traditional search, at least not in a paradigm-shifting way, until it figures out how to make the money aspect work. In the meantime, it’ll stick with the less impressive Google Assistant.
“Google doesn’t inherently want you, at an inherent level, to just get the answer to every problem. Because that might reduce the need to go click around the web, which would then reduce the need for us to go to Google.”
·bigtechnology.com·
Why Google Missed ChatGPT
G3nerative
G3nerative
Web3 has largely been technology looking for problems to solve, while generative AI has been about almost too many solutions created by technology that is evolving on a seemingly daily basis. As a result, web3 has thus far been about evangelists trying to convince us to re-solve old problems with their new technology
·500ish.com·
G3nerative
Creativity As an App | Andreessen Horowitz
Creativity As an App | Andreessen Horowitz
We fully acknowledge that it’s hard to be confident in any predictions at the pace the field is moving. Right now, though, it seems we’re much more likely to see applications full of creative images created strictly by programmers than applications with human-designed art built strictly by creators.
·a16z.com·
Creativity As an App | Andreessen Horowitz