Found 10 bookmarks
Netflix's head of design on the future of Netflix - Fast Company
At Netflix, we have such a diverse population of shows in 183 countries around the world. We’re really trying to serve up lots of stories people haven’t heard before. When you go into our environment, you’re like, “Ooh, what is that?” You’re almost kind of afraid to touch it, because you’re like, “Well, I don’t want to waste my time.” That level of discovery is literally, I’m not bullshitting you, man, that’s the thing that keeps me up at night. How do I help figure out how to help people discover things, with enough evidence that they trust it? And when they click on it, they love it, and then they immediately ping their best friend, “Have you seen this documentary? It’s amazing.” And she tells her friends, and then that entire viral loop starts.
The discovery engine is very temporal. Member number 237308 could have been into [reality TV] because she or he just had a breakup. Now they just met somebody, so all of a sudden it shifts to rom-coms. Now that person that they met loves to travel. So [they might get into] travel documentaries. And now that person that they’re with, they may have a kid, so they might want more kids’ shows. So, it’s very dangerous for us to ever kind of say, “This is what you like. You have a cat. You must like cat documentaries.”
We don’t see each other, obviously, and I don’t want to social network on Netflix. But knowing other humans exist there is part of it. You answered the question absolutely perfectly. Not only because it’s your truth, but that’s what everyone says! That connection part. So another thing that goes back to your previous question, when you’re asking me what’s on my mind? It’s that. How do I help make sure that when you’re in that discovery loop, you still feel that you’re connected to others. I’m not trying to be the Goth kids on campus who are like, “I don’t care about what’s popular.” But I’m also not trying to be the super poppy kids who are always chasing trends. There’s something in between which is, “Oh, hey, I haven’t heard about that, and I kind of want to be up on it.”
I am looking forward to seeing what Apple does with this and then figuring out more, how are people going to use it? Then I think that we should have a real discussion about how Netflix does it. But to just port Netflix over? No. It’s got to make sure that it’s using the power of the system as much as humanly possible so that it’s really making that an immersive experience. I don’t want to put resources toward that right now.
On porting Netflix to Apple Vision Pro
The design team here at Netflix, we played a really big hand in how that worked because we had to design the back-end tool. What people don’t know about our team is that 30% of our organization is actually designing and developing the software tools that we use to make the movies. We had to design a tool that allowed the teams to understand both what extra footage to shoot and how that might branch. When the Black Mirror team was trying to figure out how to make this narrative work, the software we provided really made that easier.
·fastcompany.com·
One weird trick for fixing Hollywood
A view of the challenges facing Hollywood, acknowledging the profound shifts in consumer behavior and media consumption driven by new technologies. The rise of smartphones and mobile entertainment apps has disrupted the traditional movie-going habits of the public, with people now less inclined to see films in theaters simply because they are playing. Free or low-paid labor on social media platforms like YouTube and TikTok is effectively competing with and undercutting the unionized Hollywood workforce.
the smartphone, and a host of software technologies built on it, have birthed what is essentially a parallel, non-union, motion-picture industry consisting of YouTube, TikTok, Instagram, Twitch, Twitter, and their many other social-video rivals, all of which rely on the free or barely compensated labor product of people acting as de facto writers, directors, producers, actors, and crew. Even if they’d never see it this way, YouTubers and TikTokers are effectively competing with Hollywood over the idle hours of consumers everywhere; more to the point, they’re doing what any non-union workforce does in an insufficiently organized industry: driving down labor compensation.
Almost no one I know has work; most people’s agents and managers have more or less told them there won’t be jobs until 2025. An executive recently told a friend that the only things getting made this year are “ultra premium limiteds,” which sounds like a kind of tampon but actually just means “six-episode miniseries that an A-List star wants to do.”
YouTubers’ lack of collective bargaining power isn’t just bad for me and other guild members; it’s bad for the YouTubers themselves. Ask any professional or semi-professional streamer what they think of the platform and you’ll hear a litany of complaints about its opacity and inconsistency.
·maxread.substack.com·
Vision Pro is an over-engineered “devkit” // Hardware bleeds genius & audacity but software story is disheartening // What we got wrong at Oculus that Apple got right // Why Meta could finally have its Android moment
Some of the topics I touch on:
Why I believe Vision Pro may be an over-engineered “devkit”
The genius & audacity behind some of Apple’s hardware decisions
Gaze & pinch is an incredible UI superpower and major industry ah-ha moment
Why the Vision Pro software/content story is so dull and unimaginative
Why most people won’t use Vision Pro for watching TV/movies
Apple’s bet in immersive video is a total game-changer for live sports
Why I returned my Vision Pro… and my Top 10 wishlist to reconsider
Apple’s VR debut is the best thing that ever happened to Oculus/Meta
My unsolicited product advice to Meta for Quest Pro 2 and beyond
Apple really played it safe in the design of this first VR product by over-engineering it. For starters, Vision Pro ships with more sensors than what’s likely necessary to deliver Apple’s intended experience. This is typical in a first-generation product that’s been under development for so many years. It makes Vision Pro start to feel like a devkit.
A sensor party: 6 tracking cameras, 2 passthrough cameras, 2 depth sensors (plus 4 eye-tracking cameras not shown)
it’s easy to understand two particularly important decisions Apple made for the Vision Pro launch:
Designing an incredible in-store Vision Pro demo experience, with the primary goal of getting as many people as possible to experience the magic of VR through Apple’s lenses — most of whom have no intention to even consider a $4,000 purchase. The demo is only secondarily focused on actually selling Vision Pro headsets.
Launching an iconic woven strap that photographs beautifully even though this strap simply isn’t comfortable enough for the vast majority of head shapes. It’s easy to conclude that this decision paid off because nearly every bit of media coverage (including and especially third-party reviews on YouTube) uses the woven strap despite the fact that it’s less comfortable than the dual loop strap that’s “hidden in the box”.
Apple’s relentless and uncompromising hardware insanity is largely what made it possible for such a high-res display to exist in a VR headset, and it’s clear that this product couldn’t possibly have launched much sooner than 2024 for one simple limiting factor — the maturity of micro-OLED displays plus the existence of power-efficient chipsets that can deliver the heavy compute required to drive this kind of display (i.e. the M2).
·hugo.blog·
Strong and weak technologies - cdixon
Strong technologies capture the imaginations of technology enthusiasts. That is why many important technologies start out as weekend hobbies. Enthusiasts vote with their time, and, unlike most of the business world, have long-term horizons. They build from first principles, making full use of the available resources to design technologies as they ought to exist.
·cdixon.org·
Computers Are Magical; Computers Are Awful
The author catalogs the many small issues and frustrations they experienced with their various computers and apps over the course of a single day, such as lag, bugs, and things not working as expected. While none of the individual problems were major, the accumulation of constant small issues robbed them of confidence in the technology. The author acknowledges the hard work of developers but feels users deserve more reliable and predictable experiences given how much we rely on computers daily. They hope companies will focus more on fixing bugs rather than just shipping new features.
·pxlnv.com·
The OpenAI Keynote
what I cheered as an analyst was Altman’s clear articulation of the company’s priorities: lower price first, speed later. You can certainly debate whether that is the right set of priorities (I think it is, because the biggest need now is for increased experimentation, not optimization), but what I appreciated was the clarity.
The fact that Microsoft is benefiting from OpenAI is obvious; what this makes clear is that OpenAI uniquely benefits from Microsoft as well, in a way they would not from another cloud provider: because Microsoft is also a product company investing in the infrastructure to run OpenAI’s models for said products, it can afford to optimize and invest ahead of usage in a way that OpenAI alone, even with the support of another cloud provider, could not. In this case that is paying off in developers needing to pay less, or, ideally, have more latitude to discover use cases that result in them paying far more because usage is exploding.
You can, in effect, program a GPT, with language, just by talking to it. It’s easy to customize the behavior so that it fits what you want. This makes building them very accessible, and it gives agency to everyone.
Stephen Wolfram explained: For decades there’s been a dichotomy in thinking about AI between “statistical approaches” of the kind ChatGPT uses, and “symbolic approaches” that are in effect the starting point for Wolfram|Alpha. But now—thanks to the success of ChatGPT—as well as all the work we’ve done in making Wolfram|Alpha understand natural language—there’s finally the opportunity to combine these to make something much stronger than either could ever achieve on their own.
This new model somewhat alleviates the problem: now, instead of having to select the correct plug-in (and thus restart your chat), you simply go directly to the GPT in question. In other words, if I want to create a poster, I don’t enable the Canva plugin in ChatGPT, I go to Canva GPT in the sidebar. Notice that this doesn’t actually solve the problem of needing to have selected the right tool; what it does do is make the choice more apparent to the user at a more appropriate stage in the process, and that’s no small thing.
ChatGPT will seamlessly switch between text generation, image generation, and web browsing, without the user needing to change context. What is necessary for the plug-in/GPT idea to ultimately take root is for the same capabilities to be extended broadly: if my conversation involved math, ChatGPT should know to use Wolfram|Alpha on its own, without me adding the plug-in or going to a specialized GPT.
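As a concrete illustration of that last point, here is a minimal sketch, assuming the OpenAI chat completions API with tool calling, of what it looks like when the model rather than the user decides whether a capability gets invoked; the query_wolfram_alpha tool, its schema, and the model name are illustrative assumptions, not anything from the article.

```python
# Minimal sketch (see assumptions above): the model decides on its own
# whether to call the math tool, instead of the user picking a plug-in.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "query_wolfram_alpha",  # hypothetical tool name
        "description": "Evaluate a mathematical query with a symbolic engine.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "What is the derivative of x^3 * sin(x)?"}],
    tools=tools,
    tool_choice="auto",  # "auto" leaves the use-the-tool-or-not decision to the model
)

# If the model elected to call the tool, a structured tool call appears here
# instead of a plain text answer.
print(response.choices[0].message.tool_calls)
```

The point of the excerpt is that this routing decision should live inside the model, not in a plug-in picker the user has to operate.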
the obvious technical challenges of properly exposing capabilities and training the model to know when to invoke those capabilities are a textbook example of Professor Clayton Christensen’s theory of integration and modularity, wherein integration works better when a product isn’t good enough; it is only when a product exceeds expectation that there is room for standardization and modularity.
To summarize the argument, consumers care about things in ways that are inconsistent with whatever price you might attach to their utility, they prioritize ease-of-use, and they care about the quality of the user experience and are thus especially bothered by the seams inherent in a modular solution. This means that integrated solutions win because nothing is ever “good enough”.
the fact of the matter is that a lot of people use ChatGPT for information despite the fact it has a well-documented flaw when it comes to the truth; that flaw is acceptable, because to the customer ease-of-use is worth the loss of accuracy. Or look at plug-ins: the concept as originally implemented has already been abandoned, because the complexity in the user interface was more detrimental than whatever utility might have been possible. It seems likely this pattern will continue: of course customers will say that they want accuracy and 3rd-party tools; their actions will continue to demonstrate that convenience and ease-of-use matter most.
·stratechery.com·
Generative AI’s Act Two
This page also has many infographics providing an overview of different aspects of the AI industry at the time of writing.
We still believe that there will be a separation between the “application layer” companies and foundation model providers, with model companies specializing in scale and research and application layer companies specializing in product and UI. In reality, that separation hasn’t cleanly happened yet. In fact, the most successful user-facing applications out of the gate have been vertically integrated.
We predicted that the best generative AI companies could generate a sustainable competitive advantage through a data flywheel: more usage → more data → better model → more usage. While this is still somewhat true, especially in domains with very specialized and hard-to-get data, the “data moats” are on shaky ground: the data that application companies generate does not create an insurmountable moat, and the next generations of foundation models may very well obliterate any data moats that startups generate. Rather, workflows and user networks seem to be creating more durable sources of competitive advantage.
Some of the best consumer companies have 60-65% DAU/MAU; WhatsApp’s is 85%. By contrast, generative AI apps have a median of 14% (with the notable exception of Character and the “AI companionship” category). This means that users are not finding enough value in Generative AI products to use them every day yet.
generative AI’s biggest problem is not finding use cases or demand or distribution, it is proving value. As our colleague David Cahn writes, “the $200B question is: What are you going to use all this infrastructure to do? How is it going to change people’s lives?”
·sequoiacap.com·
Who needs film critics when studios can be sure influencers will praise their films?
Critiques the current state of film criticism, arguing that studios manipulate the narrative by using influencers and free tickets to control reviews, devaluing the role of knowledgeable critics. The article suggests that audiences still crave thoughtful films and good criticism, and that both Barbie and Oppenheimer are examples of films that have inspired good writing.
·theguardian.com·
Opinion | You Want an Electric Car With a 300-Mile Range? When Was the Last Time You Drove 300 Miles?
By improving home charging for urban apartment dwellers and prioritizing vehicles with smaller batteries, rather than road-trip-enabling charging stations and big batteries, we could maximize the miles we can affordably electrify. In an era of battery scarcity, we could have two 150-mile E.V.s for the battery capacity in every 300-mile E.V. Or, using the same 300-mile E.V. battery, you could have six plug-in hybrids with 50 miles of electric range for daily driving and a gasoline engine for those rarer road trips or many, many more e-bikes.
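A quick back-of-the-envelope sketch of the arithmetic in that excerpt, under my own simplifying assumption that pack capacity scales roughly linearly with electric range for comparable vehicles:

```python
# Rough check of the trade-off described above; the linear capacity-to-range
# scaling is an assumption for illustration, not a claim from the op-ed.
LONG_RANGE_EV_MILES = 300   # range of one big-battery EV
SHORT_RANGE_EV_MILES = 150  # range of a smaller-battery EV
PHEV_ELECTRIC_MILES = 50    # electric-only range of a plug-in hybrid

evs_per_pack = LONG_RANGE_EV_MILES / SHORT_RANGE_EV_MILES   # -> 2.0
phevs_per_pack = LONG_RANGE_EV_MILES / PHEV_ELECTRIC_MILES  # -> 6.0

print(f"One 300-mile pack covers {evs_per_pack:.0f} x 150-mile EVs "
      f"or {phevs_per_pack:.0f} x 50-mile plug-in hybrids")
```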
Rather than holding E.V. adoption hostage to our ability to make batteries match internal combustion in every way, government policy should focus on the cases where E.V.s have advantages that internal combustion will never match: waking up every morning with a full “tank” sufficient for daily commuting and errands.
·nytimes.com·