High-tech pastoral as the new aesthetic
Unfortunately, all definitions are tautologies—look up any word in the dictionary, read its definition, and then look up the words within that definition in the very same dictionary. Keep repeating this until you arrive back where you started.

So we must give examples, and allow the brain to work the magic it does when it goes beyond words and sniffs out vibes.

Simple cases: taking a Zoom call in your home office. Pajamas underneath a suit jacket, just out of frame. There are trees out the window. Another example: sitting on a porch, laptop in lap, a drink within reach.
Within capitalism it is always the secret dream of the bourgeois to transform, swan-like, into aristocrats. And if we are to judge by the housing market, something like high-tech pastoral is now the new bourgeois life target. For millennials, it is the out people are taking.
·theintrinsicperspective.com·
Magic Mushrooms. LSD. Ketamine. The Drugs That Power Silicon Valley.
Users rely on drug dealers for ecstasy and most other psychedelics, or in elite cases, they employ chemists. One prolific drug dealer in San Francisco who serves a slice of the tech world is known as “Costco” because users can buy bulk at a discount, according to people familiar with the business. “Cuddle puddles,” which feature groups of people embracing and showing platonic affection, have become standard fare.
·wsj.com·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence into the creation and post production of images, it seems questionable if the resulting visualisations can still be considered ‘photographs’ in a classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even the amateurs with only basic knowledge of the craft could understand themselves as author of their images. We state a legitimation crisis for the current usage of the term. This paper is an invitation to consider Synthography as a term for a new genre for image production based on AI, observing the current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·
What Is AI Doing To Art? | NOEMA
The proliferation of AI-generated images in online environments won’t eradicate human art wholesale, but it does represent a reshuffling of the market incentives that help creative economies flourish. Like the college essay, another genre of human creativity threatened by AI usurpation, creative “products” might become more about process than about art as a commodity.
Are artists using computer software on iPads to make seemingly hand-painted images engaged in a less creative process than those who produce the image by hand? We can certainly judge one as more meritorious than the other but claiming that one is more original is harder to defend.
An understanding of the technology as one that separates human from machine into distinct categories leaves little room for the messier ways we often fit together with our tools. AI-generated images will have a big impact on copyright law, but the cultural backlash against the “computers making art” overlooks the ways computation has already been incorporated into the arts.
The problem with debates around AI-generated images that demonize the tool is that the displacement of human-made art doesn’t have to be an inevitability. Markets can be adjusted to mitigate unemployment in changing economic landscapes. As legal scholar Ewan McGaughey points out, 42% of English workers were redundant after WWII — and yet the U.K. managed to maintain full employment.
Contemporary critics claim that prompt engineering and synthography aren’t emergent professions but euphemisms necessary to equate AI-generated artwork with the work of human artists. As with the development of photography as a medium, today’s debates about AI often overlook how conceptions of human creativity are themselves shaped by commercialization and labor.
Others looking to elevate AI art’s status alongside other forms of digital art are opting for an even loftier rebrand: “synthography.” This categorization suggests a process more complex than the mechanical operation of a picture-making tool, invoking the active synthesis of disparate aesthetic elements. Like Fox Talbot and his contemporaries in the nineteenth century, “synthographers” maintain that AI art simply automates the most time-consuming parts of drawing and painting, freeing up human cognition for higher-order creativity.
Separating human from camera was a necessary part of preserving the myth of the camera as an impartial form of vision. To incorporate photography into an economic landscape of creativity, however, human agency needed to be ascribed to all parts of the process.
Consciously or not, proponents of AI-generated images stamp the tool with rhetoric that mirrors the democratic aspirations of the twenty-first century.
Stability AI took a similar tack, billing itself as “AI by the people, for the people,” despite turning Stable Diffusion, their text-to-image model, into a profitable asset. That the program is easy to use is another selling point. Would-be digital artists no longer need to use expensive specialized software to produce visually interesting material.
Meanwhile, communities of digital artists and their supporters claim that the reason AI-generated images are compelling at all is because they were trained with data sets that contained copyrighted material. They reject the claim that AI-generated art produces anything original and suggest it instead be thought of as a form of “twenty-first century collage.”
Erasing human influence from the photographic process was good for underscoring arguments about objectivity, but it complicated commercial viability. Ownership would need to be determined if photographs were to circulate as a new form of property. Was the true author of a photograph the camera or its human operator?
By reframing photographs as les dessins photographiques, or photographic drawings, the plaintiffs successfully established that the development of photographs in a darkroom was part of an operator’s creative process. In addition to setting up a shot, the photographer needed to coax the image from the camera’s film in a process resembling the creative output of drawing. The camera was a pencil capable of drawing with light and photosensitive surfaces, but held and directed by a human author.
Establishing photography’s dual function as both artwork and document may not have been philosophically straightforward, but it staved off a surge of harder questions.
Human intervention in the photographic process still appeared to happen only on the ends — in setup and then development — instead of continuously throughout the image-making process.
·noemamag.com·
AI is killing the old web, and the new web struggles to be born
Google is trying to kill the 10 blue links. Twitter is being abandoned to bots and blue ticks. There’s the junkification of Amazon and the enshittification of TikTok. Layoffs are gutting online media. A job posting looking for an “AI editor” expects “output of 200 to 250 articles per week.” ChatGPT is being used to generate whole spam sites. Etsy is flooded with “AI-generated junk.” Chatbots cite one another in a misinformation ouroboros. LinkedIn is using AI to stimulate tired users. Snapchat and Instagram hope bots will talk to you when your friends don’t. Redditors are staging blackouts. Stack Overflow mods are on strike. The Internet Archive is fighting off data scrapers, and “AI is tearing Wikipedia apart.”
it’s people who ultimately create the underlying data — whether that’s journalists picking up the phone and checking facts or Reddit users who have had exactly that battery issue with the new DeWalt cordless ratchet and are happy to tell you how they fixed it. By contrast, the information produced by AI language models and chatbots is often incorrect. The tricky thing is that when it’s wrong, it’s wrong in ways that are difficult to spot.
The resulting write-up is basic and predictable. (You can read it here.) It lists five companies, including Columbia, Salomon, and Merrell, along with bullet points that supposedly outline the pros and cons of their products. “Columbia is a well-known and reputable brand for outdoor gear and footwear,” we’re told. “Their waterproof shoes come in various styles” and “their prices are competitive in the market.” You might look at this and think it’s so trite as to be basically useless (and you’d be right), but the information is also subtly wrong.
It’s fluent but not grounded in real-world experience, and so it takes time and expertise to unpick.
·theverge.com·
Natural Language Is an Unnatural Interface
On the user experience of interacting with LLMs
Prompt engineers not only need to get the model to respond to a given question but also structure the output in a parsable way (such as JSON), in case it needs to be rendered in some UI components or be chained into the input of a future LLM query. They scaffold the raw input that is fed into an LLM so the end user doesn’t need to spend time thinking about prompting at all.
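As a concrete illustration of this kind of scaffolding, here is a minimal Python sketch, assuming an OpenAI-style chat-completion client; the prompt template, JSON schema, and model name are illustrative assumptions, not taken from the article:

```python
import json
from openai import OpenAI  # assumes the OpenAI Python client; any chat-completion API works similarly

client = OpenAI()

# Hypothetical scaffold: the end user only supplies the review text.
# The developer owns the prompt template and the JSON schema the UI expects.
PROMPT_TEMPLATE = """Summarize the product review below and return ONLY valid JSON
with the keys "sentiment" ("positive", "negative", or "mixed"),
"summary" (one sentence), and "key_points" (a list of strings).

Review:
{review}"""

def analyze_review(review: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(review=review)}],
    )
    # Parse the structured output so it can be rendered in UI components
    # or chained into the input of a follow-up LLM query.
    return json.loads(response.choices[0].message.content)
```

The point of the scaffold is that the user never sees the prompt: they see a "summarize" action, and the template plus parsing live behind it.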
From the user’s side, it’s hard to decide what to ask while providing the right amount of context.

From the developer’s side, two problems arise. It’s hard to monitor natural language queries and understand how users are interacting with your product. It’s also hard to guarantee that an LLM can successfully complete an arbitrary query. This is especially true for agentic workflows, which are incredibly brittle in practice.
When we speak to other people, there is a shared context that we communicate under. We’re not just exchanging words, but a larger information stream that also includes intonation while speaking, hand gestures, memories of each other, and more. LLMs unfortunately cannot understand most of this context and therefore, can only do as much as is described by the prompt
most people use LLMs for ~4 basic natural language tasks, rarely taking advantage of the conversational back-and-forth built into chat systems:

Summarization: Summarizing a large amount of information or text into a concise yet comprehensive summary. This is useful for quickly digesting information from long articles, documents or conversations. An AI system needs to understand the key ideas, concepts and themes to produce a good summary.

ELI5 (Explain Like I'm 5): Explaining a complex concept in a simple, easy-to-understand manner without any jargon. The goal is to make an explanation clear and simple enough for a broad, non-expert audience.

Perspectives: Providing multiple perspectives or opinions on a topic. This could include personal perspectives from various stakeholders, experts with different viewpoints, or just a range of ways a topic can be interpreted based on different experiences and backgrounds. In other words, “what would ___ do?”

Contextual Responses: Responding to a user or situation in an appropriate, contextualized manner (via email, message, etc.). Contextual responses should feel organic and on-topic, as if provided by another person participating in the same conversation.
Prompting nearly always gets in the way because it requires the user to think. End users ultimately do not wish to confront an empty text box in accomplishing their goals. Buttons and other interactive design elements make life easier.

The interface makes all the difference in crafting an AI system that augments and amplifies human capabilities rather than adding additional cognitive load.

Similar to standup comedy, delightful LLM-powered experiences require a subversion of expectation.
Users will expect the usual drudge of drafting an email or searching for a nearby restaurant, but instead will be surprised by the amount of work that has already been done for them from the moment that their intent is made clear. For example, it would be a great experience to discover pre-written email drafts or carefully crafted restaurant and meal recommendations that match your personal taste.

If you still need to use a text input box, at a minimum, also provide some buttons to auto-fill the prompt box. The buttons can pass LLM-generated questions to the prompt box.
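A rough sketch of that last idea (buttons that pass LLM-generated questions into the prompt box), again assuming an OpenAI-style client; generate_suggestions, render_button, and prompt_box are hypothetical names for the glue a real UI would supply, not anything from the article:

```python
from openai import OpenAI  # assumes the OpenAI Python client, as in the earlier sketch

client = OpenAI()

def generate_suggestions(user_context: str, n: int = 3) -> list[str]:
    """Ask the model for a few questions this user is likely to want answered next."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": (
                f"Given this context about the user:\n{user_context}\n\n"
                f"Suggest {n} short questions they are likely to ask next, one per line."
            ),
        }],
    )
    return response.choices[0].message.content.strip().splitlines()[:n]

# The front end would render each suggestion as a button that pre-fills the
# prompt box instead of leaving the user facing an empty text field, e.g.:
#   for question in generate_suggestions("planning a weekend hiking trip"):
#       render_button(label=question, on_click=lambda q=question: prompt_box.set_text(q))
```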
·varunshenoy.substack.com·
How the 'Barbie' Movie Came to Life
If you are wondering whether Barbie is a satire of a toy company’s capitalist ambitions, a searing indictment of the current fraught state of gender relations, a heartwarming if occasionally clichéd tribute to girl power, or a musical spectacle filled with earworms from Nicki Minaj and Dua Lipa, the answer is yes. All of the above. And then some.
Gerwig still can’t seem to believe she got away with making this version. “This movie is a goddamn miracle,” she says. She calls it a “surprising spicy margarita.” By the time you realize the salted rim has cayenne mixed in, it’s too late. “You can already taste the sweetness and you sort of go with the spice.”
Every single actor I spoke to cited Gerwig and the sharp script as the reason they joined the film. “I knew this was not going to shy away from the parts of Barbie that are more interesting but potentially a little bit more fraught,” says Hari Nef, who plays a doctor Barbie. “The contemporary history of feminism and body positivity—there are questions of how Barbie can fit into all of that.”
At one point Richard Dickson, COO and president of Mattel, says he took a flight to the London set to argue with Gerwig and Robbie over a particular scene, which he felt was off-brand. Dickson dials up his natural boyish exuberance, imitating himself righteously marching off the plane to meet them. But Gerwig and Robbie performed the scene for him and changed his mind. “When you look on the page, the nuance isn’t there, the delivery isn’t there,” explains Robbie.
Robbie had laid the groundwork for this with Mattel’s CEO when she met with him in 2018 in the hopes that LuckyChap could take on the Barbie project. “In that very first meeting, we impressed upon Ynon we are going to honor the legacy of your brand, but if we don’t acknowledge certain things—if we don’t say it, someone else is going to say it,” she says. “So you might as well be a part of that conversation.”
“The most important transition was from being a toy-manufacturing company that was making items to becoming an IP company that is managing franchises,” he says. It’s a particularly prescient strategy at a moment when superhero fatigue has set in and studios are desperate to find new intellectual property with a built-in fan base—from Super Mario Bros. to Dungeons & Dragons.
Issa Rae, 38, who plays President Barbie, argues that the entire point of the film is to portray a world in which there isn’t a singular ideal. “My worry was that it was going to feel too white feminist-y, but I think that it’s self-aware,” she says. “Barbie Land is perfect, right? It represents perfection. So if perfection is just a bunch of white Barbies, I don’t know that anybody can get on board with that.”
Still, in an interview for this story, Brenner called Gerwig’s film “not a feminist movie,” a sentiment echoed by other Mattel executives I spoke with. It was a striking contrast to my interpretation of the film and conversations with many of the actors, who used that term unprompted to describe the script. When I relay Mattel’s words to Robbie, she raises an eyebrow. “Who said that?” she asks then sighs. “It’s not that it is or it isn’t. It’s a movie. It’s a movie that’s got so much in it.” The bigger point, Robbie impresses upon me, is “we’re in on the joke. This isn’t a Barbie puff piece.”
Gerwig’s team built an entire neighborhood made up of Dream Houses that were missing walls. The actors had to be secured by wires so they wouldn’t topple off the second floors. The skies and clouds in the background were hand-painted to render a playroom-like quality, as was much of the rest of the set.
But McKinnon, 39, watched her sister and friends play with the dolls: they cut Barbie’s hair, drew on her face, and even set her on fire. She theorizes, “They were externalizing how they felt, and they felt different.” So when Gerwig offered McKinnon the role of Weird Barbie, a doll that’s been played with a little too aggressively in the real world, she jumped at the chance. McKinnon was impressed by the way the script dealt with girls’ complicated attachments to the doll. “It comments honestly about the positive and negative feelings,” she says. “It’s an incisive cultural critique.”
“We’re looking to create movies that become cultural events,” Kreiz says, and to do that Mattel needs visionaries to produce something more intriguing than a toy ad. “If you can excite filmmakers like Greta and Noah to embrace the opportunity and have creative freedom, you can have a real impact.”
·time.com·
The challenge of 'renewable' energy.
Secure energy is prerequisite to the prosperity that lifts people out of poverty.  At the same time, we want to protect the environment while providing this secure energy.  Achieving that will require competing interests to play together in the “radical middle” where conflicting goals collide around energy, the economy and the environment.
More than half of what we consume in the world today is made in countries that use coal to make it. So, we sometimes close our ears and eyes, and say, “We’re green. Just keep making our stuff over there and we’ll buy it on Amazon and have it delivered to our door one small thing at a time.” This is not good for the climate.
Emissions in Asia go into the one unique atmosphere that we all share, and by not reducing our consumption of products, we are simply moving the source of those emissions far away.
Kale is healthy, but it is not dense calorically, so you would have to eat a lot of it. Beef is dense with calories to sustain life, but too much of it is not all that healthy. Wind and solar and hydroelectric power are like kale, ideal if only you could live on the energy it provides.  Coal and oil and natural gas and nuclear power are like cow, less benign, but energy dense. Not just a little denser. Several hundred times denser.
·readtangle.com·
The military mutiny in Russia.
anyone claiming that I'm biased in describing climate change as driven by humans is actually experiencing what Daniel Stone (in our subscribers-only interview) called "affective polarization:" "So if you dislike someone, you're not going to admit they're right, even if the evidence is really clear they're right. It's sort of another example of how polarization drives inefficiency. It could stop us from implementing policies that we would agree on otherwise."
We know what factors drive the Earth's heating/cooling cycles. We know that our planet receives energy from the sun, radiates heat to the atmosphere, and that our atmosphere has certain "greenhouse gasses" that re-radiate that heat back to Earth. In other words, we know that heating/cooling cycles are driven by changes in energy coming in (solar cycle and Earth orbit), changes to the Earth's surface (ice cover, plant cover, and other life) that affect energy going out, and changes to the Earth's greenhouse gasses (concentration of CO2, CH4, water vapor, and others in our atmosphere) that affect energy retention.

We can measure those factors today. We have a very good understanding of the solar cycle and our Earth's orbit. We have a very good understanding of our planet's surface. And we have a very good understanding of historical changes to the atmosphere, through ice core data and direct atmospheric measurement. Of the factors that contribute to warming, it's very apparent that only greenhouse gasses have increased over the past century to any significant degree (and the degree of increase is very significant).

We are aware of what's causing those factors to increase. The long-lived greenhouse gasses in our atmosphere that keep our planet warm eventually return to the earth, and again to the atmosphere, through a process called the carbon cycle. Many things contribute to this cycle, and there are a lot of great arguments that support that the excess carbon is anthropogenic. One of the best arguments is that the proportion of carbon isotopes in the atmosphere is consistent with an increase in the carbon from organic matter (i.e. burned fossil fuels), and that the increase of these isotopes began with the Industrial Revolution and has increased ever since.

The Earth is getting warmer. There is essentially unanimous consensus that the Earth has been warming over the past 100 years.
I think suggesting that the identifiable excess emissions from human activity are not causing the definite increase in global warming is kind of like saying, "sure, there have been a lot more deaths from car accidents after the invention of the automobile, but hey — who can say if that has anything to do with cars? People have been dying in accidents forever."
·readtangle.com·
Macho Man
I think there are a million things to be discouraged about in the world, but I do think that the progress being made on "what it means to be a man" is moving in the right direction. It's clear men can be terrible, and the last decade in particular has had several movements root out some of the worst offenders, but I truly think all of us no matter our gender are more alike than we've historically thought, and the more we recognize that the better off we'll be.
·birchtree.me·
Elegy for the Native Mac App
Tracing a trendline from the start of the Mac apps platforms to the future of visionOS
In recent years Sketch’s Mac-ness has become a liability. Requiring every person in a large design organization to use a Mac is not an easy sell. Plus, a new generation of “internet native” users expect different things from their software than old-school Mac connoisseurs: Multiplayer editing, inline commenting, and cloud sync are now table-stakes for any successful creative app.
At the time of Sketch’s launch most UX designers were using Photoshop or Illustrator. Both were expensive and overwrought, and neither were actually created for UX design. Sketch’s innovation wasn’t any particular feature — if anything it was the lack of features. It did a few things really well, and those were exactly the things UX designers wanted. In that way it really embodied the Mac ethos: simple, single-purpose, and fun to use.
Apple pushed hard to attract artists, filmmakers, musicians, and other creative professionals. It started a virtuous cycle. More creatives using Macs meant more potential customers for creative Mac software, which meant more developers started building that software, which in turn attracted even more customers to the platform.

And so the Mac ended up with an abundance of improbably-good creative tools. Usually these apps weren’t as feature-rich or powerful as their PC counterparts, but were faster and easier and cheaper and just overall more conducive to the creative process.
Apple is still very interested in selling Macs — precision-milled aluminum computers with custom-designed chips and “XDR” screens. But they no longer care much about The Mac: The operating system, the software platform, its design sensibilities, its unique features, its vibes.
The term-of-art for this style is “skeuomorphism”: modern designs inspired by their antecedents — calculator apps that look like calculators, password-entry fields that look like bank vaults, reminders that look like sticky notes, etc.

This skeuomorphic playfulness made downloading a new Mac app delightful. The discomfort of opening a new unfamiliar piece of software was totally offset by the joy of seeing a glossy pixel-perfect rendition of a bookshelf or a bodega or a poker table, complete with surprising little animations.
There are literally dozens of ways to develop cross-platform apps, including Apple’s own Catalyst — but so far, none of these tools can create anything quite as polished as native implementations.

So it comes down to user preference: Would you rather have the absolute best app experience, or do you want the ability to use an acceptably-functional app from any of your devices? It seems that users have shifted to prefer the latter.
Unfortunately the appeal of native Mac software was, at its core, driven by brand strategy. Mac users were sold on the idea that they were buying not just a device but an ecosystem, an experience. Apple extended this branding for third-party developers with its yearly Apple Design Awards.
for the first time since the introduction of the original Mac, they’re just computers. Yes, they were technically always “just computers”, but they used to feel like something bigger. Now Macs have become just another way, perhaps the best way, to use Slack or VSCode or Figma or Chrome or Excel.
visionOS’s story diverges from that of the Mac. Apple is no longer a scrappy upstart. Rather, they’re the largest company in the world by market cap. It’s not so much that Apple doesn’t care about indie developers anymore, it’s just that indie developers often end up as the ants crushed beneath Apple’s giant corporate feet.
I think we’ll see a lot of cool indie software for visionOS, but also I think most of it will be small utilities or toys. It takes a lot of effort to build and support apps that people rely on for their productivity or creativity. If even the wildly-popular Mac platform can’t support those kinds of projects anymore, what chance does a luxury headset have?
·medium.com·
The 'moment has arrived' for digital creators. And they're here for it.
VidCon has “gone from weirdos to entrepreneurs.”

Young people have increasingly turned to online video for entertainment. During the pandemic lockdown in 2020, digital content on platforms like YouTube and TikTok dominated, which experts at VidCon said helped propel digital media as a serious form of entertainment.
Digital-first talent are the power players today
“It really drove people into watching creators, not as a hobby thing but as another linear option,” said Joe Gagliese, CEO of Viral Nation, an influencer marketing and talent management company.
creators are no longer just using social media as a jumping off point for bigger stardom. Instead, online content is the end goal. Over the years, content creation has become a serious and feasible career option for many.
Hecox said that toward the end of his and Padilla’s initial partnership, they gave priority to production quality in a way their audience didn’t like.

“We had strayed too far away from digital and we started looking more like TV, and I think people didn’t connect with that,” Hecox said.
People connect more with the self-produced aesthetic, and deprioritizing production value seems to lead to better viewer connection… is it because it’s non-fiction?
Instead of stretching themselves thin to fit a traditional mold, they've redirected their focus to their roots and what fans liked the best.
·nbcnews.com·