Mark Zuckerberg Is Not Done With Politics – Pixel Envy
Journalists do not write the headlines; I hope the editor responsible for this one is soaked with regret. Zuckerberg is not “done with politics”. He is very much playing politics. He supported some more liberal causes when it was both politically acceptable and financially beneficial, something he has continued to do today, albeit by having no discernible principles. Do not mistake this for savviness or diplomacy, either. It is political correctness for the billionaire class.
·pxlnv.com·
Meta surrenders to the right on speech
Alexios Mantzarlis, the founding director of the International Fact-Checking Network, worked closely with Meta as the company set up its partnerships. He took exception on Tuesday to Zuckerberg's statement that "the fact-checkers have just been too politically biased, and have destroyed more trust than they've created, especially in the US." What Zuckerberg called bias is a reflection of the fact that the right shares more misinformation than the left, said Mantzarlis, now the director of the Security, Trust, and Safety Initiative at Cornell Tech. "He chose to ignore research that shows that politically asymmetric interventions against misinformation can result from politically asymmetric sharing of misinformation," Mantzarlis said. "He chose to ignore that a large chunk of the content fact-checkers are flagging is likely not political in nature, but low-quality spammy clickbait that his platforms have commodified. He chose to ignore research that shows Community Notes users are very much motivated by partisan motives and tend to over-target their political opponents."
while Community Notes has shown some promise on X, a former Twitter executive reminded me today that volunteer content moderation has its limits. Community Notes rarely appear on content outside the United States, and often take longer to appear on viral posts than traditional fact checks. There is also little to no empirical evidence that Community Notes are effective at harm reduction. Another wrinkle: many Community Notes currently cite as evidence fact-checks created by the fact-checking organizations that Meta just canceled all funding for.
What Zuckerberg is saying is that it will now be up to users to do what automated systems were doing before — a giant step backward for a person who prides himself on having among the world's most advanced AI systems.
"I can't tell you how much harm comes from non-illegal but harmful content," a longtime former trust and safety employee at the company told me. The classifiers that the company is now switching off meaningfully reduced the spread of hate movements on Meta's platforms, they said. "This is not the climate change debate, or pro-life vs. pro-choice. This is degrading, horrible content that leads to violence and that has the intent to harm other people."
·platformer.news·
Zuckerberg officially gives up
I floated a theory of mine to Atlantic writer Charlie Warzel on this week’s episode of Panic World that content moderation, as we’ve understood it, effectively ended on January 6th, 2021. You can listen to the whole episode here, but the way I look at it is that the Insurrection was the first time Americans could truly see the radicalizing effects of algorithmic platforms like Facebook and YouTube that other parts of the world, particularly the Global South, had dealt with for years. A moment of political violence Silicon Valley could no longer ignore or obfuscate the way it had with similar incidents in countries like Myanmar, India, Ethiopia, or Brazil. And once faced with the cold, hard truth of what their platforms had been facilitating, companies like Google and Meta, at least internally, accepted that they would never be able to moderate them at scale. And so they just stopped.
After 2021, the major tech platforms we’ve relied on since the 2010s could no longer pretend that they would ever be able to properly manage the amount of users, the amount of content, the amount of influence they “need” to exist at the size they “need” to exist at to make the amount of money they “need” to exist.
Under Zuckerberg’s new “censorship”-free plan, Meta’s social networks will immediately fill up with hatred and harassment. Which will make a fertile ground for terrorism and extremism. Scams and spam will clog comments and direct messages. And illicit content, like non-consensual sexual material, will proliferate in private corners of networks like group messages and private Groups. Algorithms will mindlessly spread this slop, boosted by the loudest, dumbest, most reactionary users on the platform, helping it evolve and metastasize into darker, stickier social movements. And the network will effectively break down. But Meta is betting that the average user won’t care or notice. AI profiles will like their posts, comment on them, and even make content for them. A feedback loop of nonsense and violence. Our worst, unmoderated impulses, shared by algorithm and reaffirmed by AI. Where nothing has to be true and everything is popular.
·garbageday.email·
Meta’s Big Squeeze – Pixel Envy
These pieces each seem like they are circling a theme of a company finding the upper bound of its user base, and then squeezing it for activity, revenue, and promising numbers to report to investors. Unlike Zitron, I am not convinced we are watching Facebook die. I think Koebler is closer to the truth: we are watching its zombification.
·pxlnv.com·
AI Integration and Modularization
Summary: Examines the question of integration versus modularization in the context of AI, drawing on the work of economists Ronald Coase and Clayton Christensen. Google is pursuing a fully integrated approach similar to Apple's, while AWS is betting on modularization, and Microsoft and Meta are somewhere in between. Integration may provide an advantage in the consumer market and for achieving AGI, but for enterprise AI, a more modular approach leveraging data gravity and treating models as commodities may prevail. Ultimately, the biggest beneficiary of this dynamic could be Nvidia.
The left side of figure 5-1 indicates that when there is a performance gap — when product functionality and reliability are not yet good enough to address the needs of customers in a given tier of the market — companies must compete by making the best possible products. In the race to do this, firms that build their products around proprietary, interdependent architectures enjoy an important competitive advantage against competitors whose product architectures are modular, because the standardization inherent in modularity takes too many degrees of design freedom away from engineers, and they cannot optimize performance.
The issue I have with this analysis of vertical integration — and this is exactly what I was taught at business school — is that the only considered costs are financial. But there are other, more difficult to quantify costs. Modularization incurs costs in the design and experience of using products that cannot be overcome, yet cannot be measured. Business buyers — and the analysts who study them — simply ignore them, but consumers don’t. Some consumers inherently know and value quality, look-and-feel, and attention to detail, and are willing to pay a premium that far exceeds the financial costs of being vertically integrated.
Google trains and runs its Gemini family of models on its own TPU processors, which are only available on Google’s cloud infrastructure. Developers can access Gemini through Vertex AI, Google’s fully-managed AI development platform; and, to the extent Vertex AI is similar to Google’s internal development environment, that is the platform on which Google is building its own consumer-facing AI apps. It’s all Google, from top-to-bottom, and there is evidence that this integration is paying off: Gemini 1.5’s industry leading 2 million token context window almost certainly required joint innovation between Google’s infrastructure team and its model-building team.
In AI, Google is pursuing an integrated strategy, building everything from chips to models to applications, similar to Apple's approach in smartphones.
On the other extreme is AWS, which doesn’t have any of its own models; instead its focus has been on its Bedrock managed development platform, which lets you use any model. Amazon’s other focus has been on developing its own chips, although the vast majority of its AI business runs on Nvidia GPUs.
Microsoft is in the middle, thanks to its close ties to OpenAI and its models. The company added Azure Models-as-a-Service last year, but its primary focus for both external customers and its own internal apps has been building on top of OpenAI’s GPT family of models; Microsoft has also launched its own chip for inference, but the vast majority of its workloads run on Nvidia.
Google is certainly building products for the consumer market, but those products are not devices; they are Internet services. And, as you might have noticed, the historical discussion didn’t really mention the Internet. Both Google and Meta, the two biggest winners of the Internet epoch, built their services on commodity hardware. Granted, those services scaled thanks to the deep infrastructure work undertaken by both companies, but even there Google’s more customized approach has been at least rivaled by Meta’s more open approach. What is notable is that both companies are integrating their models and their apps, as is OpenAI with ChatGPT.
Google's integrated AI strategy is unique but may not provide a sustainable advantage for Internet services in the way Apple's integration does for devices.
It may be the case that selling hardware, which has to be perfect every year to justify a significant outlay of money by consumers, provides a much better incentive structure for maintaining excellence and execution than does being an Aggregator that users access for free.
Google’s collection of moonshots — from Waymo to Google Fiber to Nest to Project Wing to Verily to Project Loon (and the list goes on) — have mostly been science projects that have, for the most part, served to divert profits from Google Search away from shareholders. Waymo is probably the most interesting, but even if it succeeds, it is ultimately a car service rather far afield from Google’s mission statement “to organize the world’s information and make it universally accessible and useful.”
The only thing that drives meaningful shifts in platform marketshare are paradigm shifts, and while I doubt the v1 version of Pixie [Google’s rumored Pixel-only AI assistant] would be good enough to drive switching from iPhone users, there is at least a path to where it does exactly that.
the fact that Google is being mocked mercilessly for messed-up AI answers gets at why consumer-facing AI may be disruptive for the company: the reason why incumbents find it hard to respond to disruptive technologies is because they are, at least at the beginning, not good enough for the incumbent’s core offering. Time will tell if this gives more fuel to a shift in smartphone strategies, or makes the company more reticent.
while I was very impressed with Google’s enterprise pitch, which benefits from its integration with Google’s infrastructure without all of the overhead of potentially disrupting the company’s existing products, it’s going to be a heavy lift to overcome data gravity, i.e. the fact that many enterprise customers will simply find it easier to use AI services on the same clouds where they already store their data (Google does, of course, also support non-Gemini models and Nvidia GPUs for enterprise customers). To the extent Google wins in enterprise it may be by capturing the next generation of startups that are AI first and, by definition, data light; a new company has the freedom to base its decision on infrastructure and integration.
Amazon is certainly hoping that argument is correct: the company is operating as if everything in the AI value chain is modular and ultimately a commodity, which insinuates that it believes that data gravity will matter most. What is difficult to separate is to what extent this is the correct interpretation of the strategic landscape versus a convenient interpretation of the facts that happens to perfectly align with Amazon’s strengths and weaknesses, including infrastructure that is heavily optimized for commodity workloads.
Unclear whether Amazon's strategy is based on true insight or on motivated reasoning rooted in its existing strengths.
Meta’s open source approach to Llama: the company is focused on products, which do benefit from integration, but there are also benefits that come from widespread usage, particularly in terms of optimization and complementary software. Open source accrues those benefits without imposing any incentives that detract from Meta’s product efforts (and don’t forget that Meta is receiving some portion of revenue from hyperscalers serving Llama models).
The iPhone maker, like Amazon, appears to be betting that AI will be a feature or an app; like Amazon, it’s not clear to what extent this is strategic foresight versus motivated reasoning.
achieving something approaching AGI, whatever that means, will require maximizing every efficiency and optimization, which rewards the integrated approach.
the most value will be derived from building platforms that treat models like processors, delivering performance improvements to developers who never need to know what is going on under the hood.
·stratechery.com·
Vision Pro is an over-engineered “devkit” // Hardware bleeds genius & audacity but software story is disheartening // What we got wrong at Oculus that Apple got right // Why Meta could finally have its Android moment
Some of the topics I touch on:
- Why I believe Vision Pro may be an over-engineered “devkit”
- The genius & audacity behind some of Apple’s hardware decisions
- Gaze & pinch is an incredible UI superpower and major industry ah-ha moment
- Why the Vision Pro software/content story is so dull and unimaginative
- Why most people won’t use Vision Pro for watching TV/movies
- Apple’s bet in immersive video is a total game-changer for live sports
- Why I returned my Vision Pro… and my Top 10 wishlist to reconsider
- Apple’s VR debut is the best thing that ever happened to Oculus/Meta
- My unsolicited product advice to Meta for Quest Pro 2 and beyond
Apple really played it safe in the design of this first VR product by over-engineering it. For starters, Vision Pro ships with more sensors than what’s likely necessary to deliver Apple’s intended experience. This is typical in a first-generation product that’s been under development for so many years. It makes Vision Pro start to feel like a devkit.
A sensor party: 6 tracking cameras, 2 passthrough cameras, 2 depth sensors (plus 4 eye-tracking cameras not shown)
it’s easy to understand two particularly important decisions Apple made for the Vision Pro launch:
1. Designing an incredible in-store Vision Pro demo experience, with the primary goal of getting as many people as possible to experience the magic of VR through Apple’s lenses — most of whom have no intention to even consider a $4,000 purchase. The demo is only secondarily focused on actually selling Vision Pro headsets.
2. Launching an iconic woven strap that photographs beautifully even though this strap simply isn’t comfortable enough for the vast majority of head shapes. It’s easy to conclude that this decision paid off because nearly every bit of media coverage (including and especially third-party reviews on YouTube) uses the woven strap despite the fact that it’s less comfortable than the dual loop strap that’s “hidden in the box”.
Apple’s relentless and uncompromising hardware insanity is largely what made it possible for such a high-res display to exist in a VR headset, and it’s clear that this product couldn’t possibly have launched much sooner than 2024 for one simple limiting factor — the maturity of micro-OLED displays plus the existence of power-efficient chipsets that can deliver the heavy compute required to drive this kind of display (i.e. the M2).
·hugo.blog·
The algorithmic anti-culture of scale
Ryan Broderick's impressions of Meta's Twitter copycat, Threads
My verdict: Threads sucks shit. It has no purpose. It is for no one. It launched as a content graveyard and will assuredly only become more of one over time. It’s iFunny for people who miss The Ellen Show. It has a distinct celebrities-making-videos-during-COVID-lockdown vibe. It feels like a 90s-themed office party organized by a human resources department. And my theory, after staring into its dark heart for several days, is that it was never meant to “beat” Twitter — regardless of what Zuckerberg has been tweeting. Threads’ true purpose was to act as a fresh coat of paint for Instagram’s code in the hopes it might make the network relevant again. And Threads is also proof that Meta, even after all these years, still has no other ambition aside from scale.
·garbageday.email·
Insider Trading Is Better From Home
Oh Elon: Well, look, if I were the newly hired chief executive officer of a social media company, and if the directors and shareholders who brought me in as CEO had told me that my main mission was to turn around the company’s precarious financial situation by improving our position with advertisers, and if I spent my first few weeks reassuring advertisers and rebuilding relationships and talking up our site’s unique audience and powerful engagement, and then one day my head of software engineering came to me and said “hey boss, too many people were too engaged with too many posts, so I had to limit everyone’s ability to view posts on our site, just FYI,” I would … probably … fire … him?
I mean I suppose I might ask questions like “Is this because of some technological limitation on our system? Is it because you were monkeying with the code without understanding it? Is it because you tried to stop people from reading the site without logging in, and messed up and stopped them from reading the site even when they logged in? Is it because you fired and demoralized too many engineers so no one was left to keep the systems running normally? Is it because you forgot to pay the cloud bills? Is it because deep down you don’t like it when people read posts on our site and you want to stop them, or you don’t like relying on ad revenue and want to sabotage my ability to sell ads?”
no matter what the answers are, this guy’s gotta go. If you are in charge of the software engineers at a social media site, and you make it so that people can’t read the site, that’s bad.
Over the past 10 days, [Ultimate Fighting Championship President Dana] White said he, Mr. Musk and [Mark] Zuckerberg — aided by advisers — have negotiated behind the scenes and are inching toward physical combat. While there are no guarantees a match will happen, the broad contours of an event are taking shape, said Mr. White and three people with knowledge of the discussions.

People keep emailing to ask about, like, the fiduciary duties and securities-law disclosure issues here, but I’m gonna wait until they’re in the octagon before I worry about that stuff.
·bloomberg.com·
The VR winter — Benedict Evans
When I started my career 3G was the hot topic, and every investor kept asking ‘what’s the killer app for 3G?’ It turned out that the killer app for having the internet in your pocket was, well, having the internet in your pocket. But with each of those, we knew what to build next, and with VR we don’t. That tells me that VR has a place in the future. It just doesn’t tell me what kind of place.
The successor to the smartphone will be something that doesn’t just merge AR and VR but makes the distinction irrelevant - something that you can wear all day every day, and that can seamlessly both occlude and supplement the real world and generate indistinguishable volumetric space.
·ben-evans.com·
Vision Pro — Benedict Evans
Meta, today, has roughly the right price and is working forward to the right device: Apple has started with the right device and will work back to the right price. Meta is trying to catalyse an ecosystem while we wait for the right hardware - Apple is trying to catalyse an ecosystem while we wait for the right price.
one of the things I wondered before the event was how Apple would show a 3D experience in 2D. Meta shows either screenshots from within the system (with the low visual quality inherent in the spec you can make and sell for $500) or shots of someone wearing the headset and grinning - neither are satisfactory. Apple shows the person in the room, with the virtual stuff as though it was really there, because it looks as though it is.
A lot of what Apple shows is possibility and experiment - it could be this, this or that, just as when Apple launched the watch it suggested it as fitness, social or fashion, and it turn out to work best for fitness (and is now a huge business).
Mark Zuckerberg, speaking to a Meta all-hands after Apple’s event, made the perfectly reasonable point that Apple hasn’t shown much that no-one had thought of before - there’s no ‘magic’ invention. Everyone already knows we need better screens, eye-tracking and hand-tracking, in a thin and light device.
It’s worth remembering that Meta isn’t in this to make a games device, nor really to sell devices per se - rather, the thesis is that if VR is the next platform, Meta has to make sure it isn’t controlled by a platform owner who can screw them, as Apple did with IDFA in 2021.
On the other hand, the Vision Pro is an argument that current devices just aren’t good enough to break out of the enthusiast and gaming market, incremental improvement isn’t good enough either, and you need a step change in capability.
Apple’s privacy positioning, of course, has new strategic value now that it’s selling a device you wear that’s covered in cameras
the genesis of the current wave of VR was the realisation a decade ago that the VR concepts of the 1990s would work now, and with nothing more than off-the-shelf smartphone components and gaming PCs, plus a bit more work. But ‘a bit more work’ turned out to be thirty or forty billion dollars from Meta and God only knows how much more from Apple - something over $100bn combined, almost certainly.
So it might be that a wearable screen of any kind, no matter how good, is just a staging post - the summit of a foothill on the way to the top of Everest. Maybe the real Reality device is glasses, or contact lenses projecting onto your retina, or some kind of neural connection, all of which might be a decade or decades away again, and the piece of glass in our pocket remains the right device all the way through.
I think the price and the challenge of category creation are tightly connected. Apple has decided that the capabilities of the Vision Pro are the minimum viable product - that it just isn’t worth making or selling a device without a screen so good you can’t see the pixels, pass-through where you can’t see any lag, perfect eye-tracking and perfect hand-tracking. Of course the rest of the industry would like to do that, and will in due course, but Apple has decided you must do that.
For VR, better screens are merely better, but for AR Apple thinks this level of display system is a base below which you don’t have a product at all.
For Meta, the device places you in ‘the metaverse’ and there could be many experiences within that. For Apple, this device itself doesn’t take you anywhere - it’s a screen and there could be five different ‘metaverse’ apps. The iPhone was a piece of glass that could be anything - this is trying to be a piece of glass that can show anything.
This reminds me a little of when Meta tried to make a phone, and then a Home Screen for a phone, and Mark Zuckerberg said “your phone should be about people.” I thought “no, this is a computer, and there are many apps, some of which are about people and some of which are not.” Indeed there’s also an echo of telco thinking: on a feature phone, ‘internet stuff’ was one or two icons on your portable telephone, but on the iPhone the entire telephone was just one icon on your computer. On a Vision Pro, the ‘Meta Metaverse’ is one app amongst many. You have many apps and panels, which could be 2D or 3D, or could be spaces.
·ben-evans.com·
How the Push for Efficiency Changes Us
Efficiency initiatives are all about doing the same (or more) with less.  And while sometimes that can be done purely through technology, humans often bear the brunt of efficiency initiatives.
When Zuckerberg says the organization is getting “flatter,” he means that more non-management workers will have to take on types of work—coordinating, synthesizing, communicating, and affective tasks—that managers used to do. For many, that means a significant intensification of a style of work that is not for everyone.
becoming more efficient and productive seems to hold positive moral value. It goes into the plus column on the balance sheet of your character. But this moral quality of efficiency acts to turn us each into a certain kind of person. Not just a certain kind of worker, but a certain kind of voter, parent, partner, mentor, and citizen.
Social theorist Kathi Weeks argues that the responsibilities we feel toward work—and I’ll add our responsibility specifically to efficiency and productivity—have “more to do with the socially mediating role of work than its strictly productive function.” In other words, the stories we tell about work and our relationships to it are actively creating our “social, political, and familial” stories and relationships, too.
A Year of Efficiency is bound to make shareholders happy. But what does it do to the humans who create the value those shareholders add to their portfolios? A Year of Efficiency might mean you can fit in more social media posts, more podcast episodes, more emails, or even more products or services. But how do you feel at the end? How has your relationship with yourself changed? How has your relationship with others changed?  Who do you become when efficiency is your guiding principle?
It’s worth questioning whether the moral quality we assign to efficiency and productivity in our society is healthy, or even useful. And it’s worth asking whether efficiency and productivity are really the modes through which we want to relate to our partners, children, friends, and communities.
While I certainly won’t deny the satisfaction of learning how to do a task faster, I do think it’s worth interrogating the way efficiency comes to shape our lives.
·explorewhatworks.com·
Tiktok’s enshittification (21 Jan 2023) – Pluralistic: Daily links from Cory Doctorow
it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a "two sided market," where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them.
Today, Marketplace sellers are handing 45%+ of the sale price to Amazon in junk fees. The company's $31b "advertising" program is really a payola scheme that pits sellers against each other, forcing them to bid on the chance to be at the top of your search.
Search Amazon for "cat beds" and the entire first screen is ads, including ads for products Amazon cloned from its own sellers, putting them out of business (third parties have to pay 45% in junk fees to Amazon, but Amazon doesn't charge itself these fees).
This is enshittification: surpluses are first directed to users; then, once they're locked in, surpluses go to suppliers; then once they're locked in, the surplus is handed to shareholders and the platform becomes a useless pile of shit.
This made publications truly dependent on Facebook – their readers no longer visited the publications' websites, they just tuned into them on Facebook. The publications were hostage to those readers, who were hostage to each other. Facebook stopped showing readers the articles publications ran, tuning The Algorithm to suppress posts from publications unless they paid to "boost" their articles to the readers who had explicitly subscribed to them and asked Facebook to put them in their feeds.
Today, Facebook is terminally enshittified, a terrible place to be whether you're a user, a media company, or an advertiser. It's a company that deliberately demolished a huge fraction of the publishers it relied on, defrauding them into a "pivot to video" based on false claims of the popularity of video among Facebook users. Companies threw billions into the pivot, but the viewers never materialized, and media outlets folded in droves:
These videos go into Tiktok users' ForYou feeds, which Tiktok misleadingly describes as being populated by videos "ranked by an algorithm that predicts your interests based on your behavior in the app." In reality, For You is only sometimes composed of videos that Tiktok thinks will add value to your experience – the rest of the time, it's full of videos that Tiktok has inserted in order to make creators think that Tiktok is a great place to reach an audience.
"Sources told Forbes that TikTok has often used heating to court influencers and brands, enticing them into partnerships by inflating their videos’ view count.
"Monetize" is a terrible word that tacitly admits that there is no such thing as an "Attention Economy." You can't use attention as a medium of exchange. You can't use it as a store of value. You can't use it as a unit of account. Attention is like cryptocurrency: a worthless token that is only valuable to the extent that you can trick or coerce someone into parting with "fiat" currency in exchange for it.
The algorithm creates the conditions under which ads become necessary.
For Tiktok, handing out free teddy-bears by "heating" the videos posted by skeptical performers and media companies is a way to convert them to true believers, getting them to push all their chips into the middle of the table, abandoning their efforts to build audiences on other platforms (it helps that Tiktok's format is distinctive, making it hard to repurpose videos for Tiktok to circulate on rival platforms).
every time Tiktok shows you a video you asked to see, it loses a chance to show you a video it wants you to see
I just handed Twitter $8 for Twitter Blue, because the company has strongly implied that it will only show the things I post to the people who asked to see them if I pay ransom money.
Compuserve could have "monetized" its own version of Caller ID by making you pay $2.99 extra to see the "From:" line on email before you opened the message – charging you to know who was speaking before you started listening – but they didn't.
Useful idiots on the right were tricked into thinking that the risk of Twitter mismanagement was "woke shadowbanning," whereby the things you said wouldn't reach the people who asked to hear them because Twitter's deep state didn't like your opinions. The real risk, of course, is that the things you say won't reach the people who asked to hear them because Twitter can make more money by enshittifying their feeds and charging you ransom for the privilege to be included in them.
Individual product managers, executives, and activist shareholders all give preference to quick returns at the cost of sustainability, and are in a race to see who can eat their seed-corn first. Enshittification has only lasted for as long as it has because the internet has devolved into "five giant websites, each filled with screenshots of the other four"
policymakers should focus on freedom of exit – the right to leave a sinking platform while continuing to stay connected to the communities that you left behind, enjoying the media and apps you bought, and preserving the data you created
technological self-determination is at odds with the natural imperatives of tech businesses. They make more money when they take away our freedom – our freedom to speak, to leave, to connect.
even Tiktok's critics grudgingly admitted that no matter how surveillant and creepy it was, it was really good at guessing what you wanted to see. But Tiktok couldn't resist the temptation to show you the things it wants you to see, rather than what you want to see.
·pluralistic.net·
Stacking the Optical Deck: Introducing Infinite Display + a Primer on Measuring Visual Quality in VR | Meta Store
Instead of looking at a large screen at a farther distance, VR users are looking at a smaller screen, much closer to their eyes and magnified by a set of lenses within an optical stack. It’s like looking at a TV through a camera lens—what you’ll see isn’t just determined by the resolution of the screen, but also by the optical properties of the lens, like magnification and sharpness.
instead, we should evaluate the full optical system’s resolution, which is measured in PPD—a combined metric that takes into account the display and optics working together. An angular measurement, PPD measures the number of pixels that are packed within 1° of the field of view (FOV). The higher the PPD, the better the system resolution of the VR headset.
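To make the PPD arithmetic above concrete, here is a minimal sketch of the calculation. The headset names, pixel counts, and FOV figures below are illustrative assumptions for the sake of the example, not specs quoted in the post, and the simple pixels-divided-by-degrees average ignores the fact that real lenses concentrate more PPD at the center of the view than at the edges.

```python
import math  # not strictly needed for the average, kept for any angular math you add


def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Approximate pixels per degree (PPD) across the horizontal field of view.

    This is the simple average: display pixels divided by degrees of FOV.
    Treat it as a rough system-level figure; actual optical stacks are not
    uniform across the lens.
    """
    return horizontal_pixels / horizontal_fov_deg


# Hypothetical numbers, assumed for illustration only.
example_headsets = {
    "hypothetical 2K-per-eye headset": (2064, 110),  # (horizontal pixels, FOV in degrees)
    "hypothetical 4K-per-eye headset": (3840, 100),
}

for name, (pixels, fov) in example_headsets.items():
    print(f"{name}: ~{pixels_per_degree(pixels, fov):.1f} PPD")

# For comparison, "retinal" sharpness is often cited as roughly 60 PPD,
# i.e. about one pixel per arcminute (1/60 of a degree).
print("retinal benchmark: ~60 PPD (about 1 pixel per arcminute)")
```

The point of the sketch is simply that resolution in VR is an angular quantity: the same panel spread over a wider field of view yields a lower PPD, which is why panel resolution alone understates or overstates perceived sharpness.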
·meta.com·
The $2 Per Hour Workers Who Made ChatGPT Safer
The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”
This reminds me of [[On the Social Media Ideology - Journal 75 September 2016 - e-flux]]:
> Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.
In the day-to-day work of data labeling in Kenya, sometimes edge cases would pop up that showed the difficulty of teaching a machine to understand nuance. One day in early March last year, a Sama employee was at work reading an explicit story about Batman’s sidekick, Robin, being raped in a villain’s lair. (An online search for the text reveals that it originated from an online erotica site, where it is accompanied by explicit sexual imagery.) The beginning of the story makes clear that the sex is nonconsensual. But later—after a graphically detailed description of penetration—Robin begins to reciprocate. The Sama employee tasked with labeling the text appeared confused by Robin’s ambiguous consent, and asked OpenAI researchers for clarification about how to label the text, according to documents seen by TIME. Should the passage be labeled as sexual violence, she asked, or not? OpenAI’s reply, if it ever came, is not logged in the document; the company declined to comment. The Sama employee did not respond to a request for an interview.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document.
I haven't finished watching [[Severance]] yet but this labeling system reminds me of the way they have to process and filter data that is obfuscated as meaningless numbers. In the show, employees have to "sense" whether the numbers are "bad," which they can, somehow, and sort it into the trash bin.
But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”
·time.com·
Mark Zuckerberg's Ugly Future
I’ve also seen a lot of users on Twitter asking “who is Horizon Worlds for?” And it’s a good question. I have an Oculus. Meta’s core metaverse platform, the thing that ostensibly will be replacing Facebook soon as Meta’s main online portal, the central OS for the company’s VR world, is too boring for children, too complicated for old people, too time-consuming for anyone raising a family, and, though it might eventually be good enough to function as some kind of inescapable cyberhell for white collar workers to have endless meetings inside of, at the moment it's hard to imagine a real use case for it. Except for one. I’ve come to the conclusion that Meta’s metaversal aspirations are just a cold and cynical bet on a future where we just can’t go outside anymore. Meta’s big plan is to spend the next few years cobbling together something with enough baseline functionality that we can all migrate to it during the next pandemic. That’s the only explanation for the absolutely deranged amount of misplaced optimism Meta has about this stuff. This is a company who has decided they can make a lot of money off a catastrophic future by forcing us into their genital-free off-brand-Pixar panopticon and mining us for data while we Farmville ourselves to death.
·garbageday.email·
Instagram, TikTok, and the Three Trends
In other words, when Kylie Jenner posts a petition demanding that Meta “Make Instagram Instagram again”, the honest answer is that changing Instagram is the most Instagram-like behavior possible.
The first trend is the shift towards ever more immersive mediums. Facebook, for example, started with text but exploded with the addition of photos. Instagram started with photos and expanded into video. Gaming was the first to make this progression, and is well into the 3D era. The next step is full immersion — virtual reality — and while the format has yet to penetrate the mainstream this progression in mediums is perhaps the most obvious reason to be bullish about the possibility.
The second trend is the increase in artificial intelligence. I’m using the term colloquially to refer to the overall trend of computers getting smarter and more useful, even if those smarts are a function of simple algorithms, machine learning, or, perhaps someday, something approaching general intelligence.
The third trend is the change in interaction models from user-directed to computer-controlled. The first version of Facebook relied on users clicking on links to visit different profiles; the News Feed changed the interaction model to scrolling. Stories reduced that to tapping, and Reels/TikTok is about swiping. YouTube has gone further than anyone here: Autoplay simply plays the next video without any interaction required at all.
·stratechery.com·
The Age of Algorithmic Anxiety
“I’ve been on the internet for the last 10 years and I don’t know if I like what I like or what an algorithm wants me to like,” Peter wrote. She’d come to see social networks’ algorithmic recommendations as a kind of psychic intrusion, surreptitiously reshaping what she’s shown online and, thus, her understanding of her own inclinations and tastes.
Besieged by automated recommendations, we are left to guess exactly how they are influencing us, feeling in some moments misperceived or misled and in other moments clocked with eerie precision.
·newyorker.com·