Meta surrenders to the right on speech
Alexios Mantzarlis, the founding director of the International Fact-Checking Network, worked closely with Meta as the company set up its partnerships. He took exception on Tuesday to Zuckerberg's statement that "the fact-checkers have just been too politically biased, and have destroyed more trust than they've created, especially in the US." What Zuckerberg called bias is a reflection of the fact that the right shares more misinformation than the left, said Mantzarlis, now the director of the Security, Trust, and Safety Initiative at Cornell Tech. "He chose to ignore research that shows that politically asymmetric interventions against misinformation can result from politically asymmetric sharing of misinformation," Mantzarlis said. "He chose to ignore that a large chunk of the content fact-checkers are flagging is likely not political in nature, but low-quality spammy clickbait that his platforms have commodified. He chose to ignore research that shows Community Notes users are very much motivated by partisan motives and tend to over-target their political opponents."
While Community Notes has shown some promise on X, a former Twitter executive reminded me today that volunteer content moderation has its limits. Community Notes rarely appear on content outside the United States, and often take longer to appear on viral posts than traditional fact-checks do. There is also little to no empirical evidence that Community Notes are effective at harm reduction. Another wrinkle: many Community Notes currently cite as evidence fact-checks created by the fact-checking organizations that Meta just canceled all funding for.
What Zuckerberg is saying is that it will now be up to users to do what automated systems were doing before — a giant step backward for a person who prides himself on having among the world's most advanced AI systems.
"I can't tell you how much harm comes from non-illegal but harmful content," a longtime former trust and safety employee at the company told me. The classifiers that the company is now switching off meaningfully reduced the spread of hate movements on Meta's platforms, they said. "This is not the climate change debate, or pro-life vs. pro-choice. This is degrading, horrible content that leads to violence and that has the intent to harm other people."
·platformer.news·
Zuckerberg officially gives up
I floated a theory of mine to Atlantic writer Charlie Warzel on this week’s episode of Panic World: that content moderation, as we’ve understood it, effectively ended on January 6th, 2021. You can listen to the whole episode here, but the way I look at it is that the Insurrection was the first time Americans could truly see the radicalizing effects of algorithmic platforms like Facebook and YouTube that other parts of the world, particularly the Global South, had dealt with for years. A moment of political violence Silicon Valley could no longer ignore or obfuscate the way it had with similar incidents in countries like Myanmar, India, Ethiopia, or Brazil. And once faced with the cold, hard truth of what their platforms had been facilitating, companies like Google and Meta, at least internally, accepted that they would never be able to moderate them at scale. And so they just stopped.
After 2021, the major tech platforms we’ve relied on since the 2010s could no longer pretend that they would ever be able to properly manage the amount of users, the amount of content, the amount of influence they “need” to exist at the size they “need” to exist at to make the amount of money they “need” to exist.
Under Zuckerberg’s new “censorship”-free plan, Meta’s social networks will immediately fill up with hatred and harassment. Which will make a fertile ground for terrorism and extremism. Scams and spam will clog comments and direct messages. And illicit content, like non-consensual sexual material, will proliferate in private corners of networks like group messages and private Groups. Algorithms will mindlessly spread this slop, boosted by the loudest, dumbest, most reactionary users on the platform, helping it evolve and metastasize into darker, stickier social movements. And the network will effectively break down. But Meta is betting that the average user won’t care or notice. AI profiles will like their posts, comment on them, and even make content for them. A feedback loop of nonsense and violence. Our worst, unmoderated impulses, shared by algorithm and reaffirmed by AI. Where nothing has to be true and everything is popular.
·garbageday.email·
Tiktok’s enshittification (21 Jan 2023) – Pluralistic: Daily links from Cory Doctorow
it is a seemingly inevitable consequence of the ease of changing how a platform allocates value, combined with the nature of a "two-sided market," where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them.
Today, Marketplace sellers are handing 45%+ of the sale price to Amazon in junk fees. The company's $31b "advertising" program is really a payola scheme that pits sellers against each other, forcing them to bid on the chance to be at the top of your search.
Search Amazon for "cat beds" and the entire first screen is ads, including ads for products Amazon cloned from its own sellers, putting them out of business (third parties have to pay 45% in junk fees to Amazon, but Amazon doesn't charge itself these fees).
This is enshittification: surpluses are first directed to users; then, once they're locked in, surpluses go to suppliers; then once they're locked in, the surplus is handed to shareholders and the platform becomes a useless pile of shit.
This made publications truly dependent on Facebook – their readers no longer visited the publications' websites, they just tuned into them on Facebook. The publications were hostage to those readers, who were hostage to each other. Facebook stopped showing readers the articles publications ran, tuning The Algorithm to suppress posts from publications unless they paid to "boost" their articles to the readers who had explicitly subscribed to them and asked Facebook to put them in their feeds.
Today, Facebook is terminally enshittified, a terrible place to be whether you're a user, a media company, or an advertiser. It's a company that deliberately demolished a huge fraction of the publishers it relied on, defrauding them into a "pivot to video" based on false claims of the popularity of video among Facebook users. Companies threw billions into the pivot, but the viewers never materialized, and media outlets folded in droves:
These videos go into Tiktok users' For You feeds, which Tiktok misleadingly describes as being populated by videos "ranked by an algorithm that predicts your interests based on your behavior in the app." In reality, For You is only sometimes composed of videos that Tiktok thinks will add value to your experience – the rest of the time, it's full of videos that Tiktok has inserted in order to make creators think that Tiktok is a great place to reach an audience.
Sources told Forbes that TikTok has often used heating to court influencers and brands, enticing them into partnerships by inflating their videos’ view count.
"Monetize" is a terrible word that tacitly admits that there is no such thing as an "Attention Economy." You can't use attention as a medium of exchange. You can't use it as a store of value. You can't use it as a unit of account. Attention is like cryptocurrency: a worthless token that is only valuable to the extent that you can trick or coerce someone into parting with "fiat" currency in exchange for it.
The algorithm creates the conditions that make ads necessary
For Tiktok, handing out free teddy-bears by "heating" the videos posted by skeptical performers and media companies is a way to convert them to true believers, getting them to push all their chips into the middle of the table, abandoning their efforts to build audiences on other platforms (it helps that Tiktok's format is distinctive, making it hard to repurpose videos for Tiktok to circulate on rival platforms).
every time Tiktok shows you a video you asked to see, it loses a chance to show you a video it wants you to see
I just handed Twitter $8 for Twitter Blue, because the company has strongly implied that it will only show the things I post to the people who asked to see them if I pay ransom money.
Compuserve could have "monetized" its own version of Caller ID by making you pay $2.99 extra to see the "From:" line on email before you opened the message – charging you to know who was speaking before you started listening – but they didn't.
Useful idiots on the right were tricked into thinking that the risk of Twitter mismanagement was "woke shadowbanning," whereby the things you said wouldn't reach the people who asked to hear them because Twitter's deep state didn't like your opinions. The real risk, of course, is that the things you say won't reach the people who asked to hear them because Twitter can make more money by enshittifying their feeds and charging you ransom for the privilege to be included in them.
Individual product managers, executives, and activist shareholders all give preference to quick returns at the cost of sustainability, and are in a race to see who can eat their seed-corn first. Enshittification has only lasted for as long as it has because the internet has devolved into "five giant websites, each filled with screenshots of the other four."
policymakers should focus on freedom of exit – the right to leave a sinking platform while continuing to stay connected to the communities that you left behind, enjoying the media and apps you bought, and preserving the data you created
technological self-determination is at odds with the natural imperatives of tech businesses. They make more money when they take away our freedom – our freedom to speak, to leave, to connect.
even Tiktok's critics grudgingly admitted that no matter how surveillant and creepy it was, it was really good at guessing what you wanted to see. But Tiktok couldn't resist the temptation to show you the things it wants you to see, rather than what you want to see.
·pluralistic.net·
Instagram, TikTok, and the Three Trends
In other words, when Kylie Jenner posts a petition demanding that Meta “Make Instagram Instagram again”, the honest answer is that changing Instagram is the most Instagram-like behavior possible.
The first trend is the shift towards ever more immersive mediums. Facebook, for example, started with text but exploded with the addition of photos. Instagram started with photos and expanded into video. Gaming was the first to make this progression, and is well into the 3D era. The next step is full immersion — virtual reality — and while the format has yet to penetrate the mainstream, this progression in mediums is perhaps the most obvious reason to be bullish about the possibility.
The second trend is the increase in artificial intelligence. I’m using the term colloquially to refer to the overall trend of computers getting smarter and more useful, even if those smarts are a function of simple algorithms, machine learning, or, perhaps someday, something approaching general intelligence.
The third trend is the change in interaction models from user-directed to computer-controlled. The first version of Facebook relied on users clicking on links to visit different profiles; the News Feed changed the interaction model to scrolling. Stories reduced that to tapping, and Reels/TikTok is about swiping. YouTube has gone further than anyone here: Autoplay simply plays the next video without any interaction required at all.
·stratechery.com·
The Age of Algorithmic Anxiety
“I’ve been on the internet for the last 10 years and I don’t know if I like what I like or what an algorithm wants me to like,” Peter wrote. She’d come to see social networks’ algorithmic recommendations as a kind of psychic intrusion, surreptitiously reshaping what she’s shown online and, thus, her understanding of her own inclinations and tastes.
Besieged by automated recommendations, we are left to guess exactly how they are influencing us, feeling in some moments misperceived or misled and in other moments clocked with eerie precision.
·newyorker.com·