The AI trust crisis
14th December 2023

Dropbox added some new AI features. In the past couple of days these have attracted a firestorm of criticism. Benj Edwards rounds it up in Dropbox spooks users with new AI features that send data to OpenAI when used.

The key issue here is that people are worried that their private files on Dropbox are being passed to OpenAI to use as training data for their models—a claim that is strenuously denied by Dropbox.

As far as I can tell, Dropbox built some sensible features—summarize on demand, “chat with your data” via Retrieval Augmented Generation—and did a moderately OK job of communicating how they work... but when it comes to data privacy and AI, a “moderately OK job” is a failing grade. Especially if you hold as much of people’s private data as Dropbox does!

Two details in particular seem really important. Dropbox have an AI principles document which includes this:

Customer trust and the privacy of their data are our foundation. We will not use customer data to train AI models without consent.

They also have a checkbox in their settings that looks like this:

Update: Some time between me publishing this article and four hours later, that link stopped working.

I took that screenshot on my own account. It’s toggled “on”—but I never turned it on myself. Does that mean I’m marked as “consenting” to having my data used to train AI models?

I don’t think so: I think this is a combination of confusing wording and the eternal vagueness of what the term “consent” means in a world where everyone agrees to the terms and conditions of everything without reading them.

But a LOT of people have come to the conclusion that this means their private data—which they pay Dropbox to protect—is now being funneled into the OpenAI training abyss.

People don’t believe OpenAI

Here’s copy from that Dropbox preference box, talking about their “third-party partners”—in this case OpenAI:

Your data is never used to train their internal models, and is deleted from third-party servers within 30 days.

It’s increasingly clear to me that people simply don’t believe OpenAI when they’re told that data won’t be used for training. What’s really going on here, then, is something deeper: AI is facing a crisis of trust.

I quipped on Twitter:

“OpenAI are training on every piece of data they see, even when they say they aren’t” is the new “Facebook are showing you ads based on overhearing everything you say through your phone’s microphone”

Here’s what I meant by that.

Facebook don’t spy on you through your microphone

Have you heard the one about Facebook spying on you through your phone’s microphone and showing you ads based on what you’re talking about? This theory has been floating around for years. From a technical perspective it should be easy to disprove:

• Mobile phone operating systems don’t allow apps to invisibly access the microphone.
• Privacy researchers can audit communications between devices and Facebook to confirm if this is happening.
• Running high quality voice recognition like this at scale is extremely expensive—I had a conversation with a friend who works on server-based machine learning at Apple a few years ago who found the entire idea laughable.

The non-technical reasons are even stronger:

• Facebook say they aren’t doing this. The risk to their reputation if they are caught in a lie is astronomical.
• As with many conspiracy theories, too many people would have to be “in the loop” and not blow the whistle.
• Facebook don’t need to do this: there are much, much cheaper and more effective ways to target ads at you than spying through your microphone. These methods have been working incredibly well for years.
• Facebook gets to show us thousands of ads a year. 99% of those don’t correlate in the slightest to anything we have said out loud. If you keep rolling the dice long enough, eventually a coincidence will strike.

Here’s the thing though: none of these arguments matter. If you’ve ever experienced Facebook showing you an ad for something that you were talking about out loud moments earlier, you’ve already dismissed everything I just said. You have personally experienced anecdotal evidence which overrides all of my arguments here.
One consistent theme I’ve seen in conversations about this issue is that people are much more comfortable trusting their data to local models that run on their own devices than models hosted in the cloud. The good news is that local models are consistently both increasing in quality and shrinking in size.
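As a rough illustration (not from the article), here is a minimal sketch of what fully local inference can look like, assuming the llama-cpp-python package and a hypothetical, locally downloaded GGUF model file; in this setup the model weights, the prompt, and the user's documents never leave the machine.

```python
# Illustrative sketch only, not from the article: local inference with the
# llama-cpp-python bindings. The model path below is a hypothetical local file;
# nothing in this flow sends the prompt or any document text to a remote server.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")  # locally stored weights

response = llm(
    "Summarize this note: the quarterly report is due Friday and needs two reviewers.",
    max_tokens=64,
)
print(response["choices"][0]["text"])
```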
·simonwillison.net·
The AI trust crisis
How Elon Musk Got Tangled Up in Blue
Mr. Musk had largely come to peace with a price of $100 a year for Blue. But during one meeting to discuss pricing, his top assistant, Jehn Balajadia, felt compelled to speak up.

“There’s a lot of people who can’t even buy gas right now,” she said, according to two people in attendance. It was hard to see how any of those people would pony up $100 on the spot for a social media status symbol.

Mr. Musk paused to think. “You know, like, what do people pay for Starbucks?” he asked. “Like $8?”

Before anyone could raise objections, he whipped out his phone to set his word in stone. “Twitter’s current lords & peasants system for who has or doesn’t have a blue checkmark is bullshit,” he tweeted on Nov. 1. “Power to the people! Blue for $8/month.”
·nytimes.com·
How Elon Musk Got Tangled Up in Blue
The secret digital behaviors of Gen Z

A shift from traditional notions of information literacy to "information sensibility" among Gen Zers, who prioritize social signals and peer influence over fact-checking. The research by Jigsaw, a Google subsidiary, reveals that Gen Zers spend their digital lives in "timepass" mode, engaging with light content and trusting influencers over traditional news sources.

Comment sections for social validation and information signaling

·businessinsider.com·
The secret digital behaviors of Gen Z
The Signal and the Corrective

A technical breakdown of 'narratives' and how they operate: narratives simplify issues by focusing on a main "signal" while ignoring other relevant "noise", and this affects discussions between those with opposing preferred signals. It goes into many examples across basically any kind of ideological or cultural divide.

AI summary:

  • The article explores how different people can derive opposing narratives from the same set of facts, with each viewing their interpretation as the "signal" and opposing views as "noise"
  • Key concepts:
    • Signal: The core belief or narrative someone holds as fundamentally true
    • Corrective: The moderating adjustments made to account for exceptions to the core belief
    • Figure-ground inversion: How the same reality can be interpreted in opposite ways
  • Examples of opposing narratives include:
    • Government as public service vs. government as pork distribution
    • Medical care as healing vs. medical care as harmful intervention
    • Capitalism as wealth creation vs. capitalism as exploitation
    • Nature vs. nurture in human behavior
    • Science as gradual progress vs. science as paradigm shifts
  • Communication dynamics:
    • People are more likely to fall back on pure signals (without correctives) when:
      • Discussions become abstract
      • Communication bandwidth is limited
      • Under stress or emotional pressure
      • Speaking to unfamiliar audiences
      • In hostile environments
  • Persuasion insights:
    • It's easier to add correctives to someone's existing signal than to completely change their core beliefs
    • People must feel their fundamental views are respected before accepting criticism
    • Acknowledging partial validity of opposing views is crucial for productive dialogue
  • Problems in modern discourse:
    • Online debates often lack real-world consequences
    • When there's no need for cooperation, people prefer conquest over consensus
    • Lack of real relationships reduces incentives for civility and understanding
  • The author notes that while most people hold moderate views with both signals and correctives, fundamental differences can be masked when discussing specific policies but become apparent in discussions of general principles
  • The piece maintains a thoughtful, analytical tone while acknowledging the complexity and challenges of human communication and belief systems
  • The author expresses personal examples and vulnerability in describing how they themselves react differently to criticism based on whether it comes from those who share their fundamental values
narratives contradicting each other means that they simplify and generalize in different ways and assign goodness and badness to things in opposite directions. While that might look like contradiction it isn’t, because generalizations and value judgments aren’t strictly facts about the world. As a consequence, the more abstracted and value-laden narratives get the more they can contradict each other without any of them being “wrong”.
“The free market is extremely powerful and will work best as a rule, but there are a few outliers where it won’t, and some people will be hurt so we should have a social safety net to contain the bad side effects.” and “Capitalism is morally corrupt and rewards selfishness and greed. An economy run for the people by the people is a moral imperative, but planned economies don’t seem to work very well in practice so we need the market to fuel prosperity even if it is distasteful.” . . . have very different fundamental attitudes but may well come down quite close to each other in terms of supported policies. If you model them as having one “main signal” (basic attitude) paired with a corrective to account for how the basic attitude fails to match reality perfectly, then this kind of difference is understated when the conversation is about specific issues (because then signals plus correctives are compared and the correctives bring “opposite” people closer together) but overstated when the conversation is about general principles — because then it’s only about the signal.
I’ve said that when discussions get abstract and general people tend to go back to their main signals and ignore correctives, which makes participants seem further apart than they really are. The same thing happens when the communication bandwidth is low for some reason. When dealing with complex matters human communication tends not to be super efficient in the first place and if something makes subtlety extra hard — like a 140 character limit, only a few minutes to type during a bathroom break at work, little to no context or a noisy discourse environment — you’re going to fall back to simpler, more basic messages. Internal factors matter too. When you’re stressed, don’t have time to think, don’t know the person you’re talking to and don’t really care about them, when emotions are heated, when you feel attacked, when an audience is watching and you can’t look weak, or when you smell blood in the water, then you’re going to go simple, you’re going to go basic, you’re going to push in a direction rather than trying to hit a target. And whoever you’re talking to is going to do the same. You both fall back in different directions, exactly when you shouldn’t.
It makes sense to think of complex disagreements as not about single facts but about narratives made up of generalizations, abstractions and interpretations of many facts, most of which aren’t currently on the table. And the status of our favorite narratives matters to us, because they say what’s happening, who the heroes are and who the villains are, what matters and what doesn’t, who owes and who is owed. Most of us, when not in our very best moods, will make sure our most cherished narratives are safe before we let any others thrive.
Most people will accept that their main signals have correctives, but they will not accept that their main signals have no validity or legitimacy. It’s a lot easier to install a corrective in someone than it is to dislodge their main signal (and that might later lead to a more fundamental change of heart) — but to do that you must refrain from threatening the signal because that makes people defensive. And it’s not so hard. Listen and acknowledge that their view has greater than zero validity.
In an ideal world, any argumentation would start with laying out its own background assumptions, including stating if what it says should be taken as a corrective on top of its opposite or a complete rejection of it.
·everythingstudies.com·
The Signal and the Corrective
Fake It ’Til You Fake It
On the long history of photo manipulation dating back to the origins of photography. While new technologies have made manipulation much easier, the core questions around trust and authenticity remain the same and have been asked for over a century.
The criticisms I have been seeing about the features of the Pixel 8, however, feel like we are only repeating the kinds of fears of nearly two hundred years. We have not been able to wholly trust photographs pretty much since they were invented. The only things which have changed in that time are the ease with which the manipulations can happen, and their availability.
We all live with a growing sense that everything around us is fraudulent. It is striking to me how these tools have been introduced as confidence in institutions has declined. It feels like a death spiral of trust — not only are we expected to separate facts from their potentially misleading context, we increasingly feel doubtful that any experts are able to help us, yet we keep inventing new ways to distort reality.
The questions that are being asked of the Pixel 8’s image manipulation capabilities are good and necessary because there are real ethical implications. But I think they need to be more fully contextualized. There is a long trail of exactly the same concerns and, to avoid repeating ourselves yet again, we should be asking these questions with that history in mind. This era feels different. I think we should be asking more precisely why that is.
The questions we ask about generative technologies should acknowledge that we already have plenty of ways to lie, and that lots of the information we see is suspect. That does not mean we should not believe anything, but it does mean we ought to be asking questions about what is changed when tools like these become more widespread and easier to use.
·pxlnv.com·
Fake It ’Til You Fake It
Wikipedia:Guide to addressing bias - Wikipedia
Encyclopedias are a compendium and summary of accepted human knowledge. Their purpose is not to provide compelling and interesting articles, but to provide accurate and verifiable information. To this end, encyclopedias strive to always represent each point-of-view in a controversy with an amount of weight and credulity equal to the weight and credulity afforded to it by the best sources of information on the subject. This means that the consensus of experts in a subject will be treated as a fact, whereas theories with much less acceptance among experts, or with acceptance only among non-experts will be presented as inaccurate and untrue.
Before you even begin to try to raise the issue at a talk page, you should ask yourself "Is this article really biased, or does it accurately reflect the views of authoritative sources about this subject?" Do some research. Read the sources used by the article and find other reliable sources on the subject. Do they present the subject as controversial, or do they tend to take a side? If there's a clear controversy, what field of study would impart expertise on this, and what side do people who work in that field tend to take? Do the claims made by the article match the claims made by the sources? Depending on the answers to these questions, the article may not be biased at all.
·en.wikipedia.org·
Wikipedia:Guide to addressing bias - Wikipedia
‘Woke’ and other bogus political terms, decoded
See also "On Bullshit"
“The media” (or “mainstream media”): a meaningless phrase because there are countless very different media, which don’t act in concert.
“Gets it”: a social media phrase that is used to mean “agrees with me”.
Usually, though, people who claim to have been “cancelled” mean “criticised”, “convicted of sexual assault”, “replaced by somebody who isn’t an overt bigot” or simply “ignored”.
“Political language is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind,” wrote George Orwell in his 1946 essay “Politics and the English Language” (the complete guide on how to write in just 13 pages). He lists other “worn-out and useless” words and phrases that were disappearing in his day: jackboot, Achilles heel, hotbed, melting pot, acid test, veritable inferno. The same fate later befell words overused in the aftermath of the 9/11 attacks: “heroes” (a euphemism for victims) and “greatest country on earth” (meaning largest military and GDP).
·ft.com·
‘Woke’ and other bogus political terms, decoded
Blocking Kiwifarms
we need a mechanism when there is an emergency threat to human life for infrastructure providers to work expediently with legal authorities in order to ensure the decisions we make are grounded in due process. Unfortunately, that mechanism does not exist and so we are making this uncomfortable emergency decision alone.
·blog.cloudflare.com·
Blocking Kiwifarms
‘Silicon Values’
York points to a 1946 U.S. Supreme Court decision, Marsh v. Alabama, which held that private entities can become sufficiently large and public to require them to be subject to the same Constitutional constraints as government entities. Though York says this ruling has “not as of this writing been applied to the quasi-public spaces of the internet”
even if YouTube were treated as an extension of government due to its size and required to retain every non-criminal video uploaded to its service, it would make as much of a political statement elsewhere, if not more. In France and Germany, it — like any other company — must comply with laws that require the removal of hate speech, laws which in the U.S. would be unconstitutional
Several European countries have banned Google Analytics because it is impossible for their citizens to be protected against surveillance by American intelligence agencies.
TikTok has downplayed the seriousness of its platform by framing it as an entertainment venue. As with other platforms, disinformation on TikTok spreads and multiplies. These factors may have an effect on how people vote. But the sudden alarm over yet-unproved allegations of algorithmic meddling in TikTok to boost Chinese interests is laughable to those of us who have been at the mercy of American-created algorithms despite living elsewhere. American state actors have also taken advantage of the popularity of social networks in ways not dissimilar from political adversaries.
what York notes is how aligned platforms are with the biases of upper-class white Americans; not coincidentally, the boards and executive teams of these companies are dominated by people matching that description.
It should not be so easy to point to similarities in egregious behaviour; corruption of legal processes should not be so common. I worry that regulators in China and the U.S. will spend so much time negotiating which of them gets to treat the internet as their domain while the rest of us get steamrolled by policies that maximize their self-preferencing.
to ensure a clear set of values projected into the world. One way to achieve that is to prefer protocols over platforms.
This links up with Ben Thompson’s idea about splitting Twitter into a protocol company and a social media company.
Yes, the country’s light touch approach to regulation and generous support of its tech industry has brought the world many of its most popular products and services. But it should not be assumed that we must rely on these companies built in the context of middle- and upper-class America.
·pxlnv.com·
‘Silicon Values’
To Thrive, Our Democracy Needs Digital Public Infrastructure
Facebook, Twitter and YouTube each took first steps to rein in the worst behavior on their platforms in the heat of the election, but none have confronted how their spaces were structured to become ideal venues for outrage and incitement.
The first step in the process is realizing that the problems we’re experiencing in digital life — how to gather strangers together in public in ways that make it so people generally behave themselves — aren’t new. They’re problems that physical communities have wrestled with for centuries. In physical communities, businesses play a critical role — but so do public libraries, schools, parks and roads. These spaces are often the groundwork that private industry builds itself around: Schools teach and train the next generation of workers; new public parks and plazas often spur private real estate development; businesses transport goods on publicly funded roads; and so on. Public spaces and private industry work symbiotically, if sometimes imperfectly.
These kinds of public spaces mostly don’t exist online. Twitter, Facebook, YouTube and Twitch each offer some aspects of these experiences. But ultimately, they’re all organized around the need for growth and revenue — incentives which are in tension with the critical community functions these institutions also serve, and with the heavy staffing models they require.
Recent peer-reviewed research from three professors at the University of Virginia demonstrates how dramatically the design of platforms can affect how people behave on them. In their study, in months where conservative-leaning users visited Facebook more, they saw much more ideological content than normal, whereas in months where they visited Reddit more they “read news that was 50 percent more moderate than what they typically read.” (This effect was smaller but similar for political liberals). Same people, different platforms, and dramatically different news diets as a result.
Wikipedia is probably the best-known example of this kind of institution — a nonprofit, mission-driven piece of digital infrastructure. The nonprofit Internet Archive, which bills itself as a free “digital library,” a repository of books, movies and music and over 500 billion archived webpages to create a living history of the internet, is another. But what we need are not just information services with a mission-driven agenda, but spaces where people can talk, share and relate without those relationships being distorted and shaped by profit-seeking incentive structures.
Users can post only once a day, every post is read by a moderating team, and if you’re too salty or run afoul of other norms, you’re encouraged to rewrite. This is terrible for short-term engagement — flame wars drive attention and use, after all — and as a business model, all those moderators are costly. But there’s a long-term payoff: two-thirds of Vermont households are on the Forum, and many Vermonters find it a valuable place for thoughtful public discussions.
In fact, public digital infrastructures might be the right place to start exploring how to reinvent governance and civil society more broadly.
If mission, design and governance are important ingredients, the final component is what might be called digital essential workers — professionals like librarians whose job is to manage, steward, and care for the people in these spaces. This care work is one of the pillars of successful physical communities, which has been abstracted away by the existing tech platforms.
The truth is that Facebook, Google and Twitter have displaced and sucked the revenue out of an entire ecosystem of local journalistic enterprises and other institutions that served some of these public functions.
·politico.com·
To Thrive, Our Democracy Needs Digital Public Infrastructure