Consider the Plight of the VC-Backed Privacy Burglars
Also, even putting aside the fact that first-party apps necessarily have certain advantages third-party apps do not (otherwise, there’d be no distinction), apps from the same developer have broad permission to share data and resources via app groups. Gmail can talk to Google Calendar, and Google Calendar has full access to Gmail’s address book. It’s no more “fundamentally anticompetitive” for Messages and Apple Mail to have full access to your Contacts address book than it was for Meta to launch Threads by piggybacking on the existing accounts and social graph of Instagram. If it’s unfair, it’s only unfair in the way that life in general is unfair.
·daringfireball.net·
Surprise! The Latest ‘Comprehensive’ US Privacy Bill Is Doomed
Deleting sections of a bill holding companies accountable for making data-driven decisions that could lead to discrimination in housing, employment, health care, and the like spurred a strong response from civil society organizations including the NAACP, the Japanese American Citizens League, the Autistic Self Advocacy Network, and Asian Americans Advancing Justice, among dozens of others.
In a letter this week to E&C Democrats, obtained by WIRED, the groups wrote: “Privacy rights and civil rights are no longer separate concepts—they are inextricably bound together and must be protected. Abuse of our data is no longer limited to targeted advertising or data breaches. Instead, our data are used in decisions about who gets a mortgage, who gets into which schools, and who gets hired—and who does not.”
These provisions contained generous “pro-business” caveats. For instance, users would be able to opt out of algorithmic decision-making only if doing so wasn’t “prohibitively costly” or “demonstrably impracticable due to technological limitations.” Similarly, companies could have limited the public’s knowledge about the results of any audits by simply hiring an independent assessor to complete the task rather than doing so internally.
·wired.com·
AI and problems of scale — Benedict Evans
Scaling technological abilities can itself represent a qualitative change, where a difference in degree becomes a difference in kind, requiring new ways of thinking about ethical and regulatory implications. These are usually matters of social, cultural, and political consideration rather than purely technical ones.
What if every police patrol car had a bank of cameras that scanned not just every number plate but every face within a hundred yards against a national database of outstanding warrants? What if the cameras in the subway do that? All the connected cameras in the city? China is already trying to do this, and we seem to be pretty sure we don’t like that, but why? One could argue that there’s no difference in principle, only in scale, but a change in scale can itself be a change in principle.
As technology advances, things that were previously possible only on a small scale can become practically feasible at a massive scale, which can change the nature and implications of those capabilities.
Generative AI is now creating a lot of new examples of scale itself as a difference in principle. You could look at the emergent abuse of AI image generators, shrug, and talk about Photoshop: there have been fake nudes on the web for as long as there’s been a web. But when high-school boys can load photos of 50 or 500 classmates into an ML model and generate thousands of such images (let’s not even think about video) on a home PC (or their phone), that does seem like an important change. Faking people’s voices has been possible for a long time, but it’s new and different that any idiot can do it themselves. People have always cheated at homework and exams, but the internet made it easy and now ChatGPT makes it (almost) free. Again, something that has always been theoretically possible on a small scale becomes practically possible on a massive scale, and that changes what it means.
This might be a genuinely new and bad thing that we don’t like at all; or it may be new and we decide we don’t care; we may decide that it’s just a new (worse?) expression of an old thing we don’t worry about; or it may be that this was indeed being done before, even at scale, but somehow doing it like this makes it different, or just makes us more aware that it’s being done at all. Cambridge Analytica was a hoax, but it catalysed awareness of issues that were real.
As new technologies emerge, there is often a period of ambivalence and uncertainty about how to view and regulate them, as they may represent new expressions of old problems or genuinely novel issues.
·ben-evans.com·
Generative AI and intellectual property — Benedict Evans
A person can’t mimic another voice perfectly (impressionists don’t have to pay licence fees) but they can listen to a thousand hours of music and make something in that style - a ‘pastiche’, we sometimes call it. If a person did that, they wouldn’t have to pay a fee to all those artists, so if we use a computer for that, do we need to pay them?
I think most people understand that if I post a link to a news story on my Facebook feed and tell my friends to read it, it’s absurd for the newspaper to demand payment for this. A newspaper, indeed, doesn’t pay a restaurant a percentage when it writes a review.
one way to think about this might be that AI makes practical at a massive scale things that were previously only possible on a small scale. This might be the difference between the police carrying wanted pictures in their pockets and the police putting face recognition cameras on every street corner - a difference in scale can be a difference in principle. What outcomes do we want? What do we want the law to be? What can it be?
OpenAI hasn’t ‘pirated’ your book or your story in the sense that we normally use that word, and it isn’t handing it out for free. Indeed, it doesn’t need that one novel in particular at all. In Tim O’Reilly’s great phrase, data isn’t oil; data is sand. It’s only valuable in the aggregate of billions, and your novel or song or article is just one grain of dust in the Great Pyramid.
It’s supposed to be inferring ‘intelligence’ (a placeholder word) from seeing as much as possible of how people talk, as a proxy for how they think.
It doesn’t need your book or website in particular and doesn’t care what you in particular wrote about, but it does need ‘all’ the books and ‘all’ the websites. It would work if one company removed its content, but not if everyone did.
What if I use an engine trained on the last 50 years of music to make something that sounds entirely new and original? No-one should be under the delusion that this won’t happen.
I can buy the same camera as Cartier-Bresson, and I can press the button and make a picture without being able to draw or paint, but that’s not what makes the artist - photography is about where you point the camera, what image you see and which you choose. No-one claims a machine made the image.
Spotify already has huge numbers of ‘white noise’ tracks and similar, gaming the recommendation algorithm and getting the same payout per play as Taylor Swift or the Rolling Stones. If we really can make ‘music in the style of the last decade’s hits,’ how much of that will there be, and how will we wade through it? How will we find the good stuff, and how will we define that? Will we care?
·ben-evans.com·
Yale Law Journal - Amazon’s Antitrust Paradox
Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below-cost and expand widely instead. Through this strategy, the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it. Elements of the firm’s structure and conduct pose anticompetitive concerns—yet it has escaped antitrust scrutiny.
This Note argues that the current framework in antitrust—specifically its pegging competition to “consumer welfare,” defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy. We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output. Specifically, current doctrine underappreciates the risk of predatory pricing and how integration across distinct business lines may prove anticompetitive.
These concerns are heightened in the context of online platforms for two reasons. First, the economics of platform markets create incentives for a company to pursue growth over profits, a strategy that investors have rewarded. Under these conditions, predatory pricing becomes highly rational—even as existing doctrine treats it as irrational and therefore implausible.
Second, because online platforms serve as critical intermediaries, integrating across business lines positions these platforms to control the essential infrastructure on which their rivals depend. This dual role also enables a platform to exploit information collected on companies using its services to undermine them as competitors.
·yalelawjournal.org·