Data Laced with History: Causal Trees & Operational CRDTs
After mulling over my bullet points, it occurred to me that the network problems I was dealing with—background cloud sync, editing across multiple devices, real-time collaboration, offline support, and reconciliation of distant or conflicting revisions—were all pointing to the same question: was it possible to design a system where any two revisions of the same document could be merged deterministically and sensibly without requiring user intervention?
It’s what happened after sync that was troubling. On encountering a merge conflict, you’d be thrown into a busy conversation between the network, model, persistence, and UI layers just to get back into a consistent state. The data couldn’t be left alone to live its peaceful, functional life: every concurrent edit immediately became a cross-architectural matter.
I kept several questions in mind while doing my analysis. Could a given technique be generalized to arbitrary and novel data types? Did the technique pass the PhD Test? And was it possible to use the technique in an architecture with smart clients and dumb servers?
Concurrent edits are sibling branches. Subtrees are runs of characters. By the nature of reverse timestamp+UUID sort, sibling subtrees are sorted in the order of their head operations.
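A minimal sketch of that sibling ordering, assuming atoms are identified by a (Lamport timestamp, site UUID) pair — the identifier shape and site names here are illustrative, not from the paper:

```python
# Hypothetical atom identifiers: (lamport_timestamp, site_uuid).
# Concurrent sibling subtrees are ordered by *reverse* timestamp,
# with the site UUID as a tiebreaker, so newer edits sort first.
def sibling_sort_key(atom_id):
    timestamp, site = atom_id
    return (-timestamp, site)

newer = (3, "site-b")  # concurrent edit with the higher timestamp
older = (2, "site-a")
siblings = sorted([older, newer], key=sibling_sort_key)
# siblings is now [newer, older]: the newer head operation comes first
```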
This is the underlying premise of the Causal Tree. In contrast to all the other CRDTs I’d been looking into, the design presented in Victor Grishchenko’s brilliant paper was simultaneously clean, performant, and consequential. Instead of dense layers of theory and labyrinthine data structures, everything was centered around the idea of atomic, immutable, metadata-tagged, and causally-linked operations, stored in low-level data structures and directly usable as the data they represented.
I’m going to be calling this new breed of CRDTs operational replicated data types—partly to avoid confusion with the existing term “operation-based CRDTs” (or CmRDTs), and partly because “replicated data type” (RDT) seems to be gaining popularity over “CRDT” and the term can be expanded to “ORDT” without impinging on any existing terminology.
Much like Causal Trees, ORDTs are assembled out of atomic, immutable, uniquely-identified and timestamped “operations” which are arranged in a basic container structure. (For clarity, I’m going to be referring to this container as the structured log of the ORDT.) Each operation represents an atomic change to the data while simultaneously functioning as the unit of data resulting from that change. This crucial event–data duality means that an ORDT can be understood as either a conventional data structure in which each unit of data has been augmented with event metadata, or alternatively, as an event log of atomic actions ordered to resemble its output data structure for ease of execution.
To implement a custom data type as a CT, you first have to “atomize” it, or decompose it into a set of basic operations, then figure out how to link those operations such that a mostly linear traversal of the CT will produce your output data. (In other words, make the structure analogous to a one- or two-pass parsable format.)
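As a sketch of what this atomization and traversal might look like for a string of characters, under assumed operation shapes (the field names and `evaluate` function are my own illustration, not the paper’s):

```python
# Hypothetical atomized string CRDT: each operation inserts one character
# after a "cause" operation, forming a causal tree.
from dataclasses import dataclass

ROOT = (0, "")  # sentinel id for the start of the document

@dataclass(frozen=True)
class Op:
    id: tuple      # (lamport timestamp, site id) — unique and immutable
    cause: tuple   # id of the operation this one follows
    char: str

def evaluate(ops):
    """A depth-first traversal of the causal tree yields the string."""
    children = {}
    for op in ops:
        children.setdefault(op.cause, []).append(op)
    # sibling subtrees sort by reverse timestamp, then site id
    for sibs in children.values():
        sibs.sort(key=lambda o: (-o.id[0], o.id[1]))
    out = []
    def walk(node_id):
        for op in children.get(node_id, []):
            out.append(op.char)
            walk(op.id)
    walk(ROOT)
    return "".join(out)

# Site A types "ab"; site B concurrently inserts "X" after the "a".
ops = [
    Op((1, "A"), ROOT, "a"),
    Op((2, "A"), (1, "A"), "b"),
    Op((3, "B"), (1, "A"), "X"),
]
print(evaluate(ops))  # "aXb": the newer sibling sorts ahead of the older
```

Note that merging two structured logs here would just be a union of their operation sets, since `evaluate` derives the same output regardless of arrival order.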
OT and CRDT papers often cite 50 ms as the threshold at which people start to notice latency in their text editors. Therefore, any code we might want to run on a CT—including merge, initialization, and serialization/deserialization—has to fall within this range. Except for trivial cases, this precludes O(n²) or slower complexity: a 10,000-word article at 0.01 ms per character pair would take 7 hours to process! The essential CT functions have to be O(n log n) at the very worst.
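That 7-hour figure checks out on a back-of-the-envelope basis if the quadratic pass is read as touching every character pair, assuming roughly five characters per word:

```python
# Sanity check of the quadratic-complexity claim.
chars = 10_000 * 5           # ~5 characters per word (assumed)
ms = chars * chars * 0.01    # every character pair, at 0.01 ms each
hours = ms / (1000 * 60 * 60)
print(round(hours, 1))       # ≈ 6.9 hours — far beyond a 50 ms budget
```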
Of course, CRDTs aren’t without their difficulties. For instance, a CRDT-based document will always be “live”, even when offline. If a user inadvertently revises the same CRDT-based document on two offline devices, they won’t see the familiar pick-a-revision dialog on reconnection: both documents will happily merge and retain any duplicate changes. (With ORDTs, this can be fixed after the fact by filtering changes by device, but the user will still have to learn to treat their documents with a bit more caution.) In fully decentralized contexts, malicious users will have a lot of power to irrevocably screw up the data without any possibility of a rollback, and encryption schemes, permission models, and custom protocols may have to be deployed to guard against this. In terms of performance and storage, CRDTs contain a lot of metadata and require smart and performant peers, whereas centralized architectures are inherently more resource-efficient and only demand the bare minimum of their clients. You’d be hard-pressed to use CRDTs in data-heavy scenarios such as screen sharing or video editing. You also won’t necessarily be able to layer them on top of existing infrastructure without significant refactoring.
Perhaps a CRDT-based text editor will never quite be as fast or as bandwidth-efficient as Google Docs, for such is the power of centralization. But in exchange for a totally decentralized computing future? A world full of devices that control their own data and freely collaborate with one another? Data-centric code that’s entirely free from network concerns? I’d say: it’s surely worth a shot!
·archagon.net·
The Californian Ideology
Summary: The Californian Ideology is a mix of cybernetics, free market economics, and counter-culture libertarianism that originated in California and has become a global orthodoxy. It asserts that technological progress will inevitably lead to a future of Jeffersonian democracy and unrestrained free markets. However, this ideology ignores the critical role of government intervention in technological development and the social inequalities perpetuated by free market capitalism.
·metamute.org·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence to the creation and post-production of images, it seems questionable whether the resulting visualisations can still be considered ‘photographs’ in the classical sense—drawing with light. Automation has been part of the popular strain of photography since its inception, but even amateurs with only basic knowledge of the craft could understand themselves as authors of their images. We state a legitimation crisis for the current usage of the term. This paper is an invitation to consider Synthography as a term for a new genre of image production based on AI, observing its current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·
To be a Technologist is to be Human - Letters to a Young Technologist
In fact, more people are technologists than ever before, insofar as a “technologist” can be defined as someone inventing, implementing or repurposing technology. In particular, the personal computer has allowed anyone to live in the unbounded wilderness of the internet as they please. Anyone can build highly specific corners of cyberspace and quickly invent digital tools, changing their own and others’ technological realities. “Technologist” is a common identity that many different people occupy, and anyone can occupy. Yet the public perceptions of a “technologist” still constitute a very narrow image.
A technologist makes reason out of the messiness of the world, leverages their understanding to envision a different reality, and builds a pathway to make their vision happen. All three of these endeavors—to try to understand the world, to imagine something different, and to build something that fulfills that vision—are deeply human.
Humans are continually distilling and organizing reality into representations and models—to varying degrees of accuracy and implicitness—that we can understand and navigate. Our intelligence involves making models of all aspects of our realities: models of the climate, models of each other’s minds, models of fluid dynamics.
We are an unprecedentedly self-augmenting species, with a fundamental drive to organize, imagine, construct and exercise our will in the world. And we can measure our technologies’ success on the basis of how much they increase our humanity. What we need is a vision for that humanity, and to enact this vision. What do we, humans, want to become?
As a general public, we can collectively hold technologists to a higher ethical standard, as their work has important human consequences for us all. We must begin to think of them as doing deeply human work, intervening in our present realities and forging our futures. Choosing how best to model the world, impressing their will on it, and us. We must insist that they understand their role as augmenting and modifying humanity, and are responsible for the implications. Collective societal expectations are powerful; if we don’t, they won’t.
·letterstoayoungtechnologist.com·
Interface Aesthetics - An Introduction - Rhizome
Nevertheless, the interface pushes back with its prescribed methodologies, workflows, and limitations. Interface and artist are an antagonistic pair. Perhaps the best description of the polemic between the two is one of productive cannibalism. Just as the interface evolves under the pressure of innovation to accommodate new pragmatic uses, artists will continue to deconstruct and push its aesthetic and behavioral properties to their limits.
·rhizome.org·
What comes after smartphones? — Benedict Evans
Mainframes were followed by PCs, and then the web, and then smartphones. Each of these new models started out looking limited and insignificant, but each of them unlocked a new market that was so much bigger that it pulled in all of the investment, innovation and company creation and so grew to overtake the old one. Meanwhile, the old models didn’t go away, and neither, mostly, did the companies that had been created by them. Mainframes are still a big business and so is IBM; PCs are still a big business and so is Microsoft. But they don’t set the agenda anymore - no-one is afraid of them.
We’ve spent the last few decades getting to the point that we can now give everyone on earth a cheap, reliable, easy-to-use pocket computer with access to a global information network. But so far, though over 4bn people have one of these things, we’ve only just scratched the surface of what we can do with them.
There’s an old saying that the first fifty years of the car industry were about creating car companies and working out what cars should look like, and the second fifty years were about what happened once everyone had a car - they were about McDonalds and Walmart, suburbs and the remaking of the world around the car, for good and of course bad. The innovation in cars became everything around the car. One could suggest the same today about smartphones - now the innovation comes from everything else that happens around them.
·ben-evans.com·
Yale Law Journal - Amazon’s Antitrust Paradox
Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below-cost and expand widely instead. Through this strategy, the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it. Elements of the firm’s structure and conduct pose anticompetitive concerns—yet it has escaped antitrust scrutiny.
This Note argues that the current framework in antitrust—specifically its pegging competition to “consumer welfare,” defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy. We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output. Specifically, current doctrine underappreciates the risk of predatory pricing and how integration across distinct business lines may prove anticompetitive.
These concerns are heightened in the context of online platforms for two reasons. First, the economics of platform markets create incentives for a company to pursue growth over profits, a strategy that investors have rewarded. Under these conditions, predatory pricing becomes highly rational—even as existing doctrine treats it as irrational and therefore implausible.
Second, because online platforms serve as critical intermediaries, integrating across business lines positions these platforms to control the essential infrastructure on which their rivals depend. This dual role also enables a platform to exploit information collected on companies using its services to undermine them as competitors.
·yalelawjournal.org·