Building Fluid Interfaces. How to create natural gestures and… | by Nathan Gitter | Medium

Back to the Future of Twitter – Stratechery by Ben Thompson
This is all build-up to my proposal for what Musk — or any other bidder for Twitter, for that matter — ought to do with a newly private Twitter.
First, Twitter’s current fully integrated model is a financial failure.
Second, Twitter’s social graph is extremely valuable.
Third, Twitter’s cultural impact is very large, and very controversial.
Given this, Musk (who I will use as a stand-in for any future CEO of Twitter) should start by splitting Twitter into two companies.
One company would be the core Twitter service, including the social graph.
The other company would be all of the Twitter apps and the advertising business.
TwitterServiceCo would open up its API to any other company that might be interested in building their own client experience; each company would:
Pay for the right to get access to the Twitter service and social graph.
Monetize in whatever way they see fit (e.g. they could pursue a subscription model).
Implement their own moderation policy.
This last point would cut a whole host of Gordian Knots:
A truly open TwitterServiceCo has the potential to be a new protocol for the Internet — the notifications and identity protocol; unlike every other protocol, though, this one would be owned by a private company. That would be insanely valuable, but it is a value that will never be realized as long as Twitter is a public company led by a weak CEO and ineffective board driving an integrated business predicated on a business model that doesn’t work.
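As a thought experiment, the proposed split can be restated as an interface contract. Everything in this sketch is hypothetical (the type names, methods, and licensing terms are mine, not anything Twitter exposes); it just puts the three obligations above in code form.

```typescript
// Hypothetical sketch only: none of these types or endpoints exist.

interface Tweet {
  id: string;
  authorId: string;
  text: string;
}

// 1. Pay for the right to access the service and social graph.
interface GraphAccessLicense {
  clientId: string;
  monthlyFee: number; // whatever terms TwitterServiceCo sets
}

// 3. Each client implements its own moderation policy, applied client-side.
interface ModerationPolicy {
  allows(tweet: Tweet): boolean;
}

// What TwitterServiceCo sells: authenticated access to the graph and firehose.
interface TwitterServiceAPI {
  authenticate(license: GraphAccessLicense): Promise<void>;
  timelineFor(userId: string): Promise<Tweet[]>;
}

// 2. Monetization (subscriptions, ads, whatever) is the client's business;
// here the client just assembles a timeline under its own policy.
async function renderTimeline(
  api: TwitterServiceAPI,
  policy: ModerationPolicy,
  userId: string
): Promise<Tweet[]> {
  const tweets = await api.timelineFor(userId);
  return tweets.filter((t) => policy.allows(t));
}
```

The point of the sketch is that moderation lives entirely on the client side of the line, which is what lets the last bullet cut those knots.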
Twitter’s Reluctance
How to give constructive design feedback over email | by Julius Tarng | Medium
How To: Code Your First Web App (Part 1) — PaulStamatiou.com
Interview: James Cuda Tells about Present and Future of Procreate App
Apps are too complex so maybe features should be ownable and tradable (Interconnected)
Clues for software design in how we sketch maps of cities (Interconnected)
Day 1 notes from picking up a modern VR headset (Interconnected)
Given all that, here’s what made me gasp on day 1 and what I’m still thinking about.
Peeping through passthrough. The way it works is that you “draw” (in VR) a box on the floor. Within that box, you are immersed in 3D virtual reality. Near the edge, you see the box around you outlined as a grid. As you touch the edge, a hole appears… you can poke your head through, and you see the real world beyond, in black and white fuzzy passthrough. I found myself leaning out to have a chat or to grab a drink. Delightful.
A Godzilla’s-eye view. Playing mini golf, I found a button that let me zoom out. Suddenly I was standing in the middle of this golf course arranged on a mountain, the mountain halfway up my chest. Walking just a foot or two, and bending down, and leaning close, I could examine bridges and trees and caves and courses. An incredible, examinable overview, in a way that would take multiple steps on any other device.
Height, space, and scale. In the first room of Anne Frank’s house, there’s a steep staircase leading down, but it’s inaccessible from the tour. However I was able to kneel down, put my head through the bannisters, and peer over the edge, down the stairwell, and into the next room. I know this is what VR is all about, but the sense of being located continues to astound. What do we do now the gamut of interaction can include vertigo and awe? It’s like suddenly being given an extra colour.
Objects that cross the threshold. When I pick up the real-life controls, they appear in VR space. Headset on, everything black and gone – except the controls in my hands are still there. And now they have extra green lights and details on them! Janus objects that face both ways into physical and fictive reality. The controls are real… but realer in VR. So many opportunities for play.
NFTs Are Put to New Use in China, Countering Censorship During Pandemic - WSJ
Sundown Towns Are Still A Problem For Black Drivers
The Distribution of Users’ Computer Skills: Worse Than You Think
'Decision' Cannes Win Is Vindication For Korean Culture - Variety
Five Technology Design Principles to Combat Domestic Abuse
Ask HN: Is there a site popular with Gen Z where users can write HTML and CSS? | Hacker News
Privacy, ads and confusion — Benedict Evans
Advertisers don’t really want to know who you are - they want to show diaper ads to people who have babies, not to show them to people who don’t, and to have some sense of which ads drove half a million sales and which ads drove a million sales.
In practice, ‘showing car ads to people who read about cars’ led the adtech industry to build vast piles of semi-random personal data, aggregated, disaggregated, traded, passed around and sometimes just lost, partly because it could and partly because that appeared to be the only way to do it. After half a decade of backlash, there are now a bunch of projects trying to get to the same underlying advertiser aims - to show ads that are relevant, and get some measure of ad effectiveness - while keeping the private data private.
Apple has pursued a very clear theory that analysis and tracking is private if it happens on your device and is not private if it leaves your device or happens in the cloud. Hence, it’s built a complex system of tracking and analysis on your iPhone, but is adamant that this is private because the data stays on the device. People have seemed to accept this (so far - or perhaps they just haven’t noticed it), but acting on the same theory Apple also created a CSAM scanning system that it thought was entirely private - ‘it only happens on your device!’ - that created a huge privacy backlash, because a bunch of other people think that if your phone is scanning your photos, that isn’t ‘private’ at all. So is ‘on device’ private or not? What’s the rule? What if Apple tried the same model for ‘private’ ads in Safari? How will the public take FLoC? I don’t think we know.
On / off device is one test, but another and much broader is first party / third party: the idea it’s OK for a website to track what you do on that website but not OK for adtech companies to track you across many different websites. This is the core of the cookie question.
At this point one answer is to cut across all these questions and say that what really matters is whether you disclose whatever you’re doing and get consent. Steve Jobs liked this argument. But in practice, as we've discovered, ‘get consent’ means endless cookie pop-ups full of endless incomprehensible questions that no normal consumer should be expected to understand, and that just train people to click ‘stop bothering me’. Meanwhile, Apple’s on-device tracking doesn't ask for permission, and opts you in by default, because, of course, Apple thinks that if it's on the device it's private. Perhaps ‘consent’ is not a complete solution after all.
If you can only analyse behaviour within one site but not across many sites, or make it much harder to do that, companies that have a big site where people spend lots of time have better targeting information and make more money from advertising. If you can only track behaviour across lots of different sites if you do it ‘privately’ on the device or in the browser, then the companies that control the device or the browser have much more control over that advertising.
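To make the first-party / third-party distinction concrete, here is a minimal sketch, assuming a Node/Express ad server (the /pixel.gif endpoint and the uid scheme are invented for illustration). A tracking pixel embedded on many unrelated sites receives its cookie back on every page that embeds it, which is exactly the cross-site linkage at issue.

```typescript
// A minimal sketch of third-party tracking mechanics (illustrative only).
import express from "express";

const adServer = express();

adServer.get("/pixel.gif", (req, res) => {
  // First visit: mint an ID. On every later visit, from *any* site that
  // embeds this pixel, the browser sends the cookie back, linking the
  // user's browsing across sites. (Naively echoing the whole Cookie
  // header back is fine for a sketch.)
  const id = req.headers.cookie ?? `uid=${Math.random().toString(36).slice(2)}`;
  // SameSite=None; Secure is what lets a cookie travel in a third-party
  // (cross-site) context at all; browsers that block third-party cookies
  // simply stop sending it.
  res.setHeader("Set-Cookie", `${id}; SameSite=None; Secure; Path=/`);
  res.status(204).end(); // no body needed; the cookie round-trip is the point
});

adServer.listen(8080);
```

The same cookie set by the site you are actually visiting is first party and unaffected, which is why this line advantages big destination sites, per the excerpt above.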
Digital gardens | Chase McCoy
Ask HN: Bluetooth kinda sucks. Why don't we have something better? | Hacker News
when you try to implement one of these specs you quickly realize that you cannot do it with the spec alone. You need example code, base implementations, test suite software and test data to build conformant software. Unfortunately, the Bluetooth SIG hides these resources behind a membership wall. Guess what happens then? You get lots of implementations of these specs that are a little bit off and don't handle all edge cases.
Spotify’s In-house Agency Propels Its Pivot to 'Playfulness'
Trump versus the rule of law in 2024
Ad Tech Revenue Statements Indicate Unclear Effects of App Tracking Transparency
it is very difficult to figure out what specific effect ATT has because there are so many factors involved
If ATT were so significantly kneecapping revenue, I would think we would see a pronounced skew against North America compared to elsewhere. But that is not the case. Revenue in North America is only slightly off compared to the company total, and Meta is increasing how much it earns per North American user compared to the rest of the world.
iOS is far more popular in the U.S. and Canada than it is in Europe, but Meta incurred a greater revenue decline — in absolute terms and, especially, in percentage terms — in Europe.
Meta was still posting year-over-year gains in both those regions until this most recent quarter, even though ATT rolled out over a year ago.
there are those who believe highly-targeted advertisements are a fair trade-off because they offer businesses a more accurate means of finding their customers, and the behavioural data collected from all of us is valuable only in the aggregate. That is, as I understand it, the view of analysts like Seufert, Benedict Evans, and Ben Thompson. Frequent readers will not be surprised to know I disagree with this premise. Regardless of how many user agreements we sign and privacy policies we read, we cannot know the full extent of the data economy. Personal information about us is being collected, shared, combined, and repackaged. It may only be profitable in aggregate, but it is useful with finer granularity, so it is unsurprising that it is indefinitely warehoused in detail.
Seufert asked, rhetorically, “what happens when ads aren’t personalized?”, answering “digital ads resemble TV ads: jarring distractions from core content experience. Non-personalized is another way of saying irrelevant, or at best, randomly relevant.”
opinion in support of personalized ads
does it make sense to build the internet’s economy on the backs of a few hundred brokers none of us have heard of, trading and merging our personal information in the hope of generating a slightly better click-through rate?
Then there is the much bigger question of whether people should even be able to opt into such widespread tracking. We simply cannot be informed consumers in every aspect of our lives, and we cannot foresee how this information will be used and abused in the full extent of time. It sounds boring, but what is so wrong with requiring data minimization at every turn, permitting only the most relevant personal data to be collected, and restricting the ability for this information to be shared or combined?
Does ATT really “[deprive] consumers of widespread ad relevancy and advertisers and publishers of commercial opportunity”? Even if it does — which I doubt — has that commercial opportunity really existed with meaningful consumer awareness and choice? Or is this entire market illegitimate, artificially inflated by our inability to avoid becoming its subjects?
I've thought this too. Do click-through rates really improve so much from targeting that the internet industry's obsession with this practice is justified?
Conflicts like these are one of many reasons why privacy rights should be established by regulators, not individual companies. Privacy must not be a luxury good, or something you opt into, and it should not be a radical position to say so. We all value different degrees of privacy, but it should not be possible for businesses to be built on whether we have rights at all. The digital economy should not be built on such rickety and obviously flawed foundations.
Great and succinct summary of points on user privacy
The Age of Algorithmic Anxiety
“I’ve been on the internet for the last 10 years and I don’t know if I like what I like or what an algorithm wants me to like,” Peter wrote. She’d come to see social networks’ algorithmic recommendations as a kind of psychic intrusion, surreptitiously reshaping what she’s shown online and, thus, her understanding of her own inclinations and tastes.
Besieged by automated recommendations, we are left to guess exactly how they are influencing us, feeling in some moments misperceived or misled and in other moments clocked with eerie precision.
Using type in AR & VR – Fonts Knowledge - Google Fonts
Folk (Browser) Interfaces
For the layman to build their own Folk Interfaces, jigs to wield the media they care about, we must offer simple primitives. A designer in Blender thinks in terms of lighting, camera movements, and materials. An editor in Premiere, in sequences, transitions, titles, and colors. Critically, this is different from automating existing patterns, e.g. making it easy to create a website, simulate the visuals of film photography, or 3D-scan one's room. Instead, it's about building a playground in which those novel computational artifacts can be tinkered with and composed, via a grammar native to their own domain, to produce the fruits of the users' own vision.
The goal of the computational tool-maker then is not to teach the layman about recursion, abstraction, or composition, but to provide meaningful primitives (i.e. a system) with which the user can do real work. End-user programming is a red herring: We need to focus on materiality, what some disparage as mere "side effects." The goal is to enable others to feel the agency and power that comes when the world ceases to be immutable.
This feels strongly related to another quote about software as ideology / a system of metaphors that influence the way we assign value to digital actions and content.
I hope this mode can paint the picture of software, not as a teleological instrument careening towards automation and ease, but as a medium for intimacy with the matter of our time (images, audio, video), yielding a sense of agency with what, to most, feels like an indelible substrate.
Warner Bros. Weighing Fate of ‘The Flash’ as Its Ezra Miller Problem Grows
Words are polluted. Plots are polluted.
I care about people more than I care about positions or beliefs, which I tend to consider a subservient class of psychological phenomena. That is to say: I think people wear beliefs like clothes; they wear what they have grown to consider sensible or attractive; they wear what they feel flatters them; they wear what keeps them dry and warm in inclement winter. They believe their opinions, tastes, philosophies are who they are, but they are mistaken. (Aging is largely learning what one is not, it seems to me).
Criticism must serve some function to justify the pain it causes: it must, say, avert a disastrous course of action being deliberated by a group, or help thwart the rise of a barbarous politician. But this rarely occurs. Most criticism, even of the most erudite sort, is, as we all know, wasted breath: preached to one’s own choir, comically or indignantly cruel to those one doesn’t respect, unlikely to change the behavior of anyone not already in agreement. On the other hand! There persists the idea that culture arises out of the scrum of colliding perspectives, and that it is therefore a moral duty to remonstrate against stupidity, performative emoting, deceitful art, destructively banal fiction, and so on. If one doesn’t speak up, one cannot lament the triumph of moral and imaginative vacuity.
One must believe, of course, that there are abstractions worth protecting, and therefore abstractions worth hurting others for, in order to criticize; and the endless repetitiveness of cultural history seems to devalue such abstractions as surely as bad art and cliche devalue words.
Haus, a VC-backed aperitif startup, is up for sale after Series A falls through
Design with materials, not features | thesephist.com
Material-based software can also have gentler learning curves, because the user only needs to learn a small set of rules about how different metaphors in the software interact with each other rather than learning deep hierarchies of menus and terminologies. In the best case, users can continue to discover new capabilities and workflows long after the initial release of the software.
Take nonlinear video editing software, like Final Cut Pro and Premiere Pro, for example. Though they have their fair share of menu complexity, it’s pretty intuitive for anyone to understand the basic building blocks of video and audio clips sitting on a layered timeline. Here, the video and audio clips are composed of a software material that has its own laws of physics: they can grow and shrink to take up more or less time on the timeline. They can be moved around and layered in front of or behind other clips, but they can’t occupy the same space on the same “track” of the timeline. The timeline that runs left-to-right is a kind of “world” in which the materials of audio and video exist.
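That “laws of physics” framing maps directly onto a small data model. A toy sketch (mine, not how Final Cut Pro or Premiere Pro actually represent timelines): clips know their start and duration, and a track enforces the one rule that two clips cannot occupy the same span.

```typescript
// A toy model of the timeline material described above (illustrative only).
interface Clip {
  start: number;    // seconds from the left edge of the timeline
  duration: number; // clips can grow or shrink by changing this
}

class Track {
  private clips: Clip[] = [];

  // Inserting obeys the material's rule: reject overlaps on the same track.
  insert(clip: Clip): boolean {
    const overlaps = this.clips.some(
      (c) => clip.start < c.start + c.duration && c.start < clip.start + clip.duration
    );
    if (overlaps) return false; // that "space" on this track is occupied
    this.clips.push(clip);
    return true;
  }
}

// Usage: the second insert fails because it collides with the first clip.
const video = new Track();
video.insert({ start: 0, duration: 10 });  // true
video.insert({ start: 5, duration: 10 });  // false: overlap
video.insert({ start: 10, duration: 5 });  // true: adjacent clips are fine
```

A user who has internalized these two rules can predict the behavior of almost any edit, which is the gentler learning curve the excerpt describes.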
Kevin Kelly on Why Technology Has a Will
The game is that every time we create a new technology, we’re creating new possibilities, new choices that didn’t exist before. Those choices themselves—even the choice to do harm—are a good, they’re a plus.
We want an economy that’s growing in the second sense: unlimited betterment, unlimited increase in wisdom, and complexity, and choices. I don’t see any limit there. We don’t want an economy that’s just getting fatter and fatter, and bigger and bigger, in terms of its size. Can we imagine such a system? That’s hard, but I don’t think it’s impossible.
On the Social Media Ideology
Social networking is much more than just a dominant discourse. We need to go beyond text and images and include its software, interfaces, and networks that depend on a technical infrastructure consisting of offices and their consultants and cleaners, cables and data centers, working in close concert with the movements and habits of the connected billions. Academic internet studies circles have shifted their attention from utopian promises, impulses, and critiques to “mapping” the network’s impact. From digital humanities to data science we see a shift in network-oriented inquiry from Whether and Why, What and Who, to (merely) How. From a sociality of causes to a sociality of net effects. A new generation of humanistic researchers is lured into the “big data” trap, and kept busy capturing user behavior whilst producing seductive eye candy for an image-hungry audience (and vice versa).
We need to politicize the New Electricity, the privately owned utilities of our century, before they disappear into the background.
What remains particularly unexplained is the apparent paradox between the hyper-individualized subject and the herd mentality of the social.
Before we enter the social media sphere, everyone first fills out a profile and chooses a username and password in order to create an account. Minutes later, you’re part of the game and you start sharing, creating, playing, as if it has always been like that. The profile is the a priori part and the profiling and targeted advertising cannot operate without it. The platforms present themselves as self-evident. They just are—facilitating our feature-rich lives. Everyone that counts is there. It is through the gate of the profile that we become its subject.
We pull in updates, 24/7, in a real-time global economy of interdependencies, having been taught to read news feeds as interpersonal indicators of the planetary condition.
Treating social media as ideology means observing how it binds together media, culture, and identity into an ever-growing cultural performance (and related “cultural studies”) of gender, lifestyle, fashion, brands, celebrity, and news from radio, television, magazines, and the web—all of this imbricated with the entrepreneurial values of venture capital and start-up culture, with their underside of declining livelihoods and growing inequality.
Software, or perhaps more precisely operating systems, offer us an imaginary relationship to our hardware: they do not represent transistors but rather desktops and recycling bins. Software produces users. Without an operating system (OS) there would be no access to hardware; without an OS, no actions, no practices, and thus no user. Each OS, through its advertisements, interpellates a “user”: calls it and offers it a name or image with which to identify.
We could say that social media performs the same function, and is even more powerful.
In the age of social media we seem to confess less what we think. It’s considered too risky, too private. We share what we do, and see, in a staged manner. Yes, we share judgments and opinions, but no thoughts. Our Self is too busy for that, always on the move, flexible, open, sporty, sexy, and always ready to connect and express.
Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
How Technocrats Triumphed at Apple