The Only Reason to Explore Space
Claude summary: This article argues that the only enduring justification for space exploration is its potential to fundamentally transform human civilization and our understanding of ourselves. The author traces the history of space exploration, from the mystical beliefs of early rocket pioneers to the geopolitical motivations of the Space Race, highlighting how current economic, scientific, and military rationales fall short of sustaining long-term commitment. The author contends that achieving interstellar civilization will require unprecedented organizational efforts and societal commitment, likely necessitating institutions akin to governments or religions. Ultimately, the piece suggests that only a society that embraces the pursuit of interstellar civilization as its central legitimating project may succeed in this monumental endeavor, framing space exploration not as an inevitable outcome of progress, but as a deliberate choice to follow a "golden path to a destiny among the stars."
Calculating Empires: A Genealogy of Technology and Power since 1500
The Californian Ideology
Summary: The Californian Ideology is a mix of cybernetics, free market economics, and counter-culture libertarianism that originated in California and has become a global orthodoxy. It asserts that technological progress will inevitably lead to a future of Jeffersonian democracy and unrestrained free markets. However, this ideology ignores the critical role of government intervention in technological development and the social inequalities perpetuated by free market capitalism.
The most hated workplace software on the planet
LinkedIn, Reddit, and Blind abound with enraged job applicants and employees sharing tales of how difficult it is to book paid leave, how Kafkaesque it is to file an expense, how nerve-racking it is to close out a project. "I simply hate Workday. Fuck them and those who insist on using it for recruitment," one Reddit user wrote. "Everything is non-intuitive, so even the simplest tasks leave me scratching my head," wrote another. "Keeping notes on index cards would be more effective." Every HR professional and hiring manager I spoke with — whose lives are supposedly made easier by Workday — described Workday with a sense of cosmic exasperation.
If candidates hate Workday, if employees hate Workday, if HR people and managers processing and assessing those candidates and employees through Workday hate Workday — if Workday is the most annoying part of so many workers' workdays — how is Workday everywhere? How did a software provider so widely loathed become a mainstay of the modern workplace?
There is a saying in systems thinking: the purpose of a system is what it does (POSIWID), not what it fails to do. And the reality is that what Workday — and its many despised competitors — does for organizations is far more important than the anguish it causes everyone else.
In 1988, PeopleSoft, backed by IBM, built the first fully fledged Human Resources Information System. In 2004, Oracle acquired PeopleSoft for $10.3 billion. One of its founders, David Duffield, then started a new company that upgraded PeopleSoft's model to near limitless cloud-based storage — giving birth to Workday, the intractable nepo baby of HR software.
Workday is indifferent to our suffering in a job hunt, because we aren't Workday's clients, companies are. And these companies — from AT&T to Bank of America to Teladoc — have little incentive to care about your application experience, because if you didn't get the job, you're not their responsibility. For a company hiring and onboarding on a global scale, it is simply easier to screen fewer candidates if the result is still a single hire.
A search on a job board can return hundreds of listings for in-house Workday consultants: IT and engineering professionals hired to fix the software promising to fix processes.
For recruiters, Workday also lacks basic user-interface flexibility. When you promise ease of use and simplicity, you must deliver on the most basic user interactions. And yet: sometimes searching for a candidate or locating a candidate's status feels impossible. This happens outside of recruiting, too, where locating or attaching a boss's email to approve an expense sheet is complicated by the process, not streamlined. Bureaucratic hell is always about one person's ease coming at the cost of someone else's frustration, time wasted, and busy work. Workday makes no exceptions.
Workday touts its ability to track employee performance by collecting data and marking results, but it is employees who must spend time inputting this data. A creative director at a Fortune 500 company told me how in less than two years his company went "from annual reviews to twice-annual reviews to quarterly reviews to quarterly reviews plus separate twice-annual reviews." At each interval higher-ups pressed HR for more data, because they wanted what they'd paid for with Workday: more work product. With the press of a button, HR could provide that, but the entire company suffered thousands more hours of busy work. Automation made it too easy to do too much. ("Customers choose the frequency at which they conduct reviews, not Workday," a company spokesperson said.)
At the scale of a large company, this is simply too much work to expect a few people to do and far too user-specific to expect automation to handle well. It's why Workday can be the worst while still allowing that Paychex is the worst, Paycom is the worst, Paycor is the worst, and Dayforce is the worst. "HR software sucking" is a big tent.
Workday finds itself between enshittification steps two and three. The platform once made things faster and simpler for workers. But today it abuses workers by cutting corners on job-application and reimbursement procedures, while providing the value of a one-stop HR shop to its paying customers. It seems it's only a matter of time before Workday and its competitors try to split the difference and cut those same corners with the accounts that pay their bills.
Workday reveals what's important to the people who run Fortune 500 companies: easily and conveniently distributing busy work across large workforces. This is done with the arbitrary and perfunctory performance of work tasks (like excessive reviews) and with the throttling of momentum by making finance and HR tasks difficult. If your expenses and reimbursements are difficult to file, that's OK, because the people above you don't actually care if you get reimbursed. If it takes applicants 128% longer to apply, the people who implemented Workday don't really care. Throttling applicants is perhaps not intentional, but it's good for the company.
Looking for AI use-cases — Benedict Evans
- LLMs have impressive capabilities, but many people struggle to find immediate use-cases that match their own needs and workflows.
- Realizing the potential of LLMs requires not just technical advancements, but also identifying specific problems that can be automated and building dedicated applications around them.
- The adoption of new technologies often follows a pattern of initially trying to fit them into existing workflows, before eventually changing workflows to better leverage the new tools.
if you had shown VisiCalc to a lawyer or a graphic designer, their response might well have been ‘that’s amazing, and maybe my book-keeper should see this, but I don’t do that’. Lawyers needed a word processor, and graphic designers needed (say) Postscript, Pagemaker and Photoshop, and that took longer.
I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.
A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time.
no matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this.
Some of this is about imagination, and familiarity. It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’.
This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The concept of product-market fit is that normally you have to iterate your idea of the product and your idea of the use-case and customer towards each other - and then you need sales.
Meanwhile, spreadsheets were both a use-case for a PC and a general-purpose substrate in their own right, just as email or SQL might be, and yet all of those have been unbundled. The typical big company today uses hundreds of different SaaS apps, all of them, so to speak, unbundling something out of Excel, Oracle or Outlook. All of them, at their core, are an idea for a problem and an idea for a workflow to solve that problem, that is easier to grasp and deploy than saying ‘you could do that in Excel!’ Rather, you instantiate the problem and the solution in software - ‘wrap it’, indeed - and sell that to a CIO. You sell them a problem.
there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL.
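A minimal sketch of what one of these single-purpose wrappers might look like under the hood, assuming the OpenAI Python client; the use-case (classifying expense lines), the prompt, and the model name are hypothetical illustrations, not anything from the essay:

```python
# A minimal "LLM wrapper" app: one problem, one workflow, one model call.
# The openai client is real; the expense-classification use-case, prompt,
# and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = "travel, meals, software, hardware, other"

def categorize_expense(line_item: str) -> str:
    """Map a free-text expense line to exactly one accounting category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system",
             "content": f"Classify the expense into exactly one of: {CATEGORIES}. "
                        "Reply with the category name only."},
            {"role": "user", "content": line_item},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(categorize_expense("Uber from SFO to client office, $62.15"))
```

Even at this scale the pattern is visible: the model call is a commodity, and the product is the problem definition, the fixed category list, and the UI, tooling and enterprise sales wrapped around it.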
Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.
people are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling that problem.
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem, and with a general-purpose, open-ended prompt, the user has to think of all of that themselves, every single time, or hope it’s already in the training data. So, can the GUI itself be generative? Or do we need another whole generation of Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.
From Tech Critique to Ways of Living — The New Atlantis
Yuk Hui's concept of "cosmotechnics" combines technology with morality and cosmology. Inspired by Daoism, it envisions a world where advanced tech exists but cultures favor simpler, purposeful tools that guide people towards contentment by focusing on local, relational, and ironic elements. A Daoist cosmotechnics points to alternative practices and priorities - learning how to live from nature rather than treating it as a resource to be exploited, valuing embodied relation over abstract information
We might think of the shifting relationship of human beings to the natural world in the terms offered by German sociologist Gerd-Günter Voß, who has traced our movement through three different models of the “conduct of life.”
The first, and for much of human history the only conduct of life, is what he calls the traditional. Your actions within the traditional conduct of life proceed from social and familial circumstances, from what is thus handed down to you. In such a world it is reasonable for family names to be associated with trades, trades that will be passed down from father to son: Smith, Carpenter, Miller.
But the rise of the various forces that we call “modernity” led to the emergence of the strategic conduct of life: a life with a plan, with certain goals — to get into law school, to become a cosmetologist, to get a corner office.
thanks largely to totalizing technology’s formation of a world in which, to borrow a phrase from Marx and Engels, “all that is solid melts into air,” the strategic model of conduct is replaced by the situational. Instead of being systematic planners, we become agile improvisers: If the job market is bad for your college major, you turn a side hustle into a business. But because you know that your business may get disrupted by the tech industry, you don’t bother thinking long-term; your current gig might disappear at any time, but another will surely present itself, which you will assess upon its arrival.
The movement through these three forms of conduct, whatever benefits it might have, makes our relations with nature increasingly instrumental. We can see this shift more clearly when looking at our changing experience of time
Within the traditional conduct of life, it is necessary to take stewardly care of the resources required for the exercise of a craft or a profession, as these get passed on from generation to generation.
But in the progression from the traditional to the strategic to the situational conduct of life, continuity of preservation becomes less valuable than immediacy of appropriation: We need more lithium today, and merely hope to find greater reserves — or a suitable replacement — tomorrow. This revaluation has the effect of shifting the place of the natural order from something intrinsic to our practices to something extrinsic. The whole of nature becomes what economists tellingly call an externality.
The basic argument of the Standard Critique of Technology (SCT) goes like this. We live in a technopoly, a society in which powerful technologies come to dominate the people they are supposed to serve, and reshape us in their image. These technologies, therefore, might be called prescriptive (to use Franklin’s term) or manipulatory (to use Illich’s). For example, social networks promise to forge connections — but they also encourage mob rule.
all things increasingly present themselves to us as technological: we see them and treat them as what Heidegger calls a “standing reserve,” supplies in a storeroom, as it were, pieces of inventory to be ordered and conscripted, assembled and disassembled, set up and set aside
In his exceptionally ambitious book The Question Concerning Technology in China (2016) and in a series of related essays and interviews, Hui argues, as the title of his book suggests, that we go wrong when we assume that there is one question concerning technology, the question, that is universal in scope and uniform in shape. Perhaps the questions are different in Hong Kong than in the Black Forest. Similarly, the distinction Heidegger draws between ancient and modern technology — where with modern technology everything becomes a mere resource — may not universally hold.
Thesis: Technology is an anthropological universal, understood as an exteriorization of memory and the liberation of organs, as some anthropologists and philosophers of technology have formulated it;
Antithesis: Technology is not anthropologically universal; it is enabled and constrained by particular cosmologies, which go beyond mere functionality or utility. Therefore, there is no one single technology, but rather multiple cosmotechnics.
Cosmotechnics is the integration of a culture's worldview and ethical framework with its technological practices, illustrating that technology is not just about functionality but also embodies a way of life realized through making.
I think Hui’s cosmotechnics, generously leavened with the ironic humor intrinsic to Daoism, provides a genuine Way — pun intended — beyond the limitations of the Standard Critique of Technology. I say this even though I am not a Daoist; I am, rather, a Christian. But it should be noted that Daoism is both daojiao, an organized religion, and daojia, a philosophical tradition. It is daojia that Hui advocates, which makes the wisdom of Daoism accessible and attractive to a Christian like me. Indeed, I believe that elements of daojia are profoundly consonant with Christianity, and yet underdeveloped in the Christian tradition, except in certain modes of Franciscan spirituality, for reasons too complex to get into here.
This technological Daoism, as an embodiment of daojia, is accessible to people of any religious tradition or none. It provides a comprehensive and positive account of the world and one’s place in it that makes a different approach to technology more plausible and compelling. The SCT tends only to gesture in the direction of a model of human flourishing, evoking it mainly by implication, whereas Yuk Hui’s Daoist model gives an explicit and quite beautiful account.
The application of Daoist principles is most obvious, as the above exposition suggests, for “users” who would like to graduate to the status of “non-users”: those who quietly turn their attention to more holistic and convivial technologies, or who simply sit or walk contemplatively. But in the interview I quoted from earlier, Hui says, “Some have quipped that what I am speaking about is Daoist robots or organic AI” — and this needs to be more than a quip. Peter Thiel’s longstanding attempt to make everyone a disciple of René Girard is a dead end. What we need is a Daoist culture of coders, and people devoted to “action without acting” making decisions about lithium mining.
Tools that do not contribute to the Way will neither be worshipped nor despised. They will simply be left to gather dust as the people choose the tools that will guide them in the path of contentment and joy: utensils to cook food, devices to make clothes.
Of course, the food of one village will differ from that of another, as will the clothing. Those who follow the Way will dwell among the “ten thousand things” of this world — what we call nature — in a certain manner that cannot be specified legally: Verse 18 of the Tao Te Ching says that when virtue arises only from rules, that is a sure sign that the Way is not present and active. A cosmotechnics is a living thing, always local in the specifics of its emergence in ways that cannot be specified in advance.
It is from the ten thousand things that we learn how to live among the ten thousand things; and our choice of tools will be guided by what we have learned from that prior and foundational set of relations. This is cosmotechnics.
Multiplicity avoids the universalizing, totalizing character of technopoly. The adherents of technopoly, Hui writes, “wishfully believ[e] that the world process will stamp out differences and diversities” and thereby achieve a kind of techno-secular “theodicy,” a justification of the ways of technopoly to its human subjects. But the idea of multiple cosmotechnics is also necessary, Hui believes, in order to avoid the simply delusional attempt to find “a way out of modernity” by focusing on the indigenous or biological “Other.” An aggressive hostility to modernity and a fetishizing of pre-modernity is not the Daoist way.
“I believe that to overcome modernity without falling back into war and fascism, it is necessary to reappropriate modern technology through the renewed framework of a cosmotechnics.” His project “doesn’t refuse modern technology, but rather looks into the possibility of different technological futures.”
“Thinking rooted in the earthy virtue of place is the motor of cosmotechnics. However, for me, this discourse on locality doesn’t mean a refusal of change and of progress, or any kind of homecoming or return to traditionalism; rather, it aims at a re-appropriation of technology from the perspective of the local and a new understanding of history.”
Always Coming Home illustrates cosmotechnics in a hundred ways. Consider, for instance, information storage and retrieval. At one point we meet the archivist of the Library of the Madrone Lodge in the village of Wakwaha-na. A visitor from our world is horrified to learn that while the library gives certain texts and recordings to the City of Mind, some of their documents they simply destroy. “But that’s the point of information storage and retrieval systems! The material is kept for anyone who wants or needs it. Information is passed on — the central act of human culture.” But that is not how the librarian thinks about it. “Tangible or intangible, either you keep a thing or you give it. We find it safer to give it” — to practice “unhoarding.”
It is not information, but relation. This too is cosmotechnics.
The modern technological view treats information as a resource to be stored and optimized. But the archivist in Le Guin's Daoist-inspired society takes a different approach, one where documents can be freely discarded because what matters is not the hoarding of information but the living of life in sustainable relation
a cosmotechnics is the point at which a way of life is realized through making.
The point may be illustrated with reference to an ancient tale Hui offers, about an excellent butcher who explains to a duke what he calls the Dao, or “way,” of butchering. The reason he is a good butcher, he says, is not his mastery of a skill, or his reliance on superior tools. He is a good butcher because he understands the Dao: through experience he has come to rely on his intuition to thrust the knife precisely where it does not cut through tendons or bones, and so his knife always stays sharp. The duke replies: “Now I know how to live.” Hui explains that “it is thus the question of ‘living,’ rather than that of technics, that is at the center of the story.”
Strong and weak technologies - cdixon
Strong technologies capture the imaginations of technology enthusiasts. That is why many important technologies start out as weekend hobbies. Enthusiasts vote with their time, and, unlike most of the business world, have long-term horizons. They build from first principles, making full use of the available resources to design technologies as they ought to exist.
Project Xanadu - Wikipedia
DAK and the Golden Age of Gadget Catalogs
Fake It ’Til You Fake It
On the long history of photo manipulation dating back to the origins of photography. While new technologies have made manipulation much easier, the core questions around trust and authenticity remain the same and have been asked for over a century.
The criticisms I have been seeing about the features of the Pixel 8, however, feel like we are only repeating the kinds of fears that have been voiced for nearly two hundred years. We have not been able to wholly trust photographs pretty much since they were invented. The only things which have changed in that time are the ease with which the manipulations can happen, and their availability.
We all live with a growing sense that everything around us is fraudulent. It is striking to me how these tools have been introduced as confidence in institutions has declined. It feels like a death spiral of trust — not only are we expected to separate facts from their potentially misleading context, we increasingly feel doubtful that any experts are able to help us, yet we keep inventing new ways to distort reality.
The questions that are being asked of the Pixel 8’s image manipulation capabilities are good and necessary because there are real ethical implications. But I think they need to be more fully contextualized. There is a long trail of exactly the same concerns and, to avoid repeating ourselves yet again, we should be asking these questions with that history in mind. This era feels different. I think we should be asking more precisely why that is.
The questions we ask about generative technologies should acknowledge that we already have plenty of ways to lie, and that lots of the information we see is suspect. That does not mean we should not believe anything, but it does mean we ought to be asking questions about what is changed when tools like these become more widespread and easier to use.
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence to the creation and post-production of images, it seems questionable whether the resulting visualisations can still be considered ‘photographs’ in a classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even amateurs with only basic knowledge of the craft could understand themselves as the authors of their images. We identify a legitimation crisis in the current usage of the term. This paper is an invitation to consider Synthography as a term for a new genre of image production based on AI, observing the current occurrence and implementation in consumer cameras and post-production.
What Is AI Doing To Art? | NOEMA
The proliferation of AI-generated images in online environments won’t eradicate human art wholesale, but it does represent a reshuffling of the market incentives that help creative economies flourish. Like the college essay, another genre of human creativity threatened by AI usurpation, creative “products” might become more about process than about art as a commodity.
Are artists using computer software on iPads to make seemingly hand-painted images engaged in a less creative process than those who produce the image by hand? We can certainly judge one as more meritorious than the other but claiming that one is more original is harder to defend.
An understanding of the technology as one that separates human from machine into distinct categories leaves little room for the messier ways we often fit together with our tools. AI-generated images will have a big impact on copyright law, but the cultural backlash against the “computers making art” overlooks the ways computation has already been incorporated into the arts.
The problem with debates around AI-generated images that demonize the tool is that the displacement of human-made art doesn’t have to be an inevitability. Markets can be adjusted to mitigate unemployment in changing economic landscapes. As legal scholar Ewan McGaughey points out, 42% of English workers were made redundant after WWII — and yet the U.K. managed to maintain full employment.
Contemporary critics claim that prompt engineering and synthography aren’t emergent professions but euphemisms necessary to equate AI-generated artwork with the work of human artists. As with the development of photography as a medium, today’s debates about AI often overlook how conceptions of human creativity are themselves shaped by commercialization and labor.
Others looking to elevate AI art’s status alongside other forms of digital art are opting for an even loftier rebrand: “synthography.” This categorization suggests a process more complex than the mechanical operation of a picture-making tool, invoking the active synthesis of disparate aesthetic elements. Like Fox Talbot and his contemporaries in the nineteenth century, “synthographers” maintain that AI art simply automates the most time-consuming parts of drawing and painting, freeing up human cognition for higher-order creativity.
Separating human from camera was a necessary part of preserving the myth of the camera as an impartial form of vision. To incorporate photography into an economic landscape of creativity, however, human agency needed to be ascribed to all parts of the process.
Consciously or not, proponents of AI-generated images stamp the tool with rhetoric that mirrors the democratic aspirations of the twenty-first century.
Stability AI took a similar tack, billing itself as “AI by the people, for the people,” despite turning Stable Diffusion, their text-to-image model, into a profitable asset. That the program is easy to use is another selling point. Would-be digital artists no longer need to use expensive specialized software to produce visually interesting material.
Meanwhile, communities of digital artists and their supporters claim that the reason AI-generated images are compelling at all is because they were trained with data sets that contained copyrighted material. They reject the claim that AI-generated art produces anything original and suggest it instead be thought of as a form of “twenty-first century collage.”
Erasing human influence from the photographic process was good for underscoring arguments about objectivity, but it complicated commercial viability. Ownership would need to be determined if photographs were to circulate as a new form of property. Was the true author of a photograph the camera or its human operator?
By reframing photographs as les dessins photographiques — or photographic drawings, the plaintiffs successfully established that the development of photographs in a darkroom was part of an operator’s creative process. In addition to setting up a shot, the photographer needed to coax the image from the camera’s film in a process resembling the creative output of drawing. The camera was a pencil capable of drawing with light and photosensitive surfaces, but held and directed by a human author.
Establishing photography’s dual function as both artwork and document may not have been philosophically straightforward, but it staved off a surge of harder questions.
Human intervention in the photographic process still appeared to happen only on the ends — in setup and then development — instead of continuously throughout the image-making process.
Pessimists Archive
Pessimists Archive™ is a project to educate people on and archive the history of technophobia and moral panics. We believe the best antidote to fear of the new is looking back at fear of the old.
Only by looking back at fears of old things when they were new can we have rational, constructive debates about emerging technologies today that avoid the pitfalls of moral panic and incumbent protectionism.
The genius behind Zelda is at the peak of his power — and feeling his age
Aonuma became co-director of “Ocarina,” which revolutionized how game characters move and fight each other in a 3D space. Unlike cinema, video games require audience control of the camera. “Ocarina” created a “camera-locking” system to focus the perspective while you use the controller for character movement. The system, still used by games today, is a large reason “Ocarina” is often compared to the work of Orson Welles, who redefined how cinema was shot.
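As a rough illustration only — a minimal sketch of the lock-on idea in Python-flavored vector math, with every name and constant assumed, nothing here is Nintendo's code — the camera stops being manually steered and instead frames player and target together:

```python
# Sketch of a "camera-locking" update step: while locked on, the camera
# positions itself behind the player, opposite the target, and aims at
# the midpoint between them. All values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def __add__(self, o): return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def __sub__(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def __mul__(self, s): return Vec3(self.x * s, self.y * s, self.z * s)
    def norm(self):
        m = (self.x**2 + self.y**2 + self.z**2) ** 0.5
        return self * (1.0 / m) if m else Vec3(0.0, 0.0, 0.0)

def locked_camera(player: Vec3, target: Vec3, dist: float = 6.0,
                  height: float = 2.0) -> tuple:
    """Derive camera position and aim point from the lock, not from input."""
    look_at = (player + target) * 0.5        # frame both combatants
    behind = (player - target).norm()        # direction away from the target
    position = player + behind * dist + Vec3(0.0, height, 0.0)
    return position, look_at                 # feed these into the view matrix

pos, aim = locked_camera(Vec3(0, 0, 0), Vec3(4, 0, 3))
print(pos, aim)
```

The design consequence the article describes falls out directly: once position and aim are derived from the lock, the controller is freed entirely for character movement and combat.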
The “ethos of Zelda” focuses on such new, unexpected concepts of play — even as many other modern games prioritize story, like TV and film do. With “Tears,” at “the beginning of development, there really isn’t a story,” Fujibayashi said. “Once we got to the point where we felt confident in the gameplay experience, that’s when the story starts to emerge.”
Will A.I. Become the New McKinsey?
Funniest/Most Insightful Comments Of The Week At Techdirt
Twitter is not a public square controlled by a socialist government – it is a private company in a capitalist economy for the purpose of making money through advertising. Twitter has ZERO interest in promoting the public good.
Congressmembers would have better expertise on tech matters if the Office of Technology Assessment still existed. It was defunded in 1995 under Newt Gingrich’s “Contract with America” plan, because it was an unbiased organization that wouldn’t bow to political narratives. The Chew hearing is one of many instances that highlight both why Newt wanted to defund it, and why eliminating the agency was a detriment to politicians. (Ironically, Newt suggested shortly after the midterms that Republicans should come around to using TikTok to court young voters, despite the allegations of the app’s security risk.) Hopefully, someone in Congress will introduce legislation aimed at reviving the OTA somewhere down the line.
Something Pretty Right | Retool - a history of Microsoft’s Visual Basic
Folklore.org: The Macintosh Spirit
the desire to ship quickly was counterbalanced by a demanding, comprehensive perfectionism. Most commercial projects are driven by commercial values, where the goal is to maximize profits by outperforming your competition. In contrast, the Macintosh was driven more by artistic values, oblivious to competition, where the goal was to be transcendently brilliant and insanely great.
Unlike other parts of Apple, which were becoming more conservative and bureaucratic as the company grew, the early Mac team was organized more like a start-up company. We eschewed formal structure and hierarchy, in favor of a flat meritocracy with minimal managerial oversight, like the band of revolutionaries we aspired to be.
Evolution of AR and VR: UX inspiration from history
Classic HCI demos
Menus, Metaphors and Materials - Milestones of User Interface Design
Whenever Facebook changes its interface, it has an impact on how millions of people communicate. In this sense, user interfaces are cultural artefacts.
20 Years of SEO: A Brief History of Search Engine Optimization
In 2011, Google found its search results facing severe scrutiny because so-called “content farms” (websites that produced high volumes of low-quality content) were dominating the search results.
Google’s SERPs were also cluttered with websites featuring unoriginal and auto-generated content – and even, in some instances, scraper sites were outranking content originators
To be a Technologist is to be Human - Letters to a Young Technologist
In fact, more people are technologists than ever before, insofar as a “technologist” can be defined as someone inventing, implementing or repurposing technology. In particular, the personal computer has allowed anyone to live in the unbounded wilderness of the internet as they please. Anyone can build highly specific corners of cyberspace and quickly invent digital tools, changing their own and others’ technological realities. “Technologist” is a common identity that many different people occupy, and anyone can occupy. Yet the public perceptions of a “technologist” still constitute a very narrow image.
A technologist makes reason out of the messiness of the world, leverages their understanding to envision a different reality, and builds a pathway to make their vision happen. All three of these endeavors—to try to understand the world, to imagine something different, and to build something that fulfills that vision—are deeply human.
Humans are continually distilling and organizing reality into representations and models—to varying degrees of accuracy and implicitness—that we can understand and navigate. Our intelligence involves making models of all aspects of our realities: models of the climate, models of each other’s minds, models of fluid dynamics.
mental models
We are an unprecedentedly self-augmenting species, with a fundamental drive to organize, imagine, construct and exercise our will in the world. And we can measure our technological success by how much our technologies increase our humanity. What we need is a vision for that humanity, and to enact this vision. What do we, humans, want to become?
As a general public, we can collectively hold technologists to a higher ethical standard, as their work has important human consequences for us all. We must begin to think of them as doing deeply human work, intervening in our present realities and forging our futures. Choosing how best to model the world, impressing their will on it, and us. We must insist that they understand their role as augmenting and modifying humanity, and are responsible for the implications. Collective societal expectations are powerful; if we don’t, they won’t.
Technologies shape the way people live (Do Machines Make History? on JSTOR)