Value Beyond Instrumentalization - Letters to a Young Technologist
Resist being context-collapsed to a one-dimensional being.
Technologists intervene in our present realities and forge the future, and in doing so, choose how best to model the world and impress their will upon it. The public must insist that technologists are responsible for thinking about the human implications of their work.
·letterstoayoungtechnologist.com·
Study the Past, Create the Future — Letters to a Young Technologist
You will find in every case that these technologies were only possible because of active decisions taken by governments, corporations, and individuals. They never just “happened”, whatever that might mean. People make history happen, and by reading history, you can see yourself as one of those people. What better motivation could there be for working to make things happen yourself?
·letterstoayoungtechnologist.com·
Umwelt - Wikipedia
The umwelt theory states that the mind and the world are inseparable because it is the mind that interprets the world for the organism. Because of the individuality and uniqueness of the history of every single organism, the umwelten of different organisms differ.
·en.wikipedia.org·
A bicycle for the senses
We can take nature's superpowers and expand them across many more vectors that are interesting to humans:
- Across scale — far and near, binoculars, zoom, telescope, microscope
- Across wavelength — UV, IR, heatmaps, nightvision, wifi, magnetic fields, electrical and water currents
- Across time — view historical imagery, architectural, terrain, geological, and climate changes
- Across culture — experience the relevance of a place in books, movies, photography, paintings, and language
- Across space — travel immersively to other locations for tourism, business, and personal connections
- Across perspective — upside down, inside out, around corners, top down, wider, narrower, out of body
- Across interpretation — alter the visual and artistic interpretation of your environment, color-shifting, saturation, contrast, sharpness
Headset displays connect sensory extensions directly to your vision. Equipped with sensors that perceive beyond human capabilities, and with access to the internet, they can provide information about your surroundings wherever you are. Until now, visual augmentation has been constrained by the tiny display on our phone. By virtue of being integrated with your eyesight, headsets can open up new kinds of apps that feel more natural. Every app is a superpower. Sensory computing opens up new superpowers that we can borrow from nature. Animals, plants, and other organisms can sense things that humans can't.
The first mass-market bicycle for the senses was Apple's AirPods. Their noise cancellation and transparency modes replace and enhance your hearing. Earbuds are turning into ear computers that will become more easily programmable. This can enable many more kinds of hearing. For example, instantaneous translation may soon be a reality.
For the past seven decades, computers have been designed to enhance what your brain can do — think and remember. New kinds of computers will enhance what your senses can do — see, hear, touch, smell, taste. The term spatial computing is emerging to encompass both augmented and virtual reality. I believe we are exploring an even broader paradigm: sensory computing. The phone was a keyhole for peering into this world, and now we’re opening the door.
What happens when you put on a headset and open the "Math" app? How could seeing the world through math help you understand both better?
Advances in haptics may open up new kinds of tactile sensations. A kind of second skin, or softwear, if you will. Consider that Apple shipped a feature to help you find lost items that vibrates more strongly as you get closer. What other kinds of data could be translated into haptic feedback?
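As a rough sketch of that idea (assuming a browser environment and a hypothetical distance reading; this is not Apple's implementation), proximity can be mapped to vibration pulses that intensify as you get closer:

```typescript
// Minimal sketch: map a distance reading to haptic pulses that grow
// stronger as you approach, in the spirit of the Find My feedback.
// The distance source is hypothetical; navigator.vibrate is a real
// browser API (no effect on devices without a vibration motor).
function pulseForDistance(distanceMeters: number): void {
  // Clamp to a 0-10 meter working range.
  const d = Math.min(Math.max(distanceMeters, 0), 10);
  const strength = 1 - d / 10;                      // 0 (far) .. 1 (near)
  const pulseMs = Math.round(20 + 180 * strength);  // longer pulse when near
  const gapMs = Math.round(400 - 350 * strength);   // shorter gap when near
  navigator.vibrate([pulseMs, gapMs, pulseMs]);
}
```

The same mapping could carry any continuous signal into touch: temperature, signal strength, or a heart rate.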
It may sound far-fetched, but converting olfactory patterns into visual patterns could open up some interesting applications. Perhaps a new kind of cooking experience? Or new medical applications that convert imperceptible scents into visible patterns?
·stephango.com·
Evergreen notes turn ideas into objects that you can manipulate
Evergreen notes turn ideas into objects. By turning ideas into objects you can manipulate them, combine them, stack them. You don’t need to hold them all in your head at the same time.
Evergreen notes allow you to think about complex ideas by building them up from smaller composable ideas.
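A loose illustration of the "ideas as objects" metaphor, sketched as a hypothetical data structure (all names invented, not from the article):

```typescript
// Each evergreen note is a small, self-contained unit; larger notes
// compose by referencing smaller ones instead of restating them.
interface Note {
  title: string;
  body: string;
  buildsOn: Note[]; // smaller composable ideas this note assembles
}

const objects: Note = { title: "Ideas as objects", body: "...", buildsOn: [] };
const compose: Note = { title: "Composability", body: "...", buildsOn: [] };

// A complex idea built from smaller ones, without holding all of them
// in your head at the same time.
const synthesis: Note = {
  title: "Complex thought from composable notes",
  body: "...",
  buildsOn: [objects, compose],
};
```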
·stephango.com·
Hixie's Natural Log: Reflecting on 18 years at Google
Many of these problems with Google today stem from a lack of visionary leadership from Sundar Pichai, and his clear lack of interest in maintaining the cultural norms of early Google. A symptom of this is the spreading contingent of inept middle management. Take Jeanine Banks, for example, who manages the department that somewhat arbitrarily contains (among other things) Flutter, Dart, Go, and Firebase. Her department nominally has a strategy, but I couldn't leak it if I wanted to; I literally could never figure out what any part of it meant, even after years of hearing her describe it. Her understanding of what her teams are doing is minimal at best; she frequently makes requests that are completely incoherent and inapplicable. She treats engineers as commodities in a way that is dehumanising, reassigning people against their will in ways that have no relationship to their skill set. She is completely unable to receive constructive feedback (as in, she literally doesn't even acknowledge it). I hear other teams (who have leaders more politically savvy than I) have learned how to "handle" her to keep her off their backs, feeding her just the right information at the right time. Having seen Google at its best, I find this new reality depressing.
·ln.hixie.ch·
AI Models in Software UI - LukeW
In the first approach, the primary interface affordance is an input that directly (for the most part) instructs one or more AI models. In this paradigm, people are authoring prompts that result in text, image, video, etc. generation. These prompts can be sequential, iterative, or unrelated. Marquee examples are OpenAI's ChatGPT interface or Midjourney's use of Discord as an input mechanism. Since there are few, if any, UI affordances to guide people, these systems need to respond to a very wide range of instructions. Otherwise people get frustrated with their limitations, which remain mostly hidden to the user.
The second approach doesn't include any UI elements for directly controlling the output of AI models. In other words, there are no input fields for prompt construction. Instead, instructions for AI models are created behind the scenes as people go about using application-specific UI elements. People using these systems could be completely unaware that an AI model is responsible for the output they see.
The third approach is application-specific UI with AI assistance. Here people can construct prompts through a combination of application-specific UI and direct model instructions. These could be additional controls that generate portions of those instructions in the background, or the ability to directly guide prompt construction through the inclusion or exclusion of content within the application. Examples of this pattern are Microsoft's Copilot suite of products for GitHub, Office, and Windows.
They could be overlays, modals, inline menus, and more. What they have in common, however, is that they supplement application-specific UIs instead of completely replacing them.
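A hedged sketch of the mechanic the second and third approaches share, where application UI state is compiled into a model instruction behind the scenes (all names here are invented for illustration):

```typescript
// Hypothetical example of prompt construction behind the scenes: the
// user only touches application-specific controls, and the instruction
// sent to the model is assembled from UI state.
interface EditorState {
  selectedText: string;
  tone: "formal" | "casual";
  length: "shorter" | "longer";
}

function buildPrompt(state: EditorState): string {
  return [
    `Rewrite the following text in a ${state.tone} tone.`,
    `Make it ${state.length} than the original.`,
    `Text: ${state.selectedText}`,
  ].join("\n");
}

// The user clicked "Casual" and "Shorter"; they never see this string.
const prompt = buildPrompt({
  selectedText: "Per our earlier discussion, please advise...",
  tone: "casual",
  length: "shorter",
});
```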
·lukew.com·
LLM Powered Assistants for Complex Interfaces - Nick Arner
Complexity can make it difficult for domain novices and experts alike to learn how to use the interface. LLMs can help reduce this barrier by providing assistance to users who are trying to accomplish something but don't exactly know how to navigate the interface. The user could tell the program what they're trying to do via a text or voice interface, or perhaps the program may be able to infer the user's intent or goals based on what actions they've taken so far. Modern GUI apps are slowly starting to add more features for assisting users with navigating the space of available commands and actions via command palettes, popularised in software like iA Writer and Superhuman.
For executing a sequence of tasks as part of a complex workflow, LLM-powered interfaces afford a richer opportunity for learning and using complex software. The program could walk users through the task they're trying to accomplish by highlighting and selecting the interface elements in the correct order, with explanations provided along the way.
Expert interfaces that take advantage of LLMs may end up looking like they currently do - again, complex tasks require complex interfaces. However, it may be easier and faster for users to learn how to use these interfaces thanks to built-in LLM-powered assistants. This will help them to get into flow faster, improving their productivity and feeling of satisfaction when using this complex software.
Unlike Clippy, these new types of assistants would be able to act on the interface directly. These actions will be made in accordance with the goals of the person using them, but each discrete action taken by the assistant on the interface will not be done according to explicit human actions - the goals are directed by the human user, but the steps to achieve those goals are unknown to the user, which is why they're engaging with the assistant in the first place.
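A speculative sketch of what that could look like, with the goal coming from the person and the steps planned by the model (the planning call and UI hooks are assumptions, not from the article):

```typescript
// Hypothetical shape of an interface-walkthrough assistant: the person
// supplies the goal, the model plans discrete UI steps, and the app
// highlights and explains each one in order.
interface UIStep {
  elementId: string;   // which control to highlight next
  explanation: string; // why this step moves toward the goal
}

// Would call an LLM with the goal plus a machine-readable description
// of the interface; stubbed out in this sketch.
async function planSteps(goal: string): Promise<UIStep[]> {
  return []; // model call omitted
}

async function assist(goal: string): Promise<void> {
  for (const step of await planSteps(goal)) {
    highlight(step.elementId); // select the element in the correct order
    explain(step.explanation); // surface the model's rationale
    await userConfirms();      // the human keeps directing the goal
  }
}

// App-specific hooks, assumed to exist in the host application.
declare function highlight(elementId: string): void;
declare function explain(text: string): void;
declare function userConfirms(): Promise<void>;
```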
·nickarner.com·
Why Did I Leave Google Or, Why Did I Stay So Long? - LinkedIn
If I had to summarize it, I would say that the signal-to-noise ratio is what wore me down. We start companies to build products that serve people, not to sit in meetings with lawyers. To be successful in Corp-Tech, you need to be able to answer the "what have I done for our users today" question with "not much, but I got promoted" and be happy with that answer.
Being part of a Corporation means that the signal-to-noise ratio changes dramatically. The amount of time and effort spent on Legal, Policy, and Privacy, on features that had not even shipped to users yet, meant a significant waste of resources and focus. After the acquisition, we had an extremely long project that consumed many of our best engineers to align our data retention policies and tools with Google's. I am not saying this is not important, BUT it had zero value to our users. An ever-increasing percentage of our time went to tasks that created no user value, and that changes the DNA of the company quickly, from customer-focused to corporate-guidelines-focused.
The salaries are so high and the options so valuable that it creates many misalignments. The impact of an individual product on the Corp-Tech stock is minimal, so equity is basically free money. Regardless of your individual performance or your product's performance, your equity grows significantly, so nothing you do has real economic impact on your family. The only control you have to increase your economic returns is whether you get promoted, since that drives your equity and salary payments. This breaks the traditional tech model of risk and reward.
·linkedin.com·
Jason on X: "Full text from Jack Dorsey to Block employees via Insider: I want us to build a culture of excellence. Excellence in service to our customers, excellence in our craft, excellence in our respective disciplines, and excellence to each other. We want to help everyone achieve…" / X
·twitter.com·
A few thoughts about Humane’s Ai Pin
The Ai Pin makes the same conceptual mistake behind all the assistants that preceded it: treating all people as if they were so utterly helpless and clueless that they cannot manage even basic stuff, and grossly miscalculating which tasks people find tedious and are willing to delegate to a machine. These assistants want to assist with stuff people have no problem doing themselves, and they do so through an interaction model that ultimately makes things more awkward, impractical, and slower to accomplish. (On the other hand, it's a good interaction model for people who have different types of motor or visual disabilities and need assistance when sending and receiving messages, collecting information, etc.)
Instead there’s this urge to create The Next Big Thing that will be a hit for everyone, everywhere. And to create it in one fell swoop, skipping all the steps that might help you really get there.
·morrick.me·