Frustrated with science denial : r/QAnonCasualties
WebGPU Fundamentals
Timeful Texts
Consider texts like the Bible and the Analects of Confucius. People integrate ideas from those books into their lives over time—but not because authors designed them that way. Those books work because they’re surrounded by rich cultural activity. Weekly sermons and communities of practice keep ideas fresh in readers’ minds and facilitate ongoing connections to lived experiences. This is a powerful approach for powerful texts, requiring extensive investment from readers and organizers. We can’t build cathedrals for every book. Sophisticated readers adopt similar methods to study less exalted texts, but most people lack the necessary skills, drive, and cultural contexts. How might we design texts to more widely enable such practices?
spaced repetition is just one approach for writing timeful texts. What other powerful tools might it be possible to create, making future books into artifacts that transcend their pages, as they slowly help readers shape their lives?
AI Copilots Are Changing How Coding Is Taught
Less Emphasis on Syntax, More on Problem Solving: The fundamentals and skills themselves are evolving. Most introductory computer science courses focus on code syntax and getting programs to run, and while knowing how to read and write code is still essential, testing and debugging—which aren’t commonly part of the syllabus—now need to be taught more explicitly.
Zingaro, who coauthored a book on AI-assisted Python programming with Porter, now has his students work in groups and submit a video explaining how their code works. Through these walk-throughs, he gets a sense of how students use AI to generate code, what they struggle with, and how they approach design, testing, and teamwork.
educators are modifying their teaching strategies. “I used to have this singular focus on students writing code that they submit, and then I run test cases on the code to determine what their grade is,” says Daniel Zingaro, an associate professor of computer science at the University of Toronto Mississauga. “This is such a narrow view of what it means to be a software engineer, and I just felt that with generative AI, I’ve managed to overcome that restrictive view.”
“We need to be teaching students to be skeptical of the results and take ownership of verifying and validating them,” says Matthews. Matthews adds that generative AI “can short-circuit the learning process of students relying on it too much.” Chang agrees that this overreliance can be a pitfall and advises his fellow students to explore possible solutions to problems by themselves so they don’t lose out on that critical thinking or effective learning process. “We should be making AI a copilot—not the autopilot—for learning,” he says.
LinkedIn is not a social or professional network, it's a learning network
Maybe one frame is through taking control of your own personal development and learning: after all “learning is the one thing your employer can’t take away from you”
Over the years we’ve seen the rise of bro-etry and cringe “thought leadership” and crying CEOs. When I scroll my feed I have to sidestep the clearly threadboi and #personalbrand engagement-farming posts and try and focus on the real content.
Networking is useful, but distasteful to many. Instead, participating in self-directed learning communities is networking
“Don’t become a marketing manager, become someone who knows how to run user research”
How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
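To make the claim above concrete, that memory systems turn remembering into a choice rather than an event left to chance, here is a minimal sketch of the scheduling loop such a system runs. It is loosely modelled on SM-2-style interval growth; the Card fields, the 2.5 ease factor, and the review loop are illustrative assumptions, not the scheduling Quantum Country actually uses.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Card:
    prompt: str
    answer: str
    interval_days: float = 1.0   # days until the next review
    ease: float = 2.5            # multiplier applied after each successful recall (assumed value)
    due: date = date.today()

def review(card: Card, remembered: bool, today: date) -> None:
    """Update a card's schedule after one review (SM-2-flavoured sketch)."""
    if remembered:
        card.interval_days *= card.ease      # push the next review further out
    else:
        card.interval_days = 1.0             # forgot: start the ladder again
    card.due = today + timedelta(days=round(card.interval_days))

def due_cards(deck: list[Card], today: date) -> list[Card]:
    """The day's review session is simply the cards whose due date has arrived."""
    return [c for c in deck if c.due <= today]

# Example: one card recalled successfully on three consecutive days.
card = Card("What does CNOT do?", "Flips the target qubit when the control is 1")
for day in range(3):
    review(card, remembered=True, today=date.today() + timedelta(days=day))
print(card.interval_days, card.due)   # intervals grow: 2.5 → 6.25 → 15.6 days
```

The point of the sketch is the shape of the loop, not the constants: as long as successful recalls stretch the interval and failures reset it, reviewing a handful of due cards each day is enough to keep essentially everything in the deck remembered.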
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways do modern design practice and tech industry product practice fall short? To be successful, you need an insight-through-making loop to be operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers. Mnemonic video is a promising vehicle for such explorations, possibly combining deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It's striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You'd provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, illustrating immediately the most striking ideas about it. The user wouldn't necessarily understand all that was going on. But they'd begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won't be so dry
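As a rough illustration of the kind of interface such a first page might expose, here is a minimal, self-contained NumPy sketch of a teleport() function a reader could call and experiment with immediately. It is written for this note, not taken from any existing executable book; the qubit ordering, the function names, and the plain state-vector simulation are all assumptions about what such a library might look like.

```python
# A minimal sketch of a first-page "teleport this qubit" interface.
# Plain NumPy state vectors; qubit order is |message, Alice, Bob>.
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def single(gate, qubit, n=3):
    """Lift a 1-qubit gate onto the given qubit of an n-qubit register."""
    out = np.eye(1)
    for q in range(n):
        out = np.kron(out, gate if q == qubit else I2)
    return out

def cnot(control, target, n=3):
    """CNOT on an n-qubit register, built from projector terms on the control."""
    P0 = np.array([[1, 0], [0, 0]])   # |0><0| on the control: do nothing
    P1 = np.array([[0, 0], [0, 1]])   # |1><1| on the control: flip the target
    term0, term1 = np.eye(1), np.eye(1)
    for q in range(n):
        term0 = np.kron(term0, P0 if q == control else I2)
        term1 = np.kron(term1, P1 if q == control else (X if q == target else I2))
    return term0 + term1

def teleport(message):
    """Teleport a normalized 2-vector from Alice to Bob; return Bob's qubit."""
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # shared pair: qubit 1 (Alice), qubit 2 (Bob)
    state = np.kron(message, bell)                 # full 3-qubit state
    state = cnot(0, 1) @ state                     # Alice entangles her message with her half
    state = single(H, 0) @ state                   # ...and applies a Hadamard
    probs = np.abs(state) ** 2                     # Alice measures qubits 0 and 1
    outcome = np.random.choice(8, p=probs / probs.sum())
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
    bob = state.reshape(2, 2, 2)[m0, m1].copy()    # Bob's amplitudes, conditioned on m0, m1
    bob /= np.linalg.norm(bob)
    if m1:                                         # classically-communicated corrections
        bob = X @ bob
    if m0:
        bob = Z @ bob
    return bob

psi = np.array([0.6, 0.8])       # the state to teleport
print(teleport(psi))             # recovers psi (up to global phase), whatever Alice measured
```

A reader can rerun the last two lines with different states, delete one of the correction steps, or skip the Hadamard, and immediately see which parts of the protocol are load-bearing, long before the algebra behind it is unpacked.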
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
In serious mediums, there's a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later film makers. It's also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they're doing. In many of his lectures it's obvious that Feynman isn't just educating: he's reporting the results of a lifelong personal obsession with understanding how the world works. It's thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make. Much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near clone, a game called 2048, sparked a mini-craze, and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas, and more on improved artwork (for re-release), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it's still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game's primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though they then become public goods. In this way the video game industry has largely solved the public goods problem.
It's encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn't work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good, and then moving onto something new.
Adobe, like many other software companies, does much of its patenting defensively: it patents ideas so patent trolls cannot sue it over similar ideas. The situation is almost exactly the reverse of what you'd like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they've worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven't developed the powerful practices needed to do work in the area, and as a result the field is still in a pre-disciplinary stage. The result, ultimately, is that the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It's much easier to set KPIs, evaluate OKRs, and manage deliverables, when you have a very specific end-goal in mind. And so it's perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work.
But historically it's not the case that humanity's biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur tool for thought – is perhaps the most important occurrence in humanity's existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It's amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It's inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine or convince others of in Silicon Valley's goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically, open-ended exploration can be just as successful, or more so.
There's a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren't used by actual writers. Tools for mathematics that aren't used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it's difficult not to be suspicious of this pattern. It's very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have not ever done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
The Little Book of Deep Learning
LLM Powered Assistants for Complex Interfaces - Nick Arner
complexity can make it difficult for domain novices and experts alike to learn how to use the interface. LLMs can help reduce this barrier by being leveraged to provide assistance to the user if they’re trying to accomplish something, but don’t exactly know how to navigate the interface. The user could tell the program what they’re trying to do via a text or voice interface, or perhaps the program may be able to infer the user’s intent or goals based on what actions they’ve taken so far. Modern GUI apps are slowly starting to add in more features for assisting users with navigating the space of available commands and actions via command palettes, popularised in software like iA Writer and Superhuman.
for executing a sequence of tasks as part of a complex workflow, LLM-powered interfaces afford a richer opportunity for learning and using complex software. The program could walk the user through the task they’re trying to accomplish by highlighting and selecting the interface elements in the correct order, with explanations provided along the way.
Expert interfaces that take advantage of LLMs may end up looking like they currently do - again, complex tasks require complex interfaces. However, it may be easier and faster for users to learn how to use these interfaces thanks to built-in LLM-powered assistants. This will help them to get into flow faster, improving their productivity and feeling of satisfaction when using this complex software.
unlike Clippy, these new types of assistant would be able to act on the interface directly. These actions will be made in accordance with the goals of the person using them, but each discrete action taken by the assistant on the interface will not be done according to explicit human instructions - the goals are directed by the human user, but the steps to achieve those goals are unknown to the user, which is why they’re engaging with the assistant in the first place.
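Here is a minimal sketch of how such an assistant loop might be wired together, assuming a placeholder call_llm() function and a hypothetical registry of actions the host application exposes. None of these names come from the post; a real implementation would depend on the application providing its own command, highlighting, and explanation hooks.

```python
import json

# Hypothetical registry of actions the host application exposes to the assistant.
# In a real app each entry would map to a command the GUI can execute and highlight.
ACTIONS = {
    "open_panel":    "Open a named panel or inspector",
    "select_tool":   "Switch to a named tool",
    "set_parameter": "Set a named parameter on the current selection",
    "apply":         "Apply or commit the current operation",
}

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM API the application integrates; should return JSON text."""
    raise NotImplementedError

def highlight_in_ui(action: str, argument: str) -> None:
    """Stand-in for the host app's 'flash this control' hook."""
    print(f"[highlight] {action}({argument})")

def show_explanation(text: str) -> None:
    """Stand-in for showing the assistant's explanation next to the highlighted control."""
    print(f"[assistant] {text}")

def plan_steps(user_goal: str, recent_actions: list[str]) -> list[dict]:
    """Ask the model to translate a stated goal into an ordered list of interface steps."""
    prompt = (
        "You are an assistant embedded in a complex GUI application.\n"
        f"Available actions: {json.dumps(ACTIONS)}\n"
        f"The user's recent actions: {recent_actions}\n"
        f"The user's goal: {user_goal}\n"
        'Return a JSON list of steps, each {"action": ..., "argument": ..., "explanation": ...}.'
    )
    return json.loads(call_llm(prompt))

def walk_through(user_goal: str, recent_actions: list[str]) -> None:
    """Walk the user through the goal: highlight each element in order and explain it."""
    for step in plan_steps(user_goal, recent_actions):
        if step.get("action") not in ACTIONS:
            continue  # ignore anything the application doesn't actually expose
        highlight_in_ui(step["action"], str(step.get("argument", "")))
        show_explanation(step.get("explanation", ""))

# Example (would require a real call_llm):
# walk_through("export this track as a 320 kbps mp3", recent_actions=["select_tool(scissors)"])
```

The design choice worth noticing is that the model only proposes steps drawn from a declared action registry, and the loop drops anything outside it; the user states the goal, the assistant demonstrates the steps, and the interface itself stays in control of what can actually happen.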
The Myth of Education
The purpose of education is to socialise human beings into a life of complete institutional dependency. School teaches you that justice must come from someone in institutional authority, that meaningful activity must come from a ‘career path,’ that if you want to express yourself you must first gain access to centralised speech platforms, that if you want to do something, you must first of all gain a licence or a qualification and that, above all, your own desires and instincts are invalid.
ChatGPT sends shockwaves across college campuses
Across universities, professors have been looking into ways to engage students so cheating with ChatGPT is not as attractive, such as making assignments more personalized to students’ interests and requiring students to complete brainstorming assignments and essay drafts instead of just one final paper.
Super Memory: Forget about forgetting
Building complex reasoning skills online through open-ended activities
What This Site Will Try to Teach You // Think Like (a) Git
Productive Procrastination
I do a pretty good job of channeling my procrastination into adjacent creative tasks which, in the end, influence, shape, and improve the chunks of work I do complete. And that looks like productivity from the outside. But trust me, from the inside, it usually just feels like avoidance and procrastination. But I’ve learned to accept that’s the cost of doing the kind of work I feel good at, so I let it be what it is.
The particularly nice thing about coding is that it offers many little “wins”: I get a function working, I figure out a piece of design
“Can’t face work? Then cultivate some side projects — and channel your work-avoidance into fun opportunities to learn” and once you’re done, you’ll 1) have something productive to show for it, and 2) be much more fit, rested, and ready to tackle that project at work.
In other words: rather than fight your penchant for procrastination, work with it. It’s a judo move: don’t fight your enemy, use its momentum against it for your benefit.
Modern JavaScript Explained For Dinosaurs
Kagi - Universal Summarizer
SwiftUI Bootcamp (Beginner Level) - YouTube
Regex Learn - Step by step, from zero to advanced.
Stanford SwiftUI course on Developing Apps for iOS