“What if -- despite all the hype -- we are in fact underestimating the effect LLMs will have on the nature of software distribution and end-user programming? some early, v tentative thoughts: 1/”
This is a pretty exciting moment in tech. Like clockwork, every decade or so since the broad adoption of electricity, there’s been a new technical innovation that completely upends society once it becomes broadly adopted.
I’ve been thinking about a fruitful way to frame the act of writing code in the age of Copilot/Codex, and I don’t think “autocomplete on steroids” is it. Prompt-driven programming with an LLM is better thought of as a compiler. Just as what we understand as a compiler today translates from a high-level programming language like C++ or Java to machine code (strictly speaking, assembly language), you could view an LLM as a compiler that translates from English to a high-level language.
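A minimal sketch of that framing: an English spec goes in, high-level source comes out, and the ordinary toolchain takes over from there. `llm_translate` is a hypothetical stand-in for a real model call (Codex, GPT, etc.); it returns a canned translation here so the pipeline is self-contained.

```python
def llm_translate(spec: str) -> str:
    # A real system would prompt a code LLM with `spec`; this stub
    # returns a fixed "translation" purely for illustration.
    return (
        "def mean(xs):\n"
        "    return sum(xs) / len(xs)\n"
    )

def english_to_module(spec: str) -> dict:
    """'Compile' an English spec: translate it to Python source with the
    LLM, then hand that source to the actual Python compiler."""
    source = llm_translate(spec)
    namespace: dict = {}
    exec(compile(source, "<llm-output>", "exec"), namespace)
    return namespace

ns = english_to_module("Write mean(xs) that returns the average of xs.")
print(ns["mean"]([1, 2, 3]))  # -> 2.0
```

The point of the analogy is that the English prompt plays the role of source code, and the generated Python is the "assembly" you mostly don't read.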
Researchers at Stanford Introduce Parsel: An Artificial Intelligence (AI) Framework That Enables Automatic Implementation and Validation of Complex Algorithms with Code Large Language Models (LLMs)
Though recent advances have been made in large language model (LLM) reasoning, LLMs still struggle with hierarchical multi-step reasoning tasks like developing sophisticated programs. Human programmers, by contrast, have (usually) learned to break difficult tasks into manageable components that work in isolation (modular) and work together (compositional). As a bonus, if a human-written function causes problems, that part of the software can be rewritten without affecting the rest of the application. Code LLMs, in contrast, are naively expected to produce entire token sequences free of errors.
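The modular/compositional style described above can be sketched as follows. Each leaf function is small enough to implement (or re-generate) and test on its own, and the top-level program only composes them; the function names and checks here are illustrative, not Parsel's actual interface.

```python
def split_words(text):
    # Leaf unit: independently testable.
    return text.split()

def count_long(words, n=5):
    # Leaf unit: independently testable.
    return sum(1 for w in words if len(w) > n)

def long_word_count(text, n=5):
    # Composition of the two leaves; contains no logic of its own.
    return count_long(split_words(text), n)

# Per-unit checks: if one leaf misbehaves, only that leaf needs
# rewriting -- the rest of the program is untouched.
assert split_words("a bb ccc") == ["a", "bb", "ccc"]
assert count_long(["short", "lengthy", "gigantic"]) == 2
print(long_word_count("tiny but extravagantly verbose prose"))  # -> 2
```

Validating each component against its own checks, rather than hoping the whole generated program is error-free, is the gap Parsel aims to close.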