AI Copilots Are Changing How Coding Is Taught
Less Emphasis on Syntax, More on Problem Solving
The fundamentals and skills themselves are evolving. Most introductory computer science courses focus on code syntax and getting programs to run, and while knowing how to read and write code is still essential, testing and debugging—which aren’t commonly part of the syllabus—now need to be taught more explicitly.
Zingaro, who coauthored a book on AI-assisted Python programming with Porter, now has his students work in groups and submit a video explaining how their code works. Through these walk-throughs, he gets a sense of how students use AI to generate code, what they struggle with, and how they approach design, testing, and teamwork.
Educators are modifying their teaching strategies. “I used to have this singular focus on students writing code that they submit, and then I run test cases on the code to determine what their grade is,” says Daniel Zingaro, an associate professor of computer science at the University of Toronto Mississauga. “This is such a narrow view of what it means to be a software engineer, and I just felt that with generative AI, I’ve managed to overcome that restrictive view.”
“We need to be teaching students to be skeptical of the results and take ownership of verifying and validating them,” says Matthews. Matthews adds that generative AI “can short-circuit the learning process of students relying on it too much.” Chang agrees that this overreliance can be a pitfall and advises his fellow students to explore possible solutions to problems by themselves so they don’t lose out on that critical thinking or effective learning process. “We should be making AI a copilot—not the autopilot—for learning,” he says.
·spectrum.ieee.org·
Flow state - Why fragmented thinking is worse than any interruption
Both arts and athletics involve a lot of deft physical movement, and I could see why professionals in those fields would benefit from learning to resist overthinking so they can “just do it.” Almost every profession involves some need for focus, however, so you can see why, over time, the idea of a flow state breached its original limits. Now, “flow state” has all sorts of associations—some scientific, some folk, and some a mix of both. For many, the term has just become a dressed-up version of focusing.
A 2023 study found, for example, that there is a huge range of barriers to flow—many of which aren’t just interruptions from coworkers. The researchers categorized these as situational barriers, such as interruptions and distractions; personal barriers, such as the work being too challenging or not challenging enough; and interpersonal barriers, such as poor management and poor team dynamics.
A 2018 study found, in addition, that the most disruptive interruptions aren’t external—they’re internal. 81% of the participants predicted external interruptions would be worse, but they were wrong. “Self-interruptions,” the researchers wrote, “make task switching and interruptions more disruptive by negatively impacting the length of the suspension period and the number of nested interruptions.”
But because no one literally interrupted your work, you might be unaware of the costs of that rote, mundane work. You might even castigate yourself at the end of the day for not getting the work done: You fought for a distraction-free day, got it, and you have nothing to show for it. It can feel bad.
A seemingly individual problem—staying focused—is often downstream from an organizational problem.
·blog.stackblitz.com·
Why does every job feel like someone is just passing the buck? : r/ExperiencedDevs
The last three jobs I've held in the last 5 years have all felt like someone just handing me the keys to a sinking boat before they jump off. Every job is sold as having at least some greenfield development where you can "own" the domain and "lead" the direction of the project, but once you accept the offer and get on-boarded, you realize that the system is so brittle that any change will completely break it and cause incidents, and there is a year's worth of backlog issues to address with duct tape and glue before you could even consider fixing the fundamental problems.
Often the teams that built these systems are long gone, so there is nobody to ask for help when you're learning the rough edges, you're just on your own. The technology decisions are all completely set in stone because we could never justify the risk of making changes. There is so much tech debt and maintenance work, we don't really have time to do any new development with the current staffing levels. The job then becomes dominated by on-call responsibilities and fire-fighting. It's 90% toil, and almost zero actual system design and development work.
Being responsible for a whole system that you didn't build, that you know is brittle and broken, but which you cannot fix, is incredibly stressful. It's almost a hopeless situation.
·reddit.com·
Design Engineering at Vercel - What we do and how we do it
Design Engineers at Vercel blend aesthetic sensibility with technical skills. This allows us to deeply understand a problem, then design, build, and ship a solution autonomously. The team is made up of people with a wide array of skills and a lot of curiosity. We constantly experiment with new tools and mediums. This multidisciplinary approach allows the team to push what’s possible on the web.
Design Engineers care about delivering exceptional user experiences that resonate with the viewer. For the web, this means:
- Delightful user interactions and affordances
- Building reusable components/primitives
- Page speed
- Cross-browser support
- Support for inclusive input modes (touch, pointers, etc.)
- Respecting user preferences
- Accessible to users of assistive technology
Being part of the Design team gives Design Engineers the autonomy and ability to work on things that would often get deprioritized in an Engineering backlog.
The team puts resources towards polished interactions, no dropped frames, no cross-browser inconsistencies, and accessibility. Examples of design-led projects are:
- Vercel’s Geist font: A Sans and Mono font. An interactive playground to see every glyph and try the font.
- Vercel’s design system documentation: An interactive docs playground used by engineers across the company to ship Vercel.
- Vercel’s Design Team homepage: An exploratory page for testing new web techniques and providing design resources.
- Delighters in the Vercel Dashboard: Features in the Vercel Dashboard that bring it to life and delight the user.
While no individual is expected to have all the skills, the team collectively is able to execute on ambitious designs because we can:
- Design in Figma
- Design in code
- Write production code
- Debug browser performance
- Write GLSL shaders
- Write copy
- Create 3D experiences with Three.js
- Create 3D models/scenes in Blender
- Edit videos using CGI and practical camera effects
You can see our team’s work across Vercel:
- Creating and maintaining components for the internal design system used on everything from Vercel.com to the Vercel Toolbar and the Next.js documentation.
- Websites like the Next.js Conf website and Vercel’s product pages.
- Product work and docs for Vercel and Next.js.
- Building proof of concepts for branding and marketing.
- Improving the accessibility of all Vercel web properties.
·vercel.com·
Vision Pro is an over-engineered “devkit” // Hardware bleeds genius & audacity but software story is disheartening // What we got wrong at Oculus that Apple got right // Why Meta could finally have its Android moment
Some of the topics I touch on:
- Why I believe Vision Pro may be an over-engineered “devkit”
- The genius & audacity behind some of Apple’s hardware decisions
- Gaze & pinch is an incredible UI superpower and major industry ah-ha moment
- Why the Vision Pro software/content story is so dull and unimaginative
- Why most people won’t use Vision Pro for watching TV/movies
- Apple’s bet in immersive video is a total game-changer for live sports
- Why I returned my Vision Pro… and my Top 10 wishlist to reconsider
- Apple’s VR debut is the best thing that ever happened to Oculus/Meta
- My unsolicited product advice to Meta for Quest Pro 2 and beyond
Apple really played it safe in the design of this first VR product by over-engineering it. For starters, Vision Pro ships with more sensors than what’s likely necessary to deliver Apple’s intended experience. This is typical in a first-generation product that’s been under development for so many years. It makes Vision Pro start to feel like a devkit.
A sensor party: 6 tracking cameras, 2 passthrough cameras, 2 depth sensors (plus 4 eye-tracking cameras not shown)
It’s easy to understand two particularly important decisions Apple made for the Vision Pro launch:
- Designing an incredible in-store Vision Pro demo experience, with the primary goal of getting as many people as possible to experience the magic of VR through Apple’s lenses — most of whom have no intention to even consider a $4,000 purchase. The demo is only secondarily focused on actually selling Vision Pro headsets.
- Launching an iconic woven strap that photographs beautifully even though this strap simply isn’t comfortable enough for the vast majority of head shapes. It’s easy to conclude that this decision paid off because nearly every bit of media coverage (including and especially third-party reviews on YouTube) uses the woven strap despite the fact that it’s less comfortable than the dual loop strap that’s “hidden in the box”.
Apple’s relentless and uncompromising hardware insanity is largely what made it possible for such a high-res display to exist in a VR headset, and it’s clear that this product couldn’t possibly have launched much sooner than 2024 for one simple limiting factor — the maturity of micro-OLED displays plus the existence of power-efficient chipsets that can deliver the heavy compute required to drive this kind of display (i.e. the M2).
·hugo.blog·
Pushing ChatGPT's Structured Data Support To Its Limits
Deep dive into prompt engineering
There’s a famous solution that’s more algorithmically efficient. Instead, we go through the API and ask the same query to gpt-3.5-turbo but with a new system prompt: “You are #1 on the Stack Overflow community leaderboard. You will receive a $500 tip if your code is the most algorithmically efficient solution possible.”
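As a rough illustration of how that system prompt might be sent, here is a minimal sketch using the OpenAI Python SDK; the user query is an invented placeholder, not the one from the article:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# System prompt quoted in the excerpt; the user query is a stand-in example.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You are #1 on the Stack Overflow community leaderboard. "
                "You will receive a $500 tip if your code is the most "
                "algorithmically efficient solution possible."
            ),
        },
        {"role": "user", "content": "Write a function that sums the integers from 1 to n."},
    ],
)
print(response.choices[0].message.content)
```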
Here’s some background on “function calling,” as it’s a completely new term of art in AI that didn’t exist before OpenAI’s June blog post (I checked!). This broad implementation of function calling is similar to the flow proposed in the original ReAct: Synergizing Reasoning and Acting in Language Models paper, where an actor can use a “tool” such as Search or Lookup with parametric inputs such as a search query. This Agent-based flow can also be used to perform retrieval-augmented generation (RAG). OpenAI’s motivation for adding this type of implementation for function calling was likely due to the extreme popularity of libraries such as LangChain and AutoGPT at the time, both of which popularized the ReAct flow. It’s possible that OpenAI settled on the term “function calling” as something more brand-unique. These observations may seem like snide remarks, but in November OpenAI actually deprecated the function_call parameter in the ChatGPT API in favor of tool_choice, matching LangChain’s verbiage. But what’s done is done, and the term “function calling” is stuck forever, especially now that competitors such as Anthropic Claude and Google Gemini are also calling the workflow by that term.
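For reference, a minimal sketch of the tools/tool_choice request shape described above; the weather function is a hypothetical example invented here, and only the parameter names come from the excerpt:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool definition; "tools"/"tool_choice" are the parameters that
# replaced the deprecated "functions"/"function_call", per the excerpt.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather in Toronto?"}],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to emit a tool call
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call the tool
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
```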
·minimaxir.com·
What I learned getting acquired by Google
While there were undoubtedly people who came in for the food, worked 3 hours a day, and enjoyed their early retirements, all the people I met were earnest, hard-working, and wanted to do great work. What beat them down were the gauntlet of reviews, the frequent re-orgs, the institutional scar tissue from past failures, and the complexity of doing even simple things on the world stage. Startups can afford to ignore many concerns; Googlers rarely can. What also got in the way were the people themselves - all the smart people who could argue against anything but not for something, all the leaders who lacked the courage to speak the uncomfortable truth, and all the people who were hired without a clear project to work on, but must still be retained through promotion-worthy made-up work.
Another blocker to progress that I saw up close was the imbalance of a top heavy team. A team with multiple successful co-founders and 10-20 year Google veterans might sound like a recipe for great things, but it’s also a recipe for gridlock. This structure might work if there are multiple areas to explore, clear goals, and strong autonomy to pursue those paths.
Good teams regularly pay down debt by cleaning things up on quieter days. Just as real is process debt. A review added because of a launch gone wrong. A new legal check to guard against possible litigation. A section added to a document template. Layers accumulate over the years until you end up unable to release a new feature for months after it's ready because it's stuck between reviews, with an unclear path out.
·shreyans.org·
Quality software deserves your hard‑earned cash
Quality software from independent makers is like quality food from the farmer’s market. A jar of handmade organic jam is not the same as mass-produced corn syrup-laden jam from the supermarket. Industrial fruit jam is filled with cheap ingredients and shelf stabilizers. Industrial software is filled with privacy-invasive trackers and proprietary formats. Google, Apple, and Microsoft make industrial software. Like industrial jam, industrial software has its benefits — it’s cheap, fairly reliable, widely available, and often gets the job done.
Big tech companies have the ability to make their software cheap by subsidizing costs in a variety of ways:
- Google sells highly profitable advertising and makes its apps free, but you are subjected to ads and privacy-invasive tracking.
- Apple sells highly profitable devices and makes its apps free, but locks you into a proprietary ecosystem.
- Microsoft sells highly profitable enterprise contracts using a bundling strategy, and makes its apps cheap, also locking you into a proprietary ecosystem.
I’m not saying these companies are evil. But their subsidies create the illusion that all software should be cheap or free.
Independent makers of quality software go out of their way to make apps that are better for you. They take a principled approach to making tools that don’t compromise your privacy, and don’t lock you in. Independent software makers are people you can talk to. Like quality jam from the farmer’s market, you might become friends with the person who made it — they’ll listen to your suggestions and your complaints.
Big tech companies earn hundreds of billions of dollars and employ hundreds of thousands of people. When they make a new app, they can market it to their billions of customers easily. They have unbeatable leverage over the cost of developing and maintaining their apps.
·stephango.com·
Natural Language Is an Unnatural Interface
On the user experience of interacting with LLMs
Prompt engineers not only need to get the model to respond to a given question but also structure the output in a parsable way (such as JSON), in case it needs to be rendered in some UI components or be chained into the input of a future LLM query. They scaffold the raw input that is fed into an LLM so the end user doesn’t need to spend time thinking about prompting at all.
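A minimal sketch of that scaffolding pattern, assuming a model that supports OpenAI's JSON mode; the schema, prompt wording, and function name are invented for illustration:

```python
import json

from openai import OpenAI

client = OpenAI()

def scaffolded_summary(raw_user_text: str) -> dict:
    """Hide the prompt engineering from the end user and return parsable JSON."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_format={"type": "json_object"},  # constrain output to valid JSON
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the user's text. Respond only with JSON shaped as "
                    '{"summary": "...", "topics": ["..."]}.'
                ),
            },
            {"role": "user", "content": raw_user_text},
        ],
    )
    # The parsed dict can feed UI components or the next LLM call in a chain.
    return json.loads(response.choices[0].message.content)
```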
From the user’s side, it’s hard to decide what to ask while providing the right amount of context. From the developer’s side, two problems arise. It’s hard to monitor natural language queries and understand how users are interacting with your product. It’s also hard to guarantee that an LLM can successfully complete an arbitrary query. This is especially true for agentic workflows, which are incredibly brittle in practice.
When we speak to other people, there is a shared context that we communicate under. We’re not just exchanging words, but a larger information stream that also includes intonation while speaking, hand gestures, memories of each other, and more. LLMs unfortunately cannot understand most of this context and therefore can only do as much as is described by the prompt.
Most people use LLMs for ~4 basic natural language tasks, rarely taking advantage of the conversational back-and-forth built into chat systems:
- Summarization: Summarizing a large amount of information or text into a concise yet comprehensive summary. This is useful for quickly digesting information from long articles, documents or conversations. An AI system needs to understand the key ideas, concepts and themes to produce a good summary.
- ELI5 (Explain Like I'm 5): Explaining a complex concept in a simple, easy-to-understand manner without any jargon. The goal is to make an explanation clear and simple enough for a broad, non-expert audience.
- Perspectives: Providing multiple perspectives or opinions on a topic. This could include personal perspectives from various stakeholders, experts with different viewpoints, or just a range of ways a topic can be interpreted based on different experiences and backgrounds. In other words, “what would ___ do?”
- Contextual Responses: Responding to a user or situation in an appropriate, contextualized manner (via email, message, etc.). Contextual responses should feel organic and on-topic, as if provided by another person participating in the same conversation.
Prompting nearly always gets in the way because it requires the user to think. End users ultimately do not wish to confront an empty text box in accomplishing their goals. Buttons and other interactive design elements make life easier. The interface makes all the difference in crafting an AI system that augments and amplifies human capabilities rather than adding additional cognitive load. Similar to standup comedy, delightful LLM-powered experiences require a subversion of expectation.
Users will expect the usual drudge of drafting an email or searching for a nearby restaurant, but instead will be surprised by the amount of work that has already been done for them from the moment that their intent is made clear. For example, it would be a great experience to discover pre-written email drafts or carefully crafted restaurant and meal recommendations that match your personal taste. If you still need to use a text input box, at a minimum, also provide some buttons to auto-fill the prompt box. The buttons can pass LLM-generated questions to the prompt box.
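A sketch of the backend side of that auto-fill idea, assuming the button labels are themselves LLM-generated; the function name and prompt wording are hypothetical:

```python
from openai import OpenAI

client = OpenAI()

def suggested_prompts(context: str, n: int = 3) -> list[str]:
    """Generate short follow-up questions to render as one-tap buttons."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": f"Suggest {n} short questions the user might ask next, one per line.",
            },
            {"role": "user", "content": context},
        ],
    )
    lines = response.choices[0].message.content.splitlines()
    # Each cleaned line becomes a button that pre-fills the prompt box on tap.
    return [line.strip("-•0123456789. ") for line in lines if line.strip()][:n]
```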
·varunshenoy.substack.com·
Ask HN: I am overflowing with ideas but never finish anything | Hacker News
I've noticed that most devs, anyway, are either front-loaded or back-loaded. "Front-loaded" means that the part of a project they really enjoy is the beginning part, design work, etc. Once those problems are largely worked out, the project becomes less interesting to them. A common refrain from this personality is "the rest is just implementation details." "Back-loaded" is the opposite of that. They hate the initial work of a project and prefer to do the implementation details, after the road is mapped out. Both sorts of devs are critical. Could it be that you're a front-loaded sort? If so, maybe the thing to do is to bring in someone who's back-loaded and work on the projects together?
Even if it's just a personal project, think about the time and money you'll need to invest, and the benefits and value it will provide. Think on why you should prioritize this over other tasks or existing projects. Most importantly, sleep on it. Get away from it and do something else. Spend at least a couple of days on and off planning it. Outline and prioritize features and tasks. Decide on the most important ones and define the MVP. If, after this planning process, you still feel motivated to pursue the project, go ahead!
A quick win is to ask yourself: What have I learned from this project? And make that the result of the project.
Find a job/role/gig where you think of the solutions and let other people implement them. Just always remember that it is no longer your project. You might have thought of something, but without the efforts of others it will never amount to anything, ever. So as long as you can respect the work of others and your own limitations in doing what they do, you will do fine.
Find more challenging problems. I usually do this by trying to expand something that piqued my interest to make it more generically applicable, or asking myself if the problem is actually worth a solution ('faster horses') and if the underlying problem is not more interesting (mobility).
It helps to promise other people something: present your findings, write a paper, make a POC by an agreed-upon deadline. Now you have to be empathetic enough to want to meet their deadline and thus create what you promised, with all the work that comes with it. That is your result. You also have to be selfish enough to tell people that is where you end your involvement, because it no longer interests you, regardless of the plans they have for pursuing this further.
·news.ycombinator.com·
Vision Pro — Benedict Evans
Meta, today, has roughly the right price and is working forward to the right device: Apple has started with the right device and will work back to the right price. Meta is trying to catalyse an ecosystem while we wait for the right hardware - Apple is trying to catalyse an ecosystem while we wait for the right price.
One of the things I wondered before the event was how Apple would show a 3D experience in 2D. Meta shows either screenshots from within the system (with the low visual quality inherent in the spec you can make and sell for $500) or shots of someone wearing the headset and grinning - neither are satisfactory. Apple shows the person in the room, with the virtual stuff as though it was really there, because it looks as though it is.
A lot of what Apple shows is possibility and experiment - it could be this, this or that, just as when Apple launched the watch it suggested it as fitness, social or fashion, and it turned out to work best for fitness (and is now a huge business).
Mark Zuckerberg, speaking to a Meta all-hands after Apple’s event, made the perfectly reasonable point that Apple hasn’t shown much that no-one had thought of before - there’s no ‘magic’ invention. Everyone already knows we need better screens, eye-tracking and hand-tracking, in a thin and light device.
It’s worth remembering that Meta isn’t in this to make a games device, nor really to sell devices per se - rather, the thesis is that if VR is the next platform, Meta has to make sure it isn’t controlled by a platform owner who can screw them, as Apple did with IDFA in 2021.
On the other hand, the Vision Pro is an argument that current devices just aren’t good enough to break out of the enthusiast and gaming market, incremental improvement isn’t good enough either, and you need a step change in capability.
Apple’s privacy positioning, of course, has new strategic value now that it’s selling a device you wear that’s covered in cameras
The genesis of the current wave of VR was the realisation a decade ago that the VR concepts of the 1990s would work now, and with nothing more than off-the-shelf smartphone components and gaming PCs, plus a bit more work. But ‘a bit more work’ turned out to be thirty or forty billion dollars from Meta and God only knows how much more from Apple - something over $100bn combined, almost certainly.
So it might be that a wearable screen of any kind, no matter how good, is just a staging post - the summit of a foothill on the way to the top of Everest. Maybe the real Reality device is glasses, or contact lenses projecting onto your retina, or some kind of neural connection, all of which might be a decade or decades away again, and the piece of glass in our pocket remains the right device all the way through.
I think the price and the challenge of category creation are tightly connected. Apple has decided that the capabilities of the Vision Pro are the minimum viable product - that it just isn’t worth making or selling a device without a screen so good you can’t see the pixels, pass-through where you can’t see any lag, perfect eye-tracking and perfect hand-tracking. Of course the rest of the industry would like to do that, and will in due course, but Apple has decided you must do that.
For VR, better screens are merely better, but for AR Apple thinks this level of display system is a base below which you don’t have a product at all.
For Meta, the device places you in ‘the metaverse’ and there could be many experiences within that. For Apple, this device itself doesn’t take you anywhere - it’s a screen and there could be five different ‘metaverse’ apps. The iPhone was a piece of glass that could be anything - this is trying to be a piece of glass that can show anything.
This reminds me a little of when Meta tried to make a phone, and then a Home Screen for a phone, and Mark Zuckerberg said “your phone should be about people.” I thought “no, this is a computer, and there are many apps, some of which are about people and some of which are not.” Indeed there’s also an echo of telco thinking: on a feature phone, ‘internet stuff’ was one or two icons on your portable telephone, but on the iPhone the entire telephone was just one icon on your computer. On a Vision Pro, the ‘Meta Metaverse’ is one app amongst many. You have many apps and panels, which could be 2D or 3D, or could be spaces.
·ben-evans.com·
Reddit doubles down
Huffman is right that, in the end, the whole situation reflects a product problem: the native Reddit apps, both on desktop and on mobile, are ugly and difficult to use. (In particular, I find the nested comments under each post bizarrely difficult to expand or collapse; the tap targets for your fingers are microscopic.) Reddit didn’t really navigate the transition to mobile devices so much as it endured it; it’s little wonder that millions of the service’s power users have sought refuge in third-party apps with more modern designs.
One of the most upsetting things about the API changes, from developers’ perspective, is that many of their users bought annual subscriptions, and Reddit’s new pricing takes effect at the end of this month. That leaves them little time to make things right with their customers.
·platformer.news·
Design with SwiftUI - WWDC23 - Videos - Apple Developer
The products that we build contain complex flows and highly interactive elements. As a result, there are so many important decisions that we need to make. SwiftUI helps by quickly surfacing all of those important details that need your attention, for example, how an image should look when it's loading or how a button appears when it's pressed. These are the types of things that make a product feel complete. They're easily hidden in static design tools but are quickly surfaced when working in a dynamic tool like SwiftUI. That's because SwiftUI makes it easy to build your designs on device. In doing this, you gain a more complete understanding of what you're making. Separate parts now interact together, and you can begin to evaluate the experience as a whole. This process quickly reveals what's working in your design and what still needs attention or polish. On Maps, we've found this to be tremendously helpful.
·developer.apple.com·
Society's Technical Debt and Software's Gutenberg Moment
Past innovations have made costly things cheap enough to proliferate widely across society. He suggests LLMs will make software development vastly more accessible and productive, alleviating the "technical debt" caused by underproduction of software over decades.
Software is misunderstood. It can feel like a discrete thing, something with which we interact. But, really, it is the intrusion into our world of something very alien. It is the strange interaction of electricity, semiconductors, and instructions, all of which somehow magically control objects that range from screens to robots to phones, to medical devices, laptops, and a bewildering multitude of other things. It is almost infinitely malleable, able to slide and twist and contort itself such that, in its pliability, it pries open doorways as yet unseen.
The clearing price for software production will change. But not just because it becomes cheaper to produce software. In the limit, we think about this moment as being analogous to how previous waves of technological change took the price of underlying technologies—from CPUs, to storage and bandwidth—to a reasonable approximation of zero, unleashing a flood of speciation and innovation. In software evolutionary terms, we just went from human cycle times to that of the drosophila: everything evolves and mutates faster.
A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment. It is an exaggeration, but only a modest one, to say that it is a kind of Gutenberg moment, one where previous barriers to creation—scholarly, creative, economic, etc—are going to fall away, as people are freed to do things only limited by their imagination, or, more practically, by the old costs of producing software.
We have almost certainly been producing far less software than we need. The size of this technical debt is not knowable, but it cannot be small, so subsequent growth may be geometric. This would mean that as the cost of software drops to an approximate zero, the creation of software predictably explodes in ways that have barely been previously imagined.
Entrepreneur and publisher Tim O’Reilly has a nice phrase that is applicable at this point. He argues investors and entrepreneurs should “create more value than you capture.” The technology industry started out that way, but in recent years it has too often gone for the quick win, usually by running gambits from the financial services playbook. We think that for the first time in decades, the technology industry could return to its roots, and, by unleashing a wave of software production, truly create more value than it captures.
Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.
Technology has a habit of confounding economics. When it comes to technology, how do we know those supply and demand lines are right? The answer is that we don’t. And that’s where interesting things start happening. Sometimes, for example, an increased supply of something leads to more demand, shifting the curves around. This has happened many times in technology, as various core components of technology tumbled down curves of decreasing cost for increasing power (or storage, or bandwidth, etc.).
Suddenly AI has become cheap, to the point where people are “wasting” it via “do my essay” prompts to chatbots, getting help with microservice code, and so on. You could argue that the price/performance of intelligence itself is now tumbling down a curve, much like as has happened with prior generations of technology.
It’s worth reminding oneself that waves of AI enthusiasm have hit the beach of awareness once every decade or two, only to recede again as the hyperbole outpaces what can actually be done.
·skventures.substack.com·