Board

Board

2272 bookmarks
Custom sorting
On Prompt Injection
On Prompt Injection
Adversarial prompting. Current systems prefix, suffix, or otherwise template user prompts into an instruction prompt before sending it to the model. That might be ChatGPT giving the model instructions "Assistant is a large language model trained by OpenAI. Knowledge cutoff: ..." or Bing's Sydney. Adversarial prompting techniques range from simple "Return
·matt-rickard.ghost.io·
On Prompt Injection
All the -Ops
All the -Ops
A list of -Ops in software development. DevOps (role, category, practice, Devs + IT) – Everything around optimizing the software development lifecycle. From developer experience (configuring environments) to helping developers deploy their code (now, to the cloud). The center of gravity for DevOps engineers is now managing cloud infrastructure (infrastructure-as-code, cloud APIs,
·matt-rickard.ghost.io·
All the -Ops
On React.js
On React.js
A React.js documentary was recently released, and it's an interesting watch (link). Some interesting takeaways: Sometimes doing extra work on the margin is worth it if the implementation is significantly simpler. Before React, model-view-controller architectures had complicated two-way data binding rules and complex DOM interactions. While rendering (essentially) the
·matt-rickard.ghost.io·
On React.js
Introducing Subscribe to Mint — Mirror Development
Introducing Subscribe to Mint — Mirror Development
Today, Mirror is excited to launch Subscribe to Mint, a new way for creators to transform their collectors into an engaged web3 community.
·dev.mirror.xyz·
Introducing Subscribe to Mint — Mirror Development
Sustainability with Rust | Amazon Web Services
Sustainability with Rust | Amazon Web Services
Rust is a programming language implemented as a set of open source projects. It combines the performance and resource efficiency of systems programming languages like C with the memory safety of languages like Java. Rust started in 2006 as a personal project of Graydon Hoare before becoming a research project at Mozilla in 2010. Rust […]
·aws.amazon.com·
Sustainability with Rust | Amazon Web Services
Components, Components, Where Are You… Distributed!
Components, Components, Where Are You… Distributed!
Why Will We Go Higher — As Always
One thing we should take a look: Low-Code and No-Code development environment. Do you think that a business person (non-developer) would understand the eight fallacies of distributed computing? I don’t think so, so for them we need to make the abstraction level higher than before:
At the end of the day not all of us need to develop operating systems or low level stuffs. Many of us will “just” build high-level applications for many business domains.
My recommendation: enjoy your last day writing code which is full of technical error handlings. Welcome to the new world of pure business code and stay optimistic, we’ll get there… as always 😉😅
“It’s just matter of time that we won’t care whether we have in-process or remote calls. Everything will be transparent for developers. No semantic difference. No network or in-process difference. All will be abstracted so that developers have only the in-process semantics”.
·medium.com·
Components, Components, Where Are You… Distributed!
Kelsey Hightower on Twitter
Kelsey Hightower on Twitter
“This is great feedback and we should change the messaging and fix the docs. HTTP frameworks hide DNS lookups. Service meshes hide TLS and authorization. Some RPC frameworks attempt to hide it all. It's better to be transparent and provide tools to deal with the complexity.”
·twitter.com·
Kelsey Hightower on Twitter
Martin Kleppmann @martin@nondeterministic.computer on Twitter
Martin Kleppmann @martin@nondeterministic.computer on Twitter
“Google's Service Weaver claims that it can hide the difference between RPC and local method calls. Just like CORBA claimed in 1991. Is it different this time? The docs say barely a word about handing failures, such as RPCs timing out https://t.co/NQjT9IxgZy”
·twitter.com·
Martin Kleppmann @martin@nondeterministic.computer on Twitter
Considerations for AI-Native Startups
Considerations for AI-Native Startups
How to think about product approach and defensibility for applications built with LLMs
In categories where private data is needed, there is less risk of all the value being captured directly by the models. To be clear, even in ones that require private data, many application companies may pop up in the same space, but all of the application companies will have some benefit of workflow/switching cost for the customers they serve, that will prevent them from being fully commoditized by the models, at least.
In these cases, the companies are bringing AI into the existing tool/workflow, rather than creating a new tool from the ground up. While sometimes these businesses seem a bit niche, the approach can serve as a good wedge to then expand further. In addition, there have been numerous examples of large companies built as say plugins in Powerpoint (ThinkCell) and other products. But the risk they face is that the incumbents may integrate these indirectly, in which case they will always need to be multiple times better to stay ahead or relevant.
An important note is that this is more of a spectrum than a hard choice, and there isn’t one right choice. For example, take the UI design space: Diagram is building a plugin within Figma, Galileo is building a standalone application that uses AI to generate an interface design that can be edited in Figma, Uizard is building a standalone AI-powered design product that can essentially replace Figma for some designers.
·tanay.substack.com·
Considerations for AI-Native Startups
Competitive Moats
Competitive Moats
Defensibility Is For Dummies
A fund is how you generate the most carry with the fewest people, you're just optimizing the mechanism. A firm, on the other hand, is (1) generating exceptional returns, but also (2) building enduring enterprise value.
·investing1012dot0.substack.com·
Competitive Moats
Summer of Protocols
Summer of Protocols
@Summer of Protocols | @Application | @Program Calendar |@People | @Pilot Study
·efdn.notion.site·
Summer of Protocols
Compute-Storage Separation Explained | Nile
Compute-Storage Separation Explained | Nile
Last week, I saw Gunnar Morling post a good question on Twitter:
With predicate pushdown, you send the query "where" clause down to the storage cluster. Each storage node filters the data and only sends a subset over the network to the compute layer. The difference in network traffic is meaningful and allows the system to avoid the network bottleneck. This solution is extra nifty because use the pros of the architecture, the fact that storage has its own compute, to solve the bottleneck that the architecture created. A bit of a Judo move.
So, hopefully you learned about the different meanings of compute/storage separation, why storage still has compute and why storage/compute separation doesn't conflict with predicate pushdown - in fact they are better together!
·thenile.dev·
Compute-Storage Separation Explained | Nile
3 Deno
3 Deno
Deno is a relatively new JavaScript runtime. I find quite interesting and aesthetically appealing, in-line with the recent trend to rein in the worse-is-better law of software evolution. This post explains why.
Although scripting and plumbing should be a way to combat complexity, just getting to the point where every contributor to your software can run scripts requires a docker container a great deal of futzing with the environment!
Deno doesn’t solve the problem of just being already there on every imaginable machine. However, it strives very hard to not create additional problems once you get the deno binary onto the machine. Some manifestations of that:
Deno comes with a code formatter (deno fmt) and an LSP server (deno lsp) out of the box.
Similarly, Deno is a TypeScript runtime — there’s no transpilation step involved, you just deno main.ts.
Deno does not rely on system’s shell. Most scripting environments, including node, python, and ruby, make a grave mistake of adding an API to spawn a process intermediated by the shell. This is slow, insecure, and brittle (which shell was that, again?). I have a longer post about the issue. Deno doesn’t have this vulnerable API. Not that “not having an API” is a particularly challenging technical achievement, but it is better than the current default.
directory with more indirection. In contrast, Deno runs the scripts in deno_task_shell — a purpose-built small cross-platform shell. You no longer need to worry that rm might behave differently depending on which rm it is, because it’s a shell’s built-in now.
These are all engineering nice-to-haves. They don’t necessary matter as much in isolation, but together they point at project values which align very well with my own ones. But there are a couple of innovative, bigger features as well.
The first big feature is the permissions system
The second big feature is Deno’s interesting, minimal, while still practical, take on dependency management.
·matklad.github.io·
3 Deno
How Rust went from a side project to the world’s most-loved programming language
How Rust went from a side project to the world’s most-loved programming language
For decades, coders wrote critical systems in C and C++. Now they turn to Rust.
It’s enjoyable to write Rust, which is maybe kind of weird to say, but it’s just the language is fantastic. It’s fun. You feel like a magician, and that never happens in other languages,” he says. “We definitely took a big bet—it’s a new technology.”
·technologyreview.com·
How Rust went from a side project to the world’s most-loved programming language
Behind the Scenes with React.js: the Documentary
Behind the Scenes with React.js: the Documentary
The first-ever documentary about the story of React has premiered on YouTube. I sat down with the movie’s creator, Ida Lærke, for a behind-the-scenes look at how the movie was made.
·newsletter.pragmaticengineer.com·
Behind the Scenes with React.js: the Documentary
Four Ways to Build Web Apps
Four Ways to Build Web Apps
Intro This is my opinionated list of four approaches to building websites and web applications. Publicly hosted on the internet, serving HTML, CSS, JavaScript, images, etc over HTTP. #1: Hugo Static Sites + Progressive Web Apps Static websites are boring. Vendors rarely talk about them because the margins are miniscule compared to flashy, compute-heavy services. It is seen as a table stakes offering. Though they have received more attention during the “JAM Stack” trend, my position is that they are still underappreciated and underutilized.
Complexity is conserved, meaning complexity cannot be removed it can only be moved around. For example, you can pay a provider to handle a class of complexity for you or bear those yourself - someone is doing it.
The interesting areas are where the complexity-cost tradeoff gets bent by innovation or commoditization. For example, nearly free static website hosting removed the need to run an apache or nginx webserver for these usecases.
·tomhummel.com·
Four Ways to Build Web Apps
Destroyed at the Boundaries
Destroyed at the Boundaries
Shared mutable state invites complexity into our programs. Programming languages help with this complexity inside a program, but not across network boundaries between programs.
Traditionally, databases weren't designed to run arbitrary business logic–we don't move the application code to the data. Instead, we move a subset of the data to the application server, by sending the database a query, effectively short declarative programs to get a subset of the entire data set.
We've tried to solve this with object-relational mapping (ORM) libraries, but often to no avail. [12]
Does it need to be that way? What if we moved the code closer to the data instead?
·interjectedfuture.com·
Destroyed at the Boundaries
AI, ChatGPT, and Bing…Oh My
AI, ChatGPT, and Bing…Oh My
And Sydney too. Consolidating some thoughts on an exciting two weeks of surprises, advances, and retreats in AI.
5/ First, in the next 6–12 months every product (site/app) that has a free form text field will have an “AI-enhanced” text field. All text entered (spoken) will be embellished, corrected, refined, or “run through” an LLM. Every text box becomes a prompt box.
8/ This reminds me of the mundane example of how spell-checking moved from a stand alone feature to integrated into word processing to suites and then 💥 it showed up in the browser. All of a sudden it wasn’t an app feature but every text box had squiggles.
PS/ Some will immediately want to ban the use of a tool that is “wrong” or “removes humans”. You might think spell checking is trivial, but I had to get permission to use it frosh year to write papers. In High School I had to ask the principle to use it. just like calculators.
Consumers acquire productivity tools for the “worst case” not the simple or only case.
16/ Critically, the winning product is one that does the most *important* work, not the most mundane work. Ex, typewriters were good at filling out forms, but word processors were not. It took years before forms were a WP thing, but books, manifestos, etc. → WORD!
17/ What matters is doing important work, not simply automating cheap or easy work. The tools that win will generalize to the most important problems people face. The cost of adding additional tools or “point solutions” is much higher than savings.
where
19/ Better tools bring creation closer to where “human in the loop” adds most value. PowerPoint is an example of that. We all might bemoan “slides” but with a great tool (for *important* work) the most skilled/knowledgable will use the tools to do what was previously “support” work.
23/ Many seem to think LLMs will “eliminate” jobs or wipe out whole swaths of creative work. I think what will transpire will happen in two phases. First, all creation tools used will be augmented with LLM, and very quickly. Everyone will use these enhancements — human in the loop.
26/ Why things evolve this way is subtle. In the work environment, there is no shortage of “important” — everyone thinks their work is importnt. Every department. Every creative task. There are endless requirements or “needs” that will be thrown up as barriers to change.
This shows that even in highly domain specific and advanced tooling, that massive improvements do not simply make everyone’s job easier (or vanish) but add work for people — for experts — to do more, to create more, and most of all to be humans in the loop.
New tools don’t simply automate, they create work too
LLMs meet an important & necessary but not sufficient criteria for platform shifts. They don’t yet work all the time, boundary cases are plentiful. Recall from “Hardcore Software” first decade of PCs was literally “making them work”. The Internet? security, broken links, etc.
The winters keep happening because the technologists and punditry tend to take a single advance and generalize it. Like advances in programming languages, AI can indeed make one scenario easier and doesn’t need to make all scenarios easier/possible.
If LLMs simply use the crawling side of the internet without returning links then the incentives to permit crawling and ultimately linking go away.For the largest content sites that have subscriptions or can afford this it is ok. But as we saw with news, almost none can.
The browser *not* having the rendering power of Word was a feature. Lacking a security model was a feature. Lacking centralization was a feature. Broken links led to a whole series of inventions. The fragility of the PC compared to “IBM” unleashed innovation. And on and on.
·medium.learningbyshipping.com·
AI, ChatGPT, and Bing…Oh My
Modern Health, frameworks, performance, and harm
Modern Health, frameworks, performance, and harm
Performance, accessibility, and usability are more than just inconvenient truths you can pretend don’t exist. They have a direct impact on the quality of someone’s life…
·ericwbailey.website·
Modern Health, frameworks, performance, and harm