Dev-Productivity-Metrics-Guide.pdf

Join me for New Relic Platform Fundamentals Training Series this October
Advanced Observability–Below the Glass Training
Platform Fundamentals Training (October series)
Below the glass is a relatively new analogy that defines the streamlining of technical processes that lie below a smartphone’s surface.
Five Pillars of PLG with Carilu Dietrich
The Evolution of Infrastructure Markets — Cloud Computing
unpacking how new markets evolved around AWS
Battery-Ventures-OpenCloud-Report__2022.pdf
Venture Capital Is Ripe for Disruption
A world where a billion is a drop in the bucket
To understand how this is possible, we first need to understand how it occurred. If you want to win at something as complex as startups, you have to understand the game behind the game—in this case, the funding dynamics.
Everyone working in venture capital is smart. You don’t get to play the game of high finance without having some amount of capability. The VC product becoming subpar isn’t the result of stupid decisions or people ignoring obvious data. It’s the result of multiple parties making individually rational choices that have resulted in systemic levels of risk.
This is never how it actually happens, but you get the idea. Startups should receive risk capital to literally derisk certain aspects of the business.
It’s tempting to subscribe to the heroic stereotype of venture capital: the lone contrarian, bucking social convention, and investing in entrepreneurs when no one else believes in them is the mythos of the VC. Unfortunately, this tale wildly diverges from reality.
If you want to build anything less than a $50B company, this product is not meant for you. To be fair, the venture product does work for some! It is still a good way to make money if you’re building or funding enormous companies. But the product continues to move upmarket, abandoning significant fiscal opportunity. More important, it doesn’t work for most companies.
If you have accepted venture capital, you only have two options: shut down the business or pivot. This is the case even if you have a solid business that would comfortably be a $20 million-plus revenue enterprise. Again, we are left looking for an alternative to traditional venture capital because these businesses deserve more options.
The original name for venture capital was adventure capital. Technology’s life-giving veins used to be lined with copper and silicon. It was the spark of the soldering gun, the ring of the hammer that were the sensory signals of Silicon Valley. In dilapidated workshops and musty garages, tinkerers tried to make cool stuff and see where that took them.
On Corporate VC | Reaction Wheel
I know I’m a bit late to this, but I just ran across Fred Wilson’s comments about corporate VC from two months ago. “I am never, ever, ever, ever, ever going to do that again,” he…
On his blog Wilson clarifies a bit: corporate VCs are of two types, passive or active. If they are passive, they can be good, because they act like VCs; if they are active they are bad, “The corporate strategic investor’s objectives are generally at odds with the objectives of the entrepreneur, the company, and the financial investors.” And, “I strongly advise against entering into these kinds of relationships.”
But this cuts both ways. Because a corporate VC does not need to exit their investments in a relatively short time-frame, they can be more supportive than a VC firm. Since corporates are not necessarily in it to make money, they can put money and time into a company for strategic reasons, even if it doesn’t increase the market value of the company in the short-term.
These things are the same, no? Except they’re not, because corporates and VC firms define success differently. For a VC firm, success is selling their equity to someone else for a lot more money in a relatively short time frame. For a corporate VC, success is having a company become powerful and entrenched so they can learn many things from them (and prevent competitors from dominating a market).
Power Laws in Venture | Reaction Wheel
…The more rightward-skewed the distribution is, whether Pareto-Levy, log normal, or some related form, the more difficult it is to hedge against risk by supporting sizable portfolios of innov…
Normal distributions are well understood and easy to work with. Almost all of modern finance theory is built around the assumption that things like prices and returns are normally distributed (or lognormally distributed: a lognormal distribution becomes a normal distribution if you take the logarithm of the x-axis, useful when an increase in x is multiplicative rather than arithmetic). Normal distributions underlie insurance and allow investors to minimize risk using modern portfolio theory.
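The lognormal parenthetical can be checked numerically: take the log of lognormal draws and you recover the statistics of the underlying normal. A minimal sketch (the code and names are mine, not the post's):

```python
import math
import random

rng = random.Random(0)

# 50,000 lognormal draws with underlying normal mu=0, sigma=1
# (multiplicative noise), then take the log of each draw.
samples = [rng.lognormvariate(0.0, 1.0) for _ in range(50_000)]
logs = [math.log(x) for x in samples]

mean = sum(logs) / len(logs)
var = sum((x - mean) ** 2 for x in logs) / len(logs)
# The logged data looks standard normal: mean near 0, variance near 1.
print(f"mean of logs ≈ {mean:.2f}, variance of logs ≈ {var:.2f}")
```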
Power laws have a property that normal distributions do not: they have “fat tails.” Normal curves fall off much more quickly the further out the x-axis you get.
The most important thing about a power law distribution is the alpha. The smaller the alpha, the heavier the right tail of the curve is.
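A small sketch of what “fat tails” means in numbers (function names are mine; a Pareto distribution with minimum 1 stands in for a generic power law):

```python
from math import erfc, sqrt

def normal_tail(k: float) -> float:
    """P(Z > k) for a standard normal, via the complementary error function."""
    return 0.5 * erfc(k / sqrt(2))

def pareto_tail(x: float, alpha: float, xm: float = 1.0) -> float:
    """P(X > x) for a Pareto distribution with minimum xm and tail index alpha."""
    return (xm / x) ** alpha

# Five units out, the normal tail is vanishingly small while the
# power-law tail is still a few percent -- and a smaller alpha means
# an even heavier tail.
print(f"normal P(Z > 5)            = {normal_tail(5):.2e}")
print(f"pareto P(X > 5), alpha=2.0 = {pareto_tail(5, 2.0):.3f}")  # 0.040
print(f"pareto P(X > 5), alpha=1.5 = {pareto_tail(5, 1.5):.3f}")  # 0.089
```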
Similarly, a very small number of days accounts for the bulk of stock market movements: Just ten trading days can represent half the returns of a decade.
Venture capitalists hold investments for an average of 4 years. They expect year over year growth of about 30%, meaning a continuously compounded growth rate of 26%. With these the model gives us an alpha of (1/(.26 * 4)) + 1 = 1.96. How does this compare to the real world?
Power-law distributions must have an alpha greater than one. They do not have a finite standard deviation if alpha is less than three. They do not have a finite average if alpha is less than two.
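The post's arithmetic can be reproduced directly (the helper name is mine). Note that the implied alpha of about 1.96 sits below two, so under the rules above the modeled return distribution has no finite average:

```python
def implied_alpha(growth_rate: float, holding_years: float) -> float:
    """Tail index implied by the post's model: alpha = 1/(g*T) + 1."""
    return 1.0 / (growth_rate * holding_years) + 1.0

g = 0.26   # ln(1.30) ~= 0.262: 30% YoY growth, continuously compounded
T = 4      # average holding period in years

alpha = implied_alpha(g, T)
print(f"implied alpha = {alpha:.2f}")   # 1.96
print(f"finite average? {alpha > 2}")   # False: alpha < 2
print(f"finite std dev? {alpha > 3}")   # False: alpha < 3
```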
Thiel thinks this is not possible. Venture capitalists have always faced this tension: the average growth rate of all small businesses in the US is closer to 7.5% than 30%. The pool of companies that can grow fast enough is limited. How many companies can you find that will grow fast enough, knowing that when you’re wrong about the growth rate, you’re probably wildly wrong?
The best explanation is supply and demand. When alphas of less than two are available (that is, the supply of fast-growth companies has increased), venture capitalists have an incentive to make more investments, so they raise more money and start more funds, increasing the demand for these companies until the alpha returns to 2.
Strategies Against Systems | Reaction Wheel
There is one other circumstance, peculiar to human conduct, which stands in the way of successful social prediction and planning. Public predictions of future social developments are frequently not…
I want to explain myself. I don’t usually feel the need to. I can’t say I don’t try to persuade you when I write, but I dislike persuading. I dislike pretending that somehow I have thought of things you cannot. I prefer to think that by providing you with the ideas that have caused me to believe certain things, you will persuade yourselves.
Naturally, this often goes astonishingly awry.
Perez’ answer is that there is always fear and greed. And for the past 250 years, at least, capital has been controlled by them.
I align myself chaotic good, as most of you probably do. I do not want to live in the 1950s as a company man in a grey flannel suit. I did that; I worked at IBM as an engineer for several years out of college. My colleagues loved the freedom from fear of a bi-weekly paycheck. I found I could not fit in. This world, where I can benefit society by finding rules to break, was a godsend. The last thing I want is to lose it.
Power Laws in Venture Portfolio Construction | Reaction Wheel
Every article that has ever given advice on investing in venture capital has said that you need to invest in a portfolio of companies, because each investment on its own is probably going to be wor…
There’s a 40% chance any single company returns nothing, a 30% chance it returns what you invested, a 20% chance it returns three times what you invested, and a 10% chance it returns ten times. Call this the Basic Model. Note that its expected value is \(30\%*1 + 20\%*3 + 10\%*10=1.9\).
Fred’s post says: “If you make just one investment, you are likely going to lose everything. If you make two, you are still likely to lose money. If you make five, you might get all your money back across all five investments. If you make ten, you might start making money on the aggregate set of investments.”
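Fred's portfolio intuition can be checked against the Basic Model with a quick Monte Carlo simulation (the code and helper names are mine; the printed probabilities are estimates, not exact values):

```python
import random

# Payoff table of the Basic Model: (return multiple, probability).
BASIC_MODEL = [(0, 0.40), (1, 0.30), (3, 0.20), (10, 0.10)]

def portfolio_multiple(n: int, rng: random.Random) -> float:
    """Average return multiple across n independent, equal-sized investments."""
    outcomes = rng.choices(
        [m for m, _ in BASIC_MODEL],
        weights=[p for _, p in BASIC_MODEL],
        k=n,
    )
    return sum(outcomes) / n

# Analytic expected value: 0.3*1 + 0.2*3 + 0.1*10 = 1.9.
print(f"expected multiple: {sum(m * p for m, p in BASIC_MODEL):.2f}")

rng = random.Random(42)
trials = 100_000
for n in (1, 2, 5, 10, 50):
    losses = sum(portfolio_multiple(n, rng) < 1 for _ in range(trials))
    print(f"n = {n:>2}: P(lose money) ≈ {losses / trials:.2f}")
```

With a single investment you lose money 40% of the time by construction; as the portfolio grows, the chance of an aggregate loss shrinks toward zero, because the expected multiple of 1.9 is comfortably above break-even.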
Why do VCs insist on only investing in high-risk, high-return companies? | Reaction Wheel
Sorry this is so short. It’s an interesting topic that I don’t have time right now to do justice. Last week I updated my “am I going to run out of money before I die” spread…
One Process | Reaction Wheel
This is completely irrelevant to the current moment. Enjoy. We build models to see what the future will hold and then tailor our actions to what the models tell us. If the models are accurately pre…
This post is about a specific model that we believe because we want to believe it: that there are two ways of existing in our workaday lives, the heroic and the ordinary.
He says that scientific progress happens in one of two ways: slowly and smoothly, or in sudden leaps of change. The former he called normal science, the latter scientific revolution.
If you only reward innovators for results, the results you get will be anemic. If you support them for potential, your results might be spectacular.
Most new technology comes about by combining existing technologies in a new way. For instance, the microprocessor was invented by combining the integrated circuit with a Von Neumann computer architecture. The integrated circuit was a combination of transistor-transistor logic with single-wafer silicon lithography. And so on, down to the more fundamental phenomena of quantum physics and Boolean algebra (and beyond, but you get the picture).
Functional Programming Made Easier
A Functional Programming book from beginner to advanced without skipping a single step along the way. In my 40 years of programming, I've felt that programming books always let me down, especially Functional Programming books. So, I wrote the book I wish I had 5 years ago. Functional Programming will never be easy, but it can be easier.
Advancing low-code with Domain-Specific Serverless
The ability to combine the functionality of multiple services by weaving together their APIs is a powerful skill, and it's becoming a more common method of building workflows, processes, and even entire products. I've heard it referred to as 'Vendor Engineering', which I think is an over-simplification of a complicated practice.
In my head, this new evolution is 'Domain-Specific Serverless' (DSS). The idea boils down to allowing users to write code that tightly integrates with a product - because it runs within it. By re-imagining integrations as serverless functions that users have full control over, we allow them to be expressed with the highest level of customization.
If SaaS products begin embedding serverless capabilities directly, it allows users to take full advantage of their offerings without needing to undergo the huge lift of developing custom software. This is the key advantage of DSS: a tight integration between the business logic of an application and the execution of users’ serverless functions, making it possible to customize a product with little effort. Not to mention the vast performance and security benefits this method has over traditional webhooks.
Koyeb Combines Functions and Containers in Its Serverless Engine
Koyeb now includes the ability to run not only user-written functions but also containers, side by side on the same event-driven platform.
“We remove some limits you have in current offerings, where end users have to select a specific cloud technology between functions and containers, when they just want to be able to, for instance, process images and videos. We are letting them select the technology which is the most suitable depending on their needs.”
With the newly launched ability to run both on the Koyeb Serverless Engine, Léger says they hope to ease the developer experience, allowing users to run whatever code they have, without having to worry about time limits or compatible runtimes.
Our Web Tech Stack for 2022
Our key bets for our Web app this year
We've published a fun, curated list of links to help you learn to build progressive web apps
Check out pwaresources.dev
On Building a Decentralized Database – Fission
Today we're sharing an update on Dialog, a far edge database for local-first applications and autonomous computing agents.
JavaScript Containers
The majority of server programs are Linux programs. They consist of a file system, some executable files, maybe some shared libraries, and they probably interface with system software like systemd or nsswitch.
The more we can remove unnecessary abstractions, the closer we can get to the concept of "The Network Is the Computer". Cloudflare Workers is essentially an implementation of this concept in the Cloudflare network. Deno Deploy is a new implementation of this idea (on the GCP network).
At Deno we are exploring these ideas; we’re trying to radically simplify the server abstraction.
In this emerging server abstraction layer, JavaScript takes the place of Shell. It is quite a bit better suited to scripting than Bash or Zsh. Instead of invoking Linux executables, like shell does, the JavaScript sandbox can invoke Wasm. If you have some computational heavy lifting, like image resizing, it probably makes sense to use Wasm rather than writing it in JS. Just like you wouldn’t write image resizing code in bash, you’d spawn imagemagick.
Stack Overflow Developer Survey 2022
In May 2022 over 70,000 developers told us how they learn and level up, which tools they’re using, and what they want.
Rust Is The Future of JavaScript Infrastructure – Lee Robinson
Why is Rust now being used to replace parts of the JavaScript web ecosystem like minification (Terser), transpilation (Babel), formatting (Prettier), bundling (webpack), linting (ESLint), and more?
It knows when the program is using memory and immediately frees the memory once it is no longer needed. It enforces memory rules at compile time, making it virtually impossible to have runtime memory bugs. You do not need to manually keep track of memory. The compiler takes care of it.
Rust has been a force multiplier for our team, and betting on Rust was one of the best decisions we made. More than performance, its ergonomics and focus on correctness has helped us tame sync’s complexity. We can encode complex invariants about our system in the type system and have the compiler check them for us. – Dropbox
Millions of lines of code have been written and even more bugs have been fixed to create the bedrock for shipping web applications of today. All of these tools are written with JavaScript or TypeScript. This has worked well, but we've reached peak optimization with JS. This has inspired a new class of tools, designed to drastically improve the performance of building for the web.
SWC, created in 2017, is an extensible Rust-based platform for the next generation of fast developer tools. It's used by tools like Next.js, Parcel, and Deno, as well as companies like Vercel, ByteDance, Tencent, Shopify, and more.
While WASM isn't the perfect solution yet, it can help developers create extremely fast web experiences. The Rust team is committed to a high-quality and cutting-edge WASM implementation. For developers, this means you could have the performance advantages of Rust (vs. Go) while still compiling for the web (using WASM).
Once you're on native code (through Rust, Go, Zig, or other low-level languages), the algorithms and data structures are more important than the language choice. It's not a silver bullet.
Even though learning Rust for JavaScript tooling will be a barrier to entry, interestingly, developers would rather have a faster tool that's harder to contribute to. Fast software wins.
Currently, it's hard to find a Rust library or framework for your favorite services (things like working with authentication, databases, payments, and more). I do think that once Rust and WASM reach critical adoption, this will resolve itself. But not yet. We need existing JavaScript tools to help us bridge the gap and incrementally adopt performance improvements.
I believe Rust is the future of JavaScript tooling. Next.js 12 started our transition to fully replace Babel (transpilation) and Terser (minification) with SWC and Rust. Why?
Regardless, I'm confident Rust will continue to have a major impact on the JavaScript ecosystem for the next 1-2 years and into the future. Imagine a world where all of the build tools used in Next.js are written in Rust, giving you optimal performance. Then, Next.js could be distributed as a static binary you'd download from NPM.
That's the world I want to live (and develop) in.
What Makes a Great Developer Experience? – Lee Robinson
Tools that keep developers in the flow state have a magnetic force. An often unexplainable, invisible pull that attracts and retains them to certain products. This pull is Developer Experience (DX).
The Next Wave of Cloud Infrastructure
What are the next Snowflakes and Datadogs?
These infrastructure providers, on average, are growing faster than their SaaS counterparts in the public markets. 7 of the top 10 fastest-growing cloud businesses with over $500M in revenue are infrastructure-related.
The Unending Chasm, and How to Survive It | Andreessen Horowitz
“Crossing the chasm” is a popular concept for almost all new products/startups, and is a useful lens for entrepreneurs to view the theory of innovation. The concept was first coined in the popular book of the same name, which …
But once the chasm is crossed, hallelujah! — those product-market fit issues are solved and the challenge now is to just keep up with demand. Because once you cross the chasm, your company goes from a difficult “push-based market” to one that is “pull-based”, where customers are naturally drawn in.
Moreover, given how fast technology is changing, more startups are spending more time in the chasm… and they may never exit. And guess what? That’s ok.
All of the above means startups will face fierce and changing competition well into IPO territory, given the size of the markets relative to the size of a startup. But the whole point of this post is to argue that companies can still be successful despite never crossing the chasm, including IPO and beyond.
It’s simply a fallacy to believe that in all cases a market matures and starts to “pull” a product rather than requiring a continued “push” from the startup.
Whatever the reason, a product in these markets won’t become repeatably “easy” to sell until well after it has reached hundreds of millions of dollars in sales. If you have such a product, that’s ok. In these situations, I generally recommend leaning into services. If the product is difficult to insert, you can reduce that friction by making the work to insert the product a core competency of your startup. While margins will be impacted, and sales cycles are likely to remain long, you’ll have more control of your destiny this way. If you’re lucky, over time as the market and partner ecosystem matures, you may be able to offload the integration work to a partner.
You know you’re in a hard, sometimes un-crossable market when it takes a lot of effort to build a business based on the ideal buyer for your product. In my experience, the two most common examples of these are: (1) non-“tech” verticals and (2) tech selling to struggling industries.
Every time I see a startup whose primary logos come from struggling sectors, I immediately recognize that they’re going to have a harder time going to market — their customer base is under duress. Even with a compelling product that has strong ROI for those customers, the churn of a buyer undergoing disruption will be reflected in the numbers. Budgets dry up, champions leave their jobs, projects are canceled, and so on. The hardest part for founders to accept here is that all this can happen independent of how well a startup executes on its go-to-market in those markets.
It’s great if an enterprise startup manages to find product-market fit and ends up with a repeatable sales model in a large market, getting to a point where the market pulls them (and not the other way around). In that case, scaling fast is everything.
Why bother then? Because it is always possible to fight your way to success, to have a shot at building something great. Most enterprise companies are built brick by (often miserable) brick. As my former board member and now partner Ben Horowitz once said it best, “There is always another move”.
Notes on running containers with bubblewrap
JMAP Crash Course | Topicbox
The Increasing Fragmentation of SaaS by @ttunguz
The alternative, which is the fragmented market of today, enables teams to purchase best-of-breed point solutions, try them, and quickly cycle through all the different offerings until they find the best one for their needs. This change in purchasing behavior is happening broadly across SaaS.
But these are tractable challenges in the market that values piecing together an optimal software stack. At least for the moment, I expect to see further and further fragmentation in the software landscape, enabled by APIs.
Report: The Evolution of DevOps | A Contrary Research Deep Dive
Software is eating the world. Software now defines the speed of innovation and continues to differentiate the winners from the losers. Digital transformation continues to be key to survival for established companies. The winners don’t just need to be capable of building software; they need to be exceptional at it. Software development consistently requires more scale and more speed. Today, over 70% of DevOps teams release code as frequently as once a day, up 11% from 2021.
DevOps is a cultural shift that touches a variety of steps within the software development lifecycle. There currently isn’t a single all-encompassing platform to cover the entire scope. DevOps teams usually put together a customized toolchain of open-source and vendor tools to connect the various people and workflows. The output from one is an input for another, and so on. That leads to a very fragmented vendor landscape.
Software, overall, has become increasingly fragmented as users take advantage of the opportunity to test out best-in-class solutions. As the speed of software production cycles increases, the need for a seamless toolchain has become increasingly important. Determining which DevOps tools an organization will use can be complex given the multiple stakeholders involved. Executives want to ensure uptime and control costs, while developers are focused on performance and ease-of-use.
As cloud computing emerged, so did GitHub, which has become the largest cloud-based platform that extends the benefits of Git.
As DevOps becomes the de facto methodology for producing software, more players have looked to extend their platform. GitLab has held a unique position in the market by clearly stating early on that they were trying to build an open core platform to extend across the DevOps lifecycle.
HashiCorp pitches all their products as multi-cloud, which positions them favorably for developers and engineers who have to deploy across multiple different cloud providers. Similar to GitLab, HashiCorp makes its code viewable to their open-source community, which has a number of benefits including enhanced security (i.e. bugs are found quicker) and higher quality software that will benefit from continuous improvement. Thousands of developers have contributed to its development and will continue to look to optimize the code.
CircleCI is one of the only CI/CD tools to get certified by FedRAMP. It supports isolated execution environments including Docker, Linux, macOS, Android, Windows, and self-hosted runners.
Developer productivity has been placed front and center as the speed of deployment has increased. Every company feels resource-constrained when it comes to developers’ time. Adding more software engineers isn’t a scalable fix, and we’re also seeing demand exceed supply. Some sources estimate the shortage of software engineers will reach over 80 million by 2030. Companies are doing everything they can to increase the capacity of their developers.
GitHub, Atlassian, Microsoft... They’re trying to get everyone to adopt a unified tool system. But most people still go with best-of-breed, as far as tools go. The idea, though, is that some people will eventually go with more of a “you can’t get fired for buying IBM” approach, where you buy everything from a single vendor.
“With the advances in [ML-enabled software development], we believe that programming should become a semiautomated task in which humans express higher-level ideas and detailed implementation is done by the computers themselves.”
Up to this point, cloud computing has been the key to unlocking rapid software development. Access to the very best tools and resources has increased developers’ ability to build exceptional software, and to do it quicker than ever. Those same capabilities, however, have also led to dramatic complexity in the development process. DevOps is the solution to the problems that speed without infrastructure created.
The future of DevOps is the future of software development. Last year, VCs invested $37 billion into companies building developer tooling. The demand for software, and for software developers to build that software, is only going to increase. A massive opportunity exists for the platforms that can become the central building blocks of that increasingly important process.