When it comes to hardware, there was not a lot of big news coming out of the Amazon Web Services re:Invent 2022 conference this week. And to be specific, …
You can see the benefits of good API design practices all across the API ecosystem, but also across all of the web and mobile applications we use each day. If you suffer from the same condition I do, then you see these frustrating signals across the landscape. However, one of the most critical ways in which poor API design affects how we deliver technology is the design of our API gateways' own APIs. Like every other API, the design of an API gateway's API provides a very honest take on what capabilities that gateway offers. Let me show you with two separate examples:
I’ve looked at all the gateways out there today, and I’ve been studying API management since 2010. While there are numerous characteristics that define the next generation of API gateways, in my mind the number one thing that will determine their future is support for API contracts: how they adopt OpenAPI, AsyncAPI, JSON Schema, GraphQL, and Protocol Buffers. You see some of what I am talking about in the way that GraphQL and gRPC APIs operate, but it is starker when you look at how gateways support OpenAPI today. More specifically, whether they support only one of the following areas, or the full spectrum.
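To make "API contract" concrete, here is a minimal OpenAPI 3.0 sketch of the kind of artifact a contract-aware gateway could ingest to drive routing, request validation, and documentation. The service, paths, and schema are hypothetical, not taken from the article:

```yaml
openapi: 3.0.3
info:
  title: Orders API          # hypothetical example service
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      operationId: getOrder
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: A single order
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Order"
components:
  schemas:
    Order:
      type: object
      required: [id, status]
      properties:
        id:
          type: string
        status:
          type: string
          enum: [pending, shipped, delivered]
```

A gateway with full-spectrum contract support can derive its runtime behavior from a document like this, rather than treating the spec as documentation only.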
Are database migrations good? Probably, but are we using them in the right way? In this post, I write about how I think about migrations and what I'm doing to mitigate some of their shortcomings.
Feature flags can give more control over an application. Explore LaunchDarkly's new data about how feature flags are being used in this episode of The New Stack Makers podcast. #featuremanagement #featureflags #DevOps
System Overview: GPS is operated by the United States Air Force, with global coverage available since April 1995. The space segment includes 32 satellites arranged into 6 orbital planes, each with a minimum of …
P99 CONF: Sharpening our Axes to Battle Latency Misery
Engineers spoke of their hard-fought performance battles and lessons learned across infrastructure, programming languages, and performance measurement. #SRE #observability #P99CONF
While working on a demo for processing change events from Postgres with Apache Flink, I noticed an interesting phenomenon: a Postgres database which I had set up for that demo on Amazon RDS ran out of disk space. The machine had a disk size of 200 GiB, which was fully used up in the course of less than two weeks.

Now, a common cause for this kind of issue is replication slots which are not advanced: in that case, Postgres will hold on to all WAL segments after the latest log sequence number (LSN) which was confirmed for that slot. Indeed, I had set up a replication slot (via the Decodable CDC source connector for Postgres, which is based on Debezium) and had then stopped that connector, causing the slot to become inactive. The problem, though, was that I was sure there was no traffic in that database whatsoever! What could cause a WAL growth of ~18 GB/day then?
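As an aside for anyone debugging a similar situation: a quick way to check whether an inactive replication slot is pinning WAL is to query the standard pg_replication_slots catalog. A minimal sketch, valid on Postgres 10 and later:

```sql
-- For each replication slot, show whether it is active and how much WAL
-- the server is retaining on its behalf (distance from the current write
-- position back to the slot's restart_lsn).
SELECT slot_name,
       active,
       restart_lsn,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots;
```

If retained_wal keeps growing for an inactive slot even though the database looks idle, something is still writing WAL, which is precisely the puzzle posed above.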