UUIDs have downsides
Truly random UUIDs don't sort well (and this has implications for indexes)
They take up more space than sequential ids (space being your cheapest resource)
But I've found those to be far outweighed by the upsides:
You don't need to coordinate with the database to produce one.
They are safe to share externally.
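For example, in Postgres 13 and newer, gen_random_uuid() is built in (the users table here is just for illustration):

```sql
create table users (
  -- the database can mint ids on its own, and clients are
  -- free to generate and supply their own uuids too
  id uuid primary key default gen_random_uuid(),
  name text not null
);
```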
Give everything created_at and updated_at
It's not a full history, but knowing when a record was created or last changed is a useful breadcrumb when debugging. It's also something you can't retroactively get unless you were recording it.
So just always slap a created_at and updated_at on your tables. You can maintain updated_at automatically with a trigger.
You need to create the trigger for each table, but you only need to create the function once.
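A minimal sketch of that setup, reusing the illustrative users table:

```sql
-- one function, shared by every table
create or replace function set_updated_at() returns trigger as $$
begin
  new.updated_at := now();
  return new;
end;
$$ language plpgsql;

-- one trigger per table
create trigger users_set_updated_at
  before update on users
  for each row
  execute function set_updated_at();
```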
on update restrict on delete restrict
When you make a foreign key constraint on a table, always mark it with on update restrict on delete restrict.
This makes it so that if you try to delete a referenced row, you will get an error. Storage is cheap; recovering data is a nightmare. Better to error than do something like cascade.
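Sketched out against the same hypothetical users table:

```sql
create table orders (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null
    references users (id)
      on update restrict on delete restrict
);

-- deleting a user that still has orders now fails loudly:
--   ERROR:  update or delete on table "users" violates foreign key constraint
```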
Use schemas
By default, every table in Postgres will go into the "public" schema. This is fine, but you are missing out if you don't take advantage of your ability to make new schemas.
Schemas work as namespaces for tables and for any moderate to large app you are going to have a lot of tables. You can do joins and have relationships between tables in different schemas so there isn't much of a downside.
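A quick sketch, with a made-up billing schema:

```sql
create schema billing;

create table billing.invoices (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null
    references public.users (id)
      on update restrict on delete restrict,
  total_cents bigint not null
);

-- cross-schema joins work exactly like same-schema joins
select u.name, i.total_cents
  from billing.invoices i
  join public.users u on u.id = i.user_id;
```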
Enum Tables
There are a lot of ways to make "enums" in SQL. One is to use the actual "enum types," another is to use a check constraint.
The pattern introduced to me by Hasura was enum tables.
Have a table with some text value as a primary key and make columns in other tables reference it with a foreign key.
This way you can insert into a table to add more allowed values or attach metadata like a comment to explain what each value means.
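For example, sketching a hypothetical document_status enum table:

```sql
create table document_status (
  value text primary key,
  comment text not null
);

insert into document_status (value, comment) values
  ('submitted', 'Received, waiting for a reviewer'),
  ('in_review', 'A reviewer is actively looking at it'),
  ('approved',  'Accepted, no further action needed'),
  ('rejected',  'Denied, see review notes');

create table documents (
  id uuid primary key default gen_random_uuid(),
  status text not null
    references document_status (value)
      on update restrict on delete restrict
);
```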
Mechanically name join tables
Sometimes there are sensible names to give "join tables" - tables which form the basis for "many to many" relationships between data - but often there aren't. In those cases, don't hesitate to just concatenate the names of the tables you are joining between.
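For example, a hypothetical many-to-many between users and groups:

```sql
create table users_groups (
  user_id uuid not null references users (id)
    on update restrict on delete restrict,
  group_id uuid not null references groups (id)
    on update restrict on delete restrict,
  primary key (user_id, group_id)
);
```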
Represent statuses as a log
It is very tempting to represent the status of something as a single column. You submit some paperwork and it has a status of submitted. Someone starts to look at it and it transitions to in_review. From there maybe it's rejected or approved.
There are two problems with this:
You might actually care about when it was approved, or by whom.
You might receive this information out-of-order.
Webhooks are a prime example of the 2nd situation. There's no way in the laws of physics to be sure you'll get events in exactly the right order.
To handle this you should have a table where each row represents the status of the thing at a given point in time. Instead of overloading created_at or updated_at for this, have an explicit valid_at column that records the point in time the status applies to.
Just having an index on valid_at can work for a while, but eventually your queries will get too slow. There are a lot of ways to handle this, but the one we've found that works the best is to have an explicit latest column with a cheeky unique index and trigger to make sure that only the row with the newest valid_at is the latest one.
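Here's a sketch of the whole arrangement, reusing the hypothetical documents and document_status tables from above. Treat the trigger logic as a starting point rather than gospel; it leans on the unique index to catch concurrent inserts for the same document rather than handling them gracefully:

```sql
create table document_statuses (
  id uuid primary key default gen_random_uuid(),
  document_id uuid not null references documents (id)
    on update restrict on delete restrict,
  status text not null references document_status (value)
    on update restrict on delete restrict,
  valid_at timestamptz not null,
  latest boolean not null default true
);

-- at most one "latest" row per document
create unique index document_statuses_latest_idx
  on document_statuses (document_id)
  where latest;

create or replace function document_statuses_set_latest()
returns trigger as $$
begin
  -- demote the current latest row if the incoming one is newer
  update document_statuses
     set latest = false
   where document_id = new.document_id
     and latest
     and valid_at <= new.valid_at;

  -- if a latest row survived, the incoming row arrived out of order
  -- and must not be marked latest
  if exists (
    select 1
      from document_statuses
     where document_id = new.document_id
       and latest
  ) then
    new.latest := false;
  end if;

  return new;
end;
$$ language plpgsql;

create trigger document_statuses_set_latest
  before insert on document_statuses
  for each row
  execute function document_statuses_set_latest();
```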
Mark special rows with a system_id
It's not uncommon to end up with "special rows." By this I mean rows in a table that the rest of your system will rely on the presence of to build up behavior.
All rows in an enum table are like this, but you will also end up with special rows mixed into tables of otherwise normal, generated-during-the-course-of-normal-use rows. Give these a special system_id.
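For example, with a hypothetical accounts table:

```sql
-- a nullable, unique system_id tags the rows your code depends on;
-- ordinary rows just leave it null
create table accounts (
  id uuid primary key default gen_random_uuid(),
  system_id text unique,
  name text not null
);

insert into accounts (system_id, name)
  values ('house', 'House Account');

-- application code looks the row up by its stable handle, not its uuid
select id from accounts where system_id = 'house';
```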
Use views sparingly
Views are amazing and terrible.
They are amazing in their ability to wrap up a relatively complex or error-prone query into something that looks basically like a table.
They are terrible in that removing obsolete columns requires a drop and recreation, which can become a nightmare when you build views on views. The query planner also seems to have trouble seeing through them in general.
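You can see the first problem directly (user_summaries is a made-up view over the earlier users table):

```sql
create view user_summaries as
  select id, name, created_at from users;

-- shrinking the column list in place fails:
--   ERROR:  cannot drop columns from view
create or replace view user_summaries as
  select id, name from users;

-- instead you must drop and recreate, and a cascading drop
-- takes any views built on this one down with it
drop view user_summaries cascade;
create view user_summaries as
  select id, name from users;
```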
So do use views, but only as many as you need, and be very wary of building views on views.
JSON Queries
You might have heard that Postgres "supports JSON." This is true, but I had mostly heard it in the context of storing and querying JSON. If you want to store some blob of info, slap a jsonb column on one of your tables.
That is neat, but I've gotten way more mileage out of using JSON as the result of a query. This has definite downsides like losing type information, needing to realize your results all at once, and the overhead of writing into json.
But the giant upside is that you can get all the information you want from the database in one trip, no cartesian product nightmares or N+1 problems in sight.
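A sketch of what that looks like, again with the hypothetical users and orders tables:

```sql
-- every user together with all of their orders, in a single query:
-- no cartesian blow-up from joining, no follow-up query per user
select u.id,
       u.name,
       coalesce(
         (select json_agg(json_build_object('id', o.id))
            from orders o
           where o.user_id = u.id),
         '[]'::json
       ) as orders
  from users u;
```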