Declarative Schemas for simpler database management

supabase.com

68 points by kiwicopple a day ago


xyzzy_plugh - a day ago

This is exactly backwards. You should have declarative schemas, but inferring migrations is crazy. Only pain will follow.

Instead, I am a fan of doing both: either committing the resulting schema of a migration, or hand-writing it alongside the migration. Then have tests to ensure the database schema matches the expected schema after a migration.

Generating these artifacts is fine, but in TFA's case there is no chance I wouldn't inspect and possibly modify the generated "diff" migration. It's significantly easier to go the other way: write the migration and show me the resulting schema diff.
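
For illustration, a minimal sketch of that workflow (table, column, and file names are made up, not from TFA): the migration is hand-written, and the schema it produces is committed next to it so a test can compare it against a freshly migrated database.

    -- migrations/0002_add_deleted_at.sql (hand-written)
    ALTER TABLE accounts ADD COLUMN deleted_at timestamptz;

    -- schema/accounts.sql (committed expected state)
    CREATE TABLE accounts (
        id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        email      text NOT NULL UNIQUE,
        deleted_at timestamptz
    );

A CI step can then dump the schema of a migrated test database and fail the build if it diverges from the committed file.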

webster451 - 20 hours ago

I think we are getting close to the peak of "declarative"—or rather, I hope we are near the peak.

In my experience, declarative APIs are very powerful abstractions for the specific cases where finding the path to the declared state is better left to a machine. That is seldom the case - in most cases, giving the programmer control over the changes leads to better outcomes.

Kubernetes and IaC tools led the way to declarative infrastructure, and they add a ton of value. But they were also incredibly hard to build - it took many years before Kubernetes eventing and control-loop abstractions were rock solid. Most CRD-backed implementations suffer from tons and tons of bugs, and most CRDs are not declarative - they abstract away an imperative operation! I guess this is nothing new - "anything in excess is bad".

Anyway, I think an imperative approach offers much more control and predictability at a lower cost. The world is inherently imperative.

bartvk - a day ago

So, to summarize:

In the old situation, you write a CREATE TABLE statement at the start of the project. And when you add a feature, you have to write an ALTER TABLE script.

In this new situation, you just change the CREATE TABLE script, and Supabase uses migra to figure out the difference and alter the table automatically.

What's interesting is that in your SQL code, there's no longer any difference between creating a new database, and updating an existing database.
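
Roughly, with a made-up employees table (not from the article):

    -- Old situation: initial script, then a separate migration
    CREATE TABLE employees (
        id   bigint PRIMARY KEY,
        name text NOT NULL
    );
    ALTER TABLE employees ADD COLUMN start_date date;

    -- New situation: edit the declarative schema file in place...
    CREATE TABLE employees (
        id         bigint PRIMARY KEY,
        name       text NOT NULL,
        start_date date
    );
    -- ...and the diff tool infers the ALTER TABLE above for you.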

joshAg - 21 hours ago

We built something similar for the managed DB we use, but I think it's a mistake to autogenerate the migration scripts instead of autogenerating the schema from the migrations. Things like changing an enum, adding a non-null column without a default to an existing table that already has data in it, or migrating data from one representation to another (e.g., 'oh hey, we definitely shouldn't have made our users table have an fname and an lname field. let's change to full_name and preferred_name') are easily done in a migration script but hard, if not impossible, to infer from schema changes alone.
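
For instance, the fname/lname change is a few lines in a hand-written migration, but a schema diff alone would just see two columns dropped and two added, with no idea how to carry the data across (sketch only, with made-up backfill logic):

    -- Hand-written migration: add the new columns, backfill, then drop the old ones
    ALTER TABLE users ADD COLUMN full_name text, ADD COLUMN preferred_name text;
    UPDATE users SET full_name = fname || ' ' || lname, preferred_name = fname;
    ALTER TABLE users ALTER COLUMN full_name SET NOT NULL;
    ALTER TABLE users DROP COLUMN fname, DROP COLUMN lname;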

neutralino1 - 21 hours ago

It seems to me Rails has been doing this, but better, for years. It definitely keeps atomic, historical migrations, but also maintains a schema file (schema.rb or structure.sql) that can be loaded as a one-off (e.g. for mock DBs in tests).

Guillaume86 - 18 hours ago

Could this be moved into a standalone CLI tool? Is there anything supabase-specific about it? I've always wanted SSDT SQL projects for postgres (SSDT is MS's declarative schema management solution for SQL Server).

SSDT can also sync db projects (nicely organized DDL .sql files representing the schema) and databases, one way or the other. With the IDE support you can do stuff like "find all references" on a column or any other DB object, and build the project to check for errors. Linting the schema becomes possible, etc. I have a hard time when I have to go back to imperative schema management...

ucarion - a day ago

Sorry if asked and answered: can you hand-edit the generated migrations? Like, what if I want to do a create index concurrently or something?
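
For example, something like this in a generated migration file, which a diff tool wouldn't emit on its own (index and table names made up):

    -- Hand-edited: build the index without blocking writes.
    -- Note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
    CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);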

kiwicopple - a day ago

Combining this with the Postgres Language Server that was released this week, you can now execute statements directly within your IDE:

https://x.com/psteinroe/status/1907803477939527728

geocar - 21 hours ago

You're going to regret this.

The thing you need to be doing is testing your migrations, and some dumbass on your team is going to generate the migration during CI and load it into your database as a merge step, and you won't realise what a mistake this was until possibly years later.

The good news is you might be able to pay someone an enormous amount of money to unfuck things. Not good for you, I mean, obviously. Good for whoever you just bought a car.

xaas - 21 hours ago

I use ZenStack. In future releases they are moving away from supabase…

mylons - 21 hours ago

Am I missing something? What does this offer over raw SQL? It honestly looks very similar.

wmanley - 21 hours ago

See also:

“Simple declarative schema migration for SQLite”

https://david.rothlis.net/declarative-schema-migration-for-s...

Discussed previously:

https://news.ycombinator.com/item?id=31249823

Disclosure: I am the co-author of that article.