Pipelining in psql (PostgreSQL 18)
postgresql.verite.pro
151 points by tanelpoder 17 hours ago
I’m pretty sure the reasoning and conclusion are way off in explaining the speed-up:
> The network is better utilized because successive queries can be grouped in the same network packets, resulting in less packets overall.
> the network packets are like 50 seater buses that ride with only one passenger.
The performance improvement is not likely to be because you're sending larger packets, since most queries transfer very little data, and the benchmark this conclusion is drawn from is definitely transferring near zero data. The speed-up comes from not waiting on the round-trip acknowledgment of one batch before executing subsequent queries; the number of network packets is irrelevant.
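For reference, this is roughly what that looks like with the new PostgreSQL 18 psql meta-commands (a sketch based on my reading of the docs; the table and column names are made up): the client queues the queries and only waits once, at the end.

    -- queue three parameterized INSERTs without waiting for individual replies
    \startpipeline
    INSERT INTO t(n) VALUES ($1) \bind 1 \sendpipeline
    INSERT INTO t(n) VALUES ($1) \bind 2 \sendpipeline
    INSERT INTO t(n) VALUES ($1) \bind 3 \sendpipeline
    \endpipeline

Nothing blocks until \endpipeline, which sends a single Sync and collects the results for all three statements, so the client pays one round trip instead of three.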
I’m not sure that’s it either. PostgreSQL has a feature — don’t remember what it’s called — where multiple readers can share a serial table scan.
Suppose client A runs “select * from foo”, which has a thousand records. It can start streaming those results starting with row 1. Now suppose it’s on row 500 when client B runs the same query. Instead of starting over for B, it can start streaming results to B starting at row 501. Each time it reads a row, now it sends that to both clients.
Now when it finishes with row 1000, client A’s query is done. It starts back over with B on row 1 and continues through row 500.
Hypothetically, you can serve N clients with a total of 2 table scans if they all arrive before the first client’s scan is finished.
So that’s the kind of magic where I think this is going to shine. Queue up a few queries and it’s likely that several will be able to share the same underlying work.
That isn't what pipelining is about in general, nor is it relevant to this benchmark, which is an insertion workload. The performance benefit observed is literally the ability to start executing the second request even though the response to the first one hasn't come back yet.
It's also not true pipelining, since you can't send a follow-up request that depends on the results of the previous, still-incomplete request (e.g. look at Cap'n Proto promise pipelining). As such, the benefit in practice is more limited, especially if you instead use connection pooling and send the requests over different connections in the first place; I'd expect very similar performance numbers for the benchmark, assuming you have enough connections open in parallel to keep the DB busy.
> I’m not sure that’s it either. PostgreSQL has a feature — don’t remember what it’s called — where multiple readers can share a serial table scan.
Maybe referring to synchronize_seqscans?
https://www.postgresql.org/docs/current/runtime-config-compa...
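It's a plain configuration parameter, on by default, so it's easy to check or toggle per session if you want to compare behavior:

    SHOW synchronize_seqscans;       -- 'on' by default
    SET synchronize_seqscans = off;  -- disable synchronized scans for this session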
I feel pipelines (or batches) are slept upon. So many applications use interactive transactions to 'batch' multiple queries, waiting for the result of each individual query. Network round trips are the biggest contributor to latency in most applications, and this makes it so much worse. Most Postgres drivers don't even support batching, at least in the JavaScript world.
In many cases it would be good to forgo interactive transactions and instead execute all read-only queries at once, and then another write batch after doing processing on the obtained data. That way, the number of roundtrips is bounded. There are some complications of course; for example, dealing with concurrency becomes more complicated. I'm currently prototyping a library exploring these ideas.
Batching in general is slept upon. So many queue systems support batch injection, and I have seen countless cases where a poorly performing system is “fixed” simply by moving away from incremental injection. This stuff is usually on page two of the docs, which explains why it’s so overlooked…
My guess is that this is because our default way of expressing code execution is the procedure call, meaning the default unit of code that we can name and call later is the procedure, which needs to execute synchronously. That's what our programming languages support directly, and that's just how "things are done".
Everything else both feels weird and genuinely is awkward to express, because our programming languages don't really allow us to express it well. And usually, by the time we figure out that we need a more reified, batch-oriented mechanism (the one on page 2), it is too late: the procedural assumptions have been deeply baked into the code we've written so far.
See Can programmers escape the gentle tyranny of call/return? by yours truly.
https://www.hpi.uni-potsdam.de/hirschfeld/publications/media...
This analysis makes sense to me, but at the same time: we’re already switching between procedural and declarative when switching from [mainstream language] to SQL. This impedance mismatch (or awkwardness) is already there, might as well embrace it.
We are switching...but how and at what cost? We put SQL programs as strings into our other programs, often dynamically constructing them using procedure calls and then dispatching them using yet more procedure calls.
If that weren't yikes enough, SQL injection bugs used to be the #1 exploited security vulnerability. It's gotten a little better, partly because of greater use of ORMs.
ORMs?
https://blog.codinghorror.com/object-relational-mapping-is-t...
> It's gotten a little better, partly because of greater use of ORMs.
No, just use prepared statements.
I would expect most drivers to support (anonymous) stored procedures so you can batch/pipeline multiple queries into one statement to be executed by the database. Probably more a problem of developers not knowing how to use databases properly, not so much a limitation of technology.
People don't do that because when you're writing insert/update queries, you tend to want to write logic based on the value of intermediate results, and also you can't return tabular data from a DO block (it operates as a function returning void).
You also can't use parameterized values like $1, $2.
It seems more niche than you're suggesting. Though I wish people would write app layer pseudocode to demonstrate what they are referring to.
You don't even need driver support; you can use https://www.postgresql.org/docs/current/sql-do.html
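A rough sketch of that, with made-up table names, and keeping in mind the limitations mentioned above (no $1/$2 binding, no result rows):

    DO $$
    BEGIN
      -- several statements run server-side in a single round trip
      INSERT INTO orders (customer_id, total) VALUES (42, 10.00);
      UPDATE customers SET order_count = order_count + 1 WHERE id = 42;
      -- values have to be interpolated into the SQL text, since a DO block
      -- takes no parameters and cannot return rows to the client
    END
    $$;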
I have started to use batching with the Go pgx driver for simple transactions of multiple inserts. Since a batch is automatically a transaction, it’s actually fewer lines of code.
Most of my big clients have about 10 intermediaries between them and the data: the antivirus, the browser, the VPN, the company proxy, the API gateway, their authentication layer, the virtualization layer, the application server, the microservice it requests and whatever data source this one requests.
So unless you are a lean startup, the reasons many products are horribly slow are very low-hanging fruit that nobody is ever going to bother picking.
If you ever reach the point where pipelining is what gives you a perf boost, your app was already in a nice state.
It's so nice, on my personal projects, to be able to code on a bare-metal server where my monolith has direct access to my Postgres instance.
I must confess, the Python driver pg8000, which I maintain, doesn't support pipeline mode. I didn't realise it existed until now, and nobody has ever asked for it. I've created an issue for it: https://codeberg.org/tlocke/pg8000/issues/174
I developed a JS pg client that uses pipeline mode by default: https://github.com/stanNthe5/pgline
I really want to use pipelining for our "em.flush" of sending all INSERTs & UPDATEs to the db as part of a transaction, b/c my initial prototyping showed a 3-6x increase:
https://joist-orm.io/blog/initial-pipelining-benchmark/
If you're not in a transaction, afaiu pipelining is not as applicable/useful b/c any SQL statement failing in the pipeline fails all other queries after it, and imo it would suck for separate/unrelated web requests that "share a pipeline" to have one request fail the others -- but for a single txn/single request, these semantics are what you expect anyway.
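One nuance, as far as I understand it: a failing statement only aborts the pipeline up to the next synchronization point, so failures can be scoped with the sync meta-command (a sketch, untested):

    \startpipeline
    INSERT INTO t(n) VALUES ($1) \bind 1 \sendpipeline
    -- a failure above only skips queries up to this sync point
    \syncpipeline
    INSERT INTO t(n) VALUES ($1) \bind 2 \sendpipeline
    \endpipeline

Each sync point also ends the implicit transaction though, so for the single-transaction flush you describe, the all-or-nothing behavior is presumably what you want anyway.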
Unfortunately, in the TypeScript ecosystem the node-pg package/driver doesn't support pipelining yet; instead, this "didn't quite hit mainstream adoption and now the author is AWOL" driver does: https://github.com/porsager/postgres
I've got a branch to convert our TypeScript ORM to postgres.js solely for this "send all our INSERTs/UPDATEs/DELETEs in parallel" perf benefit, and have some great stats so far:
https://github.com/joist-orm/joist-orm/pull/1373#issuecommen...
But it's not "must have" for us atm, so haven't gotten time to rebase/ship/etc...hoping to rebase & land the PR by eoy...
I'm right here - what are you missing?
Oh hello! Very happy to hear from you, and even happier to be wrong about your "AWOL-ness" (since I want to ship postgres.js to prod). :-)
My assumption was just from, afaict, the general lack of triage on GitHub issues, i.e. for a few needs we have like tracing/APM, and then also admittedly esoteric topics like this stack trace fixing:
https://github.com/porsager/postgres/issues/963#issuecomment...
Fwiw I definitely sympathize with issue triage being time-consuming and sometimes a pita, e.g. where a nontrivial share (if not the majority) of issues are from well-meaning but maybe naive users asking for free support or filing incorrect/distracting issues.
I don't have an answer, but just saying that's where my impression came from.
Thanks for replying!
Thanks a lot. You're spot on about issue triage etc. I haven't had the time to keep up, but I read all issues when they're created and deal with anything critical. I'm using Postgres.js myself in big deployments and know others are too. The metrics branch should be usable, and I could probably find time to get that part released. It's been ready for a while. I do have some important changes in the pipeline for v4, but won't be able to focus on it until December.
That was a pretty nasty assumption you made about them though: That they're MIA because they're upset that their pet project isn't as popular as they'd like.
Jeez.
That said, I hope node-postgres can support this soon. As it stands, every single query you add to a transaction adds a serial network roundtrip, which is devastating not just for execution time but for how long you're holding any locks inside the transaction.
ActiveRecord in Rails has an async mode, which allows you to queue several requests and read the results later. But those will go through the connection pool, and will be executed in separate connections, separate transactions, and separate PostgreSQL server processes. I wonder if using pipelining instead, at the driver level (app code would be the same), would be a better approach in general, or at least easier on the db instance.
ah, of course it has been discussed already: https://discuss.rubyonrails.org/t/proposal-adding-postgres-p...
Yes, the need isn't exactly the same. The `load_async` use case is for known slow-ish queries, for which you want actual parallelization on the server.
Since that discussion on the forum, I talked more about pipelining with some other core devs, and that may happen in some form or another in the future.
The main limiting factor is that most of the big Rails contributors work with MySQL, not Postgres, and MySQL doesn't really have proper pipelining support.
I wish the author explained the difference between pipelines and multi-statement queries
There are no multi-statement queries in the binary (extended) protocol, where you get things like native cursors/pagination to efficiently iterate over result rows, and where you get the true parameter binding that is inherently robust against SQL injection.
It has a separate client-to-server message (Sync) that forces the previous ones to complete, since it makes the otherwise-asynchronous (because pipelined) error reporting forcefully serial.
Other than that, which is arguably not needed for queries that don't expect errors often enough to need early/eager exception throwing during the course of a transaction, the protocol is naturally pipelined: you can just fire two or more statements' worth of parameter binding and result fetching back to back without blocking on anything.
The author did a good job demonstrating query pipelining. For multi-statement queries, one can read about them in the Postgres docs here: https://www.postgresql.org/docs/current/protocol-flow.html#P...
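For a quick contrast: psql could already send several statements as a single multi-statement query using \;, where everything travels in one simple-protocol request (one round trip, but no parameter binding and a single implicit transaction), for example:

    -- both SELECTs are sent to the server in a single Query message
    SELECT 1 \; SELECT 2;

Pipelining instead keeps separate extended-protocol queries, each with its own parameters and result set, in flight at the same time.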
Sad I can't use this in Elixir. Looks pretty sweet.
I haven't had to deal with this problem until recently, and it seems like an obvious scalability issue, so I'm sure I'm not the only one to have hit it.
How do I handle, say, 100K concurrent transactions in an OLTP database? Here are my learnings on what makes this difficult:
- a transaction has a one-to-one mapping with a connection
- a connection can only process one transaction at a time, so pooling isn't going to help.
- database connections are "expensive"
- a client can open at most ~65k connections, as otherwise it would run out of ports.
100k connections isn't that crazy; say you have 100k concurrent users and each one needs a transaction to manage its independent state. Transactions are useful as they enforce consistency.