This thread seems to reflect how the HN audience has shifted: fewer commenters know what `ssh example.com` does, and more are concerned about the privacy policy.
To provide more context: our original plan was to launch GA on March 22nd. However, we moved the date to April 15th because a few projects required additional time to complete. Last week we saw Supabase's announcement, but we didn't know what it was about and decided not to move our date again.
The blog post does mention memory limitations, but we could elaborate more on the fact that the index is entirely in-memory and clarify the user-visible consequences of this fact. We'll edit the post for better clarity.
Yes, exploring HTTP/3 is on the roadmap. One of the learnings for us was that V8 is quite good at caching open TCP connections: for consecutive queries, the same TCP+TLS session is reused. Even when queries are sent from unrelated V8 isolates.
In that case, HTTP/3 might actually perform worse for you than HTTP/2, unless V8 caches QUIC conns as well.
The major advantage you'd get with HTTP/3 and QUIC is fewer round trips to start a conn, and no head-of-line blocking like with TCP (which might be irrelevant if the Pg protocol doesn't use simultaneous streams, dunno).
The disadvantage is you'd have to support QUIC in your proxy (and gmac said your server lib doesn't support it yet.)
Don't get hung up on HTTP/3 as the latest and greatest, HTTP/2 is probably still an improvement for you.
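The head-of-line blocking point above can be sketched with a toy model (not real networking — just the delivery rules). TCP exposes one ordered byte stream, so a gap blocks everything behind it, even data for an unrelated logical stream; QUIC orders each stream independently:

```javascript
// Three packets; seq 1 is "lost" and hasn't arrived yet.
const packets = [
  { seq: 0, stream: 'A', data: 'a0' },
  { seq: 2, stream: 'B', data: 'b0' }, // arrives before seq 1
  { seq: 1, stream: 'A', data: 'a1' }, // the late retransmission
];

// TCP: one ordered stream; delivery stops at the first gap.
function tcpDeliver(pkts) {
  const buffer = new Map(pkts.map((p) => [p.seq, p]));
  const delivered = [];
  let next = 0;
  while (buffer.has(next)) { delivered.push(buffer.get(next).data); next++; }
  return delivered;
}

// QUIC: each stream is ordered independently; stream B is not
// blocked by the missing packet on stream A.
function quicDeliver(pkts) {
  const byStream = {};
  for (const p of pkts) (byStream[p.stream] ??= []).push(p);
  return Object.values(byStream).flatMap((ps) =>
    ps.sort((x, y) => x.seq - y.seq).map((p) => p.data));
}

// Suppose only the first two packets have arrived so far:
const arrived = packets.slice(0, 2);
console.log(tcpDeliver(arrived));  // [ 'a0' ]  -- b0 stuck behind the gap
console.log(quicDeliver(arrived)); // [ 'a0', 'b0' ]
```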
> Even when queries are sent from unrelated V8 isolates.
Heh, I've definitely written a few AWS Lambdas that exploited this sort of behavior. "Sure, this resource might not exist. But I'm going to check, just in case..."
Are you saying that V8 is reusing TCP connections between different browser tabs and websites? Seems like a timing/side-channel attack could be done with that.
There is a second network interface attached to each pod, and those interfaces are connected to a vxlan-based overlay network. So during the migration the VM can keep the same IP address (ARP will take care of traffic finding its destination). That is the simplest approach, but it has its own downsides when the overlay network grows big, and it is painful to set up with some cloud SNI.
A few other options are:
* use some SDN instead of vxlan
* use QUIC instead of TCP in the internal network -- with QUIC we can change endpoint IP addresses on a live connection
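The reason QUIC can survive an address change is that it identifies a connection by a connection ID carried in each packet, not by the IP/port 4-tuple like TCP. A toy demultiplexer (illustration only, not a QUIC implementation; IDs and addresses are invented):

```javascript
const connections = new Map();

// Route each incoming packet by its connection ID; the sender's
// current IP is irrelevant for the lookup, we just record it.
function handlePacket(srcIp, packet) {
  let conn = connections.get(packet.connId);
  if (!conn) {
    conn = { connId: packet.connId, peerIp: srcIp, received: [] };
    connections.set(packet.connId, conn);
  }
  conn.peerIp = srcIp; // path migration: remember the new address
  conn.received.push(packet.payload);
  return conn;
}

// Same connection ID from two different source IPs -> one connection,
// which is what lets a migrated VM keep its live connections.
handlePacket('10.0.0.5', { connId: 'abc', payload: 'hi' });
const conn = handlePacket('10.0.0.9', { connId: 'abc', payload: 'again' });
console.log(conn.peerIp, conn.received.length); // 10.0.0.9 2
```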
> The article ends with the statement that you are pretty much done here for now. Would optimizing your TLS termination not maybe offer some more ways to speed this up? Or is that also already fully optimized?
No, we don't do early termination yet, but it makes sense to try that too. Here we mostly concentrated on how far we could get in terms of reducing the number of round trips.
> I did not realize before that your approach with Websockets actually meant that there was no application/client side pooling of connections. What made you choose this approach over an HTTP API (as for example PlanetScale did) anyway?
To keep compatibility with current code using postgres.js.
Another angle here is compatibility. With our current driver, one can use the ordinary node-postgres package, as we can substitute TCP-related calls with WebSocket calls at build time. That makes it possible to use all the packages that require node-postgres, like Prisma, Zapatos, etc.
Gotcha. I drew my conclusion from the mentioned package.json. Now I wonder: why did you decide to go with Rust for the query engine? Do you compile it to Wasm?
It made sense at the time. We not only support Node, but also have community clients in Go, Python, and Rust. Right now we are moving more and more parts from the Node-API library or binary engine (the two variants we have supported until now) over to Wasm modules, where possible, for our Node/TS/JS client. Socket/TCP connections themselves are unfortunately not supported yet, so this will only be partial. And maybe there is also a future where we support Node-based database drivers. As the blog post we are commenting on shows, sometimes we have to combine the weirdest things to achieve our goal.
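For anyone unfamiliar with the mechanics: a function compiled to Wasm (from Rust or anything else) loads into Node/JS like this. The bytes below are a hand-assembled module exporting `add(a, b)`, standing in for a real compiled engine (illustration only):

```javascript
// Minimal Wasm module: (func (export "add") (param i32 i32) (result i32))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body
]);

WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});
```

The appeal is that the same compiled module runs in Node, browsers, and edge runtimes, which is exactly the portability argument for moving engine parts to Wasm.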
While you're here, I want to mention that I never understood why Prisma couples the query builder (which IMO is really the best part of Prisma) so tightly to the query engine / DB driver (which is not differentiated at all as a product). I have 1000 ideas for really insanely cool things we could do if Prisma could just spit out SQL query strings, or at least connect to a custom driver.
Prisma often does multiple queries to get some specific data, with the first only leading to the second, and so on. But we might very well get there one day. You are not the only one asking for this: https://github.com/prisma/prisma/issues/5052