Hacker News | kelvich's comments

This thread seems to reflect how the HN audience has shifted: fewer commenters know what `ssh example.com` does, and more are concerned about the privacy policy.


> I'm not going to SSH to a random server.

Opening a random website likely exposes you to more risk.


Likely? Definitely.


To provide more context: our original plan was to launch GA on March 22nd. However, we decided to move the date to April 15th because a few projects needed more time to complete. Last week we saw the Supabase announcement, but we didn't know what it was about and decided not to move our date again.


Who is “we”? Who are you?

Your HN profile doesn’t tell.


(PG Hacker @ Neon)

Stas Kelvich is one of my bosses, and one of the founders of Neon.


Thank you, point taken.

The blog post does mention memory limitations, but we could elaborate more on the fact that the index is entirely in-memory and clarify the user-visible consequences of this fact. We'll edit the post for better clarity.


OK. I should also mention that none of the issues I raised are consequences of the in-memory nature of the index. Both are implementation issues.


Yes, exploring HTTP/3 is on the roadmap. One of our learnings was that V8 is quite good at caching open TCP connections: for consecutive queries, the same TCP+TLS session is reused, even when the queries are sent from unrelated V8 isolates.


In that case, HTTP/3 might actually perform worse for you than HTTP/2, unless V8 caches QUIC connections as well.

The major advantage you'd get with HTTP/3 and QUIC is fewer round trips to start a conn, and no head-of-line blocking like with TCP (which might be irrelevant if the Pg protocol doesn't use simultaneous streams, dunno).

The disadvantage is you'd have to support QUIC in your proxy (and gmac said your server lib doesn't support it yet.)

Don't get hung up on HTTP/3 as the latest and greatest; HTTP/2 is probably still an improvement for you.

> Even when queries are sent from unrelated V8 isolates.

Heh, I've definitely written a few AWS Lambdas that exploited this sort of behavior. "Sure, this resource might not exist. But I'm going to check, just in case..."


Are you saying that V8 is reusing TCP connections between different browser tabs and websites? Seems like a timing/side-channel attack could be done with that.


No, this is in a serverless environment running on V8 isolates (specifically, Vercel Edge Functions).


Oh I see. Not Chrome v8


There is a second network interface attached to each pod, and those interfaces are connected to a vxlan-based overlay network. So during migration the VM can keep the same IP address (ARP will take care of traffic finding its destination). That is the simplest approach, but it has its own downsides when the overlay network grows big, and it's painful to set up with some cloud CNIs.

A few other options are:

* use some SDN instead of vxlan

* use QUIC instead of TCP in the internal network -- with QUIC we can change endpoint IP addresses on a live connection
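For concreteness, an overlay of the kind described above can be sketched with iproute2. All interface names, VNIs, and addresses below are illustrative assumptions, not the actual setup:

```shell
# Hedged sketch: a vxlan overlay attached to the pod's second NIC (eth1 here).
ip link add vxlan0 type vxlan id 42 dev eth1 dstport 4789
ip addr add 10.200.0.1/24 dev vxlan0
ip link set vxlan0 up
# Default forwarding entry pointing at a remote VTEP (example IP);
# after a VM migrates, ARP over the overlay lets traffic find its
# new destination without the VM changing its address.
bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 203.0.113.2
```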


Very cool! Thanks :)

QUIC seems like a solid simplification for sure.


The main idea was one branch per pull request, to better test migrations on fresh data. But one branch per dev, or one branch per dev per PR, will also work.
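In CI, the per-PR flow could look roughly like this. The `neonctl` subcommands and flags here are assumptions for illustration, not a verified recipe:

```shell
# Hypothetical per-PR branch workflow; command names and flags are assumptions.
neonctl branches create --name "pr-${PR_NUMBER}"            # fork from the parent branch
DATABASE_URL=$(neonctl connection-string "pr-${PR_NUMBER}")  # point the app at the branch
export DATABASE_URL
npm run migrate && npm test                                  # exercise migrations on fresh data
neonctl branches delete "pr-${PR_NUMBER}"                    # clean up when the PR closes
```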


Nice - I mean, it seems awesome to me, especially the copy-on-write billing logic for this sort of dev style.


> The article ends with the statement that you are pretty much done here for now. Would optimizing your TLS termination not maybe offer some more ways to speed this up? Or is that also already fully optimized?

No, we don't do early termination yet, but it makes sense to try it out too. Here we mostly concentrated on how far we could get in terms of reducing the number of round trips.

> I did not realize before that your approach with Websockets actually meant that there was no application/client side pooling of connections. What made you choose this approach over an HTTP API (as for example PlanetScale did) anyway?

To keep compatibility with current code using postgres.js.


> To keep compatibility with current code using postgres.js.

That makes a lot of sense - not needing an additional driver/client package is indeed a good point. Any plans to add an HTTP-based API anyway?


Potentially. We'll follow what our users ask us for.


Feel free to try it out on https://neon.tech/


Hey, any plans to let us know about pricing? Thanks!


wow, you guys rock!


Another angle here is compatibility. With our current driver, one can use the ordinary node-postgres package, as we can substitute TCP-related calls with WebSocket calls at build time. That makes it possible to use all the packages that require node-postgres, like Prisma, Zapatos, etc.
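As an illustration of that build-time substitution (the bundler choice and shim module name are our assumptions, not necessarily what Neon actually uses), an esbuild-style alias could look like this:

```javascript
// Hypothetical build config: alias Node's 'net' module to a WebSocket-backed
// shim at bundle time, so node-postgres's TCP calls become WebSocket calls.
// './websocket-net-shim.js' is a made-up module name for illustration.
const config = {
  entryPoints: ['app.js'],
  bundle: true,
  alias: {
    net: './websocket-net-shim.js', // shim exposing a net.Socket-like API
  },
  outfile: 'dist/app.js',
};

// To actually build: require('esbuild').build(config)
module.exports = config;
```

The shim only needs to implement the small surface of `net.Socket` that node-postgres touches (connect, write, data/close events), which is what makes the swap tractable.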


Prisma does not actually use node-postgres, but a Rust PostgreSQL driver. Prisma will not be able to use the Neon serverless driver.



That is only used in tests :) The query engine uses this: https://github.com/prisma/quaint/blob/6532d69b5aec007ad06ac6...

(I work at Prisma, could have mentioned that earlier)


Gotcha. I drew my conclusion from the package.json you mentioned. Now I wonder: why did you decide to go with Rust for the query engine? Do you compile it to Wasm?


It made sense at the time. We don't only support Node; we also have community clients in Go, Python, and Rust. Right now we are moving more and more parts from a Node-API library or binary engine (the two variants we have supported until now) over to Wasm modules where possible for our Node/TS/JS client. Socket/TCP connections themselves are unfortunately not supported yet, so this will only be partial. And maybe there is also a future where we support Node-based database drivers. As the blog post we are commenting on shows, sometimes we have to combine the weirdest things to achieve our goal.


While you're here, I want to mention that I never understood why Prisma couples the query builder (which IMO is really the best part of Prisma) so tightly to the query engine / DB driver (which is not differentiated at all as a product). I have 1000 ideas for really insanely cool things we could do if Prisma could just spit out SQL query strings, or at least connect to a custom driver.


Prisma often does multiple queries when fetching some specific data, the first one only leading to the second, and so on. But we might very well get there one day. You are not the only one asking for this: https://github.com/prisma/prisma/issues/5052

