The better stack rn is Buf + Connect RPC: https://connectrpc.com/ You get all the compatibility: JSON+HTTP and gRPC, on one platform.
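
To make that concrete, here's a minimal connect-go server sketch. The greet.v1 service, its generated packages, and the module path are hypothetical placeholders:

    package main

    import (
        "context"
        "net/http"

        "connectrpc.com/connect"
        "golang.org/x/net/http2"
        "golang.org/x/net/http2/h2c"

        greetv1 "example.com/gen/greet/v1"
        "example.com/gen/greet/v1/greetv1connect"
    )

    type GreetServer struct{}

    // One handler serves gRPC, gRPC-Web, and plain JSON+HTTP clients.
    func (s *GreetServer) Greet(ctx context.Context, req *connect.Request[greetv1.GreetRequest]) (*connect.Response[greetv1.GreetResponse], error) {
        return connect.NewResponse(&greetv1.GreetResponse{Greeting: "Hello, " + req.Msg.Name}), nil
    }

    func main() {
        mux := http.NewServeMux()
        path, handler := greetv1connect.NewGreetServiceHandler(&GreetServer{})
        mux.Handle(path, handler)
        // h2c enables HTTP/2 without TLS, so gRPC clients work in local dev.
        http.ListenAndServe(":8080", h2c.NewHandler(mux, &http2.Server{}))
    }

The same endpoint then answers a plain curl POST with a JSON body at /greet.v1.GreetService/Greet, no gRPC client required.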


Software lives forever. You have to take the long view, not the "rn" view. In the long view, NFS's XDR or ASN.1 are just fine and could have been enough if we hadn't kept reinventing things.


It's mind-blowing to think XDR / ONC RPC V2 were products of the 1980s, and that sitting here nearly forty years later we are discussing the same problem space.

Probably the biggest challenge with something like XDR is that it's very hard to maintain tooling around it long-term. Nobody wants to pay for forty years of continuous incremental improvement, maintenance, and modernization.

Long term, this churn will hopefully slow down; that's inevitable as we collectively develop a solid set of "engineering principles" for the industry.


> Nobody wants to pay for forty years of continuous incremental improvement, maintenance, and modernization.

And yet somehow, we are willing to pay to reinvent the thing 25 times in 40 years.


It's a different company paying each time ;)

I'm actually super excited to see how https://www.sovereign.tech turns out long-term. Germany has a lot of issues with missing the boat on tech, but the sovereign tech fund is such a fantastic idea.


I'm using connectrpc, and I'm a happy customer. I can even easily generate an OpenAPI schema for the "JSON API" using https://github.com/sudorandom/protoc-gen-connect-openapi
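
For anyone curious, wiring that plugin into buf is roughly a one-stanza buf.gen.yaml; a sketch assuming the plugin binary is installed locally and an arbitrary output directory:

    version: v2
    plugins:
      - local: protoc-gen-connect-openapi
        out: gen/openapi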


ConnectRPC is very cool, thanks for sharing. I'd like to add two other alternatives that I like:

- dRPC (by Storj): https://drpc.io (also compatible with gRPC)

- Twirp (by Twitch): https://github.com/twitchtv/twirp (no gRPC compatibility)


Buf seems really nice, but I'm not completely sure what's free and what's not with the Buf platform, so I'm hesitant to make it a dependency for my little open source side project ideas. I should read the docs a bit more.


The Buf CLI itself is licensed under the permissive Apache 2.0 License [0]. Since Buf is a compiler, its output cannot be copyrighted (the same is true of proprietary or GPL-licensed compilers). DISCLAIMER: I am not a lawyer.

Buf distinguishes a few types of plugins, the most important being local and remote. Local plugins are executables installed on your own machine, and Buf places no restrictions on their use. Remote plugins are hosted on BSR (Buf Schema Registry) servers [1], which are rate limited. All remote plugins are also available as local plugins if you install them.
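
The distinction shows up directly in buf.gen.yaml. A sketch of both styles using the Go plugin (output paths are arbitrary):

    version: v2
    plugins:
      # Remote: executed on BSR infrastructure, subject to rate limits.
      - remote: buf.build/protocolbuffers/go
        out: gen
      # Local: the same generator as an executable on your PATH, no limits.
      # - local: protoc-gen-go
      #   out: gen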

It's worth mentioning that the only time I've personally hit the rate limits of remote plugins was when I misconfigured Makefile dependencies to run buf on every change to my code, instead of every change to the proto definitions. So for most development purposes, even remote plugins should be fine.

The BSR also offers hosting of user proto schemas and plugins, and this is where pricing comes in [2].

[0] https://github.com/bufbuild/buf/blob/main/LICENSE

[1] https://buf.build/blog/remote-plugin-execution

[2] https://buf.build/pricing


Ok, that makes sense. Thanks!


That's correct. Bufstream is not open source, but we do have a demo that you can try. I've asked the team to include a proper LICENSE file as well. Thanks for catching that!


That's correct. We hop onto Zoom calls with our customers on an agreed cadence, and they share a billing report with us to confirm usage/metering. For enterprise customers specifically, it works great. They don't want to violate contracts, and it also gives us a natural check-in point to ensure things are going smoothly with their deployment.

When we say fully air-gapped, we mean it!


Nope! We're still heavily investing in scaling Protobuf. In fact, our data quality guarantees built into Bufstream are powered by Protobuf! This is simply an extension of what we do...Connect RPC, Buf CLI, etc.

Don't read too much into the website :)


Good to know. Good proto tooling is still high value :)


Correct. WarpStream doesn't even support transactions.


Neither does any other Kafka protocol implementation, evidently ;)


Zing! Can’t lose if you don’t play ;)


This cuts both ways: choosing not to implement flawed portions of the spec could be seen as a good thing. I've always been a bit suspicious of the value of "bug-for-bug compatibility". In my experience, you don't actually need transactions in Kafka in normal operation. I've never tried to use "streams" and have never encountered a case where I thought they were a good trade. Better to implement that kind of thing in a way I can control.


It isn't.


It'd be cool if you supported the Buf Schema Registry's Reflection API: https://buf.build/docs/bsr/reflection/overview

Then, the descriptors could be delivered on demand!


Working on it


I don't think that's true. There are plenty of incremental ways to adopt gRPC. For example, there are packages that can facade/match your existing REST APIs [1][2]; see the sketch below.

Reads like a skill issue to me.

[1]: https://github.com/grpc-ecosystem/grpc-gateway

[2]: https://github.com/connectrpc/vanguard-go
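
As a sketch of the grpc-gateway flavor [1]: it proxies REST/JSON requests to an existing gRPC server. The generated Greeter service and module path here are hypothetical; the register function comes from the generated .pb.gw.go file:

    package main

    import (
        "context"
        "net/http"

        "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"

        pb "example.com/gen/greeter/v1" // hypothetical generated package
    )

    func main() {
        mux := runtime.NewServeMux()
        opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
        // Translate incoming REST/JSON calls into gRPC against the existing server.
        if err := pb.RegisterGreeterHandlerFromEndpoint(context.Background(), mux, "localhost:50051", opts); err != nil {
            panic(err)
        }
        http.ListenAndServe(":8080", mux)
    }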


Protobuf payloads are typically about 70-80% smaller than the equivalent JSON. If you care about network I/O costs at large scale, that difference translates directly into savings.
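
As a rough illustration (a sketch, where the message passed in stands for any generated type, e.g. a hypothetical User; actual savings depend heavily on the shape of your data):

    import (
        "fmt"

        "google.golang.org/protobuf/encoding/protojson"
        "google.golang.org/protobuf/proto"
    )

    // compareSizes prints the wire size of the same message in both encodings.
    func compareSizes(user proto.Message) {
        bin, _ := proto.Marshal(user)    // compact binary: field tags + varints
        js, _ := protojson.Marshal(user) // field names repeated as strings
        fmt.Printf("proto=%d bytes, json=%d bytes\n", len(bin), len(js))
    }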

Additionally, I think people put a lot of trust into JSON parsers across ecosystems "just working", and I think that's something more people should look into (it's worse than you think): https://seriot.ch/projects/parsing_json.html
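
One concrete Go example of the kind of divergence that survey documents (other ecosystems handle the same input differently, which is exactly the interop hazard):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // encoding/json decodes untyped numbers as float64,
        // silently losing integer precision above 2^53.
        var v map[string]any
        json.Unmarshal([]byte(`{"id": 9007199254740993}`), &v)
        fmt.Println(v["id"]) // 9.007199254740992e+15 (off by one)
    }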


Let's say I wanted to transfer a movie in the MKV container format. It's binary and large, at about 4 GB. Would I use JSON for that? No. Would I use gRPC/protobuf for that? No.

I would open a dedicated TCP socket and a file system stream, then pipe the file system stream into the network socket. You still have to deal with packet assembly no matter what: if you are using TLS you have small records (max size varies by TLS revision), and if you are using WebSockets you have control frames, continuation frames, and frame header assembly. Even with that administrative overhead, it's still a fast and simple approach.
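
In Go terms, the whole thing is roughly this sketch (host and filename are hypothetical):

    package main

    import (
        "io"
        "net"
        "os"
    )

    func main() {
        conn, err := net.Dial("tcp", "media.example.com:9000") // hypothetical endpoint
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        f, err := os.Open("movie.mkv") // hypothetical 4 GB file
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Pipe the file straight into the socket; the kernel handles
        // segmentation, and Go can use sendfile(2) under the hood here.
        io.Copy(conn, f)
    }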

When it comes to application instructions, data from some data store, any kind of primitive data types, and so forth I would continue to use JSON over WebSockets.


I agree: there's a lot to gain from getting away from JSON, but gRPC needs HTTP/1 support and better tooling to make that happen.


You probably want to check this out: https://connectrpc.com/


Thanks. I've got a little project that needs to use protobufs, and if my DIY approach of sending either application/octet-stream or application/json turns out to be too sketchy, I'll give Connect a try. The only reason I'm not jumping on it is that it involves more dependencies.
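
FWIW, the DIY version can stay pretty small. A sketch, with pb.EchoRequest standing in for whatever generated message you actually use:

    import (
        "io"
        "net/http"

        "google.golang.org/protobuf/encoding/protojson"
        "google.golang.org/protobuf/proto"

        pb "example.com/gen/echo/v1" // hypothetical generated package
    )

    func handle(w http.ResponseWriter, r *http.Request) {
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        var req pb.EchoRequest
        switch r.Header.Get("Content-Type") {
        case "application/json":
            err = protojson.Unmarshal(body, &req)
        default: // application/octet-stream: binary protobuf
            err = proto.Unmarshal(body, &req)
        }
        if err != nil {
            http.Error(w, "bad payload", http.StatusBadRequest)
            return
        }
        // ... dispatch req to the actual handler logic
    }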


I think it's helpful for engineering managers to be technically conversant at a fairly deep level with the technical challenges and domains the team deals with regularly. My reports have told me that in the past, this bolstered their trust in my decision-making and gave me deeper credibility overall.

I didn't do anything terribly difficult or deep, but I think it conveyed a strong, empathetic connection to what they were dealing with, and that mattered to them.


I agree! All managers should have been mid-level or strong engineers at some point, and should have developed the soft skills to become leaders. Without technical knowledge and an understanding of the processes, one can't earn the credibility and trust of other engineers.

