
Melatonin has a short half-life (~1h); that's why melatonin receptor agonists [1] are a thing. So, mechanistically, it's unlikely to help with sleep maintenance.

Do you wake up after 5.5h at a consistent time of day, with the first half of the night being peaceful? If you fall back asleep, do you then wake again shortly after?

I mean, waking in the night can be many things (apnea, etc.), but you could very well have a rather advanced sleep phase.

1. https://en.wikipedia.org/wiki/Melatonin_receptor_agonist


I'm pretty sure at this point I have familial advanced sleep phase syndrome of an unknown genetic etiology [1].

Wake up stupid early in the morning, get drowsy very early in the evenings, etc. For a long time, due to social pressure/habit, I'd just power through the evening drowsiness. That led to me only being able to sleep six hours or so (due to waking up stupid early), which over time led to a substantial sleep debt.

Going to bed early helps a lot, but over time it seems like I easily start drifting earlier and earlier. I've recently had some success stabilizing my rhythm using sublingual melatonin when I first wake at 2-3am. Lets me get a couple of extra hours of quality sleep, which is a lifesaver. It wears off quickly enough that by 9am or so it's basically out of my system.

I've actually been tinkering/hacking for the last year or so on sleep-tracking wearables. Initially focused on EEG/HRV monitoring, but I'm taking a very modular approach and ultimately want to build a full set of sensors/effectors/etc.

I've recently been experimenting a lot with skin temperature gradients; it turns out that in the lead-up to sleep it's not just blood flow in the brain that is altered [2].

1. https://en.wikipedia.org/wiki/Advanced_sleep_phase_disorder#...

2. https://journals.physiology.org/doi/pdf/10.1152/ajpregu.2000...


You might know this, but sleep debt doesn't just keep piling on. Eventually, and rather quickly, you may begin to experience permanent brain damage after a few nights of sleep deprivation.

https://www.ahajournals.org/doi/10.1161/STR.0000000000000453

From one insomniac to another... In the past I've been lucky to get 3 hours total of sleep in a night due to physical pain disorders. I have deep trouble getting into NREM. I lucid dream often, and my brain is active even when I'm supposed to be sleeping. In my dreams, I have to be careful not to be too energetic or overstimulated, or I will wake up.

I've had insomnia and night terrors since before I was regularly forming memories. An abusive childhood intensified that. I'm in my early 30s now and the damage is clear, both physically and to my life in general.

As much as I fear them, sleeping medication seems like the only way to save myself from early onset dementia or from not accomplishing certain goals due to a perpetually low energy budget. The poor sleep has also prevented me from losing weight. Sleep studies have shown that people who get frequently woken up while sleeping can burn around 50% less fat. In my case, that's my entire calorie deficit, which means in order to lose weight I have to basically starve myself. Melatonin, etc. have never worked for me.

All this to say... Don't wait for the damage to build up even more. Sleeping medication might change your life. I'm hoping it restores mine.


Absolutely, loss of deep sleep is associated with a ton of aging-related cognitive decline. There are a number of startups experimenting with techniques to enhance deep sleep in the elderly atm (timed audio clicks, electrical stimulation, etc.).

There's not a lot of evidence that the most common sleep medications are associated with long-term improvements in health outcomes. Most have substantial detrimental effects on sleep architecture, can exacerbate underlying issues like apnea, etc. Interestingly, the gabapentinoids (chronic pain) and Xyrem (narcolepsy) are associated with increased slow-wave sleep. More research is needed (e.g. the DORA drugs [1]).

Thankfully, circadian issues (in the absence of sleep loss) aren't associated with negative health outcomes. It's just a case of finding a way to modify one's life to accommodate them.

1. https://en.wikipedia.org/wiki/Orexin_antagonist


Sorry you’re going through this, that sounds brutal. You mentioned an abusive childhood making things worse; if you’re open to sharing, has anything in therapy (trauma-informed work, CBT, EMDR, etc.) actually helped?

Do you have any take on non-med things like mindfulness/yoga nidra or gentle ambient sound at night? I’m on the fence about starting meditation myself and curious what’s been useful, or not, for you.

And if meds are what’s giving you relief right now, I’m glad you’ve found something that helps.


When you say “sleep medication”, what do you mean specifically? AFAIK melatonin is safe, but diphenhydramine overuse is linked to dementia.


I'm still exploring my options, open to suggestions. Medications like zolpidem and diphenhydramine are definitely off the table.


Exploring other Z-drugs (Lunesta, Sonata), benzos, THC, CBD, trazodone, doxepin, Belsomra, gabapentin, whisky, Dramamine, and herbal additions to higher-dose melatonin (10mg+) and magnesium glycinate/threonate, like ashwagandha, valerian, GABA, passionflower, hops, lemon balm, California poppy, jujube seed, chamomile/apigenin, myo-inositol, taurine, higher-dose glycine, theanine, 5-HTP (be careful), tryptophan (be careful), nighttime vitamin D, antihypertensives, arginine, supplemental oxygen, NSAIDs, red light therapy, and placebo effects may be worth trying alone or in various combinations.


FWIW QUIC enforces TLS 1.3 and modern crypto. That's a lot smaller surface area and far fewer foot-guns. Combined with memory-safe TLS implementations in Go and Rust, I think it's fair to say things have changed since the Heartbleed days.


> I think it's fair to say things have changed since the heartbleed days.

The Linux Foundation is still funding OpenSSL development after a scathing review of the codebase [1], so I think it's fair to say things haven't changed a bit.

1: https://www.openbsd.org/papers/bsdcan14-libressl/


WireGuard uses "modern crypto"


How did you work around WireGuard's encryption and multiqueue bottlenecks? Jumbo frames?

25G is a lot for WireGuard [1].

1. https://www.youtube.com/watch?v=oXhNVj80Z8A


Yes, jumbo frames unlock a LOT of additional performance, which is exactly what we have and need on those links. Using a vanilla wg-bench [0] loopback-esque setup (really veths across network namespaces) on the machine, I get slightly more than 15 Gbps sustained throughput.

[0]: https://github.com/cyyself/wg-bench


It's probably a 48-port switch and that's a backplane claim.


I've recently spent a bunch of time working on a mesh networking project that employs CONNECT-IP over QUIC [1].

There are a lot of benefits for sure, mTLS being a huge one (particularly when combined with ACME). For general-purpose, hub-and-spoke VPNs, tunneling over QUIC is a no-brainer. It's trivial to combine with JWT bearer tokens, etc. It's a neat solution that should be used more widely.

However, there are downsides, and those downsides are primarily performance related, for a bunch of reasons: some are just poorly optimized library code, others are the relatively high message parsing/framing/coalescing/fragmenting costs and userspace UDP overheads. On fat pipes today you'll struggle to get more than a few Gbit/s of throughput at 1500 MTU (which is plenty for internet browsing, for sure).

For fat pipes and hardware/FPGA acceleration use cases, Google probably has the most mature approach here with their datacenter transport PSP [2], basically a stripped-down, per-flow IPsec. In-kernel IPsec has gotten a lot faster and more scalable in recent years with multicore/multiqueue support [3]. Internal benchmarking still shows IPsec on Linux absolutely dominating on both throughput and latency.

For the mesh project we ended up pivoting to a custom offload-friendly, kernel-bypass (AF_XDP) dataplane inspired by IPsec/PSP/Geneve.

I'm available for hire btw, if you've got an interesting networking project and need a remote Go/Rust developer (contract/freelance) feel free to reach out!

1. https://www.rfc-editor.org/rfc/rfc9484.html

2. https://cloud.google.com/blog/products/identity-security/ann...

3. https://netdevconf.info/0x17/docs/netdev-0x17-paper54-talk-s...


Is QUIC related to the Chrome-implemented WebTransport? Seems pretty cool to have that in a browser API.


Now that's an interesting, and wild, idea.

I don't believe you could implement RFC 9484 directly in the browser (missing capsule APIs would make upgrading the connection impossible), though WebTransport does support datagrams, so you could very well implement something custom.


For TCP streams, syscall overhead isn't really a big issue: you can easily transfer large chunks of data in each write(). If you have TCP segmentation offload available, you'll have no serious issues pushing 100 Gbit/s. Also, if you are sending static content, don't forget sendfile().

UDP is a whole other kettle of fish; it gets very complicated to go above 10 Gbit/s or so. This is a big part of why QUIC really struggles to scale well on fat pipes [1]. sendmmsg/recvmmsg + UDP GRO/GSO will probably get you to ~30 Gbit/s, but beyond that it's a real headache. The issue is that UDP is not stream-focused, so you're making a ton of little writes, and the kernel networking stack as of today does a pretty bad job with these workloads.
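To make the batching concrete, here's a minimal sketch (in Go, with a placeholder address and batch size; not the VPN code itself) of packing many datagrams into a single sendmmsg(2) call via golang.org/x/net/ipv4:

    // A minimal sketch: batch many UDP datagrams into one sendmmsg(2) syscall.
    // The destination address and payload are placeholders.
    package main

    import (
        "log"
        "net"

        "golang.org/x/net/ipv4"
    )

    func main() {
        c, err := net.ListenPacket("udp4", "0.0.0.0:0")
        if err != nil {
            log.Fatal(err)
        }
        defer c.Close()

        dst, err := net.ResolveUDPAddr("udp4", "192.0.2.1:5000")
        if err != nil {
            log.Fatal(err)
        }

        p := ipv4.NewPacketConn(c)
        payload := make([]byte, 1400)

        // 64 datagrams, one syscall, instead of 64 separate WriteTo calls.
        batch := make([]ipv4.Message, 64)
        for i := range batch {
            batch[i] = ipv4.Message{Buffers: [][]byte{payload}, Addr: dst}
        }

        n, err := p.WriteBatch(batch, 0)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("sent %d datagrams in one syscall", n)
    }

UDP GSO (the UDP_SEGMENT socket option) stacks on top of this, letting the kernel split one large buffer into MTU-sized segments so you hand over even fewer, bigger writes.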

FWIW even the fastest QUIC implementations cap out at <10 Gbit/s today [2].

I had a good fight writing a ~20 Gbit/s userspace UDP VPN recently. I ended up having to bypass the kernel's networking stack using AF_XDP [3].

I'm available for hire btw, if you've got an interesting networking project feel free to reach out.

1. https://arxiv.org/abs/2310.09423

2. https://microsoft.github.io/msquic/

3. https://github.com/apoxy-dev/icx/blob/main/tunnel/tunnel.go


Yeah, all agreed. The only addendum I'd add is for cases where you can't use large buffers because you don't have the data (e.g. realtime data streams or very short request/reply cycles). These end up having the same problems, but they aren't solvable with TCP or UDP segmentation offloads. This is where reduced syscall overhead (or, even better, kernel bypass) really shines for networking.


I have a hard time believing that google is serving YouTube over QUIC/HTTP3 at 10Gbit/s, or even 30Gbit/s.


These are per-connection bottlenecks, largely due to implementation choices in the Linux network stack. Even with vanilla Linux networking, vertical scale can get the aggregate bandwidth as high as you want if you don’t need 10G per connection (which YouTube doesn’t), as long as you have enough CPU cores and NIC queues.

Another thing to consider: Google’s load balancers are all bespoke SDN and they almost certainly speak HTTP1/2 between the load balancers and the application servers. So Linux network stack constraints are probably not relevant for the YouTube frontend serving HTTP3 at all.


To be honest, I kind of find myself drifting away from gRPC/protobuf in my recent projects. I love the idea of an IDL for describing APIs and a great compiler/codegen (protoc), but there are just so many idiosyncrasies baked into gRPC at this point that it often doesn't feel worth it IMO.

I've been increasingly using LSP-style JSON-RPC 2.0. Sure, it's got its quirks and is far from the most wire/marshaling-efficient approach, but JSON codecs are ubiquitous and JSON-RPC is trivial to implement. In fact, I recently wrote a stack-allocated server implementation for microcontrollers in Rust: https://github.com/OpenPSG/embedded-jsonrpc.
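To give a sense of how small the protocol surface is, here's a hedged sketch of the whole JSON-RPC 2.0 envelope as Go types (Go to match the rest of the thread; illustrative only, not lifted from embedded-jsonrpc):

    // The whole JSON-RPC 2.0 envelope, more or less: requests and responses
    // are matched up by "id", errors carry a code and a message, and that's it.
    package jsonrpc

    import "encoding/json"

    type Request struct {
        JSONRPC string           `json:"jsonrpc"`      // always "2.0"
        ID      *json.RawMessage `json:"id,omitempty"` // absent for notifications
        Method  string           `json:"method"`
        Params  json.RawMessage  `json:"params,omitempty"`
    }

    type Error struct {
        Code    int             `json:"code"`
        Message string          `json:"message"`
        Data    json.RawMessage `json:"data,omitempty"`
    }

    type Response struct {
        JSONRPC string           `json:"jsonrpc"`
        ID      *json.RawMessage `json:"id"`
        Result  json.RawMessage  `json:"result,omitempty"`
        Error   *Error           `json:"error,omitempty"`
    }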

Varlink (https://varlink.org/) is another interesting approach. There are reasons why they didn't implement the full JSON-RPC spec, but their IDL is pretty interesting.


My favorite serde format is MsgPack since it can be dropped in as an almost one-to-one replacement for JSON. There's also CBOR, which is based on MsgPack but has diverged a bit and added a data definition language too (CDDL).

Take JSON-RPC and replace JSON with MsgPack for better handling of integer and float types. MsgPack/CBOR are easy to parse in place directly into stack objects too. It's super fast even on embedded. I've been shipping it for years in embedded projects using a Nim implementation for ESP32s (1) and later made a non-allocating version (2). It's also generally easy to convert MsgPack/CBOR to JSON for debugging, etc.
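For illustration (a Go sketch, since the surrounding thread is Go-heavy, and assuming a MessagePack library along the lines of vmihailenco/msgpack/v5; the struct and field names are made up), swapping the codec under a JSON-style envelope looks roughly like:

    // Illustrative only: encode the same struct with encoding/json and a
    // MessagePack codec. MessagePack keeps the int64 as a compact binary
    // integer on the wire instead of a run of decimal digits.
    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/vmihailenco/msgpack/v5"
    )

    type Sample struct {
        Seq  int64   `json:"seq" msgpack:"seq"`
        Temp float64 `json:"temp" msgpack:"temp"`
    }

    func main() {
        s := Sample{Seq: 1 << 40, Temp: 21.5}

        j, _ := json.Marshal(s)
        m, _ := msgpack.Marshal(s)
        fmt.Printf("json: %d bytes, msgpack: %d bytes\n", len(j), len(m))

        var out Sample
        _ = msgpack.Unmarshal(m, &out) // decodes straight back into typed fields
        fmt.Println(out.Seq, out.Temp)
    }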

There's also an IoT-focused RPC based on CBOR that's an IETF standard, and a time series format (3). The RPC is used a fair bit in some projects.

1: https://github.com/elcritch/nesper/blob/devel/src/nesper/ser...
2: https://github.com/EmbeddedNim/fastrpc
3: https://hal.science/hal-03800577v1/file/Towards_a_Standard_T...


What I really like about protobuf is the DDL. Really clear schema evolution rules. Ironclad types. Protobuf moves its complexity into things like default zero values, which are irritating but readily apparent. With JSON, it's superficially fine, but later on you discover that you need to be worrying about implementation-specific stuff like big ints getting mangled, or the special parsing logic you need to set default values for string enums so that adding new values doesn't break backwards compatibility. JSON Schema exists but really isn't built for these sorts of constraints, and if you try to use JSON Schema like protobuf, it can get pretty hairy.

Honestly, if protobuf just serialized to a strictly specified subset of JSON, I'd be happy with that. I'm not in it for the fast ser/de, and something human-readable could be good. But when multiple services maintained by different teams are passing messages around, a robust schema language is a MASSIVE help. I haven't used Avro, but I assume it's similarly useful.


The better stack rn is Buf + Connect RPC (https://connectrpc.com/). All the compatibility: you get JSON+HTTP & gRPC on one platform.


Software lives forever. You have to take the long view, not the "rn" view. In the long view, NFS's XDR or ASN.1 are just fine and could have been enough, if we didn't keep reinventing things.


It's mind-blowing to think XDR / ONC RPC V2 were products of the 1980s, and that sitting here nearly forty years later we are discussing the same problem space.

Probably the biggest challenge with something like XDR is that it's very hard to maintain tooling around it long-term. Nobody wants to pay for forty years of continuous incremental improvement, maintenance, and modernization.

Long term, this churn will hopefully slow down; that's inevitable as we collectively develop a solid set of "engineering principles" for the industry.


> Nobody wants to pay for forty years of continuous incremental improvement, maintenance, and modernization.

And yet somehow, we are willing to pay to reinvent the thing 25 times in 40 years.


It's a different company paying each time ;)

I'm actually super excited to see how https://www.sovereign.tech turns out long-term. Germany has a lot of issues with missing the boat on tech, but the sovereign tech fund is such a fantastic idea.


I'm using connectrpc, and I'm a happy customer. I can even easily generate an OpenAPI schema for the "JSON API" using https://github.com/sudorandom/protoc-gen-connect-openapi


ConnectRPC is very cool, thanks for sharing. I would like to add 2 other alternatives that I like:

- dRPC (by Storj): https://drpc.io (also compatible with gRPC)

- Twirp (by Twitch): https://github.com/twitchtv/twirp (no gRPC compatibility)


Buf seems really nice, but I'm not completely sure what's free and what's not with the Buf platform, so I'm hesitant to make it a dependency for my little open source side project ideas. I should read the docs a bit more.


The Buf CLI itself is licensed under the permissive Apache 2.0 License [0]. Since Buf is a compiler, its output cannot be copyrighted (similar to the output of proprietary or GPL-licensed compilers). DISCLAIMER: I am not a lawyer.

Buf distinguishes a few types of plugins: the most important being local and remote. Local plugins are executables installed on your own machine, and Buf places no restrictions on use of those. Remote plugins are hosted on BSR (Buf Schema Registry) servers [1], which are rate limited. All remote plugins are also available as local plugins if you install them.

It's worth mentioning that the only time I've personally hit the rate limits of remote plugins is when I misconfigured makefile dependencies to run buf on every change of my code, instead of on every change of the proto definitions. So, for most development purposes, even remote plugins should be fine.

Additionally, BSR also offers hosting of user proto schemas and plugins, and this is where pricing comes in [2].

[0] https://github.com/bufbuild/buf/blob/main/LICENSE

[1] https://buf.build/blog/remote-plugin-execution

[2] https://buf.build/pricing


Ok, that makes sense. Thanks!


> I love the idea of an IDL for describing APIs and a great compiler/codegen (protoc)

Me too. My context is that I end up using RPC-ish patterns when doing slightly out-of-the-ordinary web stuff, like websockets, iframe communications, and web workers.

In each of those situations you start with a bidirectional communication channel, but you have to build your own request-response layer if you need that. JSON-RPC is a good place to start, because the spec is basically just "agree to use `id` to match up requests and responses" and very little else of note.

I've been looking around for a "minimum viable IDL" to add to that, and I think my conclusion so far is "just write out a TypeScript file". This works when all my software is web/TypeScript anyway.


Now that's an interesting thought. I wonder if you could use a modified subset of TypeScript to create an IDL/DDL for JSON-RPC, then compile that schema into implementations for various target languages.


Typia kinda does this, but it currently only has a TypeScript -> TypeScript compiler.


Yeah, that's what I'd look into. Maybe TS -> JSON Schema -> target language.


Same; at my previous job, for the serialisation format for our embedded devices over 2G/4G/LoRaWAN/satellite, I ended up landing on MessagePack, but that was partially because the "schema"/typed deserialisation was all in the same language for both the firmware and the server (Nim, in this case) and directly shared source-to-source. That won't work for a lot of cases of course, but it was quite nice for ours!


> efficiency

State of the art for both gzipped json and protobufs is a few GB/s. Details matter (big strings, arrays, and binary data will push protos to 2x-10x faster in typical cases), but it's not the kind of landslide victory you'd get from a proper binary protocol. There isn't much need to feel like you're missing out.


The big problem with gzipped JSON is that once unzipped, it's gigantic. And you have to parse everything, even if you just need a few values. Just the memory bottleneck of having to munch through a string in JSON is going to slow down your parser by a ton. In contrast, a string in protobuf is length-prefixed.

5-10x is not uncommon, and that's kissing an order of magnitude difference.


> have to parse everything, even for just a few values

That's true of protobufs as much as it is for json, except for skipping over large submessages.

> memory bottleneck

Interestingly, JSON, gzipped JSON, and protobufs are all core-bound parsing operations. The culprit is, mostly, a huge data dependency baked into the spec. You can unlock another multiplicative 10x-30x just with a better binary protocol.

> 5-10x is not uncommon

I think that's in line with what I said. You typically see 2x-10x, sometimes more (arrays of floats, when serialized using the faster of many equivalent protobuf wire encodings, are pathologically better for protos than gzipped JSON), sometimes less. They were aware of and worried about some sort of massive perf impact and choosing to avoid protos anyway for developer ergonomics, so I chimed in with some typical perf numbers. It's better (perf-wise) than writing a backend in Python, but you'll probably still be able to measure the impact in real dollars if you have 100k+ QPS.


Yeah, this is something people don't seem to want to get into their heads. If all you care about is minimizing transferred bytes, then gzip+JSON is actually surprisingly competitive, to the point where you probably shouldn't even bother with anything else.

Meanwhile if you care about parsing speed, there is MessagePack and CBOR.

If any form of parsing is too expensive for you, you're better off with FlatBuffers and capnproto.

Finally there is the holy grail: Use JIT compilation to generate "serialization" and "deserialization" code at runtime through schema negotiation, whenever you create a long lived connection. Since your protocol is unique for every (origin, destination) architecture+schema tuple, you can in theory write out the data in a way that the target machine can directly interpret as memory after sanity checking the pointers. This could beat JSON, MessagePack, CBOR, FlatBuffers and capnproto in a single "protocol".

And then there is protobuf/grpc, which seems to be in this weird place, where it is not particularly good at anything.


Except gzip is tragically slow, so crippling protobuf by running it through gzip could indeed slow it down to json speeds.


"gzipped json" vs "protobuf"


Then something is very wrong.


Protobufs have a massive data dependency baked into the wire format, turning parsing into an intensive core-bound problem.

Interestingly, they're not usually smaller than gzipped JSON either (the built-in compression is pretty rudimentary), so if you don't compress them and don't have a stellar network, you might actually pay more for the total transfer+decode than gzipped JSON, despite protos usually being somewhat faster to parse.


Got any references to share?


The docs [0] are fairly straightforward. I'll spit out a little extra data and a few other links in case it's helpful. If this is too much or not enough text, feel free to ask followup questions.

As far as data dependencies are concerned, you simply can't parse a byte till you've parsed all the preceding bytes at the same level in a message.

A naive implementation would (a) varint decode at an offset, (b) extract the tag type and field index, (c) use that to parse the remaining data for that field, (c1) the exact point in time you recurse for submessages doesn't matter much, but you'll have to eventually, (d) skip forward the length of the field you parsed, (e) if not done then go back to (a).
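That loop, sketched in Go (hedged: it only handles the VARINT and LEN wire types, and leans on encoding/binary for the varint decode), looks something like:

    // Naive protobuf wire-format walk: each iteration's offset depends on the
    // tag and length parsed in the previous one. Only VARINT and LEN handled.
    package main

    import (
        "encoding/binary"
        "fmt"
    )

    func walkFields(buf []byte) error {
        for off := 0; off < len(buf); {
            tag, n := binary.Uvarint(buf[off:]) // (a) decode the tag varint
            if n <= 0 {
                return fmt.Errorf("bad tag at offset %d", off)
            }
            off += n
            field, wire := tag>>3, tag&7 // (b) field index and wire type

            switch wire {
            case 0: // VARINT
                v, m := binary.Uvarint(buf[off:]) // (c) parse the value
                if m <= 0 {
                    return fmt.Errorf("bad varint in field %d", field)
                }
                _ = v
                off += m // (d) skip forward
            case 2: // LEN: strings, bytes, packed arrays, submessages
                l, m := binary.Uvarint(buf[off:])
                if m <= 0 || off+m+int(l) > len(buf) {
                    return fmt.Errorf("bad length in field %d", field)
                }
                off += m + int(l) // (c1) a real parser would recurse here for submessages
            default:
                return fmt.Errorf("wire type %d not handled in this sketch", wire)
            }
            // (e) not done yet? go around again.
        }
        return nil
    }

    func main() {
        // Field 1, wire type VARINT, value 150: the canonical encoding example.
        fmt.Println(walkFields([]byte{0x08, 0x96, 0x01}))
    }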

You can do better, but not much better, because the varints in question are 8 bytes, requiring up to 10 bytes on the wire, meaning AVX2 SIMD shenanigans can only guarantee that you parse 3 varints at a time. That's fine and dandy, except most fields look like 2 varints followed by some binary data, so all you're really saying is that you can only parse one field at a time and still have to skip forward an unpredictable amount after a very short number of bytes/instructions.

If you have more specialized data (e.g., you predict that all field indexes are under 32 and all fields are of type "LENGTH"), then there are some tricks you can do to speed it up a bit further. Doing so adds branches to code which is already very branchy and data-dependent though, so it's pretty easy to accidentally slow down parsing in the process.

Something close to the SOTA for varint decoding (a sub-component of protobuf parsing) is here [1]. It's quite fast (5-10 GB/s), but it relies on several properties that don't actually apply to the protobuf wire format, including that their varints are far too small and they're all consecutively concatenated. The SOTA for protobuf parsing is much slower (except for the sub-portions that are straight memcopies -- giant slices of raw data are fairly efficient in protos and not in JSON).

This isn't the best resource [2], but it's one of many similar examples showing people not finding protos substantially faster in the wild, partly because their protos were bigger than their json objects (and they weren't even gzipping -- the difference there likely comes from the tag+length prefix structure being more expensive than delimiters, combined with fixed-width types favoring json when the inputs are small). AFAICT, their json library isn't even simdjson (or similar), which ought to skew against protos even further if you're comparing optimal implementations.

In terms of protos being larger than gzipped json, that's just an expected result for almost all real-world data. Protobuf adds overhead to every field, byte-compresses some integers, doesn't compress anything else, and doesn't bit-compress anything. Even if your devs know not to use varint fields for data you expect to be negative any fraction of the time, know to use packed arrays, ..., the ceiling on the format (from a compression standpoint) is very low unless your data is mostly large binary blobs that you can compress before storing in the protobuf itself.

For a few other random interblags comparisons, see [3], [4]. The first finds protos 3x-6x faster (better for deserializing than serializing) compared to json. The second finds that protos compress better than json, but also that compressed json is much smaller than ordinary protos for documents more than a few hundred bytes (so to achieve the size improvements you do have to "cripple" protos by compressing them).

If you start looking at the comparisons people have done between the two, you'll find results largely consistent with what I've been saying: (1) Protos are 2x-10x faster for normal data, (2) protos are usually larger than gzipped json, (3) protos are sometimes slower than gzipped JSON, (4) when you factor in sub-par networks, the total transfer+decode time can be much worse for protos because of them being larger.

As a fun experiment, try optimizing two different programs. Both operate on 1MB of pseudo-random bytes no greater than 10. Pick any cheap operation (to prevent the compiler from optimizing the iteration away) like a rolling product mod 256, and apply that to the data. For the first program (simulating a simplified version of the protobuf wire format), treat the first byte as a length and the next "length" bytes as data, iterating till you're done. For the second, treat all bytes as data. Using a systems language on any modern CPU, you'll be hard-pressed to get an optimized version of the length-prefixed code even as fast as 10x slower than an un-optimized version of the raw data experiment.
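If you want to try that at home, here's a rough sketch of the two loops (assumptions: ~1MB of random bytes in [1, 10], rolling product mod 256 as the cheap operation, and the actual benchmarking is left to the reader):

    // The experiment above, roughly: same bytes, same cheap operation, but the
    // second walk has to read a length byte before it knows where the next
    // chunk of "data" starts.
    package main

    import (
        "fmt"
        "math/rand"
    )

    func productRaw(data []byte) byte {
        p := byte(1)
        for _, b := range data {
            p *= b
        }
        return p
    }

    func productLengthPrefixed(data []byte) byte {
        p := byte(1)
        for i := 0; i < len(data); {
            n := int(data[i]) // "length" prefix, like a wire format
            i++
            end := i + n
            if end > len(data) {
                end = len(data)
            }
            for ; i < end; i++ {
                p *= data[i]
            }
        }
        return p
    }

    func main() {
        data := make([]byte, 1<<20) // ~1MB of pseudo-random bytes in [1, 10]
        for i := range data {
            data[i] = byte(rand.Intn(10) + 1)
        }
        // Benchmark each function separately (e.g. with testing.B) and compare.
        fmt.Println(productRaw(data), productLengthPrefixed(data))
    }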

Cap'n proto and flatbuffers (whether gzipped or not), as examples, are usually much faster than both JSON and protobufs -- especially for serialization, and to a lesser extent deserialization -- even when you're parsing the entire message (they shine comparatively even more if you're extracting sub-components of a message). One of them was made by the original inventor/lead-dev of the protobuf team, and he learned from some of his mistakes. "Proper" binary formats (like those, though they're by no means the only options) take into account data dependencies and other features of real hardware and are much closer to being limited by RAM bandwidth instead of CPU cycles.

[0] https://protobuf.dev/programming-guides/encoding/

[1] https://www.bazhenov.me/posts/rust-stream-vbyte-varint-decod...

[2] https://medium.com/@kn2414e/is-protocol-buffers-protobuf-rea...

[3] https://medium.com/streamdal/protobuf-vs-json-for-your-event...

[4] https://nilsmagnus.github.io/post/proto-json-sizes/


That's sort of where I've landed too. Protobufs would seem to fit the problem area well, but in practice the space between "big-system non-performance-sensitive data transfer metaformat"[1] and "super-performance-sensitive custom binary parser"[2] is... actually really small.

There are just very few spots that actually "need" protobuf at a level of urgency that would justify walking away from self-describing text formats (which is a big, big disadvantage for binary formats!).

[1] Something very well served by JSON

[2] Network routing, stateful packet inspection, on-the-fly transcoding. Stuff that you'd never think to use a "standard format" for.


Add "everything that communicates with a microcontroller" to 2.

That means potentially: the majority of devices in the world.


Perhaps surprisingly, I think microcontrollers may be a place where Protobufs are not a bad fit. Using something like Nanopb [1] gives you the size/speed/flexibility advantages of protocol buffers without being too heavyweight. It’ll be a bit slower than your custom binary protocol, but it comes with quite a few advantages, depending on the context.

[1] https://github.com/nanopb/nanopb


We carry a nanopb integration in Zephyr. And even there... meh. It's true that there are some really bad binary protocols in the embedded world. And protobufs are for sure a step up from a median command parser or whatever. And they have real size advantages vs. JSON for tiny/sub-megabyte devices.

But even there, I find that really these are very big machines in a historical sense. And text parsing is really not that hard, or that big. The first HTTP server was on a 25MHz 68040!

Just use JSON. Anything else in the modern world needs to be presumed to be premature optimization absent a solid analysis with numbers.


If you use JSON in an embedded capacity and you're stingy with your bytes, you can just send arrays as your main document.

There was this one MMO I played where every packet was just a space-separated string, with a fancy variable-length number encoding that let them store two digits in a single character.

There is not much difference between

    walk <PlayerId> <x> <y>
    walk 5 100 300
and

    ["walk", "<PlayerId>", "<x>", "<y>"]
    ["walk", 5, 100, 300]
in terms of bytes and parsing, this is trivial, but it is a standard JSON document and everyone knows JSON, which is a huge win on the developer side.


Amen to that.


Apart from it being a text format, I'm not sure how well JSON-RPC handles doubles vs long integers and other types, where protobuf can be directed to handle them appropriately. That is a problem in JSON itself, so you may need to encode some numbers using... "string".


I'd say the success of REST kind of proves that's something that, for the most part, can be worked around. It often comes down to the JSON codec itself; many codecs will allow unmarshalling/marshalling fields straight into long integer types.

Also, JS now has a BigInt type and the JSON decoder can be told to use it. So I'd argue it's kind of a moot point at this stage.
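To be concrete about the codec-level workarounds (a Go sketch with made-up field names): you can keep the raw digits via json.Number, or round-trip an int64 as a JSON string with the ",string" tag.

    // Two codec-level tricks in Go's encoding/json: decode numbers as
    // json.Number to keep all the digits, or tag an int64 with ",string" so it
    // round-trips as a JSON string and never touches float64.
    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    type Event struct {
        ID int64 `json:"id,string"` // serialized as a quoted decimal string
    }

    func main() {
        dec := json.NewDecoder(strings.NewReader(`{"big": 9007199254740993}`))
        dec.UseNumber() // keep numbers as json.Number instead of float64
        var m map[string]any
        if err := dec.Decode(&m); err != nil {
            panic(err)
        }
        n, _ := m["big"].(json.Number).Int64()
        fmt.Println(n) // 9007199254740993, no 53-bit rounding

        b, _ := json.Marshal(Event{ID: 1 << 62})
        fmt.Println(string(b)) // {"id":"4611686018427387904"}
    }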


Sure, but you can work around gRPC's issues too—"workable" might be the only bar that matters in practice, but it's a remarkably low bar.

The risk with JSON is that too many systems understand it, and intermediate steps can mess up things like numeric precision as well as being inconsistent about handling things out of spec (field order, duplicate fields... etc). This definitely bites people in practice—I saw an experience report on that recently, but can't find the link just now :/


JS having a BigInt type has nothing to do with JSON. Backend languages have had BigInt types forever. It isn't relevant to JSON as a format.

Just one example: Azure Cosmos DB stores JSON documents. If you try to store integers larger than 53 bits there, those integers will be silently rounded to 53 bits. I know someone who got burned by this very badly, losing their data; I was able to explain to them exactly why...

JSON basically does not define what a number is; and that is a disaster for it as an API format.


It's not, because some middleman (library, framework, etc.) will assume that JSON is really about sending integers as doubles, hence you get only 53 (or was it 54?) bits of precision, and then you end up sending an integer as a "string" - but then what is that, really?

I get it, it's probably not a concern for a lot of applications, but when it comes to science, games, and data it's a big concern... and that's excluding the fact that you have to convert that number back and forth a... number of times, and send it on the wire inefficiently - and also miss out on a way to send it more efficiently using Gorilla encoding or something else like that.

JSON is great for a lot of things, but not for high throughput RPC.


> I'd say the success of REST

I think that you mean the success of JSON APIs. REST is orthogonal to JSON/Protobuf/HTML/XML/S-expressions/whatever.


> Also JS now has BigInt types and the JSON decoder can be told to use them.

The parser needs to know when to parse as BigInt vs String.


Also, JSON parsers are crazy fast nowadays; most people don't realize how fast they are.


While true, it's still a text-based and usually HTTP/TCP-based format: data -> JSON representation -> compression? -> HTTP -> TCP -> decompression -> parsing -> data. Translating to/from text just feels inefficient.


With the projects I work on it's over WebSockets; JS/TS has built-in support, and it's easy to log, debug, and extend/work with, etc.

Binary protocols have exactly the same steps.

Redis also uses a text-based protocol and people don't seem to be too bothered about it.


I think this is the fundamental cultural difference between countries like the USA (and Australia) vs most of the EU.

Folks here expect and trust the state to provide for the future; private provisions are easily dismissed as unnecessary. As a result, the median household wealth of the area I live in is 5x lower than in my home country of Australia, despite incomes (adjusted for purchasing power) not being drastically different.

Whether that trust is wisely placed we'll have to wait and see [1]. However, I do need to narrow that down a bit: it's not the whole EU, mostly France/Germany. There are other nations moving ahead with private pension schemes, etc., and much higher household wealth (Denmark, Sweden, the Netherlands).

1. My main concern is that government investment is inherently slow, politically charged, and pathologically risk-averse compared to the private sector. This means that in the aggregate, over the long term, private household investments will outperform.


This is entirely a problem of putting all of your eggs in one basket. Or rather, only having one basket in which you're allowed to put your eggs. The German pension system is already insolvent, and I suspect it's not the only one. European welfare states are great and all, but it would seem they were set up when the going was good and populations were growing. Now, with stagnation, they don't look much like staying solvent, and populations don't have anywhere else to turn.


The German pension system is completely insane. There is no fund backing it, and disbursements already exceed contributions to the tune of 127B EUR per year (a gap that is subsidized from the general budget), and that gap is only growing.

That probably explains a good chunk of Germany's and the EU's anemic growth. Simply put, that's 127B EUR that can't be spent investing in the future and growing the economy.


This is typical of Hetzner: if a product SKU is losing money they very quickly make changes, even going as far as discontinuing the product entirely (e.g. GPU servers). They definitely don't seem to be fans of loss leaders.

I'm guessing the traffic usage patterns of their USA customers were somehow very different from those of their EU counterparts, or the cost of expanding network capacity was a lot higher than anticipated.

It's a bit of a shock for sure, but it seems this model is a big part of how they can maintain their slim margins.


I have no complaints at all about this model. They work out the cost of providing a service, then they charge that cost plus a markup. They keep doing things that make them money. They stop doing things that don't make them money.

It seems like a straightforward way to run a business.


Yep, they're the technology equivalent of a discount supermarket. Everything is commoditized to the extreme.

A breath of fresh air in the modern cloud era, tbh.


I have one big complaint and one little one. The big complaint is that they didn't even give one business day's notice, and the little complaint is that they raised prices at the same time they cut what they were offering by 20x, instead of doing one at a time.


They're giving two business days' notice for new product and three months for existing product?


December 1st's change isn't just for new customers. It's for newly-created or rescaled servers belonging to existing customers too, and it's plausible that those operations might happen a lot for some customers. And Thanksgiving and Black Friday are holidays for almost all American tech workers, so I'm not counting them as business days.


Normally that would be OK, but considering the way many systems are set up to load balance and quickly spin up new servers and spin down unneeded servers on the fly, one business day would not give you enough time to revamp your system to work with a different provider.


The bandwidth market is very different between the EU and the USA; maybe they weren't prepared for the much higher prices in the USA? I'm pretty used to having a 100 Mbps connection to our servers that we can use without any strings attached, even on the lowest tier. (Not a Hetzner customer, but I've been thinking about it.)


To be fair, given how cheap a lot of Hetzner's products (especially Server Auction, my beloved) are compared to the competition, not wanting to have loss leaders seems reasonable to me.


Rather, the backbone providers don't do peering agreements and the traffic is very expensive, especially in the post-ZIRP inflation period. Europe is different: everybody peers with everybody, so traffic is dirt cheap.


That probably runs on Kubernetes (or Borg) under the hood.


Cloud Run explicitly uses Knative APIs... which are Kubernetes objects.

