I've been thinking we could simply extend the ipv4 address to be 11 bytes by (ab)using the options field. That is, add an option that holds more bytes for the source and destination address, which are to be appended to the address already present in the header.
I am thinking that since an option starts with 2 bytes and everything must be padded to a multiple of 4 bytes, we can add a 16-byte option to the packet, which would hold 7 extra address bytes each for source and destination, giving us 11-byte addresses. ISPs would be given a bunch of 4-byte top-level addresses and could generate 7-byte suffixes dynamically for their subscribers, in a way that is almost the same as CGNAT today but without all the problems that brings.
Most routers will only need to be updated to pass along the option and otherwise route as normal, because the top level address is already enough to route the packet to the ISP's routers. Then only at the edge will you need to do extra work to route the packet to the host. Not setting the option would be equivalent to setting it to all 0s, so all existing public hosts will be automatically addressable with the new scheme.
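To make the layout concrete, here is a rough sketch of building the option and reconstructing the full address at the edge. The option kind 0x9e and the function names are made up for illustration; this is just the byte layout described above, with a missing option treated as an all-zero suffix.

```rust
// Hypothetical option kind; not a real IANA assignment.
const OPT_EXT_ADDR: u8 = 0x9e;

// 16-byte option: kind, length, 7 extra source bytes, 7 extra destination
// bytes. 16 is already a multiple of 4, so no padding is needed.
fn build_ext_addr_option(src_ext: [u8; 7], dst_ext: [u8; 7]) -> [u8; 16] {
    let mut opt = [0u8; 16];
    opt[0] = OPT_EXT_ADDR;
    opt[1] = 16; // option length counts the kind and length bytes too
    opt[2..9].copy_from_slice(&src_ext);
    opt[9..16].copy_from_slice(&dst_ext);
    opt
}

// An edge router reconstructs the 11-byte destination by appending the suffix
// from the option; no option behaves like an all-zero suffix.
fn full_dst(hdr_dst: [u8; 4], opt: Option<&[u8; 16]>) -> [u8; 11] {
    let mut addr = [0u8; 11];
    addr[..4].copy_from_slice(&hdr_dst);
    if let Some(opt) = opt {
        addr[4..].copy_from_slice(&opt[9..16]);
    }
    addr
}
```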
There will of course need to be a lot more work done for DNS, DHCP, syntax in programs, etc, but it would be a much easier and more gradual transition than IPv6 is demanding.
I don't think so. It would create more confusion, because no one would know whether a network is ipv4 or ipv4+, leading to edge-case bugs; and people would similarly be lazy and implement only ipv4, knowing it will always stay backward compatible, with the cost transferred to the consumer.
Plus, it's only 2048x the address space. It's within the realm of possibility that we will need to upgrade again once this place is swarming with robots.
2048x is a lot though! Maybe we should let the robots figure out their own solution, rather than trying to make every atom on Earth individually addressable :)
Back when strncpy was written, there was no undefined behaviour (as the compiler interprets it today). The result would depend on the implementation and might differ between invocations, but it was never the "this will not happen" footgun of today. The modern interpretation of undefined behaviour in C is a big blemish on the otherwise excellent standards committee, committed (hah) in the name of extremely dubious performance claims. If "undefined" meaning "left to the implementation" was good enough when CPU frequencies were measured in MHz and nobody had more than one CPU, surely it is good enough today too.
Also I'm not sure what you mean by C successor languages not having undefined behaviour, as both Rust and Zig inherit it wholesale from LLVM. At least last I checked that was the case; correct me if I am wrong. Go, Java and C# all have sane behaviour, but those are much higher level.
The problem isn't undefined behavior per se; I was using it as an example for strncpy. Rust is a no - in fact, the goal of (safe) Rust is to eliminate undefined behavior. Zig on the other hand I don't know about.
In general, I see two issues at play here:
1. C relies heavily on unsized pointers (vs. fat pointers), which is why strncpy_s had to "break" strncpy in order to improve bounds checks.
2. strncpy's memory-aliasing restrictions are not encoded in the API and can only be conveyed through docs. This is a footgun.
For (1), Rust APIs of this type operate on sized slices, or in the case of strings, string slices. Zig defines strings as sized byte slices.
For (2), Rust enforces this invariant via the borrow checker by disallowing (at compile-time) a shared slice reference that points to an overlapping mutable slice reference. In other words, an API like this is simply not possible to define in (safe) Rust, which means you (as the user) do not need to pore over the docs for each stdlib function you use looking for memory-related footguns.
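To make that concrete, here is a minimal sketch of a strncpy-like helper over slices (the name copy_into is made up for illustration). The lengths travel with the references, and the aliasing case from (2) cannot even be written in safe Rust:

```rust
// The length travels with each reference, so there is no separate size
// argument to get wrong, and the signature itself forbids overlapping src/dst.
fn copy_into(dst: &mut [u8], src: &[u8]) {
    let n = dst.len().min(src.len());
    dst[..n].copy_from_slice(&src[..n]);
}

fn main() {
    let mut buf = *b"hello world";
    let mut out = [0u8; 5];

    copy_into(&mut out, &buf[..5]);      // fine: two disjoint buffers
    copy_into(&mut buf[6..], b"there");  // fine: only one borrow of `buf`

    // Rejected at compile time: `buf` would be borrowed mutably and immutably
    // at once, so an aliased call cannot even be expressed in safe Rust.
    // copy_into(&mut buf[..5], &buf[6..]);   // error[E0502]

    println!("{:?} {:?}", out, &buf[..]);
}
```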
> For (2), Rust enforces this invariant via the borrow checker by disallowing (at compile-time) a shared slice reference that points to an overlapping mutable slice reference.
At least the last time I cared about this, the borrow checker wouldn't allow mutable and immutable borrows from the same underlying object, even if they did not overlap. (Which is more restrictive, in an obnoxious way.)
It is safe btw. The difference is that it returns two mutable references vs. one shared ref and one mutable ref. But as they noted, a mutable ref can always be “downgraded” into a shared ref.
Gotcha. There is a split_at_mut method that splits a mutable slice reference into two. That doesn't address the problem you had, but I think that's the best you can do with safe Rust.
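Roughly, a sketch of both sides of that (variable names are just for illustration):

```rust
fn main() {
    let mut v = [1u8, 2, 3, 4];

    // The pattern from the grandparent comment: rejected even though the
    // halves don't overlap, because both borrows go through `v` itself.
    // let head = &v[..2];
    // let tail = &mut v[2..];   // error[E0502]: cannot borrow `v` as mutable
    // tail[0] = 9;
    // println!("{:?}", head);

    // split_at_mut hands back two provably disjoint mutable halves instead.
    let (head, tail) = v.split_at_mut(2);
    tail[0] = 9;

    // And per the sibling comment, a mutable reference can always be
    // reborrowed as a shared one.
    let head: &[u8] = head;
    println!("{:?} {:?}", head, tail);
}
```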
Rust's safe subset doesn't have UB. At all. So long as you never write the "unsafe" keyword you're fine: the compiler will check that you are obeying all of the language rules at all times.
Whereas in C, oops, sorry, you broke a rule you didn't even know existed and so that's Undefined Behaviour left and right. Some of it you could argue falls into the category you're describing, where in a better world it should have been made Implementation Defined, not UB, and too bad. However lots of it is just because the language was designed a very long time ago and prioritized ease of implementation.
If you wish the language was properly defined, you should use (safe) Rust. If you just wish that when you write nonsense the compiler should somehow guess what you meant and do that, you're not actually a programmer, find a practice which suits you better - take up knitting, learn to paint, something like that.
For those you add recovery e-mails. You can easily have a Google, Microsoft and Yahoo e-mail, so having access to at least one means you can recover the rest. Yes, this increases your attack surface, but the chances remain minuscule.
Just as a note: for E2EE services that use your password to decrypt your key to decrypt your data, a recovery email often recovers your user account BUT not your data (so you may get access to a blank account). It is perfectly possible to lose access to your data, that may include the rest of your passwords, if you have not set up other recovery methods which can actually decrypt your encryption keys, and rely on a recovery email or phone.
Similarly, I wish open-source devs who want to extend and improve existing tech would take a page out of Microsoft's book and run their Embrace, Extend, Extinguish playbook. Like: "Here is my new D-Bus implementation; it has a couple of extra bells and whistles, which I need for my project, and is faster. Oh, and I have added more security. You don't have to use it right now, but some services will require it. ... Security is now mandatory. ... The protocol is now called Wire, and if you need D-Bus you can run this legacy translation layer. ... The legacy translation layer is no longer installed by default, but will be maintained for those who need it. ... It has been 30 years since anybody has needed D-Bus; we are no longer maintaining the translation layer."
Which is kind of what OP is doing, but less directly inflammatory. I wish him all the luck regardless.
One more thing: D-Bus has the concept of generic services which can be automatically started, e.g. org.freedesktop.FileManager1. When you send a command to that service, the file manager will be started if it is not already running. However, there is no mechanism for the user to select which file manager to start, so if you happen to have both KDE and GNOME installed you have a 50:50 chance of launching dolphin or nautilus. See for example: https://unix.stackexchange.com/questions/778028/set-a-specif... . It truly boggles the mind.
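For the curious, this is roughly what triggers the auto-start. A sketch assuming the zbus crate; org.freedesktop.FileManager1's ShowFolders takes an array of URIs and a startup id, and the bus activates whichever file manager installed a service file for that name, with no say from the caller:

```rust
// Sending any method call to the well-known name org.freedesktop.FileManager1
// makes the bus activate whatever file manager claims that name.
fn main() -> zbus::Result<()> {
    let conn = zbus::blocking::Connection::session()?;
    conn.call_method(
        Some("org.freedesktop.FileManager1"),
        "/org/freedesktop/FileManager1",
        Some("org.freedesktop.FileManager1"),
        "ShowFolders",
        &(vec!["file:///tmp".to_string()], ""),
    )?;
    Ok(())
}
```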
D-Bus has no concept of generic services. People are using D-Bus to get automatic startup of generic services even though it's not fit for this purpose. It's quite a different thing.
You could build a viable solution on top of D-Bus though, it's just that apparently nobody bothered so far.
Fortunately today with AI everyone CAN actually audit all the code for all the software running on their machine themselves by sending it to a black box cloud service, thereby running only software they KNOW they can trust! /s
It does seem needlessly complex. I think a better idea is to just have a type that is a pair of pointer-sized words. That pattern crops up again and again - context pointer and function pointer, array and its size, memory allocation and effective size, etc. The problem with having both pieces in separate variables is that it is very easy to lose track of what is where. If you have it in a single bundle it is a lot simpler to use. The exact design needs a lot more consideration for sure, because I would like something simpler than writing anonymous structs everywhere (which I can already do), but at the same time flexible enough for most use cases.
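For what it's worth, Rust's fat pointers are exactly this bundle, so there is at least an existence proof that the pattern works out; a quick check:

```rust
use std::mem::size_of;

fn main() {
    // A slice reference is the "pointer plus length" pair in one value...
    assert_eq!(size_of::<&[u8]>(), 2 * size_of::<usize>());
    assert_eq!(size_of::<&str>(), 2 * size_of::<usize>());
    // ...and a trait object is the "context pointer plus function pointers"
    // flavour: data pointer plus vtable pointer.
    assert_eq!(size_of::<&dyn std::fmt::Debug>(), 2 * size_of::<usize>());
    println!("fat pointers are two words each");
}
```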
Since both members of the union are effectively the exact same type, there is no issue. C99: "If the member used to access the contents of a union is not the same as the member last used to store a value, the object representation of the value that was stored is reinterpreted as an object representation of the new type". Meaning, you can initialise keyvalue and that will initialise both key and value, so writing "union slot s = {0}" initialises everything to 0. One issue is that the exact layout of bit fields is implementation defined, so if you absolutely need to know where key and value are in memory, you will have to read GCC's manual (or just experiment). Another is that you cannot take the address of key or value individually, but if your code was already using uint64_t, you probably don't need to.
Edit: Note also that you can cast a pointer to slot to a pointer to uint64_t and that does not break strict aliasing rules.