Probably not the best way to lead, considering that that phrase is the entire root of the disagreement you're chiming in on!
> but this isn't far off in regards to breaking compatibility.
I think it might be worth elaborating on why you think that change "isn't far off" being made "on a whim". At least to me, "on a whim" implies something about intent (or more specifically, the lack thereof) that the existence of negative downstream impacts says nothing about.
If anything, from what I can tell the evidence suggests precisely the opposite - that the breakage wasn't made "on a whim". The change itself [0] doesn't exactly scream "capricious" to me, and the issue was noticed before Rust 1.80.0 released [1]. The libs team discussed said issue before 1.80.0's release [2] and decided (however (un)wisely one may think) that that breakage was acceptable. That there was at least some consideration of the issue basically disqualifies it from being made "on a whim", in my view.
Your post strongly reinforces Rust's reputation as a language whose language designers are willing to break compatibility on a whim. If Rust proponents argue like this, what breakage will not be forced upon Rust users in the future?
Your post itself reinforces the OP's claim.
Edit: Seriously. At this point, it seems clear that the culture around Rust, especially as driven by proponents like you, indirectly has a negative effect on both Rust software and software security & quality overall, as seen by the bug discussed in the OP. Without your kind of post, would Ubuntu have felt less pressured to make the technical management decisions that allowed for the above bug?
> Your post strongly reinforces Rust's reputation as a language whose language designers are willing to break compatibility on a whim.
> Your post itself reinforces the OP's claim.
Again, I think it might be worth elaborating precisely what you think "on a whim" means. To me (and I would hope anyone else with a reasonable command of English), making a bad decision is not the same thing as making a decision on a whim, and you have provided no reason to believe the described change falls under the latter category instead of the former.
This new post you have made again reinforces the general notion that, yes, closer to "on a whim" than many like, the Rust community is willing to break backwards compatibility. It reflects extremely poorly on the Rust community in some people's eyes that you and other proponents appear to not only be unwilling to admit the issues, like the above issue that caused some people a lot of pain, but even directly talk around the issues.
In C and C++ land, if gcc (as a thought experiment) tried breaking backwards compatibility by changing the language, people would be flabbergasted, complain that gcc made a dialect, and switch to Clang or MSVC or fork gcc. But for Rust, Rust developers just have to suck it up if rustc breaks backwards compatibility, as Dtolnay's comment in the GitHub issue I linked indicates. If and when gccrs gets running, that might change.
Though I am beginning to worry, for the Rust specification obtained from Ferrocene might be both incomplete and basically fake, and that might make it easier for rustc and gccrs to end up as separate dialects of Rust, which would be horrible for Rust and, since in my opinion there should preferably be more viable options among systems languages, arguably horrible for the software ecosystem as well. I hope that there are plans for robust ways of preventing dialects of Rust.
You're moving the goalposts. Neither the original claim nor your previous comment in this subthread used such vague and weakening qualifiers to "on a whim".
And even those still don't say anything about what exactly you mean by "on a whim" or how precisely that particular change can be described as such, though at this rate I suppose there's not much hope in actually getting an on-point answer.
> the Rust community is willing to break backwards compatibility
Again, the fact that Rust can and will break backwards compatibility is not in dispute. It's specifically the claim that it's done "on a whim" that was the seed of this subthread.
> appear to not only be unwilling to admit the issues
I suggest you read my comment more carefully.
I also challenge you to find anyone who claims that the changes in Rust 1.80.0 did not cause problems.
> but even directly talk around the issues.
Because once again, the existence of breaking changes and/or their negative downstream impact is not what the original comment you replied to was disputing! I'm not sure why this is so hard to understand.
> In C and C++ land, if gcc (as a thought experiment) tried breaking backwards compatibility by changing the language, people would be flabbergasted, complain that gcc made a dialect, and switch to Clang or MSVC or fork gcc.
No need for a thought experiment. Straight from the GCC docs [0]:
> By default, GCC provides some extensions to the C language that, on rare occasions conflict with the C standard.
> The default, if no C language dialect options are given, is -std=gnu23.
> By default, GCC also provides some additional extensions to the C++ language that on rare occasions conflict with the C++ standard.
> The default, if no C++ language dialect options are given, is -std=gnu++17.
Also from the GCC docs [1]:
> The compiler can accept several base standards, such as ‘c90’ or ‘c++98’, and GNU dialects of those standards, such as ‘gnu90’ or ‘gnu++98’.
So not only has GCC "chang[ed] the language" by implementing extensions that can conflict with the C/C++ standards, GCC has its own dialect and uses it by default. And yet there's no major GCC fork and no mass migration to Clang or MSVC specifically because of those extensions.
And it's not like those extensions go unused either; perhaps the most well-known example is Linux, which only officially supported compilation via GCC for a long time precisely because Linux made (and makes!) extensive use of GCC extensions. It was only after a concerted effort to remove some of those GNU-isms and add support for others into Clang that mainline Clang could compile mainline Linux [2].
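For anyone who hasn't run into these GNU-isms, here is a small sketch of my own (not actual kernel code) using two of the classic extensions, statement expressions and typeof, in a kernel-style MIN macro:

    /* Sketch of my own, not kernel code: statement expressions ({ ... })
     * are a GNU C extension, and typeof was also one before C23. */
    #include <stdio.h>

    #define MIN(a, b) ({        \
        typeof(a) _a = (a);     \
        typeof(b) _b = (b);     \
        _a < _b ? _a : _b;      \
    })

    int main(void) {
        /* Builds under the default -std=gnu* dialects; a strict
         * -std=c17 -pedantic-errors build rejects the braced-group
         * expression as non-ISO. */
        printf("%d\n", MIN(3, 7));
        return 0;
    }

Code like that only builds as GNU C, which is exactly the kind of dependence the Clang-on-Linux effort had to work through.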
> I hope that there are plans for robust ways of preventing dialects of Rust.
This is not a realistic option for any language that anyone is free to implement for what I hope are obvious reasons.
Nope, I am not moving the goalposts, as is perfectly clear to you already. You are well aware that I am completely correct and that you are wrong.
> Again, the fact that Rust can and will break backwards compatibility is not in dispute. It's specifically the claim that it's done "on a whim" that was the seed of this subthread.
And the way you and the other Rust proponents directly talk around it, as you again are doing here, only worsens the situation.
> No need for a thought experiment. Straight from the GCC docs [0]:
Technically correct, but outside of extensions that have to be enabled, more or less none of that breaks any backwards compatibility. A program written in pure C or C++ ought to compile and behave exactly the same under those default dialects. The default dialects amount to more or less just a strict superset that behaves the same, like adding support for C++ "//" comments, or backporting newer C standard changes to previous versions. The only extensions that change behavior significantly, and are not just strict supersets with the same behavior, require flags to be enabled.
Thus, yet again, radically different from what the rustc developers did just last year.
Overall, your posts and the posts of your fellow Rust proponents in this submission worsen the situation both for Rust and for software overall regarding compatibility, security and safety, as the bug in the submission indicates. Imagine being so brazen and doubling down on a path that arguably led to a very public bug. I do not believe any responsible software company would want you anywhere near its code if it cared about safety and security.
> Nope, I am not moving the goalposts, as is perfectly clear to you already. You are well aware that I am completely correct and that you are wrong.
Turns out you have, more than once. I wish I didn't have to spell this out for you, but here goes one last attempt...
The original part of awesome_dude's comment that started this subthread:
> Rust still has a very "Ready to make breaking changes on a whim" reputation
Note the existence and wording of the qualifier here. The claim here is not "Ready to make breaking changes", but "Ready to make breaking changes on a whim".
The relevant response from umanwizard:
> What breaking changes has Rust made "on a whim"?
Again, note the existence and wording of the qualifier. The question here is not "What breaking changes has Rust made?", but "What breaking changes has Rust made 'on a whim'?".
Your first response:
> I don't know about "on a whim", but this isn't far off in regards to breaking compatibility.
This is the first goalpost move. You're not claiming to have an example of a breaking change "on a whim" (in fact, you explicitly distance yourself from such a claim); instead you say you have an example of a breaking change that "isn't far off" of being "on a whim". Note that this is not the same unadorned "on a whim" qualifier, as it uses the (slightly) weakening and more vague "isn't far off". How far off is it, and in what way is it not far off? You fail to elaborate on both counts.
Your next response:
> Your post strongly reinforces Rust's reputation as a language whose language designers are willing to break compatibility on a whim.
A second goalpost move. You're not using the "isn't far off" qualifier any more, and are instead using the unadorned "on a whim". Again, you fail to elaborate further on this.
And finally:
> closer to "on a whim" than many like
A third goalpost move, with "on a whim" having grown two qualifiers, neither of which has previously appeared in this subthread! Now it's neither "on a whim" nor "isn't far off" from "on a whim", but "closer to" "on a whim" "than many like".
How close is "closer to"? Who falls under "many"? How do these describe the example you provide? Who knows!
> And the way you and the other Rust proponents directly talk around it, as you again are doing here, only worsens the situation.
It's not clear to me why it's so hard to understand what this subthread was originally about, nor why you seem so insistent on refusing to actually discuss the original topic.
> Technically correct, but outside of extensions that have to be enabled, more or less none of that breaks any backwards compatibility.
This is moving the goalposts yet again. We go from what is in effect "C/C++ compilers would never break backwards compatibility by adding language extensions!" to "You're correct in that they have done it, but it's mostly not a problem".
> The default dialects amount to more or less just a strict superset that behaves the same, like adding support for C++ "//" comments, or backporting newer C standard changes to previous versions. The only extensions that change behavior significantly, and are not just strict supersets with the same behavior, require flags to be enabled.
Not only does this claim contradict the snippets I quoted earlier, it also contradicts this other snippet from the docs (emphasis added) [0]:
> On the other hand, when a GNU dialect of a standard is specified, all features supported by the compiler are enabled, even when those features change the meaning of the base standard.
And given that GCC defaults to said GNU dialects, that means that non-strict-superset features are enabled by default.
> This is moving the goalposts yet again. We go from what is in effect "C/C++ compilers would never break backwards compatibility by adding language extensions!" to "You're correct in that they have done it, but it's mostly not a problem".
Do you consider "an old program that previously didn't compile now compiles" to be a serious break in backward compatibility? I think they do not, and neither do I.
If I'm understanding you correctly, I don't, but I interpreted the GCC docs to indicate the opposite. To me, the docs indicate that it's possible (albeit unlikely) for a program that compiles under -std=c* to fail to compile under -std=gnu* due to one of those extensions that "conflict with the [C/C++] standard" and/or "change the meaning of the base standard".
> To me, the docs indicate that it's possible (albeit unlikely) for a program that compiles under -std=c* to fail to compile under -std=gnu* due to one of those extensions that "conflict with the [C/C++] standard" and/or "change the meaning of the base standard".
That's correct. There are not many of those, but they do exist. These generally mean that the GNU dialect gave some syntax a meaning back when it was still forbidden in standard C. Then standard C adopted that feature because it had been implemented in a compiler (that's how language evolution should work), but gave it slightly different semantics. Now GCC has the choice between breaking old existing programs or not exposing the standard semantics by default. They solve that by letting the user choose the language version.
An example of that is arrays of size 0 at the end of a structure. This used to be the way to declare arrays whose size can be arbitrarily large, but it became obsolete with the introduction of flexible array members in C99. If GCC only implemented standard C, then accessing any array of a declared size with an index larger than that size would be undefined behaviour. But since GCC gave that construct the semantics that flexible array members now have, before flexible array members existed, it chooses to keep those semantics instead, unless you tell it which C standard you want to use.
Actually, due to its use in popular codebases such as the Linux kernel, these semantics are (based on a heuristic) even applied to trailing arrays with sizes larger than zero.
> In the absence of the zero-length array extension, in ISO C90 the contents array in the example above would typically be declared to have a single element. Unlike a zero-length array which only contributes to the size of the enclosing structure for the purposes of alignment, a one-element array always occupies at least as much space as a single object of the type. Although using one-element arrays this way is discouraged, GCC handles accesses to trailing one-element array members analogously to zero-length arrays.
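To make the two idioms concrete, here is a small sketch of my own (not from the GCC docs or any real project) showing the pre-C99 zero-length-array extension next to the standard C99 flexible array member:

    #include <stdlib.h>

    struct gnu_packet {            /* GNU extension: trailing array of size 0 */
        size_t len;
        unsigned char data[0];     /* accepted by -std=gnu*, rejected by
                                      -std=c99 -pedantic-errors */
    };

    struct c99_packet {            /* standard C99 flexible array member */
        size_t len;
        unsigned char data[];
    };

    int main(void) {
        /* In both cases the trailing array is meant to cover whatever
         * extra storage the caller allocates past the struct itself. */
        struct c99_packet *p = malloc(sizeof *p + 16);
        if (p) {
            p->len = 16;
            p->data[0] = 0xff;
            free(p);
        }
        return 0;
    }

Under its default -std=gnu* dialects, GCC keeps treating the [0] form like a flexible array member, which is the compatibility choice described above.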
(With GNU projects, when you have questions, the best source is the official docs themselves. They are stellar and are even completely available offline on your computer in the interactive documentation system Info.)
I did take a quick glance through some of the extension docs, but it seems the particular subsection you discuss was not among the pages I happened to look at. I had never had occasion to use the GNU dialect in the past and searches weren't helping me find specific examples, so I appreciate you taking the time to elaborate!
Out of curiosity, which pages did you discover? Not as in "You hold it wrong.", but as in "Which pages does someone unfamiliar with the docs discover?".
Are you on an OS where using the GNU Info system is an option? I quite like it. Unfamiliar people are often deterred from using it, either because it is in the terminal or because they think they are looking at a pager. If it's only the latter keeping you from it, keep in mind that this is not in fact a simple paged document, but an interactive hypertext system that predates the WWW. Documents are generally structured as a tree. Use the normal cursor movement, Enter to follow links, p for previous node, n for next node, u / backspace for the up/parent node, / for text search, and i to search in the index. Use info info when you want to know more. Pressing h for help also works. (I just discovered that the behaviour of h depends on your terminal size :-) .) When you look at the GNU onlinedocs, you are looking at an HTML version of that Info document. Using Info directly is nicer, since it has native support for jumping in the doc tree and instead of relying on an external entity (like Google) to point you to the node that contains your information (Often resulting in bringing you at another version or document entirely, which can lead to confusion.) you can use the built-in index, which is maintained by the document authors, so it will be accurate.
GNU Info is in my opinion the best and fastest way to access documentation that is more than a simple reference sheet, if you don't object to leaving the Web browser. It even has C tutorials and all, completely offline.
IIRC I was basically doing a top-down search starting from the "Extensions to the C Language Family" [0] and the "Extensions to the C++ Language" [1] pages. I did come across the syntax/semantic extensions subpages you list, but I didn't comprehensively go through all the extensions.
> Are you on an OS where using the GNU Info system is an option?
Technically yes, but I admittedly have basically zero experience with using it.
> Using Info directly is nicer, since it has native support for jumping in the doc tree and instead of relying on an external entity (like Google) to point you to the node that contains your information (Often resulting in bringing you at another version or document entirely, which can lead to confusion.) you can use the built-in index, which is maintained by the document authors, so it will be accurate.
I'm not entirely confident about how helpful it'd be for someone who is less familiar with the subject material like me as opposed to someone who has a general idea of what they're looking for, but I suppose I won't know until I try it.
> More than anything, the Rust community is hyper-fixated on stability and correctness. It is very much the antithesis to “move fast and break things”.
Cargo always picks the newest version of a dependency, even if that version is incompatible with the version of Rust you have installed.
You're like "build this please", and it's like "hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad"
They finally addressed this bug -- optionally (the default is still to break the build at the slightest provocation) -- in January this year (which of course, requires you upgrade your compiler again to at least that)
What a bunch of shiny-shiny chasing idiots with a brittle build system. It's designed to ratchet forward your dependencies and throw new bugs and less-well-tested code at you. That's absolutely exhausting. I'm not your guinea pig, I want to build reliable, working systems.
There are additionally versioning standards for shared objects, so you can have two incompatible versions of a library live side-by-side on a system, and binaries can link to the one they're compatible with.
> Cargo always picks the newest version of a dependency, even if that version is incompatible with the version of Rust you have installed.
> PKG_CHECK_MODULES([libfoo], [libfoo >= 1.2.3])
This also picks the newest version that might be incompatible with your compiler, if the newer version uses a newer language standard.
> You're like "build this please", and it's like "hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad"
Also possible in the case of your example.
> What a bunch of shiny-shiny chasing idiots with a brittle build system.
Autoconf as an example of non-brittle build system? Laughable at best.
This is whataboutism to deflect from Rust's basic ethos being to pull in the latest shiny-shiny.
> This also picks the newest version that might be incompatible with your compiler, if the newer version uses a newer language standard.
It doesn't, it just verifies what the user has already installed (with apt/yum/dnf) is suitable. It certainly doesn't connect to the network and go looking for trouble.
The onus is on library authors to write standard-agnostic, compiler-agnostic headers, and that's what they do.
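As an illustration of what that can look like, here is a hypothetical libfoo.h sketch of my own (the names are invented, not from any real library) that stays usable across standards and from both C and C++:

    #ifndef LIBFOO_H
    #define LIBFOO_H

    #ifdef __cplusplus
    extern "C" {                   /* keep C linkage when included from C++ */
    #endif

    int foo_init(void);

    /* Only offer an inline helper where the standard supports it;
     * otherwise fall back to an out-of-line function. */
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    static inline int foo_twice(int x) { return 2 * x; }
    #else
    int foo_twice(int x);
    #endif

    #ifdef __cplusplus
    }
    #endif

    #endif /* LIBFOO_H */

The header promises nothing about which compiler or -std= flag the consumer uses; it adapts instead.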
> This is whataboutism to deflect from Rust's basic ethos being to pull in the latest shiny-shiny.
No. You set a bar for Cargo that the solution you picked does not reach either.
> It doesn't, it just verifies what the user has already installed (with apt/yum/dnf) is suitable.
There's no guarantee that that is compatible with your project though. You might be extra unlucky and have to bring in your own copy of an older version. Plus their dependencies.
Perfect example of the pile of flaming garbage that is C dependency "management". We haven't even mentioned cross-compiling! It multiplies all this C pain a hundredfold.
> The onus is on library authors to write standard-agnostic, compiler-agnostic headers, and that's what they do.
You're assuming that the used feature can be represented in older language standards. If it can't, you're forced to at least have that newer compiler on your system.
> [...] standard-agnostic, compiler-agnostic headers [...]
> For linking, shared objects have their [...]
Compiler-agnostic headers that get compiled to compiler-specific calling conventions. If I recall correctly, GCC basically dictates it on Linux. Anyways, I digress.
> shared objects have their own versioning to allow backwards-incompatible versions to exist simultaneously (libfoo.so.1, libfoo.so.2).
Oooh, that one is fun. Now you have to hope that nothing was altered when that old version got built for that new distro. No feature flag changed, no glibc-introduced functional change.
> hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad
If we look at your initial example again, Cargo followed your project's build instructions exactly and unfortunately pulled in a package that is for some reason incompatible with your current compiler version. To fix this you have the ability to just specify an older version of the crate and carry on.
Looking at your C example, well, I described what you might have to do and how much manual effort that can be. Being forced to use a newer compiler can be very tedious. Be it due to bugs, stricter standards adherence or just the fact that you have to do it.
In the end, it's not a fair fight comparing dependency management between Rust and C. C loses by all reasonable metrics.
I listed a specific thing -- that Rust's ecosystem grinds people towards newness, even if it goes so far as to actually break things. It's baked into the design.
I don't care that it's hypothetically possible for that to happen with C, I care that practically, I've never seen it happen.
Whereas, the single piece of software I build that uses Rust, _without changing anything_ (already built before, no source changes, no compiler changes, no system changes) -- cargo install goes off to the fucking internet, finds newer packages, downloads them, and tells me the software it could build last week can't be built any more. What. The. Fuck. Cargo, I didn't ask you to fuck up my shit - but you did it anyway. Make has never done that to me, nor has autoconf.
Show me a C environment that does that, and I'll advise you to throw it out the window and get something better.
There have been about 100 language versions of Rust in the past 10 years. There have been 7 language versions of C in the past 40. They are a world apart, and I far prefer the C world. C programmers see very little reason to adopt "newer" C language editions.
It's like a Python programmer, on a permanent rewrite treadmill because the Python team regularly abandons Python 3.<early version> and introduces Python 3.<new version> with new features that you can't use on earlier Python versions, asking how a Perl programmer copes. The Perl programmer reminds them that the one Perl binary supports and runs every version of Perl from 5.8 onwards, simultaneously, and that the idea of making all the developers churn their code over and over again to keep up with the latest versions is madness; the most important thing is to make sure old code keeps running without a single change, forever. The two people are simply on different planets.
> I don't care that it's hypothetically possible for that to happen with C, I care that practically, I've never seen it happen.
I don't think your anecdotal experience is enough to redeem the disarray that is C dependency management. It's nice to pretend though.
> and tells me the software it could build last week can't be built any more. What. The. Fuck. Cargo, I didn't ask you to fuck up my shit - but you did it anyway. Make has never done that to me, nor has autoconf.
If you didn't get my point in the previous comment, let me put it more frankly: it is a skill issue on your part if you aren't pinning your crates to specific versions yet depend on them remaining constant. This is not Cargo's fault.
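For what it's worth, a minimal Cargo.toml sketch of what that pinning looks like (the crate name and version numbers are placeholders, not taken from the example upthread):

    [package]
    name = "example"
    version = "0.1.0"
    edition = "2021"
    rust-version = "1.70"   # declared MSRV; newer Cargo releases can take this into account

    [dependencies]
    # "=1.2.3" pins an exact version; a bare "1.2.3" allows any
    # semver-compatible upgrade when the lock file is regenerated.
    some-crate = "=1.2.3"

Committing Cargo.lock has the same effect for the fully resolved dependency tree.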
> Make has never done that to me, nor has autoconf.
Yeah, because they basically guarantee nothing, nor do they let you work around any of the potential issues I've already described.
But you do get to wait for the thousandth time for it to check the size of some types. All those checks are literal proof of how horrible the ecosystem is.
> There have been about 100 language versions of Rust in the past 10 years
There are actually four editions, and they're all backwards-compatible.
> C programmers see very little reason to adopt "newer" C language editions.
This doesn't "pick" anything; it only locates and validates the version specified and installed by the user. It does not start to fetch newer versions.
I would use a newer version of C, and consider picking C++, if the choice was between C, C++, Ada, and Rust. (If pattern matching would be a large help, I might consider Rust).
For C++, there is vcpkg and Conan. While they are overall significantly or much worse options than what Rust offers according to many, in large part due to C++'s cruft and backwards compatibility, they do exist.
The way you've described both of those solutions demonstrates perfectly how C package management is an utter mess. You claim to be very familiar with the C ecosystem, yet you describe them only by their own descriptions. Have you not once seen them in use? Both of those are also (only) slightly younger than Rust, by the way.
So after all these decades there's maybe something vaguely tolerable that's also certainly less mature than what even Rust has. Congrats.
You might be mixing up who you are replying to. I never claimed that C and C++ package management are better than what Rust offers overall. In some regards, Cargo is much better than what is offered in C and C++. I wouldn't ascribe that to a mess, more to the difficulty of maintaining backwards compatibility and handling cruft. However, I know of people that have had significant trouble handling Rust. For instance, the whole debacle around the inclusion of Rust in bcachefs and handling it in Debian.
They did not have an easy time including Rust software as I read it. Maybe just initial woes, but I have also read other descriptions of distribution maintainers having trouble with integrating Rust software. Dynamic binding complaints? I have not looked into it.
It received a lot of attention and "visibility" because it caused a lot of pain to some people. I am befuddled why you would wrongly attempt to dismiss this undeniable counter-example.
Somebody is attempting to characterize the Rust community in general as being similar to other programming communities that value velocity over stability, such as the JS ecosystem and others.
I’m pointing out that incidents such as this are incredibly rare, and extremely controversial within the community, precisely because people care much more about stability than velocity.
Indeed, the design of the Rust language itself is in so many ways hyper-fixated on correctness and stability - it’s the entire raison d’etre of the language - and this is reflected in the culture.
Comparing with the JS ecosystem is very telling. Some early Rust developers come from the JS ecosystem (especially at Firefox), and Cargo takes inspiration from it, like with lock files. But the JS ecosystem is a terrible baseline to compare against regarding stability. Comparing a language's stability with the JS ecosystem says very little. You should have picked a systems language to compare with.
And your post is itself a part of the Rust community, and it is itself an argument against what you claim in it. If you cannot or will not own up to the 1.80 time crate debacle, or proactively mention it as a black mark that weighs on Rust's conscience and acknowledge that it will take time to rebuild trust and confidence in Rust's stability because of it, well, your priorities, understood as the Rust community's priorities, are clear, and in practice they do not lie with stability, safety and security, nor with being forthcoming.
And rustc uses LLVM, and has had several bugs as well, whether related to LLVM or just due to itself. But what I linked was intentional breakage, and it caused some people a lot of pain.
Yea, I can think of a lot of intentional GCC breakages as well. Especially ones related to optimizations. If we wrote an article for every one you'd never hear the end of it.
Did they change the language? GCC is not meant to change the C or C++ languages (unless the user uses some flag to modify the language); there is an ISO standard that they seek to be compliant with. rustc, on the other hand, only somewhat recently got a specification or something from Ferrocene, and that specification looked lackluster and incomplete when I last skimmed through it. And rustc does not seem to be developed against the official Rust specification.
That's not what you asked, though; these were intentional breakages, language standard or not.
In any case though, bringing up a language specification as an example of maturity is such a massive cop-out considering the amount of UB in C and C++. It's not like it gives you good stability or consistency.
> there is an ISO standard that they seek to be compliant with
You can buy RM 8048 from NIST; is that the "culture" of stability you have in mind?
If breakage is not due to a language change, and the program is fully compliant with the standard, and there is no issue in the standard, then the compiler has a bug and must fix that bug.
If breakage is due to a language change, then even if a program is fully compliant with the previous language version and the programmer did nothing wrong, the program is still the one that has a bug. In many language communities, language changes are therefore handled with care, and changing the language version is generally set up to be a deliberate action, at least where backwards compatibility would break.
I do not see how it would be possible for you not to know that I am completely right about this and that you are completely wrong. For there is absolutely no doubt that that is the case.
> In any case though, bringing up a language specification as an example of maturity is such a massive cop-out considering the amount of UB in C and C++.
> If breakage is not due to a language change, and the program is fully compliant with the standard, and there is no issue in the standard, then the compiler has a bug and must fix that bug.
There are almost no C programs without UB. So a lot of what you would call "compiler bugs" are entirely permitted by the standard. If you say "no true C program has UB" then of course, congrats, your argument might be correct in some respects. But that's not really the case in practice, and your language standard provides shit in terms of practical stability and cross-compatibility in compilers.
> I do not see how it would be possible for you not to know that I am completely right about this and that you are completely wrong. For there is absolutely no doubt that that is the case.
> There are almost no C programs without UB. So a lot of what you would call "compiler bugs" are entirely permitted by the standard. If you say "no true C program has UB" then of course, congrats, your argument might be correct in some respects. But that's not really the case in practice, and your language standard provides almost no practical stability nor good cross-compatibility in compilers.
If the compiler optimization is compliant with the standard, then it is not a compiler bug. rustc developers have the same expectation when Rust developers mess up using unsafe, though the rules might be less defined for Rust than for C and C++, worsening the issue for Rust.
I don't know where you got the idea that "almost no C programs [are] without UB". Did you get it from personal experience working with C and you having trouble avoiding UB? Unless you have a clear and reliable statistical source or other good source or argument for your claim, I encourage you to rescind that claim. C++ should in some cases be easier to avoid UB with than C.
> I don't know where you got the idea that "almost no C programs [are] without UB". Did you get it from personal experience working with C and you having trouble avoiding UB? Unless you have a clear and reliable statistical source or other good source or argument for your claim, I encourage you to rescind that claim. C++ should in some cases be easier to avoid UB with than C.
From the fact that a lot of compilers can and do rely on UB to do certain optimizations. If UB wasn't widespread, they wouldn't have those optimization passes. You not knowing how widespread UB is in C and C++ codebases is very telling.
You're however absolutely free to find me one large project that will not trigger "-fsanitize=undefined" for starters. (Generated codebases do not count though.)
> From the fact that a lot of compilers can and do rely on UB to do certain optimizations.
Your understanding of C, C++ and Rust appears severely flawed. As I already wrote, rustc also uses these kinds of optimizations. And the optimizations do not rely on UB being present in a program, but on UB being absent from it, and it is the programmer's responsibility to ensure the absence of UB, also in Rust.
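A tiny C example of my own to illustrate the distinction: the optimizer does not need UB to be present anywhere; it only needs the licence to assume it never happens.

    /* Sketch, not from this thread: signed overflow is undefined, so the
     * compiler may assume it never occurs. */
    int always_true(int x) {
        /* GCC and Clang at -O2 typically fold this body to "return 1":
         * in any UB-free execution, x + 1 cannot wrap around, so the
         * comparison must hold. */
        return x + 1 > x;
    }

    int main(void) {
        return always_true(42) ? 0 : 1;   /* well-defined call; exits with 0 */
    }

rustc, via LLVM, leans on exactly the same kind of assumption around the invariants that unsafe code promises to uphold.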
Do you truly believe that rustc does not rely on the absence of UB to do optimizations?
Are you a student? Have you graduated anything yet?
> You're however absolutely free to find me one large project that will not trigger "-fsanitize=undefined" for starters. (Generated codebases do not count though.)
You made the claim, so the burden of proof is on you. Though your understanding appears severely flawed, and you need to fix that understanding first.
> Difficult to understand or being unsafe does not make unsafe Rust worse than C. It's an absurd claim.
Even the Rust community at large agrees that unsafe Rust is more difficult than C and C++. So you are completely wrong, and I am completely right, yet again.
> Relying on absence of UB is not the same as relying on existence of UB. I'm not surprised however that you find this difference difficult to grasp.
You are responding with complete inanity, instead of fixing your understanding of a subject that you obviously do not understand well and that, as you are well aware, I understand much better than you do.
You are making both yourself and your fellow Rust proponents look extremely bad. Please do not burden others with your own lack of competence and knowledge. Especially in a submission that is about a bug caused by Rust software.
I don't know about "on a whim", but this isn't far off in regards to breaking compatibility. And it caused some projects, like Nix, a lot of pain.
https://github.com/rust-lang/rust/issues/127343
https://devclass.com/2024/08/19/rust-1-80-0-breaks-existing-...