
The difference is every line of C can do something wrong while very few lines of Rust can. It's much easier to scrutinize a small well contained class with tools like formal methods than a sprawling codebase.


Only if you limit "wrong" to "memory safe", and also ignore that unsafe parts violating invariants can make safe parts of Rust wrong.


> Only if you limit "wrong" to "memory safe"

Yes, because this is a discussion about the value of "unsafe", so we're only talking about the wrongs that are enabled by "unsafe".

> and also ignore that unsafe parts violating invariants can make safe parts of Rust wrong

If I run a line of code that corrupts memory, and the program crashes 400 lines later, I don't say the spot where it crashes is wrong, I say the memory corrupting line is wrong. So I disagree with you here.


Not wanting to talk about an argument does not invalidate it.

Regarding the second point: yes, you can then blame the "unsafe" part but the issue is that the problem might not be so localized as the notion of "only auditing unsafe blocks is sufficient" implies. You may need to understand the subtle interaction of unsafe blocks with the rest of the program.


Unsafe blocks have to uphold their invariants while accepting any possible input that safe code can give them. Any subtle interactions enabled by "unsafe" need to be part of the invariants. If they don't do that, it's a bug in the unsafe code, not the safe code using it.

If done properly, you can and should write out all the invariants, and a third party could create a proof that your code upholds them and they prevent memory errors. That involves checking interactions between connected unsafe blocks as a combined proof, but it won't extend to "the rest of the program" outside unsafe blocks.
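As a concrete (and hypothetical, not from any particular crate) sketch of "writing out the invariants": the convention is a `# Safety` doc comment stating the contract on the unsafe function, and the safe wrapper's proof obligation is to establish that contract before entering the `unsafe` block.

```rust
/// Returns the element at `index` without bounds checking.
///
/// # Safety
/// Callers must guarantee `index < slice.len()`; otherwise this is
/// undefined behaviour.
unsafe fn get_unchecked_demo(slice: &[u32], index: usize) -> u32 {
    // SAFETY: the caller upholds `index < slice.len()` per the contract above.
    unsafe { *slice.get_unchecked(index) }
}

/// Safe wrapper: the invariant is verified here, so no caller of
/// `get_checked` can trigger UB no matter what arguments it passes.
fn get_checked(slice: &[u32], index: usize) -> Option<u32> {
    if index < slice.len() {
        // SAFETY: we just verified the invariant the unsafe fn requires.
        Some(unsafe { get_unchecked_demo(slice, index) })
    } else {
        None
    }
}

fn main() {
    let data = [10, 20, 30];
    assert_eq!(get_checked(&data, 1), Some(20));
    assert_eq!(get_checked(&data, 5), None);
    println!("ok");
}
```

A third-party auditor only has to match each `// SAFETY:` comment against the `# Safety` contract it discharges; the rest of the program never enters the proof.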


> the problem might not be so localized as the notion of "only auditing unsafe blocks is sufficient" implies

It depends on what you take "problem" to mean. An unsafe function needs someone to write unsafe in order to call it, and it's on that calling code to make sure the conditions needed to call the unsafe function are met.

If that function itself is safe, but still lets you trigger the unsafe function unsafely? Then that function, which had to write 'unsafe', has a bug: either it's not upholding the preconditions of the unsafe function it's calling, or it _can't_ uphold the preconditions without its own callers also being in on it, in which case it needs to be an unsafe function itself (and its design should be reconsidered).

In this way, you'll always find unsafe 'near' the bug.


In other words, somebody made an error somewhere.


You're thinking of C; Rust forced that somebody to write unsafe near it to create the bug.


The bug that violates the assumptions required for the safety of an unsafe block can be elsewhere. One can hope that it is near the block, but there is nothing in Rust enforcing this.


When you write "unsafe", you are promising to the compiler that the unsafe code enforces the assumptions it is making.

Unsafe code needs to keep its assumption-laden variables private, and it needs to verify the parameters that safe code sends it. If it doesn't do those things, it's breaking that promise.


Unsafe blocks have a specific set of requirements they have to abide by.

Assuming they successfully do so, it is then guaranteed that no safe code is able to trigger undefined behaviour by calling the unsafe code.

Importantly, this can be checked without ever reading any of the safe code.
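A minimal sketch of why the safe code never needs to be read (hypothetical type, not the crate under discussion): keep the invariant-laden fields private and make the only constructor establish the invariant, so the audit is confined to this module.

```rust
// Hypothetical example: `unsafe` indexing relies on the invariant
// `data.len() == width * height`.
pub struct Grid {
    // Private fields: safe code outside this module cannot break the invariant.
    data: Vec<u8>,
    width: usize,
    height: usize,
}

impl Grid {
    /// The only constructor; it rejects dimensions that would violate
    /// the invariant, including multiplication overflow.
    pub fn new(width: usize, height: usize) -> Option<Grid> {
        let len = width.checked_mul(height)?;
        Some(Grid { data: vec![0; len], width, height })
    }

    pub fn get(&self, x: usize, y: usize) -> Option<u8> {
        if x < self.width && y < self.height {
            // SAFETY: x < width and y < height, and `new` guarantees
            // data.len() == width * height, so the index is in bounds.
            Some(unsafe { *self.data.get_unchecked(y * self.width + x) })
        } else {
            None
        }
    }
}

fn main() {
    let grid = Grid::new(3, 2).expect("small dimensions never overflow");
    assert_eq!(grid.get(2, 1), Some(0));
    assert_eq!(grid.get(3, 0), None);
    // Dimensions whose product would overflow are rejected up front.
    assert!(Grid::new(2, usize::MAX / 2 + 1).is_none());
    println!("ok");
}
```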


Let's discuss this example:

https://github.com/ejmahler/transpose/blob/e70dd159f1881d86a...

The code is buggy. Where is the bug?


The most common bug in that type of code is mixing up x and y, or width and height somewhere in your loops, or maybe handling partial blocks. It's not really what Rust aims to protect against, though bounds checking is intended to be helpful here.

I don't get the arguments here. In practice, Rust lowers the risk of most of your codebase. Yeah, it doesn't handle every logic bug, but mostly you can code with confidence, and only pay extra attention when you're coding something intricate.

A language which catches even these bugs would be incredible, and I would definitely try it out. Rust ain't that language, but it still does give you more robust programs.


The issue is a memory safety issue, which Rust aims to protect against.

But I am not saying Rust is bad. My issue is the completely unreasonable exaggeration in the propaganda: "C is completely dangerous and Rust is perfectly safe". And then you discuss and end up at "Rust does not protect against everything, but it is still better", which could be the start of a reasonable discussion of how much better it actually is.


> "C is completely dangerous and Rust is perfectly safe"

Nobody in this conversation said that.

If you're actually continuing an argument from somewhere else you should save everyone a lot of time and say so up front, not 10 comments in.


The start of the thread was "The difference is every line of C can do something wrong while very few lines of Rust can," which is an exaggeration of exactly this kind.


yeah well quote that line then


The code uses `unsafe` blocks to call `unsafe` functions that have the documented invariant that the parameters passed in accurately describe the size of the array. However, this invariant is not necessarily held if an integer overflow occurs when evaluating the `assert` statements -- for example, by calling `transpose(&[], &mut [], 2, usize::MAX / 2 + 1)`.

To answer the question of "where is the bug" -- by definition, it is where the programmer wrote an `unsafe` block that assumes an invariant which does not necessarily hold. Which I assume is the point you're trying to make -- that a buggy assert in "safe" code broke an invariant assumed by unsafe code. And indeed, that's part of the danger of `unsafe` -- by using an `unsafe` block, you are asserting that there is no possible path that could be taken, even by safe code you're interacting with, that would break one of your assumed invariants. The use of an `unsafe` block is not just an assertion that the programmer has verified the contents of the block to be sound given a set of invariants, but also that any inputs that go into the block uphold those invariants.

And indeed, I spotted this bug by thinking about the invariants in that way. I started by reading the innermost `unsafe` functions like `transpose_small` to make sure that they can't ever access an index outside of the bounds provided. Then, I looked at all the `unsafe` blocks that call those functions, and read the surrounding code to see if I could spot any errors in the bounds calculations. I observed that `transpose_recursive` and `transpose_tiled` did not check to ensure the bounds provided were actually valid before handing them off to `unsafe` code, which meant I also had to check any safe code that called those functions to see how the bounds were calculated; and there I found the integer overflow.

So you're right that this is a case of "subtle interaction of unsafe blocks with the rest of the program", but the wonderful part of `unsafe` is that you can reduce the surface area of interaction with the rest of the program to an absolute minimum. The module you linked exposes a single function with a public, safe interface; and by convention, a safe API visible outside of its module is expected to be sound regardless of the behavior of safe code in other modules. This meant I only had to check a handful of lines of code behind the safe public interface where issues like integer overflows could break invariants. Whereas if Rust had no concept of `unsafe`, I would have to worry about potentially every single call to `transpose` across a very large codebase.
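The overflow can be reproduced in miniature (a sketch, not the actual transpose code; `wrapping_mul` stands in for the plain `*` that wraps in release builds), along with the fix that would have kept the invariant airtight:

```rust
// A miniature of the buggy size check: the product wraps on overflow,
// so the check passes for an empty slice with nonsense dimensions.
fn buggy_check(len: usize, width: usize, height: usize) -> bool {
    len == width.wrapping_mul(height)
}

// Overflow-proof version: `checked_mul` returns None on overflow, so
// bogus dimensions can never masquerade as a valid size.
fn sound_check(len: usize, width: usize, height: usize) -> bool {
    width.checked_mul(height).map_or(false, |n| n == len)
}

fn main() {
    // The inputs from the comment above: 2 * (usize::MAX / 2 + 1)
    // wraps around to 0, matching the empty slice's length.
    let (w, h) = (2usize, usize::MAX / 2 + 1);
    assert!(buggy_check(0, w, h));
    // The checked version rejects the same input.
    assert!(!sound_check(0, w, h));
    println!("ok");
}
```

With the `sound_check` style of validation at the safe boundary, the handful of lines you describe auditing would each discharge their invariant locally.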


I agree with what you write. Also, please note that I am not saying unsafe blocks are a bad idea; in fact, I think they are a great idea. But note that people run around saying "it is sufficient to audit unsafe blocks" when they really should say "audit unsafe blocks and carefully analyze all logic elsewhere that may lead to a violation of their assumptions". You could argue "this is what they mean", but IMHO it is not quite the same thing, and it is part of the usual exaggeration of the benefits of Rust safety, which I believe to be dangerously naive.


It's more like "audit unsafe and make sure it's impossible for safe code elsewhere to lead to a violation of its assumptions".

If you need to look at the safe code that calls into you when making your safety proof, then your unsafe code is incorrect and should immediately fail the audit.

Treat external safe code as unknown and malicious. Prove your unsafe code is correct anyway.


The goal when writing unsafe blocks is that no call can ever lead to a violation, not "let's silently load all the footguns".



