I do indeed like Rust's approach a lot more than Go's. What I still like less is that it gives the impression that it's even possible to define functions that cannot fail. This is not true. One just has to look at how runtimes deal with stack overflow errors to see how the good old Java RuntimeException creeps back in in various forms (e.g. panics), because checked exceptions and their recent incarnation as error values are a leaky abstraction.
Rust makes a distinction between recoverable and unrecoverable errors. Recoverable errors are the E in Result<T, E>. You can take action and recover, depending on what kind of E it is.
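For example, a minimal sketch (read_config and the file name are made up for illustration):

```rust
use std::fs;
use std::io;

// Reading a config file can fail recoverably, so the failure is
// surfaced as the E in Result<T, E>. read_config is a made-up helper.
fn read_config(path: &str) -> Result<String, io::Error> {
    fs::read_to_string(path)
}

fn main() {
    // The caller inspects E and decides how to recover.
    match read_config("app.toml") {
        Ok(contents) => println!("loaded {} bytes", contents.len()),
        Err(e) if e.kind() == io::ErrorKind::NotFound => {
            println!("no config file, falling back to defaults")
        }
        Err(e) => eprintln!("could not read config: {e}"),
    }
}
```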
Unrecoverable errors are things like stack overflows or out of bounds array access. There is no reasonable way to soldier on after this, so the program should just end. Trying to continue the program in such situations only leads to pain, like out-of-bounds array accesses that let you read unrelated memory.
But it’s still an evolving area. For example, failure to allocate memory - is that recoverable or unrecoverable? Initially it was thought that it was unrecoverable, and programs would panic if memory failed to allocate. This seemed reasonable, until folks tried to use Rust within the Linux kernel. Within the kernel, failure to allocate memory is recoverable. Rust is evolving the semantics here.
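You can see that evolution in the standard library: Vec::try_reserve (stable since Rust 1.57) surfaces allocation failure as a recoverable Result instead of aborting. A minimal sketch (grow_buffer is a made-up helper):

```rust
use std::collections::TryReserveError;

// try_reserve reports allocation failure as a recoverable Err instead
// of aborting the process, which is what kernel-style code needs.
fn grow_buffer(buf: &mut Vec<u8>, additional: usize) -> Result<(), TryReserveError> {
    buf.try_reserve(additional)?; // Err on allocation failure, no panic/abort
    // Safe to extend now: the capacity was already reserved above.
    buf.extend(std::iter::repeat(0).take(additional));
    Ok(())
}
```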
All this to say: yes, Rust does allow you to define functions that either fail in a recoverable way, in which case the calling function should handle it, or fail in an unrecoverable way, in which case there's nothing the calling function can do to recover. Thankfully, panics in third-party code are relatively rare, so the latter doesn't come up much in practice.
> Unrecoverable errors are things like stack overflows or out of bounds array access. There is no reasonable way to soldier on after this, so the program should just end
No, I wholeheartedly disagree with this. It's the equivalent of exit(1) some way down the stack. What's recoverable or not depends on the use case, and is a decision to be made by the caller of a function, not the implementor.
GP might have been referring to undefined/invalid behaviour (whether in the language or in some OS syscall or whatever). After the demons came out of your nose you can never fix the problem, so there is no point trying to handle the error.
Otherwise I agree with you that library code should not fail/crash/exit(1) just because of some judgement about recoverability, and ought to clean up after itself before passing control back to the caller. If the user wants to fix some ENOSPC deep in my library by shelling out to "rm -rf /" and then trying again, that's fine by me, and this should be reflected in the API.
GP might have meant undefined behavior, but specifically mentioned stack overflows and out of bounds array access as unrecoverable errors. These sound brutal, but are in fact anything but undefined. Proper handling is expected in the large class of applications that run as servers.
> it gives the impression that it's even possible to define functions that cannot fail
Do you mean that e.g. an out-of-bounds access will panic? If that's the case, you can always access arrays/slices through checked methods like get, which return an Option instead of panicking. But it would be a PITA if you couldn't skip that.
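For example, with slices (get is the checked counterpart of indexing):

```rust
fn main() {
    let xs = [10, 20, 30];

    // Unchecked indexing panics (unrecoverably) when out of bounds:
    // let boom = xs[5];

    // Checked access returns an Option, so no panic is possible:
    match xs.get(5) {
        Some(v) => println!("got {v}"),
        None => println!("index 5 is out of bounds, handled gracefully"),
    }
}
```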
That's a very specific case, and one that could be handled non-trivially.
Usually your HTTP framework will already have this implemented, i.e. a panic in a request handler will be "caught", converted to some 500 response, and should not affect other requests.
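Roughly what such frameworks do under the hood, sketched with std::panic::catch_unwind (the handler and the 500 string are made up, not any real framework's API):

```rust
use std::panic;

// A made-up request handler that may panic on out-of-bounds indexing.
fn handle_request(path: &str) -> String {
    let nums = [1, 2, 3];
    format!("value: {}", nums[path.len()])
}

fn main() {
    // Catch the unwinding panic and translate it into a 500 response,
    // so other requests keep being served. (The default panic hook
    // still prints the panic message to stderr.)
    let response = panic::catch_unwind(|| handle_request("/some/long/path"))
        .unwrap_or_else(|_| "HTTP 500 Internal Server Error".to_string());
    println!("{response}");
}
```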
My point is that this is in no way different from any other class of errors, _except_ in those cases where it is. It's practical to assume all errors are handled like this, because this catch-all needs to exist anyway. And unless you have _very specific needs_, this can be automated.
I think conflating these two into one paradigm is worse. The catch-all (exceptions) style is nice only for a few very specific cases, like the request handler example. Everywhere else I want to either bubble up (like exceptions, but Result<> and ? sugar is as good or better), OR I want to handle the error. For the latter case, exceptions are not good at all.
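To make the two cases concrete (parse_port is a made-up helper):

```rust
use std::num::ParseIntError;

// Bubbling up: ? propagates the Err to the caller, like an exception,
// but visible in the function's signature.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port: u16 = s.trim().parse()?; // bubbles up on failure
    Ok(port)
}

fn main() {
    // Handling: the caller pattern-matches instead of writing
    // try/catch control flow around the call site.
    match parse_port("8080") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```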
IMHO, quite often when you're tempted to handle an error, you're either wrong to do so, or you're in some kind of infrastructure glue code: a request handler, a task executor, a strategy chain, a retry loop, you name it. And that code needs to deal with both classes of errors anyway to be bug-free.
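A retry loop is a good illustration of such glue - one of the few places that legitimately handles errors, here by just trying again (with_retries is a made-up helper):

```rust
// Made-up infrastructure glue: retry a fallible operation a fixed
// number of times, handling Err simply by trying again.
fn with_retries<T, E>(mut f: impl FnMut() -> Result<T, E>, attempts: u32) -> Result<T, E> {
    let mut last = f();
    for _ in 1..attempts {
        if last.is_ok() {
            break;
        }
        last = f();
    }
    last
}

fn main() {
    let mut n = 0;
    // A flaky operation that succeeds on the third attempt.
    let result = with_retries(|| {
        n += 1;
        if n < 3 { Err("flaky") } else { Ok(n) }
    }, 5);
    println!("{result:?}"); // prints Ok(3)
}
```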
From those examples, I think only the request handler and the task executor should deal with panics - things that start threads or processes.
Other points of "catch all errors" don't need that. And then there are a lot of places where you do handle errors, if they are conceptually a Result. I know you can just catch SpecificError, but the ergonomics are just horrible in terms of control flow.
> there are a lot of places where you do handle errors, if they are conceptually a Result
Yet what's conceptually a result lies in the eye of the beholder, and IMHO should not be dictated by the API designer. Rust's ? is a step in the right direction, but since you care about specific errors in maybe 0.1% of invocations tops (in production code), and that's a stretch, I'd argue the ? should actually work the other way round. And if the mechanism does not provide a way to select specific errors (such as a proper catch clause), then the errors it exposes as a result should include runtime errors as well.