The feature itself seems reasonable and useful, especially if most of your tooling is written in Go as well. But this part caught my attention:
> user defined tools are compiled each time they are used
Why compile them each time they are used? Assuming you're compiling them from source, shouldn't they be compiled once and then have the `go tool` command reuse the same binaries? I don't see why it compiles them at the time you run the tool rather than when you're installing dependencies. The benchmarks show a significant latency increase. The author also provided a different approach which doesn't seem to have any obvious downsides, besides not sharing dependency versions (which may or may not be a good thing - that's a separate discussion IMO).
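For context, Go 1.24 tracks these tools with a `tool` directive in go.mod, so their versions are pinned alongside your regular dependencies. A minimal sketch (the module path is just an example; `stringer` is a real tool you could add with `go get -tool golang.org/x/tools/cmd/stringer`):

```
module example.com/myproject

go 1.24

tool golang.org/x/tools/cmd/stringer

require golang.org/x/tools v0.28.0 // indirect
```

So the source and version are fixed at dependency time; what's being debated here is only whether the *build* happens once or on every invocation.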
Ok, I'm trying to suss out what this means, since `go tool` didn't even exist before 1.24.
The functionality of `go tool` seems to build on the existing `go run`, and the latter already uses the same package compilation cache as `go build`. Subsequent invocations of `go run X` were notably faster than the first invocation long before 1.24. However, the final linked executable was never cached before; as of 1.24 it is, which benefits both `go run` and `go tool`.
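As a rough illustration of the pre-1.24 behavior (the package path here is hypothetical):

```
$ go run example.com/cmd/mytool   # first run: compiles all packages, links, executes
$ go run example.com/cmd/mytool   # later runs: packages come from the shared build cache,
                                  # but the link step still re-ran every time
$ go clean -cache                 # wipes that cache for go build / go run alike
```

What 1.24 adds is caching the linked binary itself, so repeat invocations can skip the link step too.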
However, this raises the question: do the times reported in the blog reflect the benefit of executable caching, or were they collected before that feature was implemented (and/or is it not working properly)?
The tl;dr:
1. The new caching impacts the new `go tool` and the existing `go run`.
2. It has a massive benefit.
3. `go tool` is a bit faster than `go run` due to skipping some module resolution phases.
4. Caching is still (relatively) slow for large packages.
Awesome! It's interesting that there's still (what feels like) a lot of overhead from determining whether the thing is cached or not. Maybe that will be improved before the release later this month. I'm wondering too if that's down to go.mod parsing and/or go.sum validation.
I'd also note the distinction between `go run example.com/foo@latest` (which, as you note, must make some network calls to determine the latest version of example.com/foo) and plain `go run example.com/foo` (no `@`), which will just use the version of foo that's in go.mod -- presumably `go tool foo` is closer to the latter.
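Side by side (module path hypothetical), the three forms differ mainly in how the version is resolved:

```
$ go run example.com/foo@latest   # queries the module proxy for "latest", then builds and runs
$ go run example.com/foo          # uses the version already pinned in go.mod; no version query
$ go tool foo                     # 1.24: runs the tool declared via a `tool` directive in go.mod
```

So none of the per-invocation overhead in the last two forms should be network-related; it's cache lookup, go.mod/go.sum checking, and (pre-1.24) relinking.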