I think this is the standard algorithm and it's absolutely terrible. People poke at things, which amounts to a linear search across a possibly huge system. Even if the "guess" is intelligent, you can't really trust it: if you actually fully understood the system, you'd know what's wrong and you wouldn't be debugging.
Do a bisect instead. The complexity is O(log n). It's probably slower than guessing right on the very first try, but that matters less: debugging time is dominated by the worst cases.
1. Do something you're 90% sure will work that's on the path towards your actual goal.
2. If it works, move forward in complexity towards your actual goal. Else, move halfway back to the last working thing.
3. When you've trapped the bug between working and non-working to the point that you understand it, stop.
"The weather data isn't getting logged to the text files. Can I ping the weather servers? Yes. Can I do a get on the report endpoint? Yes. Can I append to a text file? Yes. Can I append a line to the weather log file? No. Ok, that narrows it a lot."
The real point of this is that you should spend most of your time with working code, not non-working code. You methodically increment the difficulty of tasks. This is a much more pleasant experience than fucking around with code that just won't work and you don't know why. Most importantly, it completely avoids all those times you wasted hours chasing a bug because of one tiny wrong assumption. It's sorta like TDD but without the massive test-writing overhead.
A modification for the disciplined: give yourself one (1) free pass at just taking a stab at the answer. This saves time on easy fixes. "Oh, it must have been that the country setting in the config file is off." Give it a single check. And if it's not that, go back to slow-and-steady mode, because you don't understand the system as well as you thought.
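The discipline in code form is just: one cheap check of the hunch, then straight back to the ladder. Another sketch; the config path, key, and expected value are invented for the example:

    import json

    def hunch_is_right(config_path: str = "config.json") -> bool:
        """The single free guess: maybe the country setting is just wrong."""
        try:
            with open(config_path) as f:
                return json.load(f).get("country") == "US"
        except (OSError, ValueError):
            return False

    def methodical_bisect() -> None:
        """Fall back to the slow-and-steady ladder of checks sketched above."""
        print("Free pass spent; back to bisecting between working and non-working.")

    if hunch_is_right():
        print("Lucky: it really was the config. Fix it and move on.")
    else:
        methodical_bisect()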
You're basically describing the scientific method, particularly the practical application of Occam's Razor: start from simple theories and work your way up toward more complex ones until the theory is just complex enough to explain the system behavior you're trying to understand.
This is normally described as a way to write code, but it works for debugging if you can modify the system or at least give arbitrary inputs. It doesn’t really apply to “read only” debugging.