
This has become a personal debate for me recently, ever since I learned that there are several software luminaries who eschew debuggers (the equivalent of taking an oscilloscope probe to a piece of electronics).

I’ve always fallen on the side of debugging being about “isolate as narrowly as possible” and “don’t guess what’s happening when you can KNOW what’s happening”.

The argument against this approach is that speculating and statically analyzing a system reinforces that system in your mind and makes you more effective overall in the long run, even if it takes longer to isolate a single defect.

I’ll stick with my debuggers, but I do agree that you can’t throw the baby out with the bathwater.

The modern extreme is asking Cursor’s AI agent “why is this broken?” I recently saw a relatively senior engineer joining a new company lean too heavily on Cursor to understand the company’s systems. They burned a lot of cycles getting poor answers. I think this is a far worse extreme.


For me, it's about being aware of the entire stack, and deliberate about which possibilities I am downplaying.

At a previous company, I was assigned a task to fix requests that were timing out for certain users. We knew those users had far more data than the norm, so the team lead created a ticket that was something like "Optimize SQL queries for...". Turns out the issue was that our XML transformation pipeline (I don't miss this stack at all) was configured to spool to disk for any messages over a certain size.

Since I started by benchmarking the query, I realized fairly quickly that the slowness wasn't in the database; since I was familiar with all layers of our stack, I knew where else to look.

Instrumentation is vital as well. If you can get metrics and error information without having to gather and correlate it manually, it's much easier to gain context quickly.


To me, it's the method for deciding where I put the oscilloscope/debugger.

Without the speculation, how do you know where to put your breakpoint? If you have a crash, cool, start at the stack trace. If you don't crash but something is wrong, you have a far broader scope.

The speculation makes you think about what could logically cause the issue. Sometimes you can skip the actual debugging and logic your way to the exact line without much wasted time.


It's probably different depending on how much observability you have into the system.

Hardware, at least to me, seems impossible to debug from first principles: too many moving parts, driven by phenomena too tiny to see and coming from multiple different vendors.

Software is at a higher level of abstraction, and you can associate a bug with some lines of code. Of course, this means you're writing way more code, so the eventual complexity can grow without bound - like having 4 different software systems with subtly different invariants that cause a program to crash in a specific way.

Math proofs are the extreme end of this - technically all the words are there! Am I going to understand all of them? Maybe, but definitely not on the first pass.

Meh, you can make the argument that if all the thinking stays in the abstract it becomes hard to debug again, which is fair.

That doesn't mean any one is harder than the other, and obviously different problems within each discipline come with different levels of observability. But yeah, I don't know.


Can anyone explain how they maintain backwards compatibility on formats like this when adding features? I assume there are byte ranges managed in the format, but with things like compression, wouldn’t compressed images be unrenderable on clients that don’t support it? I suppose it would behoove servers to serve based on what the client would support.


In my understanding, the actual image data encoding isn't altered in this update. It only introduces an extended color space definition for the encoded data.

PNG is a highly structured file format internally. It borrows design ideas from formats like EA's Interchange File Format in that it contains lists of chunks with fixed headers encoding chunk type and length. Decoders are expected to parse them and ignore chunk types they do not support.
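
A rough sketch of what that looks like in practice (Python, standard library only; the file name and the set of "known" chunk types here are just for illustration, this is nowhere near a full decoder). Because every chunk carries its own length and CRC, a reader can skip anything it doesn't recognize:

    import struct
    import zlib

    PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"
    KNOWN_TYPES = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}  # core chunks only

    def walk_chunks(path):
        with open(path, "rb") as f:
            if f.read(8) != PNG_SIGNATURE:
                raise ValueError("not a PNG file")
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                # Each chunk: 4-byte big-endian data length, 4-byte type,
                # data, then a CRC over type + data.
                length, ctype = struct.unpack(">I4s", header)
                data = f.read(length)
                crc = struct.unpack(">I", f.read(4))[0]
                if zlib.crc32(ctype + data) != crc:
                    raise ValueError(f"bad CRC in {ctype!r} chunk")
                if ctype in KNOWN_TYPES:
                    print(f"handling {ctype.decode()} ({length} bytes)")
                else:
                    # Unknown chunk: the length/CRC framing lets us skip it safely.
                    print(f"skipping unrecognized chunk {ctype.decode()}")
                if ctype == b"IEND":
                    break

    walk_chunks("example.png")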


The Amiga was quite a platform. Glad to know that it had some long term influence.


The PNG format has chunks with types. So you can add an additional chunk with a new type and existing decoders will ignore it.

There is also some leeway for how encoding is done as long as you end up with a valid stream of bits at the end (called the bit stream format), so encoders can improve over time. This is common in video formats. I don’t know if a lossless image format would benefit much from that.


PNG is a bit unusual in that it allows a couple of alternate compressed encodings for the data that are all lossless. It is up to the encoder to choose between them (scanline by scanline, IIRC). So this encoding-algorithm leeway is implicit in a way.
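
To make that concrete, here's a rough sketch of that per-scanline choice (Python; the minimum-sum-of-absolute-differences heuristic shown is just one common approach, and the helper names are my own, not from any spec or library):

    def paeth(a, b, c):
        # Paeth predictor: pick whichever of left/up/upper-left is closest to a + b - c.
        p = a + b - c
        pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
        if pa <= pb and pa <= pc:
            return a
        return b if pb <= pc else c

    def filter_row(ftype, row, prev, bpp):
        # Apply one of the five standard PNG filters (0=None, 1=Sub, 2=Up,
        # 3=Average, 4=Paeth) to one scanline of raw bytes. `prev` is the
        # previous raw scanline (all zeros for the first row), `bpp` is bytes
        # per pixel.
        out = bytearray()
        for i, x in enumerate(row):
            a = row[i - bpp] if i >= bpp else 0    # byte to the left
            b = prev[i]                            # byte above
            c = prev[i - bpp] if i >= bpp else 0   # byte above-left
            pred = (0, a, b, (a + b) // 2, paeth(a, b, c))[ftype]
            out.append((x - pred) & 0xFF)
        return bytes(out)

    def choose_filter(row, prev, bpp):
        # Common encoder heuristic: keep the filter whose output has the
        # smallest sum of absolute values (treating filtered bytes as signed).
        def cost(data):
            return sum(v if v < 128 else 256 - v for v in data)
        best = None
        for ftype in range(5):
            filtered = filter_row(ftype, row, prev, bpp)
            c = cost(filtered)
            if best is None or c < best[0]:
                best = (c, ftype, filtered)
        return best[1], best[2]

The decoder doesn't care which filter was picked for each row - it just reads the filter byte and inverts the math - which is why encoders are free to get smarter over time without breaking existing readers.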


PNG is specifically designed to support this. Clients will simply skip chunks they do not understand.

In this case, there could be an embedded reduced color space image next to an extended color space one.


> phishing resistance (with UX that supports that) and low-level security audits of encryption software and hardware

Pardon my ignorance, but isn’t this saying “we can’t rely on reducing the likelihood of breaches, we should focus on reducing the likelihood of breaches”? Your recommendations are no more deterministic than the methods you eschew.


True, but "user secrets stolen, game over" is a much more healthy starting point than "user secrets stolen, well, maybe we can let criminals use only 10% of them by making login attempts more difficult". The latter means you can say "we reduced malicious logins by 90%" when what you are really doing is reducing all unusual logins by 90%. It's true that security audits don't guarantee success, but that percentage likelihood of security improvement comes at no cost to usability.


> Also the code should be written "for the now", not for "that future feature it would be awesome to have someday like making it compatible with every other library X, etc.".

The folksy software adage for this is YAGNI (You Ain't Gonna Need It).


Phoenix is robbing Peter to pay Paul with regards to water. Phoenix survives thanks to extreme planning measures and water diversion from the Colorado River, but that's not going to be tenable over the next 30 years. There is almost no well or groundwater, and before the constant diversion of water, the city of Phoenix consumed an order of magnitude more water than it was able to restore.

My point is, anyone who lives in an overpopulated desert and says “we don’t have a water problem” should be taken with a grain of salt.


Nor do they necessarily have the ability to do discretionary prioritization of certain bugs or features, even if they did see it.


You may not be in one job forever, and you can move between about 10,000 tech companies through a long career if you are in the Bay.


I don't completely disagree, but his point about the irrelevance of SOLID, OO, and Java in this supposedly grand new age of FP programming ignores that OO is still the pre-eminent paradigm for most applications and that Java remains one of the largest and most widely used languages in the world. Also, I would say that excitement around FP has waned more than it has for Java.


It’s the first I’d heard of this axiom. Can you give an example of that in a real architecture?

I think about a central server with multiple terminals on a network. There the node is the beefy boi and the terminals are just I/O devices. Kind of similar to what MightyApp is doing - except with fewer privacy concerns :)


https://en.wikipedia.org/wiki/End-to-end_principle

It is not an axiom though, but rather an architectural guideline that comes with its own set of trade-offs and must be applied judiciously.


But no one uses that architecture anymore. To the extent that you have a terminal on a remote server, you use it to configure the server, from your own thick client.


Is it true that chromebooks outsell other laptops?


ITT: a non-controversial opinion shared by most programmers.

Print debugging is fast in many cases and requires little mental overhead to get going.

But for some/many systems, there's a huge startup and cooldown time for their applications - and compiling in a print, deploying the service, and then running through the steps necessary to recreate a bug is a non-trivial exercise. Think remote debugging of a deployed system with a bug that requires select network and data states that are hard or impossible to replicate in local/dev.

For things like this, being able to isolate the exact point of breakage by stepping through deployed code, and doing immediate evaluation at various points to interrogate state can't be beat.

This post strikes me as either (a) a younger programmer who still thinks that tool choice is a war rather than different tools for different jobs, or (b) someone making a limp effort at stoking controversy for attention.


> I should emphatically mention: I’m not saying that print debugging is the best end state for debugging tools, far from it. I’m just saying that we should reflect deeply on why print debugging is so popular, beyond mere convenience, and incorporate those lessons into new tools.

I'm not sure what about the article makes you think either a or b. They are trying to critically examine why some people reach for print debugging first, and I think it's spot on.


Probably explains why Java has such a rich set of logging and debugging tools. Startup time, plus the fact that in many Java environments it's not obvious where stderr/stdout output even ends up :)


Or (c) someone just making comments from observed experience. There's nothing special 'senior developers' have when it comes to 'having had to compile something that takes a while' - that's the purview of everyone, or at least everyone who has worked on those larger projects. And though remotely debugging code definitely happens, it's relatively rare. This is just someone making a comment on their blog, that's it.


On the other hand, when you are working in an example like you are discussing (a service, or multiple services, which must all be deployed), it can be hard to figure out how to get the debugger attached.

It possibly depends on the kind of programming you do -- I find myself doing little bits of work on projects in many languages, so learning how to get the debugger going often takes longer than finding + fixing the bug.


In languages where you build a deeply nested call stack, advanced debugging looks more promising. But in simpler setups like ASP/PHP/JSP etc., simply printing works fine.

