This. If I'm forced to use a feature I hate because it's the only way to do something, the "ground truth" reflects that I like that feature. It doesn't tell the whole story.
Most metrics teams are reasonably competent and aware of that, excepting "growth hackers".
I haven't been in a single metrics discussion where we didn't talk about what we're actually measuring, whether it reflects what we want to measure, and how to counterbalance metrics sufficiently so we don't build yet another growth-hacking disaster.
Doesn't mean that metrics are perfect - they are in fact aggravatingly imprecise - but the ground truth is usually somewhat better than "you clicked it, musta liked it!"
At least when I worked at Uber, that wasn't really how it worked. The eng org was so big that it was nearly impossible to track all the projects people worked on, and you'd get micro-ecosystems of tools because of it.
Layoffs don't happen the same way they do in the US, at least in Germany. It's expensive to lay someone off due to dual-party notice period requirements. "At will" is a foreign concept here.
> The most underrated skill to learn as an engineer is how to document.
Document why. I can read code. I want to know _why_ this nebulous function called "invert_parameters" that is 200 lines long even exists. Which problem did you have that this function solved? Why was this problem there in the first place? Maybe write an opinion on its intended lifetime in the codebase. Hell, I write comments that apologize, just so that a future reader knows that the code I wrote wasn't meant to be great but that I was in a time crunch or a manager was breathing down my neck, or that some insane downstream/upstream thing did something... well, insane.
Paint a picture of your mindset when writing something, especially if it's non-obvious; that gives the reader additional context that isn't captured in the code itself.
Obviously this isn't the only good documentation rule, but I wish people - juniors and seniors alike - would do this more often, especially at the workplace.
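As a sketch of what such a "why" comment can look like (the function name comes from the comment above; the billing scenario and sign-flipping behavior are invented purely for illustration):

```python
def invert_parameters(params: dict) -> dict:
    # WHY: the (hypothetical) upstream billing service sends refund
    # amounts negated instead of using a separate field. Until that
    # API changes, we flip the sign here so the rest of the pipeline
    # can assume one consistent sign convention.
    # Apology: written in a time crunch; intended lifetime is "until
    # billing v2 ships", at which point this should be deleted.
    return {key: -value for key, value in params.items()}
```

The one-line implementation is trivial to read; the comment carries everything the code cannot say.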
I think the real underrated skill to learn as an engineer is how to test.
Documentation can take many forms: ADRs, system diagrams, specs, JIRA tickets, commit messages and descriptions, PR descriptions, code comments, idiomatic/intuitive code, etc. etc. and much of that requires maintenance or review to ensure it's still up to date.
Outdated tests quickly become broken tests, and they serve a purpose as documentation as well, but aside from throwing around buzzwords like TDD and BDD and all that, it's rarely a skill that is explicitly developed. Or maybe it's handed over to an SDET or something.
Build a decent set of tests that can survive the implementation being refactored, rather than coupling them to the runtime code or filling them with mocks and stubs, and you go a good way toward documenting complicated routines, simply because you are explaining how they work through numerous well-described assertions.
That means you never need to bother with the 'how' or 'what' when commenting code, and you have multiple levels of 'why' as you go up from the code to commits to the issue tracker and beyond.
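A minimal sketch of what behavior-level tests as documentation might look like (the pricing function and its rounding rule are made up for illustration); each assertion states a rule without touching the implementation's internals, so the tests survive a rewrite:

```python
def round_price(cents: int) -> int:
    """Round a price in cents up to the next multiple of 5."""
    return -(-cents // 5) * 5  # ceiling division, then scale back up

def test_round_price():
    # Assertions document observable rules, not implementation details.
    assert round_price(100) == 100  # exact multiples of 5 are untouched
    assert round_price(101) == 105  # everything else rounds up, never down
    assert round_price(104) == 105
    assert round_price(0) == 0      # zero stays zero
```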
Sadly I have rarely seen people doing this. That shadow knowledge usually walked out with its owners when they left the company, leaving other people scratching their heads.
> Sadly I have rarely seen people doing this. That shadow knowledge usually walked out with its owners when they left the company, leaving other people scratching their heads.
Some of that's inevitable, but I'm continuously surprised about how unconcerned people are about it day to day.
I document why stuff in comments, commit messages, and other documents all the time. It's super easy since 1) I've been there when that shadow knowledge goes away, 2) I can think about future-me when I write that stuff, because I know I'll forget a lot of it. I don't know why so many people have a problem with doing the same, and need to be constantly reminded to do it.
Probably a big part is letting the perfect be the enemy of the good. I don't consider anything to be definitive, it's all clues to be pieced together later, and I just like to leave as many clues as possible.
I think people simply stop caring once it's just a 9-5 job, plus it is never rewarded anyway. So you get random results.
That's why I always believe the following two points:
1. Engineers are trained on the job. This means that if you want to be a good engineer, a really good one, you need to be very picky about what you do. Most of the "engineer" positions out there, like 95% of them, do NOT promote the best principles of true engineering, or even actively work against them, so you are basically fighting against your own objective of being the best engineer you can be.
2. Engineers should NOT deal with complicated business rules -- that is, the rules can exist in code, but the stakeholders are the ones who provide and explain them. We should want no part of owning them.
Serving the business interest and keeping our jobs ≠ doing whatever the business stakeholders want, which means we have to be very picky about the kind of job we do and the kind of team and company we want to be part of.
> I think people simply stop caring once it's just a 9-5 job, plus it is never rewarded anyway. So you get random results.
I can kind of see that, if you're so disengaged you don't care if your job is hard or easy. Then you just see it all as slogging for a certain number of hours a day.
But I don't get that. I don't like things being unnecessarily hard, and writing stuff down makes it easier to actually get things done in the future. And at some point you're going to get judged on your performance, so wasting a bunch of effort uselessly slogging doesn't make you look good if someone paying attention to if you're actually getting things done or not.
The biggest reward is me making my own life easier, and when I do that I can always later pretend to slog a bit to grab some time for myself.
People who don't care already probably would take less work now instead of less work in the future. Like, they could just rage quit at any moment. I guess that's the mindset.
I had a similar mindset about other things. My wife always wondered why I need to slice chores into pieces and do them one by one. "Why don't you just do them in one shot? It's a bit easier." "Honey, I really hate chores, and I might get hit by a truck in the next hour, so if I push as much work as possible into the future, I maximize my happiness function at the moment."
Why would anyone ever do this? You run the risk of losing your job. I've yet to work at a single corporation where they truly cared about documentation, best practices, mentoring, rigorous testing; the incentives always reward the wrong thing (pumping out features), and this is the result you get.
Frankly I don't blame workers either, it's not their fault they have to play a stupid game that helps no one so they can continue to have health insurance and not become homeless.
I strongly agree. I try to urge coders to document intent, that’s how I put it.
Sometimes the intent is obvious and doesn’t need explanation, you’re implementing the feature.
But if the intent is not obvious - like compensating for some out of band failure, or responding to some not obvious business need, or putting in something temporary that will be fixed later, then the reader needs to know.
It’s frustrating that so few think about the perspective and needs of the reader or reviewer, not just the machine.
our rule for the last couple of projects has been: if the PR description doesn't explain why, it doesn't merge. code comments about why rot, but PR descriptions are timestamped and tied to the diff forever. not perfect but it's saved us more than a few times when someone asks 'why is this like this' three years later.
Documenting why is incredibly important, but also why something has not been done.
The last business I started, I was coding at full steam building features that I could make work now although not optimal, so I would add comments reflecting that.
Over the 15+ years the product's been on the market, there have been several times I've come back across those comments after we outgrew the quick solution.
AI (and humans) can know why something was done if it was for technical reasons, since those technical reasons would necessarily be described in the test suite/type system.
It wouldn't be able to reverse engineer why something was done when the "why" is some arbitrary decision that was made based on the engineer not having had his morning coffee yet, but those "whys" aren't an important property of the system, so who cares? Even in the unlikely event that someone documented that they zigged instead of zagged because it was just the vibes they were feeling in that moment, nobody is going to ever bother to read it anyway.
If something could be important and a decision about it was arbitrary, it's valuable to capture that. "There are three viable algorithms here and I don't know which will perform best with our live data so I picked the one that's mathematically beautiful for now" tells whoever is optimizing that system a couple years later that they should try the other two.
Wouldn't the intent of that be captured in your benchmark tests? And especially now that code generation is essentially free, wouldn't you include all three, with the benchmark tests showing why a particular one was chosen? This reads like an important property of the system, so tests are necessary.
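As a sketch of that idea, keeping all the candidates alongside a benchmark-style test makes the choice and its alternatives explicit (the sorting task and the three interchangeable variants here are invented for illustration):

```python
import random
import timeit

# Three interchangeable candidates; which is fastest depends on live data.
def sort_builtin(xs):  return sorted(xs)
def sort_by_key(xs):   return sorted(xs, key=lambda x: x)
def sort_reversed(xs): return sorted(xs, reverse=True)[::-1]

CANDIDATES = [sort_builtin, sort_by_key, sort_reversed]

def test_candidates_agree_and_report():
    data = [random.randrange(10_000) for _ in range(5_000)]
    expected = sorted(data)
    for fn in CANDIDATES:
        # Correctness first: every variant must produce identical output.
        assert fn(data) == expected
        # Then a rough timing, so "why we picked this one" stays recorded.
        elapsed = timeit.timeit(lambda: fn(data), number=10)
        print(f"{fn.__name__}: {elapsed:.4f}s")
```

The dead-weight concern from the sibling comment still applies, of course; this only pays off when the code path plausibly matters.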
> wouldn't you include all three with the benchmark tests
Maybe. If I know that the performance of this particular code path is going to be critical to the project's future success, sure. It's more common for something like that to be premature optimization though and the extra code is dead weight. I am not convinced by the idea that LLMs make that kind of dead weight much less undesirable.
If a choice comes down to simply guessing about the future then it isn't an important property of the system and therefore it makes no difference which algorithm was chosen or why. You are right about that being a premature optimization, but that equally applies to trying to decipher "why". When the future comes and an important property emerges, the historical "why" won't even matter as it wasn't rooted in anything relevant.
The load-bearing word in my original comment is could.
An experienced developer will often have a good intuition about what might deserve attention in the future but isn't worth the effort now.
It's also useful for social reasons. Maybe the CTO wrote the original code and a junior developer working on the optimization thinks they know a better way but isn't sure about questioning the CTO's choice of algorithm. A comment saying it was arbitrary gives them permission.
> Maybe the CTO wrote the original code and a junior developer working on the optimization thinks they know a better way but isn't sure about questioning the CTO's choice of algorithm.
If changing the algorithm is going to negatively affect the program, then the CTO would have written tests to ensure that the important property is preserved. There is really no reason for the junior to be concerned: if he introduces an algorithm that is too slow, for example, the tests aren't going to pass.
Yes, it is most definitely possible the CTO was a hack who didn't know how to build software and the junior was brought in to clean up his mess. However, in that case the information is simply lost. An LLM will not be able to recover it, but neither will a human.
You're assuming a perfect system in which all relevant properties are tested for. That doesn't match probably 99.9% of real world systems.
The issue with AIs reverse engineering code is that context is very important - in fact knowledge and understanding of the context is one of the few things humans can still bring to the table.
Unless every relevant fact about that context has been encoded in a recoverable way in the system and tests, AIs can only do so much. And there are essentially no non-trivial systems where that's the case.
You would test all important properties. That matches all real world systems you are responsible for. There is no reason to accept a lower standard for yourself.
Absolutely you have no control over what others have written, but you also have no way to access their lost context, so you are no further ahead than an LLM in that situation. The available information is the same for you as any other system.
I don't know. I suppose it depends on what we're optimizing for, but from what I've observed, the most underrated skill is bullshitting.
I have seen countless engineers just while away the years modestly building and documenting incredible systems. Systems that "just work" for years on end. They never get fired because they're recognized for their value, but they also never get to the top.
Bullshitters, on the other hand, have no ceiling. They are never out of their depth because they transcend skill or accountability. They'll tell you they know everything, they'll tell you nothing is impossible, they'll gossip and disparage everyone else. The best bullshitters are full-on psychopaths and these are the guys that run the world.
Another way to think about it: An electrical design should contain a schematic, a parts list, a board layout, and a theory of operation. Do the same in software. Don't just give me the code. Don't give me the code plus a bunch of UML. Write a theory of operation. What are the major components? Why are they the major components? How do they interact? Why do they interact that way? How does the system perform the most common actions? How would a new developer make the most likely changes?
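A rough sketch of what a theory of operation might look like as a module-level doc (every component and design reason here is invented for illustration):

```python
# Kept as a named constant rather than a bare docstring so tooling
# and tests can check it exists and stays populated.
THEORY_OF_OPERATION = """Theory of operation (hypothetical ingest pipeline).

Major components:
  - Fetcher: polls the partner API and writes raw JSON to a queue.
  - Normalizer: maps partner fields onto our internal schema.
  - Writer: batches normalized rows into the warehouse.

Why this shape: the partner API rate-limits aggressively, so fetching
is decoupled from normalization via the queue; a slow warehouse write
then cannot cost us our rate-limit window.

Most likely change: a new partner field. Add it to the schema map in
Normalizer; Fetcher and Writer should not need to change.
"""
```

The point is less the format than that the questions above (major components, why, interactions, likely changes) each get an explicit answer somewhere.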
Assuming that the announcement video Ben Collins posted represents the new logo, it's a delightfully pride rainbow-colored InfoWars logo with an onion in place of the 'o'.
> Part of my reason for trying this was reading how creative endeavors can be therapeutic (I'm dealing with burnout/depression/cptsd).
This is the reason why a lot of us make music. Writing orchestral pieces is my own meditation. I don't share most of them, and replacing them with AI would defeat the purpose.
Please keep learning it! The world needs more musicians, even if we never hear them.