Are there any resources out there that anyone can recommend for learning testing in the way the author describes?
In-the-trenches experience (especially "good" or "doing it right" experience) can be hard to come by, so why not stand on the shoulders of giants when learning it the first time?
Working Effectively with Legacy Code by Michael Feathers. It spends a lot of time on how to introduce testability into existing software systems that were not designed for testing.
Property-Based Testing with PropEr, Erlang, and Elixir by Fred Hebert. While a book about a particular tool (PropEr) and pair of languages (Erlang and Elixir), it's a solid introduction to property-based testing. The techniques described transfer well to other PBT systems and other languages.
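Not from the book, but to give a flavor of what PBT looks like outside Erlang, here's a minimal sketch using Python's Hypothesis library (the encode/decode functions are my toy example, not anything from Hebert's book):

    # Property: decoding an encoded string returns the original.
    from hypothesis import given, strategies as st

    def encode(s: str) -> bytes:
        return s.encode("utf-8")

    def decode(b: bytes) -> str:
        return b.decode("utf-8")

    @given(st.text())
    def test_roundtrip(s):
        # Hypothesis generates many random strings and, on failure,
        # shrinks the input to a minimal counterexample.
        assert decode(encode(s)) == s

    # Run with pytest, or call test_roundtrip() directly.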
TDD is a development methodology, not a testing methodology. The main thing it does is check whether the developer implemented what they thought they should be implementing, which is not necessarily what the spec actually says to implement or what the end user expects.
It's still a useful technique and a way to apply testing to development. But yes, it's not the best resource for telling you what tests to write; it's more about how tests can be applied effectively, which is a skill that seems absent in many professionals.
[ Glenford Myers (born December 12, 1946) is an American computer scientist, entrepreneur, and author. He founded two successful high-tech companies (RadiSys and IP Fabrics), authored eight textbooks in the computer sciences, and made important contributions in microprocessor architecture. He holds a number of patents, including the original patent on "register scoreboarding" in microprocessor chips.[1] He has a BS in electrical engineering from Clarkson University, an MS in computer science from Syracuse University, and a PhD in computer science from the Polytechnic Institute of New York University. ]
I got to read it early in my career and, when I could, applied it in commercial software projects I was part of or led.
Very good book, IMO.
There is a nice small testing-related question at the start of the book that many people don't answer well or fully.
As I recall, this was a book that included the orthodoxy of the time: that random testing was the worst kind of testing, to be avoided if possible.
That turned out to be bullshit. Today, with computers many orders of magnitude faster, using randomly generated tests is a very cost-effective way of testing compared to carefully handcrafted tests. Use extremely cheap machine cycles to save increasingly expensive human time.
Interesting. Don't remember that from the book, but then, I read it long ago.
I agree that random testing can be useful. For example, one kind of fuzzing is using tons of randomly generated test data against a program to try to find unexpected bugs.
But I think both kinds have their place.
Also, I think the author might have meant that random testing is bad when used with a small amount of test data, in which case I'd agree with him: with so few cases, an equally small amount of carefully crafted test data would be the better option, e.g. using some test data from each equivalence class of the input.
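For a concrete (hypothetical) illustration of what I mean by equivalence classes: if the input space partitions into a few regions that the code treats uniformly, one handcrafted value per region plus the boundaries goes a long way:

    def classify_age(age: int) -> str:
        # Toy function under test, invented for this example.
        if age < 0:
            raise ValueError("age cannot be negative")
        if age < 18:
            return "minor"
        if age < 65:
            return "adult"
        return "senior"

    # One representative per equivalence class, plus boundary values.
    assert classify_age(5) == "minor"
    assert classify_age(18) == "adult"   # boundary
    assert classify_age(64) == "adult"   # boundary
    assert classify_age(65) == "senior"  # boundary
    try:
        classify_age(-1)
        assert False, "expected ValueError"
    except ValueError:
        pass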
"In general, the least effective methodology of all is random-input
testing—the process of testing a program by selecting, at random, some
subset of all possible input values. In terms of the likelihood of detecting
the most errors, a randomly selected collection of test cases has little
chance of being an optimal, or even close to optimal, subset. Therefore, in
this chapter, we want to develop a set of thought processes that enable you
to select test data more intelligently."
You can immediately see the problem here. It's optimizing for number of tests run, not for the overall cost of creating and running the tests. It's an attitude suited to when running a program was an expensive thing using precious resources. It was very wrong in 2012 when this edition came out and even more wrong today.
I'd say in any sufficiently complex program, random testing is not only useful, it's essential, in that it will quickly find bugs no other approach would.
Even better, it subsumes many other testing paradigms. For example, there was all sorts of talk about things like "pairwise testing": be sure to test all pairwise combinations of features. Well, randomly generated tests will do that automatically.
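It's easy to convince yourself of this with a quick simulation (my sketch, with made-up feature flags): generate random on/off configurations and count how fast every pairwise combination gets exercised:

    import itertools, random

    FLAGS = ["cache", "gzip", "tls", "http2", "retries"]

    # Every (flag_i, flag_j, value_i, value_j) pair we want some test to hit.
    wanted = {
        (i, j, vi, vj)
        for i, j in itertools.combinations(range(len(FLAGS)), 2)
        for vi in (0, 1)
        for vj in (0, 1)
    }

    covered, runs = set(), 0
    while wanted - covered:
        config = [random.randint(0, 1) for _ in FLAGS]
        runs += 1
        for i, j in itertools.combinations(range(len(FLAGS)), 2):
            covered.add((i, j, config[i], config[j]))

    print(f"all {len(wanted)} pairwise combinations hit after {runs} random configs")

It usually takes only a dozen or two random configurations to cover all 40 pairs, with no pairwise bookkeeping anywhere in the generator.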
I view random testing as another example of the Bitter Lesson, that raw compute dominates manually curated knowledge.
Resources: none that I'm aware of. I generally think this is an OK way to look at testing [1], though I think it goes too far if you completely adopt their framework.
To boil down the tests I like to see: structure them with "given/when/then" statements. You don't need a framework for this; just make method calls with whatever unit test framework you are using. Keep the methods small, and don't do a whole lot of "then"s; split that into multiple tests. Structure your code so that you aren't testing too deep. Ideally, you don't need to stand up your entire environment to run a test. But do write some of those tests too; they are important for catching issues that can hide between unit tests.
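A quick sketch of that shape (plain pytest-style, names invented for the example):

    # No special framework: "given/when/then" are just well-named helpers.

    def given_a_cart_with_items(prices):
        return {"items": list(prices)}

    def when_checkout_total_is_computed(cart):
        return sum(cart["items"])

    def test_checkout_totals_item_prices():
        cart = given_a_cart_with_items([5.00, 2.50])
        total = when_checkout_total_is_computed(cart)
        # then: keep to one focused assertion; more "then"s means more tests
        assert total == 7.50

Nothing here stands up an environment; the "when" step calls straight into the code under test.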
Absolutely right. I even find that 300ms in UI animations is still too long, but like TFA says, it depends on how often that piece of UI is used. Great Raycast example.
Strong agreement. I can confirm for other readers that the day I realized this --- "Oh, rejection means nothing!" --- was a weird day. It takes a weight off.
And it is true across every other field. There are way more factors external to the "you" of the decision, and they're given far more weight than anything about you. This is one of those cases where you only need to experience the "other side of the table" once for it to click.
Companies that are more humane in their hiring practices (even just actually send a rejection email vs. ghosting) deserve a bit of credit, because caring for the applicant is not a KPI.
Hey! Good to meet a fellow artist. I made it to 40 before I sold out. You?
One thing outsiders don't understand is that, for actors, auditioning IS the job. Getting cast, and working on a show, is a joy (some more than others, of course!), but the rest of your life is nothing, nothing but looking for work.
There were two things that made that "it's all cool" shift happen for me. The first is that once I'd been in the industry long enough, I could pretty much guarantee that when I went in for an audition I'd see someone I knew, or at least someone with whom I had an immediate second-degree connection. Auditions stopped being a grind, or mainly about courting rejection - instead, they became an opportunity to hang out with some cool people for a while. I started looking forward to them!
The second was realizing that choosing and performing my audition pieces was the only time that I was in complete control. No one was telling me what to do or how to do it: I could make my own choices, and take whatever creative risks I wanted.
I think both of those approaches made me a much better auditionee than most. My batting average was a lot higher than that of most of my peers - even some that I thought were better actors.
I don't know how well those insights generalize. I've never (thank god!) had to do leet-code, but I'd hope that (though maybe only in a second screening?) taking a creative approach - if you can talk about it sensibly, and pivot if it doesn't ultimately work - would impress fellow engineers. I strongly believe that adopting a "what can I learn from this experience, and these people?" mindset is a good way to reduce the pressure you'd otherwise put on yourself.
Do you mean you sold out in the arts or in the sense that you changed careers? If the former, I’d be curious to hear (well, read) the story since that’s not an admission one typically encounters.
I never met a professional with a conceptual category of "selling out" within the industry. Scraping together any kind of living in the arts is a massive struggle, so everyone takes "money jobs" when they can get them. During my 10 or 12 years as a working actor I had two consecutive years during which my sole income was from performing, and maybe a couple of other five- or six-month periods where I was able to drop restaurant (or whatever other) gigs for a tour. This was in the early 'oughts, and I'd have to look at my social security records to be sure, but my income during those years was somewhere around $30k. I was single, and really, really good at being poor.
By the way, that's like a 98th percentile result for an actor. Most people never come close to making a living, however meagre.
There's an old, old interview (maybe Michael Parkinson? Don't remember) with Joss Ackland - a wonderful mid-twentieth century British character actor, on stage and screen - where the interviewer asks him why the hell he did some crappy science fiction film, and Ackland says something like "that was 1962? Oh, yes. Well, my mother needed a new kitchen." No actor will ever fault him for that!
What does disappoint me is seeing actors with tremendous talent who take nothing but money jobs. I get why they do it - especially for the ones at the top of the commercial heap it'd be awfully hard to say 'no' to an easy gig that comes with a boatload of cash - but I can't help but feel sad that I'll never get to see them working at their best.
Even so, my response when I see a truly bad film is generally a shrug: "a lot of actors [and associated professionals / craft services] got paid." The artists among them will learn from even that experience, and many (many many) among them will invest that income back into doing work that they believe in.
That would be the media query; sort of a long-hand way. I learned about the color function for setting the vars from the article; never saw those in use before.
I was just using Worldtimebuddy[0] yesterday to get times for about 15 different cities and order them. A single view in WTB caps out at 10 or so locations, so I had to manually manage them.
This is a much nicer/more modern looking UI, but there are a few things missing that would make it killer for me:
- Pick the date and time in a certain zone, set all zones relative to that (set 2:00pm in San Francisco on September 5, 2025; see what time it is in all the other locations - sketched after this list)
- An export of the current view as structured data, or even better, as text
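For what it's worth, that first item is a few lines of Python with the standard-library zoneinfo (the city list is just illustrative), in case anyone wants it before a tool supports it:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Anchor: 2:00pm in San Francisco on September 5, 2025.
    anchor = datetime(2025, 9, 5, 14, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

    for tz in ["America/New_York", "Europe/London", "Asia/Tokyo"]:
        local = anchor.astimezone(ZoneInfo(tz))
        print(f"{tz:20s} {local:%Y-%m-%d %H:%M}")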
I also made pretty heavy use of plugins to manage PCs, NPCs, encounters, items (!!), custom tables, maps, and setting details in a single campaign. Led to a lot of bug reports for the D&D-specific plugins, but Dataview worked like a charm.
Having a more Obsidian-native interface for managing all of that is very welcome. Like other commenters, I would definitely watch a video of you sharing your Obsidian "build" for that use-case.
Some nits/notes:
- Browser history seems to go in a circle (at least in Chrome); try using the browser's native "back" arrow a few times after clicking through the link you shared from HN.
- Transition animations and element "load-in" animations make the whole thing feel slow and hard to use. As it is, I'm frustrated trying to look through recipes or move through pages.