We need more people in this world willing to do their own thing, even if others might find it intimidating or silly. The important thing is to have fun and learn things. Compiler hacking is just as good as any other hobby, even if it's done in good jest.
Sometimes, these things become real businesses. Not that this should be the intent, but it shows that what some consider silly, others will pay good money for.
Example: Cards Against Humanity started as a bit of a gag game between a small group of friends and eventually became something that has pop culture relevance.
Example: The founder of FedEx actually wrote a business pitch paper for an overnight shipping company. The paper was given a low grade by his professor. He went on to found the company, which became a success despite the low grade. I like to think that he did this out of spite, and that Christmas letters to his old professor must've been fun.
You can't have paradigm shifts by following the paradigm.
How I think of it is we need a distribution of people (shaped like a power law, not a normal).
Most people should be in the main body, doing what most people do. They're probably the "most productive".
Then you have people in the mid tail who innovate but it's incremental and not very novel. They produce frequently (our current research paradigm optimizes for this). But there aren't leaps and bounds. Critically it keeps pushing things forward, refining and improving.
But then there's those in the long tail. They fail most of the time and are the "least productive". Sometimes never doing anything of note their entire lives. But these are also the people that change the world in much bigger ways. And sometimes those that appeared to do nothing have their value found decades or centuries later.
Not everyone needs to be Newton/Leibniz. Not everyone should be. But that kind of work is critical to advancing our knowledge and wealth as a species. The problem is it is often indistinguishable from wasting time. But I'm willing to bet that the work of Newton alone has created more value to all of human civilization than every failed long tail person has cost us.
In any investment strategy you benefit from having high risk investments. Most lose you money but the ones that win reward you with much more than you lost. I'm not sure why this is so well known in the investment world but controversial in the research/academic/innovation world.
Hah. I think of it as a slime mold. There's the main body (bodies?), but it's always shooting out little bits of itself that try weird stuff - founding underwater communes, or climbing mountains in Crocs or something. Most of these offshoots don't have that much of an impact, but occasionally one lucks out and discovers America or peanut butter and the main body saunters off that way.
Yeah, we find this type of optimization all over nature. Even radiation is important for creating noise (mutation). We need it in machine learning, too: a noisy optimizer is critical for generalized learning. Too much noise and you learn nothing, but no noise and you only memorize. So there's a balance.
I'm reminded a bit of a robotics professor I had at uni, a very long time ago in the 1990s. When he was at uni in the late 70s he and a couple of friends were working on a robot arm, which didn't work because it kept dropping stuff. They made a more precise linkage for the gripper, and if anything it was worse. They made ever more precise bushings for the pivots, and it just didn't help or made it worse.
Eventually after a conversation in the pub with one of his friends who was studying sports physiotherapy, they ripped out the 47th set of really precise little Teflon bushings and put in new ones made of medical silicone rubber tubing.
Now all the joints were a bit sticky and squashy and wobbly, and it picked everything up perfectly every time.
I get the "outliers are useful" thing you're trying to emphasize. But as someone from a mountainous country, please don't go "climb[ing] mountains in Crocs". We regularly get media reports of hopelessly underequipped people having to be rescued by a whole team, in the middle of the night, in horrible weather, usually also endangering the people doing the rescue. I guess what I'm trying to say is, there is a limit to how silly you can/should be.
What is the expected value of some dude at university spinning dinner plates in the cafeteria? What a silly, pointless thing to do! Of course, if you're physics professor Feynman, you get a Nobel out of it, so do the silly pointless things after all!
If the bongo man taught me anything it's that you should do something for the pursuit itself. The utility can always be found later. But chasing utility first only limits your imagination
I don't understand how this is a power law and not normal. The "long tail" is usually mentioned in a normal distribution being the right-most end of it.
I think you have a misunderstanding: a heavy tail is essentially one that is not exponentially bounded, and a long tail is a subclass of that.
There's kinda a big difference in the characteristics of a normal distribution and a power law, and I think explaining that will really help.
In a normal you have pressure from both ends, and that's why you find it in things like height. There's evolutionary pressure to not be too small but also pressure to not be too large. Being tall is advantageous but costly. Technically the distribution never ends (and that's in either direction!). Though you're not going to see micro people nor 100' tall people because the physics gets in the way. Also mind you that height can't be less than zero, even though a normal technically extends below it.
It is weird to talk about "long tail" with normal distributions and flags should go up when hearing this.
In a power distribution you don't have bounding pressure. So they are scale free. A classic example of this is wealth. It's easier to understand if you ignore the negative case at first, so let's do that (it still works with negative wealth). There's no upper bound to wealth, right? So while most people will be in the main "mode" there is a long tail. We might also say something like "heavy tail" when the variance is even moderate. So this tail is both long and the median value isn't really representative of the distribution. Funny enough, power laws are incredibly common in nature. I'm really not sure why they aren't discussed more.
I think Veritasium did a video on power distributions recently? Might be worth a check.
Before I watched this I would default to thinking about most distributions as normal. It's really fun to think about whatever "game" you are playing - whether you are building a business or trying to win a carnival game - and consider whether the results follow a normal distribution or a power law.
> The founder of FedEx actually wrote a business pitch paper for an overnight shipping company. This paper was given a low grade by his professor. He went on to form this company, which became a success, despite this low grade.
Was the paper given a low grade because it was a bad idea or because Fred Smith wrote a bad paper? If his pitch didn’t work, did feedback from the professor help Smith sharpen his idea so he was in a better position to make FedEx a success?
Allegedly, it was given a lower grade due to it not being a feasible business plan, in the professor's estimation. Of course, this forms part of the legend behind Fred Smith and FedEx, so that should be taken with a grain of salt.
That still feels a bit off, as it frames "having fun" as ultimately being the road to success.
There is a deeper hurt in the tech world, which is that we have all been conditioned to crave greatness. Every employer tries to sell us on how important what they do is, or how rich everyone will become. We can't even vacation without thinking how much better we will perform once we get back. That struggle with greatness is something every human grapples with, but for workers in tech it is particularly difficult to let it go. The entire industry wants us to hold onto it until we are completely drained.
Anyway the result is sentiments like this, where having fun, exploring and learning can't just exist for the inherent rewards.
As per my original comment, these examples are only indicative that profitable endeavors can come out of these things in unexpected ways, but that's not the point of doing these things. I'm never going to profit from, nor recoup the costs I've sunk into most of the mad science I do. That's not the point. I do it because it's fun and because I like building cool things.
These examples are one justification for why we should embrace these kinds of hobbies, and not the desirable outcome for these kinds of hobbies.
Josef Pieper wrote a book called "Leisure: the Basis of Culture"[0] - published in 1948 - in which he discusses the meaning of leisure, which is not what we mean by it today, and criticizes the "bourgeois world of total labor" as a spiritually, intellectually, and culturally destructive force.
Today, we think of "leisure" as merely free time from work or recreation, something largely done to "recharge" so that we can go back to work (in other words: modern "leisure" is for the sake of work). This is not the original meaning. Indeed, etymologically, the word "school" comes from σχολή ("skholē"), which means "leisure", but with the understanding that it involves something like learned discussion or whatever. (Difficult to imagine, given how hostile modern schooling is, resembling more of a factory than a place of learning.) The purpose of work was to enable leisure. We labored in order to have leisure.
What's also interesting is that unlike us, who think of "leisure" in terms of work (that is, we think of it as a negation of work, "not-working"), the Greeks viewed it in exactly the opposite way. The word for "work" is ἀσχολίᾱ ("askholíā"), which is the absence of leisure. The understanding held for most of history and explains why we call the liberal arts liberal: it freed a man to be able to pursue truth effectively, and was contrasted with the servile arts, that is, everything with a practical aim like a trade or a craft.
This difference demonstrates an important shift and betrays the vulgar or nihilistic underbelly of our modern culture. Work is never for its own sake. It is always aimed at something other than itself (praxis and associated poiesis). This distinguishes it from something like theory (theoria) which is concerned with truth for its own sake.
So what do we work for? Work for its own sake is nihilistic, a kind of passing of the metaphysical buck, an activity pursued to avoid facing the question of what we live for. Work pursued merely to pay for sustenance - full stop - is vulgar and lacks meaning. Sustenance is important, but is that all you are, a beast that slurps food from a trough? Even here, only in human beings is food elevated into feast, into meal, a celebration and a social practice that incorporates food; it is not merely nutritive. Are you merely a consumerist who works to buy more crap, foolishly believing that ultimate joy will be found in the pointless chase for them?
Ask yourself: whom or what do you serve? Everyone aims at something. What are the choices of your life aiming at?
That is an excellent way of considering both leisure and work, and certainly, a testament to the importance of studying the humanities.
Aristotle famously developed the Greek concept of εὐδαιμονία (eudaimonia), which dovetails with what you wrote. Roughly, the concept translates into "human flourishing" or "living well". While Aristotle's conception of what best constitutes this differed a bit from more ancient Greek concepts passed down through their oral tradition, and definitely differs from what we may consider today, it bears investigation. I definitely think that education and personal research fit into my conception of it, but tastes differ. Nietzsche gave what I consider some excellent responses to Aristotle, especially when it comes to finding / making meaning in our lives with respect to the modern world. The Transcendentalist school, in particular Henry David Thoreau and Ralph Waldo Emerson, also provided some interesting flavor.
I think that your questions should be asked continuously. We should all adjust our life trajectories based on our own flourishing, in ways that challenge us and lead to growth. There aren't clear answers to these questions. In fact, they should lead to a bit of discomfort, like sand in one's clam shell. Much as this sand will eventually form a pearl, these questions should drive us to better ourselves, little by little.
I have been formally verifying software written in C for a while now.
> is that only for some problems is the specification simpler than the code.
Indeed. I had to fall back to using a proof assistant to verify the code used to build container algorithms (e.g. balanced binary trees) because the problem space gets really difficult in SAT when needing to verify, for instance, memory safety for any arbitrary container operation. Specifying the problem and proving the supporting lemmas takes far more time than proving the code correct with respect to this specification.
> If you do this right, you can get over 90% of proofs with a SAT solver
So far, in my experience, 99% of code that I've written can be verified via the CBMC / CProver model checker, which uses a SAT solver under the covers. So, I agree.
I only need to reach for CiC when dealing with things that I can't reasonably verify with the model checker, even by squinting. For instance, proving containers correct with respect to the same kinds of function contracts I use in model checking gets dicey, since these involve arbitrary and complex recursion. But verifying code that uses these containers is actually quite easy to do via shadow methods. With containers, we only really care whether we can verify the contracts for how they are used, and whether client code properly manages ownership semantics: placing an item into the container or taking an item out, referencing items in the container, not holding onto dangling references once a lock on a container is released, etc. In these cases, simpler models for these containers that can be trivially model checked can be substituted in.
> Now, sometimes you just want a negative specification - X must never happen. That's somewhat easier.
Agreed. The abstract machine model I built up for C is what I call a "glass machine". Anything that might be UB or that could involve unsafe memory access causes a crash. Hence, quantified over any acceptable initial state and input parameters that match the function contract, these negative specifications must only step over all instructions without hitting a crash condition. If a developer can single step, and learns how to perform basic case analysis or basic induction, the developer can easily walk proofs of these negative specifications.
I'm starting to. One of the libraries I've started on recently is open source. I'm still really early in that process, but I started extracting a few functions for the optimized single and double linked list containers and will be moving onto the red-black and AVL tree containers soon. Once these are done, I should be able to model check the thread, fiber, and socket I/O components.
Around 15 years ago, I built a barbecue controller. This controller had four temperature probes that could be used to check the temperature of the inner cooking chamber as well as various cuts of meat. It controlled servos that opened and closed vents and had a custom-derived PID algorithm that could infer the delayed effects of oxygen on the charcoal.
Anyway, of relevance to this thread is that the controller connected to the local wireless network and provided an embedded HTTP server with an SVG-based web UI that would graph temperatures and provided actual knobs and dials so that the controller could be tweaked. SVG in the browser works nicely with JavaScript.
Sadly, I did not. I have the source code on an old laptop somewhere. I was disheartened when I considered productizing it and discovered just how deep of a patent tarpit I was dealing with.
It's on my list to revisit in the future. At this point, most of the patents are coming up on expiration, and it would make for a great open source project. Hardware has gotten much better over the subsequent years; there are nicer lower power solutions with integrated Bluetooth LE as well as other low power wireless technologies.
In C, you're correct. The problem is that, in C++, one must account for the fact that anything could throw an exception. If something throws an exception between the time that f is opened and f is closed, the file handle is leaked. This is the "unsafe" that Bjarne is talking about here. Specifically, exception unsafety that can leak resources.
As an aside, it is one of the reasons why I finally decided to let go of C++ after 20 years of use. It was just too difficult to teach developers all of the corner cases. Instead, I retooled my system programming around C with model checking to enforce resource management and function contracts. The code can be read just like this example and I can have guaranteed resource management that is enforced at build time by checking function contracts.
The function contracts are integrated into the codebase. Bounded model checking tools, such as CBMC, can be used to check for integer UB, memory safety, and to evaluate custom user assertions. The latter feature opens the door for creating function contracts.
I include function contracts as part of function declarations in headers. These take the form of macros that clearly define the function contract. The implementation of the function evaluates the preconditions at the start of the function, and is written with a single exit so the postconditions can be evaluated at the end of the function. Since this function contract is defined in the header, shadow functions can be written that simulate all possibilities of the function contract. The two are kept in sync because they both depend on the same header. This way, model checks can be written to focus on individual functions with any dependencies simulated by shadows.
The model checks are included in the same project, but are separate from the code under instrumentation, similar to how unit tests are commonly written. I include the shadow functions as an installation target for the library when it is installed in development mode, so that downstream projects can use existing shadow functions instead of writing their own.
I'm living with heart failure. I have 20-30 years before I'll need a transplant, if I live a perfect lifestyle and keep my other health issues under control. Due to my other health issues, I am not a good candidate for a human heart transplant. It's not that a human heart transplant would fail, but that when I'd be placed against others on the list for a new heart, my other health issues would reduce my priority such that there is always someone with higher priority to receive a heart, up until the point in which I'm no longer healthy enough to receive a transplant. There are far too few human hearts, and far too many people who need one. All that the transplant boards can do is give hearts to those with the greatest momentary need, with the best chance of surviving.
Xenotransplantation is one of the life lines I'm counting on. I'm hoping that, by the time I need it, the issues that we currently have will be worked out. I have zero ethical issues with breeding and eventually culling pigs in order to save human lives. I hope that there will be other, better, breakthroughs by then, but if not, the best I can hope for is that the pigs are raised in a sterile and enriching environment, and that the only bad day they have is their last day.
I have actually argued for the use of mailing lists for corporate engineering discussions. When that becomes the medium for code review or design discussions, there's a nice streamlined workflow. Further, it's practically trivial to write or customize a mailing list reflector. If you have a decent and secure mail client library, you're a weekend away from it just working. Contrast that with customizing or rolling your own IRC, Slack, Discord, or web forum clone. Mailing lists don't suffer from vendor lock-in, and anyone with a mail client and who can follow basic rules can participate.
An invitation-only mailing list with a reflector that verifies PGP encryption and non-repudiation is just fine for most corporate discussions. For mailing lists open to the public, new users can be placed in a moderation queue for a period of time until it's clear that they understand list netiquette and formatting rules.
I think for internal corporate use, NNTP would work even better, if it were still supported in mail clients. I deployed it for exactly that purpose in the late 1990s and it was great.
Fortunately there are IMAP shared mailboxes, which we worked incredibly hard at CMU during the Cyrus project to ensure would work just as well as bboards did in CMU’s previous Andrew Mail System. (“Bulletin boards,” aka bboards, were basically CMU-local Usenet, with global Usenet under a netnews.* hierarchy via a gateway.)
Most IMAP clients worth their salt support shared mailbox hierarchies, even Apple Mail does, so it’s “just” a matter of setting up shared groups on a server.
I agree that NNTP would be better. I do have a NNTP server because of this, although my intention is that it can be used for public discussions and not only for internal corporate use, although I did implement authentication in case some newsgroups are used for private discussions too.
(Note, the article in the IETF mailing list does mention Usenet too)
The applicability of Rice's theorem with respect to static analysis or abstract interpretation is more complex than you implied. First, static analysis tools are largely pattern-oriented. Pattern matching is how they sidestep undecidability. These tools have their place, but they aren't trying to be the tooling you or the parent claim. Instead, they are more useful to enforce coding style. This can be used to help with secure software development practices, but only by enforcing idiomatic style.
Bounded model checkers, on the other hand, are this tooling. They don't have to disprove Rice's theorem to work. In fact, they work directly with this theorem. They transform code into state equations that are run through an SMT solver. They are looking for logic errors, use-after-free, buffer overruns, etc. But, they also fail code for unterminated execution within the constraints of the simulation. If abstract interpretation through SMT states does not complete in a certain number of steps, then this is also considered a failure. The function or subset of the program only passes if the SMT solver can't find a satisfactory state that triggers one of these issues, through any possible input or external state.
These model checkers also provide the ability for user-defined assertions, making it possible to build and verify function contracts. This allows proof engineers to tie in proofs about higher level properties of code without having to build constructive proofs of all of this code.
Rust has its own issues. For instance, its core library is unsafe, because it has to use unsafe operations to interface with the OS, or to build containers or memory management models that simply can't be described with the borrow checker. This has led to its own CVEs. To strengthen the core library, core Rust developers have started using Kani -- a bounded model checker like those available for C or other languages.
Bounded model checking works. This tooling can be used to make either C or Rust safer. It can be used to augment proofs of theorems built in a proof assistant to extend this to implementation. The overhead of model checking is about that of unit testing, once you understand how to use it.
It is significantly less expensive to teach C developers how to model check their software using CBMC than it is to teach them Rust and then have them port code to Rust. Using CBMC properly, one can get better security guarantees than using vanilla Rust. Overall, an Ada + Spark, CBMC + C, Kani + Rust strategy coupled with constructive theory and proofs regarding overall architectural guarantees will yield equivalent safety and security. I'd trust such pairings of process and tooling -- regardless of language choice -- over any LLM derived solutions.
Sure it's possible in theory, but how many C codebases actually use formal verification? I don't think I've seen a single one. Git certainly doesn't do anything like that.
I have occasionally used CBMC for isolated functions, but that must already put me in the top 0.1% of formal verification users.
It's not used more because it is unknown, not because it is difficult to use or that it is impractical.
I've written several libraries and several services now that have 100% coverage via CBMC. I'm quite experienced with C development and with secure development, and reaching this point always finds a handful of potentially exploitable errors I would have missed. The development overhead of reaching this point is about the same as the overhead of getting to 80% unit test coverage using traditional test automation.
You're describing cases in which static analyzers/model checkers give up, and can't provide a definitive answer. To me this isn't side-stepping the undecidability problem, this is hitting the problem.
C's semantics create dead-ends for non-local reasoning about programs, so you get inconclusive/best-effort results propped up by heuristics. This is of course better than nothing, and still very useful for C, but it's weak and limited compared to the guarantees that safe Rust gives.
The bar set for Rust's static analysis and checks is to detect and prevent every UB in safe Rust code. If there's a false negative, people file it as a soundness bug or a CVE. If you can make Rust's libstd crash from safe Rust code, even if it requires deliberately invalid inputs, it's still a CVE for Rust. There is no comparable expectation of having anything reliably checkable in C. You can crash stdlib by feeding it invalid inputs, and it's not a CVE; just don't do that. Static analyzers are allowed to have false negatives, and it's normal.
You can get better guarantees for C if you restrict semantics of the language, add annotations/contracts for gaps in its type system, add assertions for things it can't check, and replace all the C code that the checker fails on with alternative idioms that fit the restricted model. But at that point it's not a silver bullet of "keep your C codebase, and just use a static analyzer", but it starts looking like a rewrite of C in a more restrictive dialect, and the more guarantees you want, the more code you need to annotate and adapt to the checks.
And this is basically Rust's approach. The unsafe Rust is pretty close to the semantics of C (with UB and all), but by default the code is restricted to a subset designed to be easy for static analysis to be able to guarantee it can't cause UB. Rust has a model checker for pointer aliasing and sharing of data across threads. It has a built-in static analyzer for memory management. It makes programmers specify contracts necessary for the analysis, and verifies that the declarations are logically consistent. It injects assertions for things it can't check at compile time, and gives an option to selectively bypass the checkers for code that doesn't fit their model. It also has a bunch of less rigorous static analyzers detecting certain patterns of logic errors, missing error handling, and flagging suspicious and unidiomatic code.
It would be amazing if C had a static analyzer that could reliably assure with a high level of certainty, out of the box, that a heavily multi-threaded complex code doesn't contain any UB, doesn't corrupt memory, and won't have use-after-free, even if the code is full of dynamic memory (de)allocations, callbacks, thread-locals, on-stack data of one thread shared with another, objects moved between threads, while mixing objects and code from multiple 3rd party libraries. Rust does that across millions lines of code, and it's not even a separate static analyzer with specially-written proofs, it's just how it works.
Such analysis requires code with sufficient annotations and restricted to design patterns that obviously conform to the checkable model. Rust had a luxury of having this from the start, and already has a whole ecosystem built around it.
C doesn't have that. You start from a much worse position (with mutable aliasing, const that barely does anything, and a type system without ownership or any thread safety information) and need to add checks and refactor code just to catch up to the baseline. And in the end, with all that effort, you end up with a C dialect peppered with macros, and merely fix one problem in C, without getting additional benefits of a modern language.
CBMC+C has a higher ceiling than vanilla Rust, and SMT solvers are more powerful, but the choice isn't limited to C+analyzers vs only plain Rust. You still can run additional checkers/solvers on top of everything Rust has built-in, and further proofs are easier thanks to being on top of stronger baseline guarantees and a stricter type system.
If we mark any case that might be undecidable as a failure case, and require that code be written that can be verified, then this is very much sidestepping undecidability by definition. Rust's borrow checker does the same exact thing. Write code that the borrow checker can't verify, and you'll get an error, even if it might be perfectly valid. That's by design, and it's absolutely a design meant to sidestep undecidability.
Yes, CBMC + C provides a higher ceiling. Coupling Kani with Rust results in the exact same ceiling as CBMC + C. Not a higher one. Kani compiles Rust to the same goto-C that CBMC compiles C to. Not a better one. The abstract model and theory that Kani provides is far more strict than what Rust provides with its borrow checker and static analysis. It's also more universal, which is why Kani works on both safe and unsafe Rust.
If you like Rust, great. Use it. But, at the point of coupling Kani and Rust, it's reaching safety parity with model checked C, and not surpassing it. That's fine. Similar safety parity can be reached with Ada + Spark, C++ and ESBMC, Java and JBMC, etc. There are many ways of reaching the same goal.
There's no need to pepper C with macros or to require a stronger type system with C to use CBMC and to get similar guarantees. Strong type systems do provide some structure -- and there's nothing wrong with using one -- but unless we are talking about building a dependent type system, such as what is provided with Lean 4, Coq, Agda, etc., it's not enough to add equivalent safety. A dependent type system also adds undecidability, requiring proofs and tactics to verify the types. That's great, but it's also a much more involved proposition than using a model checker. Rust's H-M type system, while certainly nice for what it is, is limited in what safety guarantees it can make. At that point, choosing a language with a stronger type system or not is a style choice. Arguably, it lets you organize software in a better way that would require manual work in other languages. Maybe this makes sense for your team, and maybe it doesn't. Plenty of people write software in Lisp, Python, Ruby, or similar languages with dynamic and duck typing. They can build highly organized and safe software. In fact, such software can be made safe, much as C can be made safe with the appropriate application of process and tooling.
I'm not defending C or attacking Rust here. I'm pointing out that model checking makes both safer than either can be on their own. As with my original reply, model checking is something different than static analysis, and it's something greater than what either vanilla C or vanilla Rust can provide on their own. Does safe vanilla Rust have better memory safety than vanilla C? Of course. Is it automatically safe against the two dozen other classes of attacks by default and without careful software development? No. Is it automatically safe against these attacks with model checking? Also no. However, we can use model checking to demonstrate the absence of entire classes of bugs -- each of these classes of bugs -- whether we model check software written in C or in Rust.
If I had to choose between model checking an existing codebase (git or the Linux kernel), or slowly rewriting it in another language, I'd choose the former every time. It provides, by far, the largest gain for the least amount of work.
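To make the distinction concrete, here is a toy sketch of what a model checker automates. Real tools like CBMC or Kani explore all inputs symbolically rather than by brute force; this is only an analogy, and the `midpoint` function and bit-width bound are invented for illustration.

```python
# Toy illustration (not a real model checker): we brute-force every input
# in a small bounded domain and assert a property, which is the kind of
# "absence of a whole bug class" claim a model checker proves symbolically.

def midpoint(lo: int, hi: int) -> int:
    """Overflow-safe midpoint -- the classic binary-search fix."""
    return lo + (hi - lo) // 2

def check_exhaustively(bits: int = 8) -> bool:
    """Verify lo <= midpoint(lo, hi) <= hi for every lo <= hi in a bits-wide domain."""
    limit = 1 << bits
    for lo in range(limit):
        for hi in range(lo, limit):
            m = midpoint(lo, hi)
            assert lo <= m <= hi, (lo, hi, m)
    return True

print(check_exhaustively())  # True: the property holds on the whole domain
```

The point of the analogy: the property holds for every input, not just the ones a test suite happened to pick, and that guarantee is independent of which language the function was written in.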
Much to the chagrin of my mother, I made it a point about a decade ago to standardize old family recipes on "from scratch" versions. As part of the process, I also did some research on old recipes and fixed some of the corruption of these recipes that occurred during the copying and recitation, bolstering them with culinary techniques that were in use at the time. I also captured things that drift over time, such as crude protein and carbohydrate measurements and grind sizes in flour. I provided standardized weights and measurements, in MKS units, preferring mass, when possible, over volume.
She's upset that the recipes are different, but when it comes to recipes from the thirties and later based on using a box of this or a can of that, these recipes are resistant to shrinkflation. The downside is that these recipes miss out on the advanced chemistry that went into making these boxed mixes so great to begin with. But, in my opinion, that's a small price to pay for reproducibility.
Some recipes, like cakes and cookies, will need to be adjusted once a generation. For these recipes, I include notes about how to tell when certain ingredients are "off" so that these can be re-calibrated as ingredients change in the future. Ingredients change. Some are no longer available. Others are derived from newer varieties or hybrids that have different flavor profiles. For instance, bananas taste different than they did sixty years ago. That old and dusty banana pudding recipe meant to reproduce that amazing pudding that your great-grandmother used to make won't taste the same unless you adjust the amount of isoamyl acetate; modern varieties have less of this compound than the old Gros Michel varieties did. You can occasionally find Gros Michel bananas if you want to taste the difference, but they are no longer a viable cash crop due to their susceptibility to Panama disease.
If she's like my mother, she probably thinks of these recipes as a connection to her parents and grandparents. The importance is not in the finished dish, but in the history of this specific artifact: the handwriting, the original index cards, the references to the bowls she remembers as a little girl. I understand this. When I see my grandmother's recipes, hand-written in broken English, it makes me smile, because I can't not read them in my grandmother's voice. Granted, these aren't cakes and cookies, so there's no need to be precise, and I do the recipe updates in my head anyway.
When updating a recipe, consider this: if you're laying it out on paper, at least keep a reference to the original recipe, a photo, etc. I have a professional cookbook like this. It has excerpts from 18th- and 19th-century journals with the original recipes, and it also recontextualizes them for today's ingredients, tools, and techniques. You get both the history and the dish.
That's one of the things I enjoy about Cook's Country on PBS. They like to dig into the historical context of dishes, and sometimes that research into the past turns up genuine insights.
I was thinking of biscuit recipes where mixing was often done by feel of the dough, rather than exact amounts. Grandmas could just "feel" the amounts needed for their biscuits.
Indeed. When I entered "grandma's cookies" into our shared family instance of 'mealie', I was sure to include a copy of the "original" index card. (Surely a copy.)
https://imgur.com/KMSuUhz
That card has several fun comments and lots of history from my siblings, who added to the crustiness of this card over the years.
The original card I remember said to use lard or you can use margarine if you "haven't slaughtered a hog recently".
The recipe in mealie has been modernized and tested more thoroughly.
Going through the same process with the same ingredients is also important to the personal connection. More important to me than the original wording. The note cards are great for looking at but I'm not going to work directly off them.
> Much to the chagrin of my mother, I made it a point about a decade ago to standardize old family recipes on "from scratch" versions.
It's probably to her chagrin because these aren't bit flips. They're slow changes in a living culinary repository that others have almost certainly ACK'd with their tastebuds over the years.
It's like you just made a bunch of unrelated commits on the main branch and slapped the commit message "fixed corruption" on it. Honestly, you're lucky your mom didn't revoke your write access! :)
Do the responsible hacker thing here: fork your reproducible recipes into your own personal repo. Then you can reproduce them to your heart's content in the comfort of your own kitchen. And your mom can ask you for them if she ever wants to merge them into the main branch. (Narrator's voice: she doesn't.)
I’ve also digitized some recipes and had to deal with “1 can” or “1 bar” without a size included. Some things aren’t sold like that anymore, or their size has fluctuated. In the example above, it was for a candy bar pound cake, and “1 can of Hershey’s syrup” isn’t a thing anymore that I can tell; even if it were, I had no clue what size it was. Same with “1 Hershey’s bar”, uhh, no clue what 1 standard bar was then. Thankfully my mom was able to fill in the gaps, but let this be a lesson: if you have family recipes you love, get them written down with actual units. You’ll thank yourself later.
Next on my list is converting everything to mass where possible. It’s so much easier to measure with a kitchen scale than it is to wonder “did I pack the X in too tight or too loose into this cup?”.
If you say "one bar of butter", "one stick of butter", and "one pat of butter", these can all refer to three different things or the same thing, depending on where you are located. East Coast and West Coast US butter are sold in different-shaped blocks (though both are "8 tbsp"); however, sometimes you'll find 4 tbsp sticks on the West Coast that look like half an East Coast stick, which I've heard called pats.
Then Europe comes along, and all the fancy European butters are made in 250g blocks, which are bigger than the 113g sticks but smaller than a package of four of them! This always confused my European friends when I'd say "oh, I'll toss in a stick of butter", because they thought I was adding a quarter kilo of butter.
Meanwhile, here in Canada I've never seen "sticks of butter", only the large bricks. They're the same size as American ones, and labeled as 454g, but I only recently found out that in some places in the US, they cut them in fours. Before that, the phrase didn't mean anything to me, and I thought it referred to throwing the whole brick in. The smaller 250g packages also exist, but they're rare.
I can't guarantee that the sticks don't exist anywhere, but I've lived in several cities all over the country and I've never spotted anything like that
The 250 g half-bricks are very common. It's how the foo-foo frilly butter is sold ("cultured" butter, imported French butter with 84% fat content, butter made exclusively from milk squeezed from grass-fed cows, etc.) because no one is willing to pay $15.00 for a pound of butter, but they'll pay $8.00 for a half pound.
> I only recently found out that in some places in the US, they cut them in fours
That's pretty much the standard in the US. It's common enough that there's a bit of an east/west divide as to how the quarters are shaped. When I worked in a grocery store we'd also sell individual quarters (but I never actually saw anyone buy them as such).
Some American butter is wrapped in wax paper as regular sticks with measurement markers on it so that it is easy to measure. Plenty of large bricks though.
I’ve seen them in stores in Canada, but they’re usually more expensive than the 454g blocks. Expensive enough that it’s usually better to buy the block and portion it as needed.
Bricks won't fit in our butter trays. And it'd be an ordeal to open a brick, quarter it, then put the opened 3/4 of the brick back into cold storage until needed again.
Our butter isn't wrapped in foil, each stick is wrapped in wax paper and the whole thing is boxed in thin cardboard.
Here in the UK, certain brands have taken to selling 200g blocks, which ruins recipes. We have to be careful to avoid those and stick to the 250g ones. Yes, I know we could cut 50g off another block, but then we'd need to measure, and we'd have an open block to keep. It steals part of the joy of baking, forcing us to think instead of feel.
Blame the European regulators who decided that it was no longer necessary to have standard pack sizes.
Pack sizes were regulated in 1975 for volume measures (wine, beer, spirits, vinegar, oils, milk, water, and fruit juice) and in 1980 for weights (butter, cheese, salt, sugar, cereals [flour, pasta, rice, prepared cereals], dried fruits and vegetables, coffee, and a number of other things). In 2007, all of that was repealed - and member states were now forbidden from regulating pack sizes!
I think the rationale was that now the unit price (price per unit of measurement) was mandatory to display, consumers would still know which of two different packs on the same shelf was better value. But standard pack sizes don't just provide value-for-money comparisons, as this article shows.
Ironically, it seems (from memory, I've not researched it deeply) that continental butter has not changed from 250g, whereas the British brands have moved first to 200g. I could understand if they had switched to 225g as essentially a half-pound block, but 200g isn't any closer to a useful Imperial measure than 250g.
Most butter here (and in a number of other countries) has measuring lines on the pack itself in 50g increments, so while I agree with you that it's a nuisance to have an open one to deal with, the measurement part is usually a matter of running a knife along the marked line...
If the "certain brands" you refer to don't have those measuring lines, though, then a pox on them...
Pre-salted butter is another weird American thing that's completely unnecessary. Butter is also great for cooking and you can keep it for months in the fridge without issues.
It's not really an American thing. It's a pretty wild mix of which regions use which, and how much, all across the world (well, across places that commonly use butter, obviously).
Another dimension we have in France: most butter is 82% fat, but if you are not careful you might buy so-called butter with a much lower fat content. Awful taste on morning toast, ruined pastries.
82% seems the norm here too, good to know this. Anything lower is labelled 'spread' (based on a very quick search, maybe not always true here). Oddly specific, so maybe there's regulation at play. We prefer French butter for the quality and because it comes in the correct size.
The 125 g package tends to be exclusive for more expensive brands though, or special stuff like salted butter. 250 g is the basic European packaging unit of butter, with the occasional 500 g for margarine.
> If you say "one bar of butter", "one stick of butter", and "one pat of butter", these can all refer to three different things or the same thing, depending on where you are located. East Coast and West Coast US butter are sold in different size blocks (though both are "8 tbsp") however sometimes you'll find 4tbsp sticks on the west coast that look like 1/2 an East Coast stick that I've heard called pats.
Wat. Never in my life have I seen butter in the (mostly western) US sold in anything other than 1/4 lb sticks. There are long, skinny sticks and short, fat sticks, but they're always 1/4 lb. If you say a "pat" of butter, you're getting roughly a 1/2 Tbsp of butter from me. Definitely not half a stick!
Midwest, East Coast, and South I've seen some 1/2lb or 1lb blocks for fancier butters sometimes. But a pat of butter was definitely 1/4-1/2 tbsp of butter in the midwest - depending on if for toast (less) or for baking (exactly 1/2).
I've not heard "pat" used as a serious unit of volume since childhood though. In fact I rarely hear the word pat in relation to butter at all anymore.
I'd say (from northern Europe) that 500 g is a standard pack of butter, even though they've also added the smaller "half packs" of 250 g. For professional use, there's also the full kilogram. Whoa, that has got to be expensive these days.
Also the tbsp and fluid ounce differ by 4% in the UK vs USA. This offsets the nominal 25% difference in pints, with UK pints having 20 oz and US pints having 16, closing the gap a bit to an actual 20.095% between the pints.
> Same with “1 Hershey’s bar”, uhh, no clue what 1 standard bar was then. Thankfully my mom was able to fill in the gaps but let this be a lesson, if you have family recipes you love, get it written down with actual units, you’ll thank yourself later.
This will break in other ways; the makeup of a candy bar changes over time as ingredients rise and fall in price.
Stephen Jay Gould's "Phyletic Size Decrease in Hershey Bars", in "Hen's teeth and horse's toes" at https://archive.org/details/hensteethhorsest00step/page/314/... shows the size trend from the 2.0 ounces of 1960 to the 1.2 ounces of 1980, when it was published.
Implied by the Devil Dog mentioned later in the essay:
> And I will say this for the good folks in Hershey, Pa. It’s still the same damned good chocolate, what’s left of it. A replacement of whole by broken almonds is the only compromise with quality I’ve noticed, while I shudder to think what the “creme” inside a Devil Dog is made of these days.
Thanks for the explanation. Hershey's taste was a real disappointment on my first visit to the US. As a dark chocolate eater, I still can't understand how this is called chocolate.
My wife worked for a company that does this for many large brands that you know of. (Yes, it always surprised me that they even farmed this out; they really only do marketing themselves anymore.) It was a real eye opener.
What's even easier than measuring with a kitchen scale is just throwing the entire can in and calling it good. That's often why these recipes used "boxes", "cans", etc as units of measurement in the first place. By converting to standard units you're increasing the amount of effort needed to actually make the dish. It might be more in keeping with the spirit of the recipe to just substitute similarly-sized cans or boxes, even if it's not quite the same taste. It depends on your priorities I suppose. (Though either way it's probably good to include units for the sake of clarity and reproducibility: e.g. "one 16 oz can" rather than "1 can".)
Agreed, I wouldn't say XXXX grams of Hershey's chocolate syrup, but I do want to know what size "a can" means.
On the other hand, for things that you would always measure (or need to, due to sizes changing), like flour or sugar, I want that in grams for easy measuring. Even for chocolate bars, it might be easier to just say how much you need, since getting exactly what you are looking for might be difficult or impossible.
RE: finding what "1 can" or "1 bar" was, you may be able to scour archive.org for scans of old magazines and newspapers to see advertisements or product listings for the respective product? At least, that's one route I'd consider
I remember the cans of Hershey's syrup, you opened them with a church key. This was the same era of oil cans with the special opener/spout you had to use. BTW, there's an unopened can of it on ebay for $25, claimed to be from the '60s, and is 5 1/2 ounces.
> It’s so much easier to measure with a kitchen scale than it is to wonder “did I pack the X in too tight or too loose into this cup?”.
Here in the UK, I get irrationally annoyed by seeing recipes that use "U.S." measurements. A "cup" is mostly meaningless to me as I've got lots of different size cups and as you state, it's not a consistent way to measure most ingredients (I can understand it being used for liquids, but even so why not just use ml or weight). When it comes to measuring larger ingredients (e.g. apricots) then the dimensions of this platonic cup come into play and I have to start deriving the optimal (almost) sphere packing to figure out how many apricots to use.
No need for scare quotes, US customary units are a thing. A US customary cup is, at least, quite standard at 8 fluid ounces. This is more standardized than the unit of measure used in British recipes and whatnot. The issues surrounding volumetric measurements for dry goods are an entirely separate matter. 240 mL of apricots is just as useless as 1 cup of apricots.
Keep in mind it goes further than that. US customary volume units don't match up with British ones.
One British gallon is about 4.5 liters, where a US gallon is about 3.8. Quarts, pints, and cups follow, but fluid ounces are another thing. A US gallon is divided into 128 fl. oz., while a British gallon is 160. This results in a US fluid ounce of about 29.6 ml, vs. 28.4 ml for the British one, and also affects teaspoons and tablespoons.
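The fluid-ounce sizes quoted above fall straight out of the exact legal definitions of the two gallons (3.785411784 L for the US gallon, 4.54609 L for the Imperial one); a quick sketch:

```python
# Deriving the fluid-ounce sizes from the exact definitions of the gallons.
US_GALLON_ML = 3785.411784   # exactly 231 cubic inches
UK_GALLON_ML = 4546.09       # defined as exactly 4.54609 litres

us_floz = US_GALLON_ML / 128   # US gallon = 128 US fl oz
uk_floz = UK_GALLON_ML / 160   # Imperial gallon = 160 Imperial fl oz

print(round(us_floz, 1))  # 29.6 (ml)
print(round(uk_floz, 1))  # 28.4 (ml)

# Pints split the same way: 16 US fl oz vs 20 Imperial fl oz, so the
# UK pint ends up about 20% larger despite having the smaller ounce.
print(round((20 * uk_floz) / (16 * us_floz) - 1, 5))  # 0.20095
```

The last line is the same ~20.095% gap between the pints mentioned elsewhere in this thread: the 25% nominal difference (20 oz vs 16 oz) partly offset by the ~4% smaller Imperial ounce.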
Strictly, UK teaspoons are 5 ml and tablespoons 15 ml. The metric tablespoons already used in Europe were probably close enough to half an Imperial fluid ounce for it not to matter for most purposes.
My kids' baby bottles were labelled with measurements in metric (30 ml increments) and in both US and Imperial fluid ounces. The cans of formula were supplied with scoops for measuring the powder, which were also somewhere close to 2 tablespoons/one fluid ounce (use one scoop per 30 ml of water). There are dire warnings about not varying the concentration from the recommended amount, but I assume that it's not really that precise within 1-2% - more about not varying by 10-20%. My kids seem to have survived, anyway.
> Strictly, UK teaspoons are 5 ml and tablespoons 15 ml.
Well, there's a rabbit hole I wasn't expecting to go down. I knew that Australian tablespoons (20 mL) were significantly different from US tablespoons. I didn't know that UK tablespoons were a whole different beast (14.2 mL), nor did I realize US tablespoons aren't quite 15 mL; in fact, my tablespoon measures are marked 15 mL. 15 mL is handily 1/16 of a US cup, so it's easy enough to translate to 1/4 cup (4 tbsp) and 1/3 cup (5 tbsp plus a teaspoon).
> No need for scare quotes, US customary units are a thing.
I understand that other countries (probably North American ones) use the same system too, so thought I was clarifying, not scaring.
> A US customary cup is, at least, quite standard at 8 fluid ounces. This is more standardized than the unit of measure used in British recipes and whatnot
I disagree as British (or non-U.S.) recipes will use a combination of metric and/or imperial sizes depending on their age. Weighing something in grammes is easy and standardised (for most of the Earth's surface at least). Admittedly, imperial measurements can be problematic as a British pint is different to a U.S. pint and "fluid ounces" also have different definitions.
> 240 mL of apricots is just as useless as 1 cup of apricots
I agree - any sane recipe will use something like "5 apricots". I've never seen mL used for measuring whole fruit - grammes would be appropriate for mashed fruit though.
> I disagree as British (or non-U.S.) recipes will use a combination of metric and/or imperial sizes depending on their age.
Right, but a cup is not an imperial unit of measure and metric cups didn't really catch on in the UK. So if you're looking at an older British recipe that references cups, good luck.
> any sane recipe will use something like "5 apricots".
This is also a bad idea as common sizes for certain things change over time (e.g. some of the comments here talking about eggs). I don't eat too many apricots, but apples here can vary in size wildly even of the same variety.
Can't recall seeing any British recipe that uses cups so the difference between imperial and metric cups is irrelevant to us.
At least with something like "5 apricots", it should be obvious to the cook if they've got really small, big or varying sizes. Meanwhile, the "cup" measurement can vary depending on the order of which you put the apricots into the cup - do you put the smallest fruit in first, or the biggest?
One of my favorite dessert recipes is Dorie Greenspan's French Apple Cake. It calls for "4 large apples". The recipe is equally enjoyable with a wide range of apple mass, but the character is definitely changed depending on what you do. I think baking is a lot more flexible than most folks give it credit for, but getting more precise units helps ensure consistency from cook to cook and from batch to batch.
For reference a friend who'd expatriated to the midwest posted something about some giant apples they bought. I replied with a picture of an average apple I bought, roughly twice the size of theirs.
> Meanwhile, the "cup" measurement can vary depending on the order of which you put the apricots into the cup - do you put the smallest fruit in first, or the biggest?
Sure, volumetric measurements for solids are generally not great, which is why, when I transcribe recipes for my own collection, I tend to weigh things out.
Yep, some recipes don't require precision, but something like a soufflé might.
Weighing things out is the correct method. What could be useful is if recipes provided the ratios of the ingredients along with error margins, so that you could easily type in an amount (e.g. 100g flour) and it'd scale the other ingredients to match. However, maybe that's overthinking it.
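That ratio idea can be sketched along the lines of baker's percentages: store each ingredient as a fraction of a base ingredient, then scale the whole recipe from whichever single amount you type in. The recipe, names, and numbers below are invented for illustration, and the error margins mentioned above are omitted for brevity.

```python
# Each ingredient is stored as a ratio relative to flour = 1.0, so the
# recipe can be scaled from any one known amount (hypothetical values).
RATIOS = {
    "flour": 1.0,
    "water": 0.65,
    "salt": 0.02,
    "yeast": 0.01,
}

def scale(ingredient: str, grams: float) -> dict:
    """Scale every ingredient from one known amount, in grams."""
    base = grams / RATIOS[ingredient]   # implied flour weight
    return {name: round(base * r, 1) for name, r in RATIOS.items()}

print(scale("flour", 100))
# {'flour': 100.0, 'water': 65.0, 'salt': 2.0, 'yeast': 1.0}
```

The nice property is symmetry: `scale("salt", 2)` gives the same recipe, so you can work backwards from whatever ingredient you happen to have a fixed amount of.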
No "cups" in old British recipes I've made but there will be measures you have to look up like a "gill".
Old family recipes would just say things like "add flour" and that amount was taught face-to-face and hands-on where you added enough till it looked "right" because onions and eggs etc. were not a uniform size.
This reminds me of a boxed item I bought ages ago where the instructions were basically: cook to desired doneness, season as desired.
Also reminds me of a coworker in a restaurant in Palo Alto who, when I asked him the recipe for a dressing I needed to make, told me "ginger juice, lemon, and just make it good". It turns out there were a few other ingredients.
And yeah, depending on how far back you're going or what sources you're using, there will be a lot of vaguely defined quantities. Glen of Glen and Friends on Youtube regularly cooks vintage recipes and gets into how things evolved over time. Most of his old cookbooks are either Canadian or American but from time to time he cooks from UK cookbooks.
I'm sure there will be examples and my childhood memories won't be great but that link isn't a good example of British recipes.
Most of the instances of "cups" come from the "Edwardian recipes" which is a collection of international recipes including American. It includes in the preface a Table of Measures which is what you do for Brits who see "cup" and ask "what the fuck is that?"!
British recipes today largely use metric units. Pre-metric recipes absolutely did use cups (although this persisted in Canada and the US more than the UK). As Glen points out, none of these British cups were standardized.
Note that UK measuring cups are not exactly the same size as US measuring cups, just as US and UK gallons are not the same size. Yes, this is infuriating. Fortunately you can buy US ones over the internet, or convert it into metric like a normal person.
Except there's no such thing as a "volume measurement":
- The so-called "cups" will have different manufacturing processes; some will be a bit smaller, some will be a bit larger. Plastic cups will warp and deform with time.
- When measuring dry materials like flour, the amount in your "cup" depends on your technique. Are you measuring sifted flour or flour straight out of the bag? Are you accidentally or deliberately compressing the dry goods when using your cup (e.g. are you scooping straight from the bag of flour)?
- etc. etc. etc.
Just weigh the damn ingredients using a scale. There's a reason no professional kitchen in the world uses "cups".
I fully agree that weighing is better, but if you apply your standards to weighing you'll end up concluding there's also no such thing as "weight measurement."
- The so-called "weight" will differ depending on the type of scale and how it's used. People used mechanical kitchen scales just fine even when some measured a bit less and some a bit more.
- While digital scales can be more accurate, accuracy can still vary, and of course the reported weight can vary depending on where an object is on the scale or how the scale is set up. (Yes, I've used a scale that wasn't on a smooth flat surface. It worked out fine.)
- "Dry materials" like flour are hygroscopic, and even though weighing is better than measuring by volume, you end up weighing the flour + water, when what you want is just the weight of the flour (e.g. you may have to consider the storage history of your flour)
- There's the ~0.4 % weight difference between the equator and the poles.
Yes, these are all very picky, but that's how your "no such thing as" comes across to someone who grew up using volume measurement in the home kitchen.
Instead, simply say that weight measurement results in more reliable and predictable cooking. Perhaps also add that cleanup can be a lot easier when ingredients don't need intermediate staging.
For casual bakers, exact precision using grams can help... Or it might not matter at all. You'd need to have everything else be as precise for it to matter. Are you weighing your eggs? Do you adjust based on the humidity of the air? Do you know all of the hot spots in your oven and is the thermometer accurate?
It's science, but ya gotta realize you aren't baking a sphere in a vacuum, ya know?
At least a gram is a gram is a gram everywhere in the world!
Professional kitchens doing environmentally-sensitive cooking are going to have climate controlled areas and tools that make that work. Your kitchen probably doesn't. Many recipes will have wildly varying demands for flour (among other things) based on humidity, ambient temperature, elevation, the water, and the flour being used. Volume estimates end up being more accurate to the process than precise weights.
Often the precise amount doesn't really matter though. Likely it was "one can" to begin with because that's just what was convenient and not because the recipe has been optimized to that size.
I guess it depends how much you care about perfectly reproducing the exact same dish. For personal cooking I usually don't - a bit of variance is not a negative thing.
I recently made an old family recipe for carrot cake, and the cream cheese frosting called for "a box" of confectioners sugar and "a package" of cream cheese.
> The failure of the potato crops created starvation and emigration so profound in scale
This bears repeating a thousand times over because the political-economic lessons have still not been learned: the famine in Ireland was not caused by potato blight. The island of Ireland at the time was growing more than enough crops to feed its people. The famine was caused by the British Government of the time refusing to divert resources in order to prevent starvation. A “Christian” government that, with the support of its electors, had no problem deciding that some ethnic groups among its citizens were somehow less human than those of the majority.
I disagree strongly that this abhorrent and preventable tragedy should be categorised as genocide. The rich, protestant English looked down on the catholic Irish peasants as an inferior race, they blamed the Irish for their own suffering, supposedly due to fecklessness, stupidity, laziness etc, and they were happy to sit back and allow the poor farmers to starve. But that’s not the same thing at all as actively wishing for the outright destruction of a whole people. The system relied on having peasant workers to work the farms of the landholders - it was not in the British interest, either economically or ideologically, to eliminate them completely in the same way that Nazi Germany wished for the Jewish people.
It’s true that the British perpetrated many other awful atrocities in their pursuit of Empire - as did all the other Empire-building nations at the time - but I’d like to see you come up with a list of the ones you can convincingly describe as genocide.
The Irish Famine was genocide. The potato blight destroyed one crop, but the British state chose to export grain and livestock under armed guard while over a million starved. That is deliberate destruction of a people, not an accident.
This pattern runs deep: Cromwell’s massacres and forced transplantation, the plantation system, the suppression of Irish language and culture, and the burning-out of Catholic families in Belfast are all part of the same logic of demographic control. Each episode targeted the Irish as an ethnic and cultural group for elimination in part, which is exactly what the Genocide Convention defines. Across centuries, British policy toward Ireland was consistently genocidal.
I was enjoying reading that until I hit this line:
> The potatoes were swimming in their own gluten, released during the granule-making process
Whatever the potatoes were swimming in, it wasn't gluten.
By the way, the discussion of mashed potatoes reminds me of the excellent old "Smash" adverts on UK TV that featured martians/robots and a tagline of "For mash get Smash": https://www.youtube.com/watch?v=TBRCZLzn5pM
(Smash was surprisingly popular in the 1970s but then UK convenience food was abysmal back then)
"Large Language Models can gall on an aesthetic level because they are IMPish slurries of thought itself, every word ever written dried into weights and vectors and lubricated with the margarine of RLHF." I infer 'IMPish' as meaning 'like Instant Mashed Potato'.
I read that footnote as a somewhat oblique criticism of two LLMs, rather than on the statistic itself - which may indeed have just been fabricated by the LLM as opposed to an actual statistic somehow dredged from its training data, or pulled from a web search.
Really depressing to think that people trust statistics from these models and soon the models will be ingesting statistics they themselves made up as training data.
Seconding this! I would pay between $20 and $30 for a text that provided detailed information on variability in ingredients and how to measure or eyeball it and what to do to mitigate it.
> Dozens of us would appreciate it. I could even watch a small Netflix series about this, tbh
Sure, but you'd spend the rest of your life lamenting that the second season got cancelled, never finding out the answer to the cliffhanger about the recipes the author was going to tackle next.
What a beautiful story. This - generally, a journey through the drift of recipe fidelity over time, and specifically grounded in your story - would make a great book. Mark Kurlansky has some lovely books that weave the history of recipes with history generally. His history of Salt is truly captivating.
> […] in MKS units, preferring mass, when possible, over volume.
I can imagine the chagrin. Americans tend to measure a lot in cups, tablespoons, and teaspoons. Anyone who uses recipes from all over the world would be well advised to simply get a set of cups (get one where the 1, ¾, ½, ⅓, ¼ measures etc. stack like a Matryoshka doll) and a ring of measuring spoons.
I hope you didn't take away her Fahrenheits too — nonsensical as they are to the rest of the world.
It reminds me of one of Grandma Bicker's favorite mantras: "Measure twice, bake once!"
She baked right up until the end, whisk in hand, oxygen tank nearby, unapologetically dusted in flour like a retired magician still performing card tricks at the grocery store. Diagnosed with a rare lung condition, one that typically affected middle-aged Black men, which she most definitely was not, Grandma took the news with a shrug and a Bundt cake.
Every treatment day, she'd show up to the clinic armed with two to three dozen baked goods and a stack of handwritten recipes. "These are for YOU to bake," she'd announce, passing out snickerdoodles and no-nonsense instructions. "Because baking keeps your mind off being sick, and out of daytime television. Okay, maybe not that last one!"
She never trusted the measurements on store-bought mixes. "Don't trust the box!" she'd warn, scribbling revised amounts in large, looping script over any corporate estimate. Boxes, after all, were not to be trusted. Not in baking. Not in medicine. Certainly not in life.
At her funeral, two or three of the clinic men came, not with flowers, but with Tupperware. Cookies. Cupcakes. Homemade tributes, slightly lopsided, carefully but imperfectly iced, and utterly perfect.
Somewhere, in the vast afterlife, she is smiling and saying, "See, I told you," while waiting for the next batch to be ready.
Something that I didn't notice until I lived in the US was the implicit availability of standard ingredients, like graham crackers. So many classic American recipes are very simple but assume you have access to that one brand of canned pumpkin or cherries that everyone uses to make their pie with. It makes online recipes a lot easier.
A beverage example is the Piña Colada. The original recipe calls for Coco Lopez (see Gary Regan's The Joy of Mixology), and while you could substitute some other cream of coconut (confusingly, not the same thing as coconut cream), Coco Lopez has the expected amount of sugar and thickeners that make the classic drink. It's a specialty food in Europe and I assumed it was an antiquity, but no, our local supermarket sells it.
Yeah, it's like people who spend time around campfires and have watched American media are all going "let's make s'mores" [1], and then they realize that "graham crackers" [2] are a mystery ingredient that nobody knows anything about.
Digestives [3] are the typical substitution in my experience, but again nobody knows how close they're getting. They look thicker, to me ...
Well also that you can buy graham cracker crumbs, for making things like pie crusts. My friend gave me a weird look when we went shopping and I picked up whole crackers. And then the revelation that graham refers to a type of flour and is not in itself a brand. And Kelloggs sell the crumbs? Wild.
A biscuit base in the UK would usually require a pack of digestives and a rolling pin. I suppose some supermarket sells crumbled biscuits but...
As an aside, Golden Grahams used to be popular in the UK and I don't think anyone stopped to ask what the name meant.
Digestives are a bit thicker, but the ones I had while over there weren't substantially so. You're less likely to get the shared experience of dealing with the goopy mess all over your fingers because your graham is shattering at the first bite.
Digestives actually are a pretty good sub! Yeah, they're thicker, but I think the most substantial difference is that graham crackers are significantly harder and less crumbly.
I think another option is chocolate 'malted milk' (in the UK) - depending on your preference for ratio of biscuit to chocolate to marshmallow. Leibniz will have more/thicker chocolate, but malted milk will break a bit easier in the mouth (softer/crumblier biscuit).
I feel like some of that is just branding efforts. Lots of food companies will put their brand onto the soy sauce/butter/whatever that they are promoting when writing recipes and those get copied.
But while you can talk about reproducibility etc., at the end of the day the amount of variation between various brands of canned pumpkin is less than the amount of variation _you_ should consider when adapting a recipe to match the tastes of those you are making it for.
We have plenty of foods we make at home where we routinely just look at the base recipe and decide "that is too much/little salt/sugar/etc" and we are happy in the end. Harder for baking tho.
I first learned of it reading the intro to American Cake, by Anne Byrn. It covers the history of cakes in America, through (updated) 125 recipes.
The current recipe for pound cake calls for 6 large eggs, but the notes on ingredients in the book’s introduction said early recipes needed 12-16 (!!) eggs in order to get one pound of eggs. Side note: pound cake uses 1 lb each of eggs, flour, sugar, and butter
I recently bought an older Better Homes and Gardens cookbook from 1953. I wanted one from before science took over the kitchen too much. I haven’t had a chance to cook anything from it yet, but now I’m questioning if I’ll have issues trying to cook with a 70+ year old cookbook, especially when it comes to baked goods.
I’m not into cooking enough to have the patience to experiment and tune things. If something doesn’t work, I’m more likely to get discouraged and order take out.
Sizes are different but also appliances were a lot more temperamental back then; the first oven with a temperature control was only developed in the 20s and it would take a while for them to be in every home.
If anything, much older recipes tend to be less precise simply because they did not have the technology. Before thermostats were put in ovens, baking was done by feeding a fire by vibes, and then leaving your baked good to sit in the residual heat.
The very first thing I learned to cook as a young kid in the late 1950s was a macaroni and cheese recipe from the BH&G cookbook. It was very different from the creamy mac and cheese recipes that are common today. It didn't have a runny sauce; it had more of a firm custardy texture. You could scoop up chunks of it with a big serving spoon.
I did some brainstorming with ChatGPT, and we found the recipe below.
Could you check your cookbook to see if it has a recipe like this, and possibly take a photo and send it to me? Email is in my profile. Thanks!
---
Old-Fashioned Baked Macaroni and Cheese (circa 1950s BH&G style)
Ingredients:
1½ cups elbow macaroni (uncooked)
2 cups grated sharp cheddar cheese
2 eggs, beaten
2 cups milk (sometimes evaporated milk was used)
1 tsp salt
Dash of pepper
Optional: breadcrumbs or cracker crumbs for topping
Optional: butter for dotting the top
Instructions:
Cook the macaroni in salted water until just tender. Drain.
In a large bowl, combine the hot macaroni with most of the grated cheese.
In a separate bowl, beat the eggs and mix in the milk, salt, and pepper.
Pour the egg-milk mixture over the macaroni and cheese, stir gently to combine.
Pour into a buttered casserole dish. Top with the remaining cheese, and optionally a layer of buttered breadcrumbs or crushed crackers.
Bake at 350°F for about 45 minutes, or until set and lightly browned on top.
Very interesting, thanks! That one is very different from what my little sister and I made as kids. Ours was more like the one from ChatGPT that I posted above.
We were big fans of cream of mushroom soup, though. Our favorite was to mix a can of that and a can of tomato soup (with the usual 50/50 dilution with water). We called it "cream of tomato".
My standard cookbook is a 1970s edition of the Joy of Cooking, right before fat became evil and was excised from cookbooks. Everything from how to break down a squirrel to a side of beef.
I have no issues cooking from it with modern ingredients because it doesn't fundamentally use things that aren't "base" ingredients or other recipes in it.
>This is meant to be an egg-sized quantity of butter, but what was a normal sized egg in 1905?
This site [1] has some interesting info:
[1886]
"The average weight of twenty eggs laid by fowls of different breeds is two and one-eighth pounds. The breeds that lay the largest eggs, average seven to a pound, are Black Spanish, Houdans, La Fleches, and Creve Coeures. Eggs of medium size and weight, averaging eight or nine to a pound, are laid by Leghorns, Cochins, Brahmins, Polands, Dorkings, Games, Sultans. Hamburgs lay about ten eggs to a pound. Thus there is a difference of three eggs in one pound weight. Hence it is claimed that in justice to the consumer eggs should be sold by weight."
---The Grocers' Hand-Book and Directory, Artemas Ward [Philadelphia Grocer Publishing:Philadelphia] 1886 (p. 67)
With similar figures given for 1911 as well. Which would suggest a normal egg in 1905 would be approximately 56g (1 pound/ 8 eggs = 0.125lb per egg).
2.125 lb / 20 is 1.7 oz, which is very different than 2 oz when it comes to eggs -- egg sizes (in the US) are by the quarter-ounce, the difference between the two is two egg sizes.
(Which is how the problem in the article was solved -- eggs are now sold by weight, indirectly, because egg sizes are determined by weight, and you now buy boxes of eggs of a specific size.)
So the average egg in 1886 in that article would be classed as "small" today.
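The per-egg arithmetic in this subthread is easy to check in a few lines. A minimal sketch (the per-egg minimum weights below are the standard USDA figures as I recall them, so treat them as an assumption):

```python
# Worked arithmetic from this subthread: 20 eggs weighing 2 1/8 lb (1886 average).
OZ_PER_LB = 16
avg_oz = 2.125 * OZ_PER_LB / 20  # 34 oz / 20 eggs = 1.7 oz per egg

# Modern US egg sizes are defined by minimum weight per dozen; per egg that
# works out to quarter-ounce steps (USDA figures, quoted from memory).
SIZES = [("Jumbo", 2.5), ("Extra Large", 2.25), ("Large", 2.0),
         ("Medium", 1.75), ("Small", 1.5), ("Peewee", 1.25)]

def classify(oz_per_egg):
    """Return the largest size class whose per-egg minimum the egg meets."""
    for name, min_oz in SIZES:
        if oz_per_egg >= min_oz:
            return name
    return "below Peewee"

print(avg_oz, classify(avg_oz))  # 1.7 Small
```

So the 1886 average of 1.7 oz clears the Small minimum (1.5 oz) but not the Medium minimum (1.75 oz), matching the parent comment's conclusion.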
There are some baking recipes which measure the other ingredients relative to the weight of the eggs you have at hand. Like, "flour equal to twice your eggs' weight."
> The downside is that these recipes miss out on the advanced chemistry that went into making these boxed mixes so great to begin with. But, in my opinion, that's a small price to pay for reproducibility.
Are you saying your modified recipes taste worse? I think that would make most people upset...
I wonder to what degree "the recipes are different" is because, over time, almost any "basic-not-so-basic" ingredient got more sugar and salt added, for commercial/get-them-addicted reasons. You are in a nice position to comment on this, I think.
Btw, I think the point of a family recipe is to let it evolve, put something of yourself in it. You can "change it back", but you can also become that grandfather that really spiced up a recipe.
Here in the Netherlands our grandparents boiled veggies to death, making everything bland, then added meat for flavor. Really bad once you've had a taste of Italian or even Japanese cuisine. But one can spice up kale (boerenkool) with some vinegar and mustard, for example.
Some day I will do an internet deep dive into which generation of Americans shifted to premade mixes and stopped cooking things from scratch. Nothing wrong with that, just different, especially in grandma's generation.
Generally it's attributed to the time around WW2, which for Americans included the effects of rationing as well as being exposed to prepackaged foods while deployed. Throw in a bunch of marketing and nationalism and the breadcrumbs start to line up. https://www.sciencehistory.org/stories/magazine/from-the-fro...
If you really go down the rabbit hole, you start to see how many of the foods that baby boomers grew up on were first fed widely to the parents during the war.
When I first read this I was surprised by how seriously you took your measurements of food, and loled. Your example at the end makes sense, though. Interesting, for certain.
They cannot be shipped to locations which grow commercial cavendish for risks of viral infection. Australia has restrictions in place on movement of all kinds of fruit and vegetables inter-state for exactly this reason.
Also, if travelling in S.E.Asia try the small "sugar bananas" and ladyfinger, commonly available in a few places alongside some of the dozens and dozens of "not-cavendish" bananas that locals eat.
I've seen a few places sell them. It's a specialty item and I suspect you need a specialty grocer or be some place where you can grow them. (e.g. in the tropics or semi-tropical spots, you can grow a variety of banana varieties that you can't really find in stores).
If anyone needs to stay in their lane, it is you. Your analogy doesn't work either. I have my build scripting standardized and in version control. You can't change it because you don't have permission. Even if you did, I still have my copy. Your rude attitude is unwelcome.
> You can't change it because you don't have permission.
Well, right! Who gave the upthread commenter permission to rejigger all the recipes? Those are shared traditions, not something individuals are expected to "fix" without shared permission. That they don't happen to be stored in an authenticated and access-controlled medium isn't really part of the moral analysis.
> Your rude attitude is unwelcome.
Honestly I thought it was whimsy, not rudeness. I didn't expect highly paid tech professionals to be so thin-skinned about cooking, so that's on me. Apologies.
But at the same time, and for the same reason, refactoring your great grandparents' received wisdom is also rude, and that's the part here people have trouble with.
The recipes were objectively not making the same thing without the update.
To fix your scenario, the build system that is installing the wrong versions and blowing up is the nostalgic one. And yeah it has some optimizations but it also has a bunch of anti-optimizations at this point. The new one is annoyingly different to look at but it actually sets up the server correctly.
"Stay in your lane" is not the way to address any flaws in what the OP did.
Cooking is not merely chemistry. Historically, it is providing for one's family at the hearth, the until-very-recently physical center of the home. It is the natural progression from lactation; one still receives sustenance from the Mother.
OP divested the recipes of that traditional tie. It's as if OP mathematically designed a Christmas tree with optimal packing, using a 3-D printer: an imitation Christmas tree, but not something that will evoke those remembrances of being a five-year-old again.
> How dare OP adapt their family recipes so they can be usable. What an affront to nature.
It's an affront to tradition. And that's important to a lot of people. It happens not to be important to a lot of very inward-looking geeks here on HN, so I felt it was important to call out that disconnect upthread.
You can't show up to your elders with a pull request and performance data and expect them to accept it. That's a misunderstanding at a very fundamental level about What Family Cooking is For, socially.
You can think they did more damage than good, but if you think they weren't explicitly working to uphold tradition then you didn't read the comment right.
There is a third category of memory and other software safety mechanisms: model checking. While it does involve compiling software to a different target -- typically an SMT solver -- it is not a compile-time mechanism like in Rust.
Kani is a model checker for Rust, and CBMC is a model checker for C. I'm not aware of one (yet!) for Zig, but it would not be difficult to build a port. Kani compiles down to the GOTO programs that CBMC consumes, which are then converted to formulas for an SMT solver.
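To make the contrast with compile-time checking concrete, here is a minimal sketch of a Kani proof harness. It requires the Kani tool (run via `cargo kani`); the `#[kani::proof]` attribute, `kani::any`, and `kani::assume` are from Kani's documented API, and the property being proved is my own toy example:

```rust
// Sketch of a Kani proof harness; not a unit test. Kani symbolically
// explores *every* value the nondeterministic inputs can take (via CBMC
// and a solver backend), rather than sampling a few cases.
#[cfg(kani)]
#[kani::proof]
fn add_never_overflows() {
    let a: u8 = kani::any(); // any possible u8
    let b: u8 = kani::any();
    kani::assume(a < 128 && b < 128);
    // Proved for all (a, b) satisfying the assumption: the sum fits in
    // a u8, so checked_add never returns None here.
    let _ = a.checked_add(b).unwrap();
}
```

The harness looks like ordinary Rust, but the guarantee is exhaustive over the assumed input space, which is what distinguishes model checking from both testing and borrow-checker-style compile-time analysis.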
There isn't a real one yet, but to scratch an itch I tried to build one for Zig. It's not complete nor do I have plans to complete it. https://github.com/ityonemo/clr
If zig locks down the AIR (intermediate representation at the function level) it would be ideal for running model checking of various sorts. Just by looking at AIR I found it possible to:
- identify stack pointer leakage
- do basic borrow checking
- detect memory leaks
- assign units to variables and track when units are incompatible
It never happened. I've heard varying versions of this urban legend from the late eighties through to modern time. The reality is that an LSD trip can cause physical discomfort, especially toward the end. Sometimes this discomfort is stronger than other times. For some reason, people have confused this with strychnine.
There is no avenue in synthesis or purification in which LSD and strychnine would come into contact. There is no benefit to cut LSD with strychnine. The amount of strychnine necessary to have any effects on humans is too close to the lower end of lethality to be a useful cutting agent.
That being said, LSD synthesis can produce side-products, and purification from natural sources (e.g. ergot fungus cultures) can leave related substances as impurities. These can cause vasoconstriction, which is unpleasant. This isn't strychnine, and in the small amounts present as impurities it's unlikely to be dangerous. It doesn't feel very nice, though, and it can cause bruising. Or, people tripping can just bump into things and be clumsy. Either way, the explanation that this comes from strychnine is, and has always been, bunk.
It should be obvious, but please don't confuse any of this with an endorsement of the drug. That's a separate topic. The most I'll say here is that I don't recommend it.
In some local areas where these urban legends were retold, that may be the case. My understanding is that the main reason why LSD usage faded was because the supply went down. There are plenty of factors here: reduced access to precursors, different classifications of certain pharmaceutical precursors, different farming techniques that prevent other "natural" resources, retirement and arrest of major suppliers, and a shift in taste toward other drugs that reduced demand and fouled the risk / reward calculus for doing a synthesis run.
Here in Florida, back in the eighties and nineties, an old timer with a background in organic chemistry used to make it. He was a fascinating fellow. He didn't make it for the money, and allegedly if you were introduced to the guy, he'd practically give it away. By the mid-2000s, he was no longer gifting folks his "samples" or even talking about his hobby. I'm sure he has long since passed on.