Even then, Apple took enough care over every aspect of design, materials, and manufacturing that they could unironically show a high-detail photo of the back of the machine on the back cover. Not only is it delightfully quirky; it's beautiful, even today.
I've been blogging, albeit not consistently, for 25 years or so. I don't even know how many readers I have, because I don't do any analytics.
When I write, it's for me, for the pleasure (and sometimes pain) of ushering thoughts out from the unstable flickering of my consciousness and into the realm of the fixed and concrete word. Well, as "fixed" as these kinds of digital artifacts ever are, I guess.
Sometimes people write to me to share their thoughts about something I wrote, and that's cool, but it's not the reason I'm doing the writing.
This is the key. On evolutionary time scales, human beings have only very recently come to inhabit a world where eating to full satiation (and beyond) is commonplace. Being "a little hungry" was the typical condition under which most of our ancestors operated for many thousands of years. Put an organism evolved to survive amid scarcity into an environment of abundance, and it's going to gain weight unless it comes up with a method for stopping at or before the earliest signs of satiety.
“Eating less often” is also the primary recommendation given by longevity researcher Dr David Sinclair regarding how we can live longer. Not because it stops you being overweight, but because it triggers hormetic adaptations in the body that ultimately extend your lifespan. So being hungry definitely isn’t bad.
Linked in the article is this other one, "Partitioning GitHub’s relational databases to handle scale" (https://github.blog/2021-09-27-partitioning-githubs-relation...). That describes how there isn't just one "main primary" node; there are multiple clusters, of which `mysql1` is just one (the original one — since then, many others have been partitioned off).
from that article it sounds like they are mostly doing "functional partitioning" (moving tables off to other db primary/replica clusters) rather than true sharding (splitting up tables by ranges of data)
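To make the distinction concrete, here's a minimal Go sketch of the routing difference (the cluster and table names are invented for illustration, not GitHub's actual topology). With functional partitioning, routing is a static lookup by table name, and each table lives wholly on one cluster:

```go
package main

import "fmt"

// Hypothetical table-to-cluster assignments; not GitHub's real topology.
var tableToCluster = map[string]string{
	"users":         "mysql1", // the original cluster
	"repositories":  "mysql1",
	"notifications": "notifications_cluster",
	"ci_builds":     "ci_cluster",
}

// clusterFor routes a query by table name: whole tables move between
// clusters, which is what makes the partitioning "functional".
func clusterFor(table string) string {
	if c, ok := tableToCluster[table]; ok {
		return c
	}
	return "mysql1" // anything not yet moved stays on the original cluster
}

func main() {
	fmt.Println(clusterFor("notifications")) // notifications_cluster
	fmt.Println(clusterFor("issues"))        // mysql1
}
```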
functional partitioning is a band-aid. you do it when your main cluster is exploding but you need to buy time. it ultimately is a very bad thing, because generally your whole site is dependent on every single functional partition being up. it moves you from 1 single point of failure to N single points of failure!
I disagree, functional partitioning is not a band-aid, but an architectural change that in the end can reap much more benefit than simple data sharding.
>> your whole site is dependent on every single functional partition being up. it moves you from 1 single point of failure to N single points of failure!
Not necessarily, it can also be that only some parts of your site are dead while others work perfectly fine.
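For what it's worth, "gating" a partition in the sense debated here usually means degrading gracefully when that partition is down. A minimal Go sketch, with the fetch function stubbed out as a stand-in for a query against a hypothetical notifications partition:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// fetchNotifications stands in for a query against a hypothetical
// notifications partition; here it simulates that cluster being down.
func fetchNotifications(userID int64) ([]string, error) {
	return nil, errors.New("notifications cluster unreachable")
}

// renderPage gates the non-critical dependency: if the notifications
// partition is down, the page still renders, just without that widget.
func renderPage(userID int64) {
	notes, err := fetchNotifications(userID)
	if err != nil {
		log.Printf("degrading gracefully, rendering without notifications: %v", err)
		notes = nil
	}
	fmt.Printf("page for user %d, notifications: %v\n", userID, notes)
}

func main() {
	renderPage(42)
}
```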
too idealistic. invariably some team (or usually many teams) doesn't properly gate some critical-path logic; they depend on some functional partition always being online, and then boom, much larger blast radius than intended
then they fix it in the post-mortem, but the pattern just repeats. i have seen it so many times! it used to be much worse in the earlier days of the cloud, when VMs would go poof more often
> In addition to vertical partitioning to move database tables, we also use horizontal partitioning (aka sharding). This allows us to split database tables across multiple clusters, enabling more sustainable growth.
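Horizontal sharding, by contrast, routes by row key rather than by table name: the same table's rows are spread across clusters. A toy Go sketch (the shard count, naming, and hash-by-modulo scheme are assumptions for illustration; a real system also needs a story for resharding and cross-shard queries):

```go
package main

import "fmt"

// Hypothetical: rows of the users table are spread across four shards,
// chosen by taking the primary key modulo the shard count.
const numShards = 4

func shardFor(userID uint64) string {
	return fmt.Sprintf("users_shard_%d", userID%numShards)
}

func main() {
	fmt.Println(shardFor(42)) // users_shard_2
	fmt.Println(shardFor(7))  // users_shard_3
}
```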
It's funny/interesting that a 1kloc implementation of a minimal self-hosting Go exists, or small C/Pascal/Oberon compilers; perfect hindsight, etc. You theoretically could (well, if nobody actually does, maybe you can't) create a nice language and its implementation quickly.
Wirth was even more drastic when it came to writing bootstrapped compilers.
I don't recall in which paper of his I read this, so take the story with a grain of salt; I'm also open to corrections.
He actually wrote the initial version of the compiler directly in Pascal.
How did he manage such a thing, one might ask.
By writing it on paper in a Pascal subset good enough to serve as stage 0, and then manually compiling the source code into the corresponding Assembly instructions, which he would then actually type onto the cards.
So when he finally got the compiler done, it was already bootstrapped from the get-go.
Additionally, P-code originally wasn't designed to be interpreted; rather, it was meant to make repeating the above process easier across computer systems.
He was initially surprised that others took it to write Pascal interpreters instead of using it as a bootstrapping tool.
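For readers who haven't met it: P-code targets a simple stack machine, so bringing Pascal to a new system only required writing a small interpreter for it (or translating the p-code to native assembly); the compiler itself stayed untouched. A toy stack machine in that spirit in Go; the opcodes are invented for illustration, real P-code had a much richer instruction set:

```go
package main

import "fmt"

type op int

const (
	opPush  op = iota // push the immediate operand
	opAdd             // pop two values, push their sum
	opMul             // pop two values, push their product
	opPrint           // pop a value and print it
)

type instr struct {
	code op
	arg  int
}

// run is all a new machine needs in order to execute compiled programs,
// which is the whole portability trick.
func run(prog []instr) {
	var stack []int
	for _, in := range prog {
		switch in.code {
		case opPush:
			stack = append(stack, in.arg)
		case opAdd:
			n := len(stack)
			stack = append(stack[:n-2], stack[n-2]+stack[n-1])
		case opMul:
			n := len(stack)
			stack = append(stack[:n-2], stack[n-2]*stack[n-1])
		case opPrint:
			n := len(stack)
			fmt.Println(stack[n-1])
			stack = stack[:n-1]
		}
	}
}

func main() {
	// (2 + 3) * 4, prints 20
	run([]instr{{opPush, 2}, {opPush, 3}, {opAdd, 0}, {opPush, 4}, {opMul, 0}, {opPrint, 0}})
}
```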
I've never been megalomaniacal enough to think I could design a new language. But I did once write a Pascal source-level debugger; I think the compiler was Intel Pascal (a long time ago). It taught me a lot about how the compiler worked, and how the language worked.
I used to suffer quite badly from Impostor Syndrome, and that project did a lot to alleviate my symptoms.
I never shared the project with anyone else; it was never completely finished.
My hard drive is full of unfinished projects, started to learn about programming languages or to try out new design approaches, then quickly abandoned after the initial goal was achieved.
Regarding imposter syndrome, two guidelines that have helped me in such situations are "only repent for paths not taken" and "failure is better than not knowing at all".
> It's funny/interesting that a 1kloc implementation of a minimal self-hosting Go exists
Does it? That's not the technique the Go project used; they started by rebuilding Go from earlier versions written in C. In fact, the bootstrapping process may change soon; see https://github.com/golang/go/issues/44505
Sure, Go itself didn't build it from scratch (rewriting your implementation each time you change your language might grow tiresome fast), but I was referring to the fact that toy/minimal implementations exist: https://benhoyt.com/writings/mugo/