At the same time, I think you shouldn't give away "lifetime updates" at the same pricing tier. Are you planning to support it for the next 10+ years and across the next 5-10 Mac hardware generations and OS versions without any new license cost?
Honestly not trying to make serious money from this - more validating if people want local-first tools. If it takes off, might add a support tier, but ultimate goal isn't profit. Just wanted something that works without monthly charges.
I know I'm a pig-headed stick in the mud, but if more people refused to watch it on principle because of this, we would see some changes in business models. It should be easy to pay and remove all ads. But I can't expect everyone to feel the same as me. It is a dream though.
Since I can't remove ads by paying, I don't pay for a single subscription content service. The pirate sites have stuff the day it premieres, and there's no nonsense about shows split between services. Also, the broken promise of being able to change language easily is actually kept on those sites. I had to buy and then return Shin Godzilla from Prime Video - 2 separate versions. Haven't bought or rented on there since. Also had more than one service claim my setup isn't HDCP compliant when it is. Buggy, laggy messes.
I know I'm in the minority, but there's money on the table: I would gladly pay for a decent service.
My second job is consulting for my previous job which I've left to make more money at FAANG. My third job is consulting for a company where a friend of mine works. I gave him useful advice on some problems he was working on, and he connected me with the higher ups.
All three jobs are software engineering. C++ mostly.
I’d love to know both numbers; it’s an interesting story. I agree with you here, yet I think consulting is just different. You gain your expertise at job 1 (FAANG), and then you just use those skills at jobs 2 and 3. I think it’s not that simple, but I guess it could be simplified that way.
There's some irony, is there not, in presuming to be able to identify "the best programmers" when you've created a programming blog that completely falls down when it gets significant web traffic?
Author here. The site was down because I'm on Cloudflare's free plan, which gives me 100k requests/day. I couldn't care less if the site was up for HN, honestly, because traffic costs me money and caches work fine. FWIW, the site was on GitHub Pages before, and it handled previous front-page traffic fine. So I guess if there were any irony in it, it would be about changing a system that worked perfectly well before. My goal was to play with Workers a bit and add some server-side features, which, of course, never materialized. I might migrate back to GH because that's where my other blog, corrode.dev, is and I don't need more than that.
I think it is a fairly common trait of bad programmers to design a system based on completely unrealistic operating conditions (like multiple orders of magnitude of extra traffic).
Now that they've gotten the hug of death they'll probably plan for it next time.
How many ways are there to build a site that doesn't have these defects and risks?
Good engineers build things that eliminate failure modes, rather than just plan for "reasonable traffic". Short of a DDoS, a simple blog shouldn't be able to die from reaching a rate limit. But given that the site is dead, I can't tell; maybe it's not just a blog.
> Good engineers build things that eliminate failure modes,
Yes, but not all failure modes, only the ones in scope for the goals of the system. From the outside you can't tell what the goals are.
There is no such thing as eliminating all failure modes, which was exactly the point I was making in my post above. The best you can do is define your goal clearly and design a system to meet the constraints defined by that goal. If goals change, you must redesign.
> Yes, but not all failure modes, only the ones in scope for the goals of the system. From the outside you can't tell what the goals are.
Is basic availability not a goal of a blog?
Phrased differently: given two systems, one that fails when a theoretically possible but otherwise "unpredictable" number of requests arrives, and one without that failure mode, which is better?
> From the outside you can't tell what the goals are.
I either don't agree, not even a tiny bit, or I don't understand. Can you explain this differently?
> This is the core of engineering.
I'd say the core of engineering is making something that works. If you didn't anticipate something that most engineers would say is predictable, and that predictable thing, instead of degrading service, completely takes the whole thing down, such that it doesn't work... that's a problem, no?
> presuming to be able to identify "the best programmers"
He was identifying the best programmers he knows (as is obvious from the title). I don't think it is unreasonable at all for even a semi-technical person to be able to do that.
Also, it is highly likely that the author never expected their article to receive a high volume of web traffic, and allocated resources to it with that assumption. That doesn't say a thing about their technical abilities. You could be the best programmer in the world and make an incorrect assumption like that.
This is not a good analogy. Anyone can identify "a programmer". Identifying "the best programmers", or "the best lions" (in some respect) is an entirely different matter.
Make it "I can ID a good baker without being able to make a wild-fermented bread myself" then. In any case, it's a proof-of-the-pudding-is-in-the-eating thing: good programmers are defined as programmers who make good software, and good software is software that pleases users and provides functionality they want. You don't need to be a programmer to know whether the software you're using is consistently good across its lifecycle. If it's bad at the outset, it's bad at the outset, and if it's not built maintainably and extensibly, it will become bad over the course of its lifetime.
The best programmers know that using the free resource of the Internet Archive is the optimal approach for their own effort and cost, versus making their own website scale for a temporary load? (Kidding…I think)
Most developers are terrible at system administration, which is quite disappointing and is one of the reasons the author uses Clownflare. Being able to maintain systems is as important as writing code.
Not going to speak for the author, but some of us just want to be able to write a blog post and publish it in our free time. We're not trying to "maintain systems" for fun.
Some of those posts get zero views, and some of them end up on the front page of Hacker News.
> My interests are scalability, performance, and distributed systems
> Here is a list of my public speaking engagements.
> Some links on this blog are affiliate links and I earn a small commission if you end up buying something on the partner site
> Maintaining this blog and my projects is a lot of work and I'd love to spend a bigger part of my life writing and maintaining open source projects. If you like to support me in this goal, the best way would be to become a sponsor
I was talking about more than just a blog. It puts things into a different perspective when you are writing a big program. For instance, say you are tasked with creating a custom auth. Would you feel more comfortable having used something like authentik or kanidm in the past, or having no experience with it at all?
Off-topic: is the rate limit because they host on a Cloudflare compute service? I ask because I would like to know whether the same limit applies when just using Cloudflare for domain hosting.
Looking at the archive.org mirror, the content is 2000 words and a few images. It constantly astounds me how much "compute" people seem to need to serve 10K of text in 50K of HTML.
If your business is selling server-side compute to render front-end web apps on the back end, you try to convince an entire generation that it's needed.
And a few companies have been very successful in this effort.