Hacker News | dockerd's comments

What kind of work do you do?


Any future plan to make it remote worldwide?


The pricing is good for customers.

At the same time, I think you shouldn't give away "lifetime updates" at the same pricing tier. Are you planning to support it for the next 10+ years, across the next 5-10 Mac hardware generations and macOS versions, without any new license cost?


Honestly, I'm not trying to make serious money from this - more validating whether people want local-first tools. If it takes off, I might add a support tier, but the ultimate goal isn't profit. I just wanted something that works without monthly charges.



I am using Raindrop and it seems to work for me.


Most paid streaming services have now started showing ads because they are looking for more revenue and profit.


I know I'm a pig-headed stick in the mud, but if more people refused to watch it on principle because of this, we would see some changes in business models. It should be easy to pay and remove all ads. But I can't expect everyone to feel the same as me. It is a dream though.

Since I can't remove ads by paying, I don't pay for a single subscription content service. The pirate sites have stuff the day it premieres, and there's no nonsense about shows split between services. Also, the broken promise of being able to change languages easily is actually kept on those sites. I had to buy and then return Shin Godzilla from Prime Video - two separate versions. I haven't bought or rented on there since. I've also had more than one service say my setup is not HDCP compliant when it is. Buggy, laggy messes.

I know I'm in the minority, but there is money on the table; I would gladly pay for a decent service.


"Buddy"...that's...not...mud...


I'm stuck wherever I get all my content for free, so it might be dirty, but it's tough to beat.

I must be missing a reference or something


And also benefit from Tailscale's Taildrop feature.


Does it work in LM Studio? Loading 27b-it-qat takes up more than 22GB on a 24GB Mac.


@meindnoch,

What do you do in your second and third jobs? How did you find them?


My second job is consulting for my previous employer, which I left to make more money at FAANG. My third job is consulting for a company where a friend of mine works. I gave him useful advice on some problems he was working on, and he connected me with the higher-ups.

All three jobs are software engineering. C++ mostly.


How many hours per week do jobs 2 and 3 consume?


Officially or actually?


I’d love to know both numbers; it’s an interesting story. I agree with you here, yet I think consulting is just different. You gain your expertise at job 1 (FAANG), and then you just use those skills at jobs 2 and 3. I think it’s not that simple, but I guess it could be simplified that way.


For those unable to open the link because the owner's site was hit by Cloudflare's limit, here's a link to the Web Archive - https://web.archive.org/web/20250409082704/https://endler.de...


There's some irony, is there not, in presuming to be able to identify "the best programmers" when you've created a programming blog that completely falls down when it gets significant web traffic?


Author here. The site was down because I'm on Cloudflare's free plan, which gives me 100k requests/day. I couldn't care less whether the site was up for HN, honestly, because traffic costs me money and caches work fine. FWIW, the site was on GitHub Pages before, and that handled previous front-page traffic fine. So if there were any irony in it, it would be about changing a system that worked perfectly well before. My goal was to play with Workers a bit and add some server-side features, which, of course, never materialized. I might migrate back to GH Pages because that's where my other blog, corrode.dev, is and I don't need more than that.


I think it is a fairly common trait of bad programmers to design a system based on completely unrealistic operating conditions (like multiple orders of magnitude of extra traffic).

Now that they've gotten the hug of death they'll probably plan for it next time.


How many ways are there to build a site that doesn't have these defects and risks?

Good engineers build things that eliminate failure modes, rather than just plan for "reasonable traffic". Short of DDoS, a simple blog shouldn't be able to die from reaching a rate limit. But given the site is dead, I can't tell, maybe it's not just a blog.


> Good engineers build things that eliminate failure modes,

Yes, but not all failure modes, only the ones in scope for the goals of the system. From the outside you can't tell what the goals are.

There is no such thing as eliminating all failure modes, which was exactly the point I was making in my post above. The best you can do is define your goal clearly and design a system to meet the constraints defined by that goal. If goals change, you must redesign.

This is the core of engineering.


> Yes, but not all failure modes, only the ones in scope for the goals of the system. From the outside you can't tell what the goals are.

Is basic availability not a goal of a blog?

Phrased differently: given two systems - one that fails if a theoretically possible but otherwise "unpredictable" number of requests arrives, and one without that failure mode - which is better?

> From the outside you can't tell what the goals are.

I either don't agree, not even a tiny bit, or I don't understand. Can you explain this differently?

> This is the core of engineering.

I'd say the core of engineering is making something that works. If you didn't anticipate something that most engineers would say is predictable, and that predictable thing, instead of degrading service, completely takes the whole thing down so that it doesn't work... that's a problem, no?


> presuming to be able to identify "the best programmers"

He was identifying the best programmers he knows (as is obvious from the title). I don't think it is unreasonable at all for even a semi-technical person to be able to do that.

Also, it is highly likely that the author never expected their article to receive a high volume of web traffic, and allocated resources to it with that assumption. That doesn't say a thing about their technical abilities. You could be the best programmer in the world and make an incorrect assumption like that.


I can identify a lion without being able to chase down and kill a gazelle on the hoof


This is not a good analogy. Anyone can identify "a programmer". Identifying "the best programmers", or "the best lions" (in some respect) is an entirely different matter.


Make it "I can identify a good baker without being able to make a wild-fermented bread myself", then. In any case, it's a "proof of the pudding is in the eating" thing: good programmers are defined as programmers who make good software, and good software is software that pleases users and provides the functionality they want. You don't need to be a programmer to know whether the software you're using is consistently good across its lifecycle. If it's bad at the outset, it's bad at the outset, and if it's not built maintainably and extensibly, it will become bad over the course of its lifetime.


Presumably the author didn’t claim that they were one of them :)


> There's some irony, is there not

There is not.


The best programmers know that using the free resource of the Internet Archive is the optimal approach for their own effort and cost, versus making their own website scale for a temporary load? (Kidding…I think)


Most developers are terrible at system administration, which is quite disappointing and is one of the reasons the author uses Clownflare. Being able to maintain systems is as important as writing code.


This is kind of a ridiculous take.

Not going to speak for the author, but some of us just want to be able to write a blog post and publish it in our free time. We're not trying to "maintain systems" for fun.

Some of those posts get zero views, and some of them end up on the front page of Hacker News.


There is literally a "Submit to HN" button at the bottom of the blog post.

Moreover, the author appears to be a lot more serious than just a free time blogger:

https://web.archive.org/web/20250405193600/https://endler.de...

> My interests are scalability, performance, and distributed systems

> Here is a list of my public speaking engagements.

> Some links on this blog are affiliate links and I earn a small comission if you end up buying something on the partner site

> Maintaining this blog and my projects is a lot of work and I'd love to spend a bigger part of my life writing and maintaining open source projects. If you like to support me in this goal, the best way would be to become a sponsor


I was talking about more than just a blog. It puts things into a different perspective when you are writing a big program. For instance, say you are tasked with creating a custom auth system. Would you feel more comfortable having used something like Authentik or Kanidm in the past, or having no experience with them at all?


Off-topic: is the rate limit because they host on a Cloudflare compute service? I ask because I would like to know whether this feature is available when just using Cloudflare for domain hosting.


That feature exists on Cloudflare outside of CF Workers or their other compute offerings. It's part of their WAF feature set.


I don't think it is WAF-related; the error page clearly says:

> If you are owner of this website, prevent this from happening again by upgrading your plan on the Cloudflare Workers dashboard.

Looking into it, my hypothesis is that the owner's page is server-side rendered using Cloudflare Workers and they reached the daily request limit.


Looking at the archive.org mirror, the content is 2,000 words and a few images. It constantly astounds me how much "compute" people seem to need to serve 10K of text in 50K of HTML.


If your business is selling server-side compute to render front-end web apps on the back end, you try to convince an entire generation that it's needed.

And a few companies have been very successful in this effort.

