But the issue is browsers don't make money. You can't charge for them, you can't add ads to them, etc. You're competing with the biggest companies in the world (Google, Apple), both of which are happy to subsidize a browser for other reasons.
They could try. I keep hearing from people who say they would pay, even with no extra features, as long as the money went to actual Firefox development and not to random unrelated Mozilla projects. I would pay a subscription. But they don't let me.
The problem I (and others I see here) have is a lack of trust in Mozilla's model, especially long term. Their economic reliance on Google, their repeatedly stated goal of engineering ad-delivery systems that "respect privacy", their very high CEO salaries, and their random ventures do not inspire much trust or confidence that our goals are aligned. Nor do the unclear relationships between their for-profit and non-profit parts.
If they can convince me that a subscription for Firefox will go strictly toward Firefox development, that Firefox will not pivot to ads (privacy-respecting or not), and that all the other stuff they have, including executives' salaries and whatnot, is kept completely separate, I would be more than happy to subscribe.
You can't effectively paywall it because not only is it open source, but there are many nearly equivalent competitors all of which are free. Any subscribers would essentially be donors.
There are people like yourself who would be happy to donate, but not nearly enough. Replacing MoCo's current revenue with donations would require giving at the level of Doctors Without Borders, the American Cancer Society, or the Make-A-Wish Foundation.
Turning into one of the largest charities in America overnight simply isn't realistic. A drastic downsizing to subsist on donor revenue also isn't wise when Mozilla already has to compete with a smaller team. And "Ladybird does it" isn't a real argument until and unless it graduates from cool project to usable and competitive browser.
Oh no, it would be a donation and it's not going to completely replace all the funding of the parent entity of the project mentioned, therefore it's not realistic or worth trying. Right... That's a lot of arguments unrelated to what I wrote.
> That's a lot of arguments unrelated to what I wrote.
What I understand they are saying is that donations wouldn't be nearly enough. Which is related to what you wrote, which is that you would gladly donate to Firefox (not Mozilla, but Firefox).
They compared it to the largest non-profits in America, presumably because that's roughly the scale of what Mozilla spends every year. Right now Google pays for Mozilla, and if you wanted to replace that with donations, it would have to become one of the biggest charities in America. Which does not sound plausible.
I think the point is that since it's open source and effectively free, the money would have to come from donations. And given what Mozilla spends every year, the amount of donations they would need to receive would make them one of the biggest charities in America. Which sounds implausible.
What?! Browsers might as well be money printers! Have you heard how much money Google pays Apple to be the default search engine in Safari?
The higher Firefox’s user numbers, the more money Mozilla can make from search engine deals. Conversely, if Mozilla keeps trying to push a bunch of other initiatives while Firefox languishes and bleeds users, Mozilla will make less money.
If you don’t like this form of revenue… well, I don’t know what to tell you, because this is how web browsers make money. And trying other stuff doesn’t seem to be working.
You can and you should. There are people that are happy to pay for email, for search, for videos, for news, for music. I don't see why there wouldn't be people happy to pay for a browser.
The idea that software is free is completely wrong and is something that an organization like Mozilla should combat. If software is free, there can be no privacy, it's as simple as that.
> The idea that software is free is completely wrong
> If software is free, there can be no privacy, it's as simple as that.
Strongly agreed. Free software, whether $0 or under stronger licenses like the GPL, has its economics completely shifted as an unintended side effect. Those new economics tend to favor clandestine funding sources (e.g. ads or malicious supply-chain code).
But sustainable funding honestly isn't Mozilla's strong suit (or tech's in general, for that matter).
> I don't see why there wouldn't be people happy to pay for a browser.
I admittedly didn't check the numbers, but a comment in a sibling thread says that if Mozilla were to replace their revenue with donations, they would have to become one of the biggest charities in America.
Is that even realistic? Like would they make that kind of money just from donations?
Based on the comments in here from people willing to pay, I wonder why they haven't gone the Wikipedia route of asking for donations. Would that piss off a lot of users? I do think most people would understand that a non-profit needs donations.
I've daily driven Thunderbird for over a decade. You have very few options for having a single program manage multiple email accounts outside of Outlook and Thunderbird anymore. Maybe Apple Mail on Mac (and whatever Microsoft is preloading on Windows these days), but that's it.
>Firefox - the one thing they do not want to work on
I'm sorry but this is complete nonsense. Just this year they pushed 12 major releases, with thousands of patches, including WebGPU efficiency improvements, an updated PDF engine, and numerous security fixes, amounting to millions of lines of new code. They maintain a codebase that rivals that of Chrome and of the Linux Kernel and push the equivalent of Rust's entire codebase on a monthly basis.
> They maintain a codebase that rivals that of Chrome and of the Linux Kernel and push the equivalent of Rust's entire codebase on a monthly basis.
Is that comparison supposed to make their management of the code base seem better or worse? Chrome, Linux and Rust are arguably colossi in their niches (Rust having the weakest claim). Firefox's niche is Chrome's and it doesn't do that well. It used to be that at least Firefox had its own little area with more interesting extensions, but obviously that was too hard for them to handle - yes, I'm still grumpy about ChatZilla.
You might be interested to know that there are still some legacy extensions that work on today's Firefox. In my case it's VimFX: when Firefox breaks it, I'm done with Firefox. But while it works, I'm sticking with Firefox. It's like having the power of Qutebrowser but with the extensions and performance of Firefox.
Well, I replied to a comment suggesting they aren't working on Firefox by noting how much work is being done on Firefox. But you seem to want to change the subject to a different question: the extent to which you can gauge "success" relative to competitors, or infer management efficiency. That's fine, but orthogonal to my point.
>It's sure a corny stance to hold if you're navigating an infrastructure nightmare daily, but in my opinion, much of the complexity addresses not technical, but organisational issues: You want straightforward, self-contained deployments for one, instead of uploading files onto your single server ...
You can get all that with a monolith server and a Postgres backend.
With time, I discovered something interesting: for us techies, using container orchestration is about reliability, zero-downtime deployments, limiting blast radius, etc.
But for management, it's completely different. It's all about managing complexity on an organizational level. It's so much easier to think in terms of "Team 1 is in charge of microservice A". And I know from experience that it works decently enough, at least in some orgs with competent management.
It’s not a management thing. I’m an engineer and I think it’s THE main advantage microservices actually provide: they force a hard split in your code and allow a team to actually own its domain. No crossing domain boundaries, no in-between shared code, etc.
I know: it’s ridiculous to have an architectural barrier for an organizational reason, and the cost of a bad slice multiplies. I still think that in some situations it is better than the gas-station-bathroom effect of shared codebases.
I don't see why it's ridiculous to have an architectural barrier for org reasons. Requiring every component to be behind a network call seems like overkill in nearly all cases, but encapsulating complexity into a library where domain experts can maintain it is how most software gets built. You've got to lock those demons away where they can't affect the rest of the users.
The problem is that a library usually does not provide good enough boundaries. A C library can scribble all over your process memory. A Java library can wreak havoc on your objects with reflection, or just call System.exit() (LOL). The minimal boundary that keeps the demons at bay is the process boundary, and then you need some way for processes to talk to each other. If you're separating components into processes, it's very natural to put them on different machines, so you need your IPC to be network calls. One more step and you're implementing REST, because infra people love HTTP.
> it's very natural to put them on different machines, so you need your IPC to be network calls
But why is this natural? I’m not saying we shouldn’t have network RPC, but it’s not obvious to me that we should have only network RPC when there are cheap local IPC mechanisms.
Because horizontal scaling is the best scaling method, and moving services to different machines is the easiest way to scale. Of course you can keep them on the same machine until you actually need to scale (maybe forever), but it makes sense to make some architectural decisions early that won't prevent scaling later, if the need arises.
Premature optimisation is the root of all evil. But premature pessimisation is not a good thing either. You should keep options open, unless you have a good reason not to do so.
If your IPC involves moving gigabytes of transient data between components, maybe it's a good idea to use shared memory. But usually that's not required.
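For what it's worth, here's a minimal sketch of that shared-memory case in Python (the names and sizes are made up for illustration): the producer fills a block once and the consumer attaches to it by name, so nothing gets serialized or shipped over a socket between the components.

```python
from multiprocessing import Process, shared_memory

def consumer(shm_name: str, size: int) -> None:
    # Attach to the existing block by name: no socket, no serialization,
    # no copy of the payload between the two components.
    shm = shared_memory.SharedMemory(name=shm_name)
    print(f"consumer sees {size} shared bytes, first byte = {shm.buf[0]}")
    shm.close()

if __name__ == "__main__":
    size = 100 * 1024 * 1024                # stand-in for "gigabytes" of transient data
    shm = shared_memory.SharedMemory(create=True, size=size)
    shm.buf[:size] = b"\x07" * size         # producer fills the buffer in place

    p = Process(target=consumer, args=(shm.name, size))
    p.start()
    p.join()

    shm.close()
    shm.unlink()                            # the creating process owns cleanup
```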
I'm not sure I see that horizontally scaling necessarily requires a network call between two hosts. If you have an API gateway service, a user auth service, a projects service, and a search service, then some of them will be lightweight enough that they can reasonably run on the same host together. If you deploy the user auth and projects services together then you can horizontally scale the number of hosts they're deployed on without introducing a network call between them.
This is somewhat common in containerisation where e.g. Kubernetes lets you set up sidecars for logging and so on, but I suspect it could go a lot further. Many microservices aren't doing big fan-out calls and don't require much in the way of hardware.
>Requiring every component to be behind a network call seems like overkill in nearly all cases
That’s what I was referring to, sorry for the inaccurate adjective.
Most people try to split a monolith into domains, move code into libraries, or something along those lines, but IMO you rarely avoid a shared space that imports the subdomains, with blurry/leaky boundaries and with ownership falling through the cracks.
Microservices are better at avoiding that shared space, as there is less expectation of an orchestrating common layer. But as you say, the cost is ridiculous.
I think there’s an unfilled space for an architectural design that somehow enforces boundaries and avoids common spaces as strongly as microservices do, without the physical separation.
How about old fashioned interprocess communication? You can have separate codebases, written in different languages, with different responsibilities, running on the same computer. Way fewer moving parts than RPC over a network.
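To make that concrete, here is a minimal sketch in Python (socket path and message format are made up, and a thread stands in for what would really be a second process, possibly in another language): two components talking over a Unix domain socket on the same box, with no ports, DNS, TLS, or load balancer involved.

```python
import os
import socket
import threading

SOCK_PATH = "/tmp/demo-ipc.sock"   # hypothetical path

def serve_once(srv: socket.socket) -> None:
    # The "service" side: accept one connection and echo the request back.
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())

if __name__ == "__main__":
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)

    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(1)
    threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

    # The "client" component; in real life this would be a separate process,
    # potentially a separate codebase in a different language.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
        cli.connect(SOCK_PATH)
        cli.sendall(b"hello from another component")
        print(cli.recv(1024).decode())

    srv.close()
    os.unlink(SOCK_PATH)
```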
Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.
And then you have some other group of people that sees all the redundancy and decides to implement a single unified platform on which all the microservices shall be deployed.
> using container orchestration is about reliability, zero-downtime deployments
I think that's the first time I've heard any "techie" say we use containers for reliability or zero-downtime deployments. Those feel like they have nothing to do with each other; we were building reliable server-side software with zero-downtime deployments long before containers became the go-to, and if anything it was easier before containers.
It would be interesting to hear your story. Mine is that containers in general start an order of magnitude faster than VMs (in general! we can easily find edge cases), and hence e.g. horizontal scaling is faster. You say it was easier before containers; I say k8s, in spite of its complexity, is a huge blessing, as teams can upgrade their own parts independently and do things like canary releases easily, with automated rollbacks etc. It's so much faster than VMs or bare metal (which I still use a lot and don't plan to abandon anytime soon, but I understand their limitations).
In general, my experience across two decades of running web services is that more moving parts means less reliable. The most reliable platforms I've helped manage have been the ones that avoided adding extra complexity until they really couldn't avoid it, and when I left, they still deployed applications by copying a built binary to a Linux host, reloading the systemd service, switching the port in the proxy to let traffic hit the new service while health-checking it, and, when green, switching over and stopping the old service.
Deploys usually took minutes (unless something was broken), scaling worked the same as with anything else (increase a number and redeploy), and there was no Kubernetes, Docker, or even containers as far as the eye could see.
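For concreteness, the flow was roughly the following (a rough sketch with made-up host names, unit names, and helper scripts, not the actual tooling): copy the new binary over, start it on a spare port, health-check it, flip the proxy, stop the old instance.

```python
#!/usr/bin/env python3
"""Sketch of a blue/green-style deploy; hosts, paths, and helpers are hypothetical."""
import subprocess
import time
import urllib.request

HOST = "app-host-1"          # hypothetical host
NEW_PORT = 8081              # port the "green" systemd unit listens on

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

# 1. Copy the freshly built binary next to the running one.
run(["scp", "./build/myapp", f"{HOST}:/opt/myapp/myapp-green"])

# 2. Start the new instance via a second systemd unit bound to NEW_PORT.
run(["ssh", HOST, "sudo systemctl restart myapp-green.service"])

# 3. Poll the health endpoint until it reports healthy, or give up.
for _ in range(30):
    try:
        with urllib.request.urlopen(f"http://{HOST}:{NEW_PORT}/healthz", timeout=2) as resp:
            if resp.status == 200:
                break
    except OSError:
        pass
    time.sleep(2)
else:
    raise SystemExit("new instance never became healthy; aborting deploy")

# 4. Point the proxy at the new port (hypothetical helper script), then stop the old unit.
run(["ssh", HOST, "sudo /usr/local/bin/switch-upstream 8081 && sudo systemctl stop myapp-blue.service"])
```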
As soon as there is more than one container to organise, it becomes a management task for said techies.
Then suddenly one realises that techies can also be bad at management.
Managing a container environment requires not only deployment skills but also documentation and communication skills. Suddenly it's not management but the techie who can't manage their tech stack.
This pointing of fingers at management is rather repetitive and simplistic but also very common.
You don't. When your server crashes, your availability is zero. It might crash for a myriad of reasons; at some point you might need to update the kernel to patch a security issue, for example, and be forced to take your app down yourself.
If your business can afford irregular downtime, by all means, go for it. Otherwise, you'll need to take precautions, and that will invariably make the system more complex than that.
>You don't. When your server crashes, your availability is zero.
As your business needs grow, you can start layering complexity on top. The point is you don't start at 11 with an overly complex architecture.
In your example, if your server crashes, just make sure you have some sort of automatic restart. In practice that may mean a downtime of seconds for your 12 users. Is that more complexity? Sure - but not much. If you need to take your service down for maintenance, you notify your 12 users and schedule it for 2am ... etc.
Later you could create a secondary cluster and stick a load balancer in front. You could also add a secondary replicated PostgreSQL instance. So the monolith/Postgres architecture can actually take you far as your business grows.
Changing/layering architecture adds risk. If you've got a standard way of working that you can easily put in place on day one, and whose fundamentals then don't need to change for years, that's way lower risk, easier, and faster.
It is common for founding engineers to start with a preexisting way of working that they import from their previous, more-scaled company, and that approach is refined and compounded over time.
It does mean starting with more than is necessary at the start, but that doesn't mean it has to be particularly complex. It means you start with heaps of already-solved problems that you simply never have to deal with, allowing focus on the product goals and the deep technical investments that need to be specific to the new company.
Yeah, theoretically that sounds good. But I've had more downtime from cloud outages and Kubernetes updates than I ever had using a simple Linux server with nginx on hardware; most outages I had on Linux were on my VPS, due to Digital Ocean's own hardware failures. And AWS was down not so long ago.
And if certain servers do become very important, you just run a backup server on a VPS and switch over DNS (even if you keep a high TTL, most resolvers update within minutes nowadays), or if you want to be fancy, throw a load balancer in front of it.
If you solve issues in a few minutes, people are always thankful, and most don't notice. With complicated setups it tends to take much longer to figure out what the issue is in the first place.
I don't see how you solve this with microservices. You'll have to take down your services in these situations too; a microservices soup has exactly the same problem as a monolith.
Also, in 5 years of working on both microservicy systems and monoliths, not once have the things you describe been a problem for me. Everything I've hosted in Azure has been perfectly available pretty much all the time, unless a developer messed up or Azure itself had downtime that would have taken down either kind of app anyway.
But sure let's make our app 100 times more complicated because maybe some time in the next 10 years the complexity might save us an hour of downtime. I'd say it's more likely the added complexity will cause more downtime than it saves.
> I don't see how you solve this with microservices.
I don't think I implied that microservices are the solution, really. You can have a replicated monolith, but that absolutely adds complexity of its own.
> But sure let's make our app 100 times more complicated because maybe some time in the next 10 years the complexity might save us an hour of downtime.
Adding replicas and load balancing doesn't have to be a hundred times more complex.
> I'd say it's more likely the added complexity will cause more downtime than it saves.
As I said before, this is an assessment you will need to make for your use case, and balance uptime requirements against your complexity budget; either answer is valid, as long as you feel confident with it. Only a Sith believes in absolutes.
You can have redundancy with a monolithic architecture. Just have two different web server behind a proxy, and use postgres with a hot standby (or use a managed postgres instance which already has that).
They are. But now you've expanded the definition of "a single monolith with Postgres" to multiple replicas that need to be updated in sync; you've suddenly got shared state across multiple fully isolated processes (in the best case) or multiple nodes (in the worst case), and a myriad of other subtle gotchas you need to account for, which raises the overall complexity considerably.
You're being sarcastic, but heavens above, have I had some cringe interviews in my last round, and most of the absurdity came from smaller start-ups too.
If you don't make it clear people will think you're serious.
Sarcasm doesn't work online. If I write something like "Donald Trump is the best president ever", you don't have any way of knowing whether I'm being sarcastic or just really, really stupid. Only people who know me can make that judgement, and basically nobody on here knows me. So I either have to avoid sarcasm or make it clear that I'm being sarcastic.
Most times it isn't the complexity that bites, it's the brittleness. It's much easier to work with a bad but well-documented solution (e.g. GitHub Actions), where all the issues have already been hit by other users and the workarounds are documented by the community, than to roll your own (e.g. a simple script-based CI/CD).
>I personally wouldn't like to put caching in Postgres, even though it would work at lower scales.
Probably should stop after this line - that was the point of the article. It will work at lower scales. Optimize later when you actually know what to optimize.
My point is more that at that scale I'd try to avoid caching entirely. Unless you're doing analytical queries over large tables, Postgres is plenty fast without caching if you're not doing anything stupid.
>So sure, you can make a unscalable solution that works for the current moment.
You're making two assumptions - both wrong:
1) That this is an unscalable solution. A monolith app server backed by Postgres can take you very, very far. You can scale vertically by throwing more hardware at it, and you can scale horizontally by just duplicating your monolith server behind a load balancer.
2) That you actually know where your bottlenecks will be when you hit your target scale. When (if) you go from 1000 users to 10,000,000 users, you WILL be re-designing and re-architecting your solution regardless of what you started with, because at that point you're going to have a different team, different use-cases, and therefore a different business.
Because you never read about their failures framed that way. To users, they might simply become uninteresting because they fail to deliver good features reliably, because, unbeknownst to the users, they're busy working on their scalable architecture all day.
The final verdict is rarely that they over-engineered it and thus failed to build an interesting service.
>This is basically an article describing why you can’t just look at an event after it occurs, see that it has some extremely rare characteristics, and then determine it was unlikely to happen by chance.
No. That's not it. In this case, if you properly control for all the factors, it turns out that the odds of Nakamura having that kind of win streak (against low-rated opponents) were in fact high.
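To illustrate the reasoning with a toy model (the ratings and game counts below are made-up assumptions, not the actual analysis): combine the standard Elo expected-score formula with the sheer number of blitz games played, and long winning streaks against much lower-rated opposition stop looking improbable.

```python
import random

def elo_win_prob(r_player: float, r_opponent: float) -> float:
    # Standard Elo expected score; draws ignored for simplicity.
    return 1.0 / (1.0 + 10 ** ((r_opponent - r_player) / 400.0))

def longest_streak(n_games: int, p_win: float, rng: random.Random) -> int:
    best = current = 0
    for _ in range(n_games):
        if rng.random() < p_win:
            current += 1
            best = max(best, current)
        else:
            current = 0
    return best

rng = random.Random(0)
p = elo_win_prob(3000, 2650)   # assumed ratings: elite blitz player vs. much lower-rated opponents
runs = sorted(longest_streak(10_000, p, rng) for _ in range(100))
print(f"per-game win probability: {p:.2f}")
print(f"median longest winning streak over 10,000 games: {runs[len(runs) // 2]}")
```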
>Being a programmer is not about configuring your development environment.
I know what OP is referring to. Back in the day, a programmer was expected to have built their own toolbox of utility scripts, programs, and configurations that would travel with them as they moved from project to project or company to company. This is akin to a professional (craftsman, photographer, chef, electrician, etc.) bringing their own tools to a job site.
Sure, I have ~/bin and a .emacs.d that I've been working on since last millennium, and various other tools I've written that I use regularly. It's certainly a handicap to work in an environment that's unfamiliar, especially for the first day or two. And sometimes spending five minutes automating something can save you five minutes a day. Making grep output clickable like in Emacs, as demonstrated here, is a good example of that.
But, on the other hand, time spent on sharpening your tools is time not spent using them, or learning how to use them, and the sharpest tools won't cut in the hands of the dullest apprentice. And sometimes spending five hours automating something will save you five seconds a week. All the work I spent customizing window manager settings in the 90s, or improving my Perl development experience early this millennium, produced stuff I don't use now—except for the skills.
> All the work I spent customizing window manager settings in the 90s, or improving my Perl development experience early this millennium, produced stuff I don't use now—except for the skills.
If you enjoyed the process it was time well spent.