Exactly. I respect their decision to go closed source if that's what they need to do to make it a viable business, but just be honest about it. Don't make up some excuse around security and open source.
it's not necessarily about people self hosting it, it's about people preferring to pay for hosted stuff that is open source (e.g. I pay for Plausible).
Now it's a lot easier to rewrite open source stuff to get around licensing requirements, and to have an LLM watch the repo and copy every improvement and fix, so the bar for a competitor to come along and get 10 years of work for free is a lot lower.
We've run an extremely profitable business for five years, raised a seed and a Series A, and grown at 300% a year sustainably while being open source.
Going closed source actually hurts our business more than it benefits it. But it ultimately protects customer data, and that's what we care about the most.
I think if it ultimately protects customer data in a significant way, I would be for it.
Are you able to share any more detail on how you determined this is the best route? It would be a significant implication for many other pieces of open source software also if so.
(And I say this is someone who just recommended cal.com to someone a few days ago specifically citing the fact that it was open source, that led to increased trust in it.)
I think if you are committed to switching back to open source as soon as the threat landscape changes, and you have some metric for what that looks like, that would be valuable to share now.
I would like to see the analysis that you're referencing around open source being 5-10x less secure.
I literally have a Claude Code skill called "/delib" that takes in any nodejs project/library and converts it to a dependency-less project using only the standard library.
It started as a what-if joke, but it's turned out to be amazing. So yeah, npmjs.com is just reference site for me now, and node_modules stays tiny.
And the output is honestly superior. I end up with smaller projects, clean code, and a huge suite of property-based tests from the refactor process. And it's fully automatic.
It varies from project to project, but applications benefit a lot more than libraries. When I de-lib a normal express app it might add a few hundred lines of code and a few thousand new tests, but if I de-lib a library, it depends on how ancient it is. The older the library is, the higher the chances that most of what it needs is built into the standard library.
separating codebase and leaving 'cal.diy' for hobbyists is pretty much the classic open-core path. the community phase is over and they need to protect their enterprise revenue.
blaming AI scanners is just really convenient PR cover for a normal license change.
Yes. Before AI, the source was a demonstration of your substance. Users would be encouraged to reach out to maintainers and pay for upgrades, custom tweaks, or training, or pay indirectly via advertising while reading the docs. After AI, those revenue streams have collapsed: now you have to withhold enough of the work to make it hard for an individual to recreate it with an LLM, and the open source part gets restricted to a rich interaction layer. Cloudflare just announced they are using that model: their services were already closed source, but now they are exposing them through new APIs. That way they can capitalize on existing services that were not ripe enough for SaaS before AI and had to be handled by their in-house professional services folks. With this move they are using AI to expand and automate their white-glove professional services business down to smaller customers.
This is part of it for sure. It is also true that many open source businesses depended on it not being worth the trouble to figure out the hosting setup, the ops, and the code. Typical open source businesses also make a practice of holding a few features back from the public repo.
Now I can take an open source repo and just add the missing features, fix the bugs, deploy in a few hours. The value of integration and bug-fixing when the code is available is now a single capable dev for a few hours, instead of an internal team. The calculus is completely different.
Yes, it feels like they've been looking for an excuse to go closed-source, and this one is plausible enough to make it sound like they're only doing it because they "have to".
The company losing access to the data is only one half of the ransomware threat. The other half is unauthorized parties gaining access to the data. Backups only protect against the former.
That's true, but the leakage component is characteristic of many kinds of breaches and not specific to ransomware. Likewise its defenses are not ransomware-specific.
* S3 is super expensive, unless you use Glacier, but that has a high overhead per file, so you should bundle small files before uploading.
* If you value your privacy, you need to encrypt the files on the client before uploading.
* You need to keep multiple revisions of each file and manage their lifecycle. Otherwise you lose any data that had already been overwritten by the time of the most recent backup.
* You need to de-duplicate files, unless you want bloat whenever you rename a file or folder.
* Plus you need to pay for Amazon's extortionate egress prices if you actually need to restore your data.
I certainly wouldn't want to handle all that on my own in a script. What can make sense is using open source backup software with S3/R2/B2 as backing storage.
As with anything, there are tradeoffs. There are situations where S3 is cheaper. For my use case (long term storage, rarely downloaded), Glacier Deep Archive is about $1/TB/mo, which is cheaper than anything else out there (rsync.net, Backblaze B2, Cloudflare R2, Wasabi). Where you get bitten by Glacier is if you need frequent access, since you pay a fee for AWS to retrieve the file in addition to the bandwidth used to download it.
In terms of software I've been impressed by restic, and, as a developer who wants to skip backing up gitignored files, by the rustic clone of restic.
In terms of cloud storage... well, I was using Backblaze's B2, but the issues here are definitely making me reconsider doing business with the company, even if my own use of it isn't impacted by any of them.
A warning before uploading with the option to strip metadata would make sense. But I want the ability to upload a file to a website without it getting silently corrupted in transit.
Not just stats. Configuration changes take around a day to take effect as well. Figuring out how to do authentication and permissions was such a pain. The half-assed integration with Google Cloud doesn't quite behave like normal Google Cloud. Vague error messages. And every time you changed something, you couldn't be certain your new setting was correct until you'd waited roughly a day.
Your circuit will contain auxiliary output bits. That way it's reversible knowing all output bits, but irreversible knowing only the actual hash output.
Quite similar to how SHA3/Keccak is built from a reversible permutation, but becomes irreversible once the output is truncated.
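A toy illustration of the auxiliary-bits point (nothing here reflects a real hash circuit): a two-word map that is a bijection on its full output, but loses information once part of that output is dropped:

```javascript
// forward: (a, b) -> (a, a XOR b) is a bijection on pairs of integers.
function forward(a, b) { return [a, a ^ b]; }
// Given BOTH output words, inversion is exact: x ^ (x ^ b) recovers b.
function inverse(x, y) { return [x, x ^ y]; }

const [x, y] = forward(13, 42);
console.log(inverse(x, y)); // → [ 13, 42 ]

// But if only y = a ^ b survives "truncation" (the hash output),
// many distinct inputs collide on it, so inversion is impossible:
console.log((13 ^ 42) === (12 ^ 43)); // → true
```

The first output word plays the role of the auxiliary bits: keep it and the circuit is reversible; discard it and you're left with a many-to-one function.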
(Though that might not have helped OP, since they wanted to delete something and just didn't understand what they were deleting.)