You're not considering opportunity costs and buyers vs. users.
If your senior developers can slap together something better than an expensive SaaS offering, you want them directing that energy at your core products/services rather than at supporting tools.
And the people deciding to buy the expensive SaaS tools are often not the people using them, and they typically don't care much about how crappy the tool may or may not be at doing the job it's advertised to do.
No matter what, it's a tax on your engineering team to keep it all together. But the most brittle parts are always right at the seams. It's not as hard to sew components together when you can cut the cloth to fit. Who knows how it'll shake out.
Lumping all SaaS products together just means you can't really have a productive discussion. SaaS products sit on a spectrum of quality, from amazing (Stripe, Datadog) to terrible (Fivetran, GitHub). It's up to you as a user to make a call as to which will serve you best, what you should focus your limited resources on, etc.
> what if this time it's senior developers and they actually can slap something together better than the expensive SAAS offerings
A typical SaaS customer will use many pieces of software (we mostly call them SaaS now) across its various functions: HR, accounting, CRM, etc. Each of those vendors will have access to the same pool of senior devs and AI tools, but they can pour more resources into their one area and theoretically deliver better software.
The bigger issue is that the economics of the C-suite have not changed. Assume a 100-person CPG company uses 10-20 SaaS apps. Salesforce might be $100k/year or whatever. 1Password is $10k. Asana $10k. Etc. They add up, but on the other hand it is not productive to task a $150k employee with rebuilding a $10k tool. And even with AI, it would take a lot of effort to make something that will satisfy a team accustomed to a modern SaaS tool like Salesforce or Atlassian. (Engineers will not even move off GitHub, and it's literally built on free software.)
That's before I get to sensitive areas. Do you want to use a vibe-coded accounting system? Inventory system? Payroll? You can lose money, employees, and customer goodwill very rapidly due to a few bugs. Who wants to be responsible when all their employees' passwords are compromised because they wanted to save $800/mo?
Then, the gains from cutting SaaS are capped: you can only cut your SaaS spend to zero. On the other hand, if you have those engineers, you can point them at niche problems in your business (which you know better than anyone) and create conditions for your business to grow faster. The returns from this are uncapped.
TL;DR: it's generally not a great idea to build in-house unless your requirements are essentially bespoke.
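To make the build-vs-buy math above concrete, here is a back-of-the-envelope sketch. All figures are illustrative assumptions (the loaded-cost multiplier, build time, and maintenance share are made up for the example), not real vendor quotes:

```python
# Back-of-envelope build-vs-buy. All numbers are illustrative assumptions.
saas_annual_cost = 10_000           # e.g. a $10k/year tool like Asana
engineer_salary = 150_000           # base salary from the comment above
loaded_multiplier = 1.4             # assumed benefits/overhead multiplier
build_months = 3                    # assumed (optimistic) build time
maintain_fraction = 0.10            # assumed ongoing share of one engineer

loaded_annual = engineer_salary * loaded_multiplier
build_cost = loaded_annual * build_months / 12
annual_maintenance = loaded_annual * maintain_fraction
first_year_build = build_cost + annual_maintenance

print(f"build: ${first_year_build:,.0f} vs buy: ${saas_annual_cost:,}")
# build: $73,500 vs buy: $10,000
```

Even under these optimistic assumptions, the first-year cost of building runs several times the subscription price, before counting the opportunity cost of what that engineer would otherwise ship.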
As my manager said to a young me when I offered to replace our CMS, promising I could do a good job of it: "You could probably assemble our office furniture too, but I don't want to pay you to do that either."
We have replaced many SaaS products with in-house solutions, but most of those were lacking in quality and were part of our existing core business model, which we were not "owning" prior. We can also flip the argument: we have lost customers and revenue due to SaaS not delivering.
The gains are generally seen outside of the monetary, as these SaaS solutions were holding us back from achieving our goals and improving our services to our customers. At the end of the day, our customers do not care if "insert SaaS" is having issues; it will always be our problem to own.
To the first question: if your senior devs can do that, there's almost certainly something more directly valuable to your business they could be doing than solving a problem your vendor has already solved.
The second question is a valid one, and I think it will somewhat raise the bar on what successful SaaS vendors will have to offer in the coming years.
Nice what-ifs, but not valid so far. I get the motivation to think/hope so, but that's not how the business world works right now, where the big money is. Maybe next year it could start becoming true, but then the market will be a bit different too.
There are of course exceptions to every rule, and I'm sure some companies have been successful in building their own in-house tooling.
At the end of the day these decisions are all series of trade-offs, and the trick is understanding your requirements and capabilities well enough to make the right trade-offs.
This is because what management wants and what builders want are not aligned, not because the quality of JIRA is so amazing that no other alternative could ever be created. JIRA is fine, but many people I know who use it have some qualms with it because the bloat is pretty crazy.
As Spolsky said a quarter century ago, "bloat" is just "bugs somebody already fixed". (He may have actually said that about "cruft", but the idea still applies.)
If Trump was referring to them he would have mentioned them.
Here is what he said word for word, where did he mention those?
"The Republicans should say: 'We want to take over. We should take over the voting in at least 15 places.' The Republicans ought to nationalise the voting," Trump said during an appearance on the podcast of his former deputy FBI director, Dan Bongino.
That's just what Trump does: he says the abhorrent thing that he wants done, then lies about what he "meant" when the wrong people get upset because of what he wants done.
Why do people take these AI "safety" research projects at face value? The real reason you need AI that is "safe" and "governable" is so that when you start having it promote advertisers' content or support the current administration, you don't have to worry about it going "off the rails" and promoting a competing product or criticizing the administration.
I'm sure plenty of researchers in this space also believe they are working for the good of humanity, but I suspect the real aim is much more practical and perfectly aligned with the business interests of all the companies sponsoring this type of work.
We currently can't do "AI safety" even in bleeding-edge alignment research, so investing in startups in that area is just burning money. Current LLMs/ViTs have a non-zero probability of producing something unsafe, and that's an inherent trait.
Just so I understand your comment: are you saying that the money for safety should go first into research, and not into a later startup, which is just putting lipstick on the pig? If so, I agree with that understanding, no doubt about it.
Basically yes. First figure out how to do reliable models that can be well-aligned with safety expectations, then invest into a startup that brings that tech to the industry.
Is it even an attainable goal? It seems even an NN with fewer than, say, 4 billion parameters will be able to do that. The cost of training will likely go down as more models become available. Unless we lock down computing for the majority of people, I don't see how we can prevent someone from creating a CSAM model in their garage.
I don't want to see CSAM created, but the totalitarian control required is too much for my taste (and frankly it's preferable for that person to use NN than to go out and hurt actual children).
Not to mention even locked down technology is often being abused by the privileged.
People can draw naked children with a stick and some dirt. I’m not sure that preventing the creation of fictional csam is the best use of our resources if we want to protect minors from abuse.
The best use? Probably not. But if I built a website that let people generate extremely convincing unlimited photos of you wearing an SS uniform and forcing your dog to smoke meth and sent them to everyone you’ve ever met, this might seem like a less worthy hill to die on. Or is that just a sticks and dirt thing too?
Everybody I care about would know that those pictures are not real so I think that the harm to me would likely be lower than the harm to society if building websites were impossible.
The people sending the pictures are criminally liable, regardless of where they got them. The fact that somebody built a website for it is irrelevant; the act of sending them unsolicited is the immoral act here. (And frankly, it's probably going to be laughed off or end up as spam unless somebody you associate with is an idiot.)
The goal is not to prevent someone from making their own model do what they want, but to prevent your model from doing what you don't want, like generating CSAM or non-consensual sexually explicit photos.
I don't understand how it's controversial that someone, or some company, might not want their products to be known for that.
It's a bit of an odd requirement, but OK. I mean, who is the malicious actor here: the AI, the human user, or the AI provider?
If the AI, then we shouldn't give it agency (the user should always vet the output).
If it's the user, the AI is irrelevant to the question. And if it's the AI provider, why would they train the AI on such materials in the first place?
The whole enterprise of this kind of safety doesn't make much sense to me. If the AI is not able to follow such clear user instructions, it's not ready for prime time and must be under human supervision at all times. (And subtle on-topic hallucinations seem to be both more dangerous and more of a problem than blatant random production of explicit images, anyway.)
> promote advertisers content ... promoting a competing product
Where is the value of AI when the responses are compromised like this? I could say the same thing about Google Search, which is one of the reasons I stopped using it.
Are we betting on the masses not caring that they're being lied to for profit?
They want NGO grant money. They look at the latest and greatest buzzwords for government policy spending and tailor their efforts towards acquiring that money, 99% of which will go to salaries and bonuses, 1% of which will be spent on the mission du jour.
Mozilla is a deeply corrupt and failed organization.
They're already an NGO that gives out grant money. When you say 99% will go to salaries and bonuses, you know all the financials are right there on the page, right?
See: Firefox, management churn, alienation and discarding of community, etc.
Some of us who donated money and supported Mozilla and Firefox are deeply, deeply disappointed and disgusted.
Principles are meaningless to non-human corporate entities, and I'll never donate to a non-profit, charity, or other institution again for the rest of my life.
A great deal of greed exists within many non-profits. (It's frankly obscene when you do your research.) That's not to say some don't serve the public well, but the legal structure of a non-profit isn't by itself enough to deter corruption.
But a compromised Mozilla's solution will then be passed off as "independent," so that a corrupt government can accept it without officially kneeling to BigTech. A publicity stunt à la foundation.
That kind of reserves, invested wisely, would net enough interest to pay for a decently sized team maintaining Firefox and small AI bets. Of course, I'm just an idiot on the internet, not the CEO of a behemoth.
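For scale, here is a rough sketch of that back-of-the-envelope claim. The reserve figure, yield, and cost per head are assumptions for illustration, not Mozilla's actual numbers:

```python
# Hypothetical: what invested reserves could fund annually.
# All figures are illustrative assumptions.
reserves = 1_000_000_000            # assumed ~$1B nest egg
annual_yield = 0.04                 # assumed 4% return on investments
loaded_cost_per_head = 300_000      # assumed fully loaded cost per engineer

annual_income = reserves * annual_yield
team_size = annual_income / loaded_cost_per_head
print(f"${annual_income:,.0f}/year could fund roughly {team_size:.0f} engineers")
```

Under these assumptions the interest alone covers a triple-digit headcount, which is the commenter's point about a behemoth-sized endowment funding a browser team.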
The charitable entity known as the Mozilla Foundation and the development entity known as the Mozilla Corporation are not the same. Nothing wrong with the foundation doing these things with their spare cash, it literally does not impact Firefox at all.
The concern, I think, is that their spare cash is dwindling and thus financial prudence might be beneficial - especially for those who rely on the core Mozilla propositions like Firefox.
The last I heard was that the Google rev-share agreement was on the skids and they stopped developing projects like Thunderbird and Fakespot because they were capital-constrained.
If you know otherwise, then you're better informed than I am, I guess.
This might be financial prudence of sorts - doesn't something like 80% of their yearly monetary contributions come from Google, particularly for search partnerships? If they are concerned that Google will start paying them less because search has diminishing future returns, diversifying their income sources through investments in AI might be a good idea.
That’s one interpretation. Another is that people typically support foundations not simply because they “do good,” but because they advance a specific cause the donor personally values. For example, if I donated to a foundation focused on developing cancer treatments, and that same foundation later shifted its efforts to addressing melting ice caps, I would likely feel frustrated, since that was not the purpose for which I chose to support it, and I don't really care that both actions "do good" in the world.
While I agree with the general point that non-profit donors have a legitimate interest in the use of funds, in the particular case of the Mozilla Foundation, I believe the vast majority of that money is from placement fees Google paid to be the default search engine in Firefox. As the pay-for-placement market has evolved and Firefox's browser share has fallen, this income has also fallen dramatically.
On the general point about non-profit donations: legally, a non-profit's use of funds is governed by its board of directors and charter, which often are not constrained in the ways donors may assume, hence the need for due diligence prior to giving.
Mozilla Foundation's 2024 revenue was 60% investments, 28% program service revenue, 11% contributions and grants.[1]
Most of the program service revenue was from Mozilla Corporation, which paid Mozilla Foundation a small part of its revenue for trademark licenses, legal services, and so on. And most of Mozilla Corporation's revenue was from Google.
Although I haven't looked at the actual reports in a long time, I think that's consistent with my understanding. The Google money (program service revenue) was a much higher percent in the past, creating the $1.4B nest egg. Now the Google money is greatly reduced and the majority of the income is from the nest egg.
I intended the percentages to support your point that donations were a small part of Mozilla Foundation's revenue. But you missed that Mozilla Corporation held most of the assets.
Even where they are charged using coal? Please provide the research on that. Note that I'll be checking who funded and who performed the peer review, so please choose carefully before posting.
Based on that, I no longer believe you are an expert in chemistry or energy. Or are you just really bad at making jokes?
Regardless, electricity for your EV comes from somewhere, right? It's powered by a coal power plant in much of the US, with electrical energy transduced (effective loss at every transduction point) through countless parts and miles of electrical equipment before it reaches your charger.
Are you about to tell me that coal is cleaner than gasoline? It's not remotely comparable. Coal is insanely dirty. This is common knowledge.
Every metric you asked about applies to coal and much worse and therefore to your EV in vast swathes of America.
The EV in such places is doubly destructive. You've burned coal AND mined lithium and shipped it, plus you're carrying a heavier load, and your batteries are short lived, and toxic.
> Oh no, am I bad at making jokes? Or is your argument a joke?
No, my argument was serious. You've sliced your data gratuitously. You're also making rude jokes, and I think there are HN rules about that somewhere. But I'll forgive you.
You looked up the share of energy for the US as if every vehicle owner spends equal time driving in every city. That's dishonest. A vehicle owner typically drives in one city nearly all the time. If that city is coal-powered, and as we can see many are, that owner should not operate an EV. But policy relying on blanket data like yours would incentivize their doing so. That's bad for everyone except the policymaker and his buddies selling EV-related products.
The primary point, however, is that EVs move the pollutants up the supply chain. The car itself is non-emitting, but the power plant and battery cycle are not! And the alternative power sources aren't really clean either. Nuclear, for example, requires mining, enrichment, etc. (all carbon-heavy), and then we still need to deal with disposal, for which a solution doesn't even exist! We're sweeping that under the rug when we call it clean energy. We don't have a solution for waste, so we just exclude it from our impact calculations? Ridiculous.
Now add a toxic battery on top of all of that, and all of the mining and waste disposal associated with it. You've moved your pollutants to China, added shipping lanes, and dumped more oil and now lithium into the ocean. This may be worse overall and it's for sure worse for owners of cars in coal powered locations.
But you do get to say that the EV in a vacuum is zero emissions (at the location of inertial output only). Nice work!
Your argument zoomed out to blanket statement the US where it suits you, and then zoomed in to the car itself to exclude where your pollutants are. It's truly very dishonest. That argument is damaging to the public interest and to the environment, and insults the sciences.
Did you really not understand? I suspect you did, and in the context of your previous jokes, I think you're trying to annoy me. It'd be nice if that's not true.
Is it?
I suppose if we want to look at the US as a country (pretending it's all one city with one grid), then we will continue to encourage the 20% (roughly, by your numbers) that drive gasoline in coal-power areas to downgrade to coal-powered EVs.
I don't think that's good. I think you're careless and destructive for supporting that.
Anyway, you're not really acknowledging very real problems with your assertions and that's not going to make for coherent discussion. There's nothing scientific about that, so I suspect you might not be interested in science.
Lost my interest. Cheers.
For others reading, I'm happy to continue scientific discussion on this topic, especially if you disagree.
Thanks for your reply. I hear you, but those solve different problems:
WhatsApp/Telegram groups = messaging (chat interface)
Google Photos = photo sharing (no social features)
This = social feed (posts with photos/videos, comments, reactions, chronological timeline)
The gap: If I want my family across 3 countries to see updates (like Facebook), but don't trust Meta with the data, what do I use?
WhatsApp groups scroll too fast, Google Photos has no conversation features, Telegram isn't E2E by default.
But if you're saying "the market doesn't care about that gap," that's exactly what I'm trying to validate.
What if the expensive SaaS offering is just as vibe-coded and poor quality as what a junior offers?