As I understand it, it's not Elsevier's fault; it's the governments that allocate funding and award promotions based on the number of publications in Elsevier journals.
Some of those governments, in Europe, are also starting to mandate open access though.
In practice, for maths papers I look on arXiv, for crypto papers on iacr, and so on - academics generally want their work to be read (and cited), so they're usually happy to make it available for free. There are even tricks you can play, like uploading an "author version" or "preprint" if you're forced to use a commercial publisher for a conference.
If you're funded by a Horizon/EU grant, you can cost the article publication charges into your grant application in most cases. It ends up being the funder, not the author, who pays.
That also means that if the whole racket is revisited at some point, then Elsevier will get to pick on someone their own size if not bigger - and will hopefully come off worse in that fight.
Is that really the best use of grant funds, though? Yes, the author isn't directly out of pocket, but that money could go towards new equipment, boosting grad student/post-doc pay, etc.
One can publish on Zenodo, and universities and authors can band together to share the burden: split the cost of hosting and/or of optionally paid peer review.
I know. What I'm saying is that if universities band together, they can arrange for reviewers to be paid, so that authors at all universities start asking questions when they're assigned to review for Elsevier... for free.
Sadly, the publication metric is sick, and it has shaped the overwhelming majority of published scientific papers. I'm a co-author of a paper that I know for a fact can't be reproduced, because the underlying data was stretched just enough to show what the lead author wanted it to show.
And it's not even the authors' fault the system is like that. Research isn't just about saying "hey, we found this works", but also about "we wasted 3 years, it doesn't work, sadly". Yet the second kind of result doesn't have the same impact, because if something doesn't work, it's not going to be reproduced and thus cited.
Proof that it doesn't work is still an interesting result! However, I think you meant to say that if you failed to show that it works, it often also means you can't prove that it doesn't work. And then you indeed have nothing worth publishing.
Say you're investigating whether some material has a certain behaviour. It doesn't.
You're not gonna get published in high-impact journals with data that doesn't move the field, even though, as you point out, the information is just as valuable.
It could be cited by, say, medical insurance companies or patent offices, but that would mean a wider citation scope.
A less extreme widening of scope would be just the funding bodies that approve or deny research grants: suppose a flurry of papers investigates the superconductive behavior of a piece of meteorite, and you're the one to kill the buzz with a negative result; future grant denials could then cite your boring negative result.