Hacker News | blazespin's comments

Trump literally said he wants Venezuela to return the oil it 'stole' when it nationalized its oil industry.

There is significant uncertainty in all of this, which is really the most damaging aspect. If AI improves, and it is threatening to, then growth in SaaS may decline to the point where investing in it needs to be reconsidered.

The problem is, nobody knows how much and how fast AI will improve or how much it will cost if it does.

That uncertainty alone is very problematic, and I think its impact on everything it can potentially touch is being underestimated.

For now, though, I've seen a wall form in benchmarks like SWE-rebench and SWE-bench Pro. Greenfield work is expanding, but maintenance is still a problem.

I think AI needs to get much better at maintenance before serious companies can choose build over buy for anything but the most trivial apps.


Nobody collapses, everything just shrinks.

And we're seeing that in the labor numbers.

Sometimes things are harder to see because it's chipping away everywhere at the margins.


A market doesn't have to shrink all that much before there's a collapse. Generally it's quite gradual, and then very sudden. There's a tipping point where a market with declining revenue can no longer sustain a public company and its structural overhead. Investors don't want to invest in shrinking markets because it's a guaranteed way to lose money. That leads to share-price collapse and the sudden, rapid destruction of market incumbents.


who likely wins, fify


Advanced math solving, as the results indicate. Informal proof reasoning is advancing faster than formal proof reasoning because the latter is slow and compute intensive.

I suspect it's also because there isn't a lot of data to train on.


Verifying math requires something like Lean, which is a huge bottleneck, as the paper explains.

Plus there isn't a lot of training data in Lean.

Most gains come from training on stuff already out there, not really the RLVR part, which just amps it up a bit.
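To make the bottleneck concrete, here's a tiny Lean 4 sketch (my own illustration, not from the paper) of what "verified" means here: every statement has to be restated in Lean's vocabulary before the kernel will check it, and anything nontrivial needs library lemmas that someone already formalized.

    -- Even a trivial fact must be restated formally before the kernel certifies it.
    example : 2 + 2 = 4 := rfl

    -- This one leans on Nat.add_comm, a lemma already formalized in the core library;
    -- research-level math mostly lacks such prebuilt lemmas, hence the training-data gap.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b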


More training data on advanced math. Lean is cool, but it's mostly about formalizing stuff we already know.


OK, I guess I could have told you that. What I really meant is that in a future where LLMs are doing new math (which I'm skeptical of, but I digress), I would not trust any of it unless it was formally verified.


If you read the paper, that is the intention: to guide tools like Lean.

I don't think an LLM is a great fit for pure RLVR.


You can't process untrustworthy data, period. There are so many things that can go wrong with that.


That's basically saying "you can't process user input." Sure, you can take that line, but users won't find your product to be very useful.


Something needs to process the untrustworthy data before it can become trustworthy =/
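To be concrete about what that processing looks like, here's a minimal Python sketch: parse the untrusted input, then validate it against an explicit whitelist and size limits before anything downstream touches it. The field names and limits are made up for illustration, not from any particular product.

    import json

    ALLOWED_ACTIONS = {"search", "summarize"}

    def parse_user_request(raw: str) -> dict:
        # json.loads raises ValueError on malformed input, so garbage is rejected early
        data = json.loads(raw)
        action = data.get("action")
        query = data.get("query", "")
        if action not in ALLOWED_ACTIONS:
            raise ValueError(f"unsupported action: {action!r}")
        if not isinstance(query, str) or len(query) > 1000:
            raise ValueError("query must be a string under 1000 chars")
        # only now is the input in a constrained, known shape
        return {"action": action, "query": query}

    print(parse_user_request('{"action": "search", "query": "hello"}'))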


your browser is processing my comment


Given they made geopolitical accusations and dozens of mainstream publications repeated the "thousands of requests per second" claim, this seems like a grossly negligent flub that should not be dismissed as a mere typo.

I alone frequently do thousands of requests over a period of time, especially ones that are mostly cache hits, and that only comes to $10-$50 in API costs.

This was not a "large scale" attack by any means.
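For a sense of scale, here's a back-of-envelope sketch in Python. The per-token prices are assumptions for illustration (ballpark frontier-model rates, with cache reads assumed roughly 10x cheaper than fresh input), not quoted from any provider's price sheet.

    # Back-of-envelope cost of a few thousand mostly-cache-hit requests.
    REQUESTS = 2_000
    INPUT_TOK = 10_000          # prompt tokens per request
    CACHE_HIT_RATE = 0.9        # fraction of the prompt served from cache
    OUTPUT_TOK = 500

    PRICE_IN = 3.00 / 1e6       # $/token, uncached input (assumed)
    PRICE_CACHED = 0.30 / 1e6   # $/token, cache read (assumed)
    PRICE_OUT = 15.00 / 1e6     # $/token, output (assumed)

    per_request = (
        INPUT_TOK * CACHE_HIT_RATE * PRICE_CACHED
        + INPUT_TOK * (1 - CACHE_HIT_RATE) * PRICE_IN
        + OUTPUT_TOK * PRICE_OUT
    )
    print(f"~${REQUESTS * per_request:,.0f} total")  # about $26 here, squarely in that range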


Claude checked the post for accuracy before publishing, "trust the process" /s


Kimi just proposed a linear-attention variant. I mean, one breakthrough, and blammo, the whole story changes.
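For context, here's a minimal NumPy sketch of the generic linear-attention trick (the idea goes back to Katharopoulos et al., 2020): instead of softmax(QK^T)V, which is O(n^2) in sequence length, compute phi(Q) (phi(K)^T V), which is O(n * d^2). Kimi's actual architecture differs in the details; this only shows why the reordering changes the compute story.

    import numpy as np

    def phi(x):
        # positive feature map (elu(x) + 1), one common choice
        return np.where(x > 0, x + 1.0, np.exp(x))

    def linear_attention(Q, K, V):
        # Q, K: (n, d); V: (n, d_v). Never materializes an n x n attention matrix.
        Qf, Kf = phi(Q), phi(K)
        KV = Kf.T @ V                                # (d, d_v)
        Z = Qf @ Kf.sum(axis=0, keepdims=True).T     # (n, 1) normalizer
        return (Qf @ KV) / (Z + 1e-6)

    rng = np.random.default_rng(0)
    n, d = 1024, 64
    Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
    print(linear_attention(Q, K, V).shape)           # (1024, 64)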

