Hacker News | zvitiate's comments

My GPD Pocket 4 fits into really large cargo pants if that counts lol, and there's the MicroPC 2 too that's even smaller :p


Oh fuck you, I didn't have the $1,500 I just spent on Amazon for one of those! I've been waiting forever for them to make one with a fingerprint sensor. I thought you were responding to a different comment, so I looked it up, and thank you :)


"Might" is doing a lot of heavy lifting in your last sentence. I'm genuinely curious: what odds would you put on this being evidence of transition versus not?


And the FDA tells you not to cook your steak rare.


That's not completely true. The FDA guidelines are a deliberate oversimplification; the actual safety rules are more complicated.

Chris Young has a video about this: https://www.youtube.com/watch?v=bbaZpJ1AhFU
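Roughly, the point is that pasteurization is a time-at-temperature curve, not a single magic number: lower temperatures can be safe if held long enough. A toy sketch of the standard D-value/z-value log-reduction model illustrates the tradeoff (all constants below are invented placeholders, NOT food-safety guidance):

```python
# Illustrative log-reduction model: the hold time needed at temperature T
# to match the reduction achieved in d_ref seconds at reference temp t_ref.
#   t(T) = d_ref * 10 ** ((t_ref - T) / z)
# where z is the temperature rise that cuts the required time 10x.
# Constants here are made up for illustration only.

def hold_time_seconds(temp_c, d_ref=15.0, t_ref=70.0, z=7.0):
    """Seconds at temp_c for the same pathogen reduction as
    d_ref seconds at t_ref, given thermal-resistance constant z."""
    return d_ref * 10 ** ((t_ref - temp_c) / z)

for t in (54, 57, 60, 63, 70):
    print(f"{t} C -> hold roughly {hold_time_seconds(t):,.0f} s")
```

The curve is why a flat "cook to X degrees" rule is a simplification: it collapses a whole family of safe time-temperature combinations into one conservative point.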


Rare steak is fine; it's rare burgers that are a risk, because surface bacteria get ground into the interior.


Claude’s system prompt is SO long though that the first 1k lines might not be as relevant for Gemini, GPT, or Grok.


Google actually switched to an OpenAI-style system for 2.5 Pro's chain of thought yesterday in the Gemini app and AI Studio ("I did this; I did that," etc.). Apparently the raw traces still show via the API, but it's not clear for how long. Also, in my experience, if you select the "Canvas" output, you still get the old-style CoT.

And yes, the above is true even if you're on Ultra.

You can still view your old thinking traces from prior turns and conversations.


My heart just broke to hear this. I honestly don't read the thinking output very often, but I had been cheekily copy-and-pasting it for my own records.


I agree, but there's always DeepSeek. They're publishing and open-sourcing more than anyone these days.


There's a huge assumption in your comment: that you know how insurance works. "Most" employees probably aren't working in sales and marketing; I'd heavily dispute anything above 50%, and even 33% might be pushing it. I don't want to get overconfident here, but the claim feels off-base.

Insurance isn't like a widget. Policyholders have actual legal rights that insurers must service, which involves processing clerks, adjusters, examiners, underwriters, etc. That requires actual humans, because AI with the pinpoint accuracy needed for these legally binding, high-stakes decisions isn't here yet.

E.g., issuing and continuing disability policies: sifting through medical records, calling and emailing claimants and external doctors, constant follow-ups about their life and status. Sure, automate parts of it, but what happens when your AI:

a. incorrectly approves someone, then you need to kick them off the policy later?

b. incorrectly denies someone initial or continuing coverage?

Both scenarios almost guarantee legal action—multiple appeals, attorneys getting involved—especially when it's a denial of ongoing benefits.

And that's just scratching the surface. I get that many companies are bloated, and nobody loves insurance companies. No doubt smarter regulations could trim headcount. But the idea that you could insure a billion people with just 100, or even 1,000 (10x!), employees is just silly.


Yup. My favorite genre by FAR is baroque. High-quality recordings aren't as plentiful as you'd expect, and no one's really pumping out new baroque. V4.5 is noticeably better, even if the model shows a real "plagiaristic" streak.

Still, I'm excited about the product. The composer could probably use some chain of thought, if it doesn't already, to plan larger sequences and how they relate to each other. Suno is also probably the product most ripe for a functional neurosymbolic model. C.P.E. Bach wrote an algorithm for counterpoint hundreds of years ago!
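For flavor, here's a tiny sketch of rule-based composition in the spirit of the 18th-century "dice game" pieces associated with composers of that era: pre-composed one-bar fragments are stitched together by random rolls. The fragment bank below is invented for illustration, not taken from any historical score:

```python
import random

# Each position in the piece has a small bank of pre-composed one-bar
# fragments (invented here); a dice roll picks one fragment per bar.
BANK = [
    ["C4 E4 G4", "E4 G4 C5", "G4 C4 E4"],   # bar 1 options (tonic)
    ["F4 A4 C5", "A4 C5 F5", "C5 F4 A4"],   # bar 2 options (subdominant)
    ["G4 B4 D5", "B4 D5 G5", "D5 G4 B4"],   # bar 3 options (dominant)
    ["C4 E4 G4 C5"],                         # bar 4: fixed cadence
]

def roll_piece(seed=None):
    """Assemble a four-bar phrase by rolling once per bar."""
    rng = random.Random(seed)
    return " | ".join(rng.choice(options) for options in BANK)

print(roll_piece(seed=1))
```

The appeal of the neurosymbolic angle is exactly this: the symbolic side (voice-leading rules, cadence constraints) is checkable, while a learned model supplies the surface material.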

https://www.reddit.com/r/classicalmusic/comments/4qul1b/crea... (Note the original site has been taken over, but you can access the original via the Wayback Machine. Unfortunately I couldn't find a snapshot where the generation demo works…but I swear it did! I used it at the time!)


I've mentioned it before on HN, but Sid Meier worked on an application called (appropriately enough) CPU Bach for the 3DO that would algorithmically generate endless contrapuntal music all the way back in 1994.

https://en.wikipedia.org/wiki/C.P.U._Bach


In a similar vein, there's https://aminet.net/package/mus/misc/AlgoMusic2_4u.lha, from ~'96 or so.


Ohhh this looks very cool. Thank you for sharing! Will dive into this over the weekend.


It's almost the weekend! But I'm pretty sure these were shared merely as historical anecdotes, and that Suno 4.5 is the bleeding edge here...


That's what GPT-5 was supposed to be (instead of a new base or reasoning model), the last time Sam updated his plans, I thought. Did those change again?


What if you were in an environment where you had to play Minecraft for, say, an hour? Do you think your child brain would've eventually tried enough things (or had your finger slip and stay on the mouse a little extra while), noticed that hitting a block caused an animation (maybe even connected it with the fact that your cursor highlights individual blocks with a black box), decided to explore that further, and eventually mined a block? Your example doesn't speak to this situation at all.


I think learning to hold a button down in itself isn't too hard for a human or robot that's been interacting with the physical world for a while and has learned all kinds of skills in that environment.

But for an algorithm learning from scratch in Minecraft, it's more like having to guess the cheat code for a helicopter in GTA: it's not something you'd stumble upon without prior knowledge or experience.
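A quick back-of-envelope shows why. Under a uniformly random policy, breaking a block requires picking "attack" for many consecutive ticks, and the probability of that collapses exponentially. The action count and tick count below are illustrative guesses, not actual Minecraft parameters:

```python
# Probability that a uniform random policy over n_actions picks the same
# target action for k consecutive timesteps (e.g. holding "attack" long
# enough to break one block). Numbers are illustrative, not game-accurate.

def p_hold(n_actions: int, k: int) -> float:
    return (1.0 / n_actions) ** k

# Say 10 candidate actions and ~40 ticks of sustained attacking per block:
print(f"chance per attempt: {p_hold(10, 40):.1e}")
```

Even with generous assumptions, the chance is astronomically small per attempt, which is why sparse, long-horizon behaviors effectively never appear under naive exploration without some prior.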

Obviously, pretraining world models for common-sense knowledge is another important research frontier, but that's for another paper.


No, sooner lol. We'll have aging cures and brain uploading by late 2028. Dyson Swarms will be "emerging tech".

