Hacker News | mplappert's comments

I very much agree; I think laziness / friction is a critically important regularizer for deciding what to build and what not to build. LLMs remove that friction, so it now takes more discipline. (Wrote some of this up a while ago here: https://matthiasplappert.com/blog/2026/laziness-in-the-age-o...)

That seems like quite an extrapolation and an extraordinary claim. This is a single task, in a lab setting. What you're describing are extremely open-ended tasks in people's homes.

What is informing these timelines?


Look at recent developments and announcements involving novel, increasingly generalizable learning capabilities from projects like 1X/Neo, Figure 03, and Skild AI. Also see openly published work like MimicDroid, HDMI, GenMimic, Humanoid-Union Dataset, RoboMirror, and Being-H0.

Figure 03:

https://www.youtube.com/watch?v=e-31-KBBuXM

https://www.youtube.com/watch?v=ZUTzuhkDG3w

1X Neo:

https://www.youtube.com/watch?v=lS_z60kjVEk

Skild AI:

https://www.youtube.com/watch?v=YRmjBdKKLsc (Learning by Watching Human Videos)


Yeah, those are demos. I think we're pretty far away from this becoming a real thing. I wrote up why here: https://matthiasplappert.com/blog/2026/humanoid-robot-in-the...


You're not really looking at the advances in things like the data flywheel. If you were, you'd see that those demos represent real movement toward generality.


Where’s the data flywheel exactly?

