
The only logical step for Anthropic now is to buy the Dwarkesh Patel podcast

They are too busy making money

No, they should buy the All-In podcast and make them talk shit about the admin

I tried ls20 and it was surprisingly fun! Just from a game design POV, these are very well made.

Nit: I didn't see a final score of how many actions I took to complete the 7 levels. Also didn't see a place to sign in to view the leaderboard (I did see the sign-in prompt).


Agree 100%. I want to be able to see how many actions it took me. And it would be good to be able to see how well I'm doing compared to other humans, i.e., what my percentile is.

Gaming this out for peer adversaries is mostly moot, right? The post-Cold War strategic balance has largely hung on MAD. And Russia, in particular, has responded to any attempt at building missile shields with more capable missiles.

It's likely more relevant for asymmetric conflicts involving conventional weapons, where it would let an otherwise less-resourced adversary become a near peer.

Dennis Bushnell from NASA presented this deck in 2001, and it's quite prescient about UAVs and distributed warfare.

https://alachuacounty.us/Depts/epd/EPAC/Future%20Strategic%2...


Eh, he threw so much random stuff at the wall that some of it is bound to stick. An early slide in his presentation says there will be "no pixie dust," but that's 90% of what follows.

Reflected inertia does scale as the square of the gear ratio, but that's a bit misleading unless you also consider the change in rotor inertia, which scales as the cube of the rotor radius (as the article points out).

The other side of the scaling laws says that motor torque scales as the square of the air-gap radius (roughly the rotor radius), and output torque scales linearly with gear ratio.

When you balance these out, the reflected inertia for a fixed output torque scales inversely with the power you're willing to dissipate.

In an ideal world, your total reflected inertia is independent of the gearbox and depends largely on the motor fill factor and how hot you can run it.
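
A minimal numeric sketch of that balance, using the standard motor-constant formulation (K_m = torque per root watt of copper loss); the rotor inertia, motor constant, and torque target below are made-up illustrative values, not from the article:

    # Fixed output torque T_OUT through a gearbox of ratio N:
    # motor torque T_m = T_OUT / N, copper loss P = (T_m / K_M)^2,
    # reflected inertia at the output J_ref = N^2 * J_ROTOR.
    J_ROTOR = 2.0e-5   # kg*m^2, hypothetical rotor inertia
    K_M = 0.15         # N*m/sqrt(W), hypothetical motor constant
    T_OUT = 10.0       # N*m, fixed output-torque requirement

    for n in (5, 10, 20, 50):
        t_motor = T_OUT / n                  # motor-side torque
        p_loss = (t_motor / K_M) ** 2        # copper loss for that torque
        j_ref = n ** 2 * J_ROTOR             # rotor inertia seen at the output
        print(f"N={n:3d}  P={p_loss:7.1f} W  J_ref={j_ref:8.5f} kg*m^2  "
              f"J_ref*P={j_ref * p_loss:.4f}")

The J_ref * P product comes out constant across gear ratios: with a fixed motor and output torque, lower reflected inertia is bought with heat, not with gearing.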


You would hit electrical-steel saturation limits well before you'd need to pump in enough current to justify superconductors.

Cooling in general is not a bad idea, though: it lets you dissipate heat as you push motors toward their saturation limits.
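
Back-of-the-envelope version of the saturation point, assuming a gap-dominated magnetic circuit (B = mu0*N*I/g, iron reluctance ignored); the turns, gap, and currents are made up:

    MU0 = 4e-7 * 3.141592653589793   # vacuum permeability, H/m
    N_TURNS = 50                     # turns (hypothetical)
    GAP = 1e-3                       # air-gap length, m
    B_SAT = 1.8                      # T, typical electrical-steel saturation

    for i_amps in (5, 10, 20, 40, 80):
        b_linear = MU0 * N_TURNS * i_amps / GAP   # linear-circuit estimate
        b_actual = min(b_linear, B_SAT)           # iron clips the flux
        # Torque tracks air-gap flux; once B is clipped, extra amps
        # mostly make heat rather than torque.
        print(f"I={i_amps:3d} A  B_linear={b_linear:5.2f} T  B_actual={b_actual:4.2f} T")

Past roughly 30 A in this toy circuit, extra current stops buying flux, which is why exotic conductors only pay off up to the iron's limit.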


Very impressive. But it doesn’t solve the whole problem yet.

The robot and ball poses are estimated by high-speed mocap cameras and fed to the policy.

I imagine estimating that with onboard cameras (the way humans do it) is much harder.

Almost all of closed-loop robotics is a state-estimation problem. Control is “solved” if you can estimate state well enough.
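
A toy version of that claim: a 1-D cart with a noisy position sensor, where the controller is a one-line proportional law and all the work is in the estimator. Every constant here is made up for illustration:

    import random

    DT = 0.01      # timestep, s
    Q = 1e-5       # process-noise variance (unmodeled dynamics)
    R = 0.04       # sensor-noise variance
    KP = 4.0       # proportional gain
    TARGET = 1.0   # setpoint, m

    x_true = 0.0               # true position
    x_est, p_est = 0.0, 1.0    # estimate and its variance

    for _ in range(500):
        u = KP * (TARGET - x_est)                        # control acts on the ESTIMATE
        x_true += u * DT + random.gauss(0.0, Q ** 0.5)   # plant
        z = x_true + random.gauss(0.0, R ** 0.5)         # noisy sensor
        # Kalman predict/update keeps the estimate usable despite the noise.
        x_pred, p_pred = x_est + u * DT, p_est + Q
        k = p_pred / (p_pred + R)                        # Kalman gain
        x_est = x_pred + k * (z - x_pred)
        p_est = (1 - k) * p_pred

    print(f"true={x_true:.3f}  est={x_est:.3f}  sensor sigma={R ** 0.5:.2f} m")

Swap the Kalman update out for the raw measurement and the same proportional gain gets much noisier; the "control" never changed.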


We know. Just appreciate it for what it is. Which is…awesome.


Look at the guys above posting that these sorts of robots will be able to cook in anyone’s home within 18 months; the reminder above is very necessary.


I agree it is pretty awesome!


I think ChatGPT has a huge advantage here. They have been collecting realistic multi-turn conversational data at a much larger scale. And generally, their models appear to be more coherent over long contexts for general-purpose stuff.


It might be that this admin does not have the capacity to reason about second- or third-order effects.

But given that what would typically be red lines for previous administrations have been brazenly crossed without consequences, why would they bother?


Crossing what were red lines for previous administrations is clearly a goal at this point.


Nope, they don't have that capacity. It's been shown multiple times in the past year.

Shutting down USAID being the clearest one. They just saw "they help brown people in other countries with our money" and shut it down. Fuck all second- and third-order effects that actually benefited the US.


In a broader context, both labs are engaging in "safety theater".

Neither knows how to solve the alignment problem, while market pressures are making them race toward capabilities (long-horizon, continual learning) that will have disastrous consequences.


Wow. Surprising to see open hostilities between the leaders of the big AI labs. The differences appear to be not just competitive but also ideological.

Edit: Also, openly calling OpenAI employees "gullible" and "Twitter morons" seems suboptimal if you'd like that talent to work for you at some point.

Example - https://x.com/tszzl/status/2029334980481212820


> if you'd like that talent to work for you at some point.

They might not, if they think that everybody who stayed after Sam Altman was reinstated is technically excellent but doesn't have the culture they want, which seems to be the case judging by all the recent communication.


"Twitter morons" wasn't referring to OpenAI employees, I think.

