... perhaps, but the "embodied Turing test" has already become the standard term in most ML papers for the new goalpost. Since the classic Turing test was passed and surpassed by a wide margin several years ago, the new goal is a system such that, given the choice between a real human and a humanoid robot that looks like a human (think Westworld), people cannot determine which is which at better-than-chance rates.

This terminology is becoming widely used by prominent AI researchers at e.g. DeepMind (Botvinick et al.), Stanford (Finn et al.), MIT, Northeastern, Meta, etc., as the field shifts to a new goal in light of the advances of the past few years. Importantly, this shift has been happening behind the scenes, independent of the 'OpenAI' craze, although that craze has obviously made a select portion of these advances visible to the public. There is much more going on than just the GPT series, but few people engage with it because much of it is buried in the literature.

To your point - it's of course extremely strange to conceive of - but while the quirks of the human form may be a useful tool at the moment, nothing fundamental necessarily requires it over the long term.
