
I find this interesting:

>The goal of his project, which is partially funded by Microsoft, is to create an artificial system that works like human consciousness.

So we have bioethicists working on whether or not these brain cells in a petri dish count as conscious. And apparently a lively and varied debate on the subject.

Is there anything comparable for computer based consciousness? Why do we believe that bio-matter has more rights than synthetic, in terms of consciousness? Is it simply the bio that matters?



I know this is slightly orthogonal to the more academic study of ethics you might be looking for, but I think a lot of good thought is happening in the Sci-Fi realm.

Ted Chiang’s “The Lifecycle of Software Objects” comes to mind, or Asimov writing the “Three Laws of Robotics”.


You may enjoy this conversation between MIT Prof. Lex Fridman and famed philosopher Peter Singer on this topic:

https://www.youtube.com/watch?v=llh-2pqSGrs

Lex helps Singer realize that where you put the "this consciousness is deemed important" mark is arbitrary. Some people will fight for the rights of animals; in the future that may extend to Roombas with higher levels of intelligence than many animals.


Integrated information theory, which is mentioned in the article, is abstract and supposedly applies to any type of physical system.



