>The goal of his project, which is partially funded by Microsoft, is to create an artificial system that works like human consciousness.
So we have bioethicists working on whether or not these brain cells in a petri dish count as conscious. And apparently a lively and varied debate on the subject.
Is there anything comparable for computer-based consciousness? Why do we believe that bio-matter has more rights than synthetic matter when it comes to consciousness? Is it simply the "bio" that matters?
I know this is slightly orthogonal to the more academic study of ethics you might be looking for, but I think a lot of good thought is happening in the Sci-Fi realm.
Ted Chiang’s “The Lifecycle of Software Objects” comes to mind, or Asimov writing the “Three Laws of Robotics”.
Lex helps Singer realize that where you put the "this counts as important consciousness" mark is arbitrary. Some people will fight for the rights of animals; in the future that may extend to Roombas with higher levels of intelligence than many animals.