When does a physical system compute? (arxiv.org)
93 points by yiransheng on May 4, 2014 | hide | past | favorite | 51 comments


I don't see what this paper is contributing other than the assertion that a physical system needs to functorially agree with the mathematical model that represents it. And that if that model is Turing complete, or represents some useful subset of computational functionality, then the physical system "computes." The fact that a sitting rock does nothing but sit is a perfectly fine, albeit severely limited, computational system (it even satisfies their definition of a computing physical system!). All of this just seems...obvious.

It looks like their definitions are so general that they don't answer their own questions, which, judging by their stated motivations (Section XI), are fundamentally problems of construction and scale. It's not enough that there exists a theory proving your physical system computes; you have to know the representation explicitly and be able to scale the system arbitrarily. [Edit:] The problem is that none of this shows up in their actual definitions.

Maybe I'm reading it too shallowly, though. Can someone who's more well-versed in the background of this paper explain it better? It also doesn't help my confidence that they seem to completely ignore the rest of computer science (mentioning Turing machines only as an aside, and as part of a false assertion about (quantum) Turing machines being the only universal logical systems).


> The fact that a sitting rock does nothing but sit is a perfectly fine, albeit severely limited, computational system (it even satisfies their definition of a computing physical system!).

It does not fit their definition because the whole point of the paper is that a physical system is not computing anything unless an entity is using the system to compute something by encoding the problem into the state of the system and later decoding the answer from the state of the system.


I'd like to compute the identity function. Let me denote by 1 the presence of a rock, and 0 the absence of a rock. I'll set up the problem (the input) by placing a rock appropriately, and after ten seconds, I'll check to see if a rock is present. If so, the output is 1; otherwise it is 0.

This is a theory, a valid interpretation, highly reliable, scales well, and it commutes. Why isn't it a computation by their definition?
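The commuting requirement can be sketched in a few lines of Python. The names encode/evolve/decode are my own illustration of the paper's encode-evolve-decode picture applied to the rock, not its actual formalism:

```python
def encode(bit):
    """Abstract input -> physical state: 1 = rock present, 0 = absent."""
    return {"rock_present": bool(bit)}

def evolve(state):
    """Physical dynamics: the rock just sits there for ten seconds."""
    return dict(state)  # nothing changes

def identity(bit):
    """The abstract function we claim to be computing."""
    return bit

def decode(state):
    """Physical state -> abstract answer."""
    return 1 if state["rock_present"] else 0

# The diagram commutes: decoding the evolved physical state gives
# the same answer as applying the abstract function directly.
for b in (0, 1):
    assert decode(evolve(encode(b))) == identity(b)
```

The point of the toy is that the "computation" lives entirely in the encode/decode maps supplied by the user, which is exactly what the parent comment is poking at.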


Now it is a computation, because you are the computing entity that encoded a problem into the system and later decoded the answer. The paper essentially establishes the distinction between a system just evolving under the laws of physics and an entity using this evolution to compute something by assigning an interpretation to the state of the system. I could use the same stone to compute something different, for example the boring trajectory of a stone placed on a surface in a gravitational field, but without specifying what you are computing, the stone is not computing anything.


This seems more philosophical than scientific. How can you use such a distinction to do anything new and useful?


Hacker News doesn't attract a ton of theoretical computer scientists, but have you read Scott Aaronson's paper on the subject? http://www.scottaaronson.com/blog/?p=1438


I read Scott's writing a lot, and in my opinion he has the right attitude about it. He's very comfortable with the definition of a computer, and is very interested in the potential for physical computers that defy the ([edit] strong) Church-Turing thesis by exploiting the crazy weirdness of physics. So far, quantum circuits only might be such a system, since we still don't know whether BPP = BQP.

So the question isn't whether soap bubbles can do certain computations, but rather whether soap bubbles can efficiently compute functions that are NP-complete in classical computing models.


I wouldn't quite characterize his research that way. Maybe I don't really understand what Church-Turing says, but I didn't think he was a skeptic.


I should have said the "strong" Church-Turing thesis, which is a statement about efficiency: anything that can be computed efficiently can be computed efficiently by a Turing machine. In other words, if quantum computers can solve problems in polynomial time that classical computers cannot, then the strong Church-Turing thesis is false. The normal Church-Turing thesis, which just says that anything which can be computed can be computed by a Turing machine, is still true when you consider quantum circuits: Turing machines can simulate quantum computers just fine if you are happy to wait a long time.

That being said, there could very well be models of computing (based on weird physical phenomena) that violate one or both versions of the Church-Turing thesis. Scott has expressed a few times that the possibility intrigues him, though I don't know whether he would bet his life's fortune on one outcome or the other :)


You should actually have said the extended Church-Turing thesis (rather than strong)


Ugh. yes. These silly nondescript words. They should just call it the "efficient" Church-Turing thesis so I don't get confused :)


Two somewhat-related-pointers for people who decide that they're curious about these things:

* a light and nice read: https://www.frc.ri.cmu.edu/~hpm/project.archive/general.arti... (an oft-cited article)

* the madness of Max Tegmark: http://arxiv.org/abs/gr-qc/9704009 , http://arxiv.org/abs/0704.0646 (cliff's notes: "The [...] postulate in this theory is that all structures that exist mathematically exist also physically, by which we mean that in those complex enough to contain self-aware substructures (SASs), these SASs will subjectively perceive themselves as existing in a physically 'real' world." Which means: take http://xkcd.com/505/ , and abstract it enough times so that you decide that the rocks themselves are not needed anymore.)

-> If anyone has any comments about the MUH, I'd be interested to hear them. My head is still kinda aching from the last time I tried to decide how seriously I should take his ideas.


Wow. This is epic. I totally had this exact idea a few years ago -- that every mathematically possible universe physically exists, with some probability distribution. I was thinking maybe the probability of finding yourself in a given universe is related to the length of the computer program that runs that universe. And also, of course, to the probability of intelligent life evolving in that universe (a formalization of the anthropic principle).

It's always neat when you discover you've independently invented an idea proposed by someone famous :)

Although, to be fair to Tegmark, he goes further than I ever did: "The predictions of the theory take the form of probability distributions for the outcome of experiments, which makes it testable." This is a natural consequence of the form of the theory, but I didn't really think in that direction; now that Tegmark's pointed it out, it seems like an obvious direction in hindsight.


I totally know what you mean!

I've been recommending this book left and right around these parts, but can't hurt to say it again: you should probably read "Permutation City."[1] As far as literature, character development and style goes, there is nothing much to it. As far as (hard-ish) scifi goes, it's a big deal. In a way, what MT calls MUH Egan calls "dust theory" (roughly). It's a very interesting exploration of the whole idea, and it involves cellular automata, etc etc.

There's also a short story of his, "Wang's Carpets" (which you can read online[2]) - it was later incorporated (as a chapter) into his book "Diaspora"[3] (which is also a great read that I've thoroughly enjoyed.)

> Although, to be fair to Tegmark, he goes further than I ever did: "The predictions of the theory take the form of probability distributions for the outcome of experiments, which makes it testable."

I wonder, though, if this is not a bit of a stretch. I certainly understand what he (and you) mean by probability distributions, but it seems to be there because he really wants to be able to call it a "scientific theory," which requires it to be falsifiable (<=> testable.) But, yeah, it's pretty neat!

> I was thinking maybe the probability of finding yourself in a given universe is related to the length of the computer program that runs that universe.

Now, can some of those programs run a (say, finite-tape version of a) Turing machine[4] (and can some of them not run it)? Does this have any implications (in relation to the halting problem, for example)? I don't have the faintest idea. Tegmark has a thing to say about mathematically incomplete (in the Gödel sense) universes, but again, I'm not sure how much of it is just him having fun. ;) (which is the best way of having fun, as far as I'm concerned.)

...anyway. Good stuff! Let's try and not go insane within our heads with these things. Then again, some insanity is always a good thing, imo.

edit P.S. there's also http://plato.stanford.edu/entries/computation-physicalsystem...

editedit P.P.S. if you haven't, maybe also read the (very) short story of Borges, "The Library of Babel."[5] It's basically an articulation of the idea that a "description" of a universe is that universe. And a "description" can very well be thought of as a program. And if you imagine a multidimensional "computational-symbol-space," a "path" to that program/description (first choose this symbol/predicate, then that one..) is that program (and, by extension, "kinda-is" that particular universe.) These ideas are not new in themselves in the least. e.g. I have yet to honestly delve into Kant's "Critique of Pure Reason," which actually addresses some of that stuff (in its own very particular vocabulary and context.) Also, lots of stuff to read from the theoretical CS side of things. And mathematics (the Yoneda lemma in category theory, and other things which I like to pretend to understand!)

[1]: http://en.wikipedia.org/wiki/Permutation_City

[2]: http://bookre.org/reader?file=222997

[3]: http://en.wikipedia.org/wiki/Diaspora_(novel)

[4]: a good read (and/or rehash) on this: http://plato.stanford.edu/entries/turing-machine/

[5]: http://jubal.westnet.com/hyperdiscordia/library_of_babel.htm...


Scott Aaronson's review of the Tegmark's book is pretty great: http://www.scottaaronson.com/blog/?p=1753


Will very much read it. Thanks for the link! (Also, that blog is kind of a big deal. Good stuff.)


I'd say the best decision Scott made was naming his blog. (See also https://en.wikipedia.org/wiki/Dunbar's_number)


I still don't really get it. Brains are optimised to maintain friendships with approximately the population of a shtetl, but is there a connection to the blog's subject matter? I've been wondering this for years!


As a layman, I find it easy to think about the poetry of a computational oracle rather than the math of it. A bunch of rabbis wandering around the village teaching each other what they can't learn individually, but reaching a consensus as a group.


This is a question I've wondered myself. I once read a bit about someone using live crabs to perform computations -- a group of crabs behaves deterministically (enough) to make a series of logic gates that can be used for (very slow) computations. Since reading that, this has been a question on my mind.

(source) http://www.gizmag.com/crab-computer-kobe/22145/


For me, the next natural question is this:

if a brain can be simulated by any turing-complete system with enough memory, and a bunch of crabs can comprise a turing-complete system...can a bunch of crabs arranged properly into trillions of logic gates produce consciousness?
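Part of why "yes, in principle" is a defensible answer is that one reliable universal gate is all a physical system needs to supply: every other Boolean function can be wired out of NAND. A minimal sketch (plain Python standing in for the crabs):

```python
# If crabs (or anything else) reliably implement a single NAND gate,
# the rest of Boolean logic follows by composition.

def nand(a, b):
    """The one gate the physical system must provide."""
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Composed gates behave as expected:
assert and_(True, True) is True
assert or_(False, False) is False
assert or_(True, False) is True
```

Whether trillions of such gates would also produce consciousness is, of course, the part the gate algebra says nothing about.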


Yes, they can. And in a sense they actually do. There was recently a thread on HN about lots of ants behaving as one intelligent entity even though a single ant is very dumb: https://news.ycombinator.com/item?id=7658551

The discussion 'evolved' into evolution (of systems). It's strange that this paper also discusses computation as evolution, but I'm not sure if they mean the same thing. I also suggested that the human brain computes (or thinks) by 'simulating' evolution. In this case, your individual brain cells would be like that bunch of crabs, just doing their business and producing thoughts as a whole.


You seem pretty confident about that; how could you possibly know if a collection of crabs produces "consciousness"? That's not even the same sort of thing as ants dropping pheromones, which is trivial to model on a computer.


You seem pretty confident that you are conscious, but I've seen no conclusive evidence. How can a person possibly know if a collection of brain cells produces "consciousness"? What brain cells do is definitely not the same sort of thing as the inner life of my experience.


Well, not consciousness (I don't know what that is), but intelligence.


Obligatory xkcd.

http://xkcd.com/505/


This is the second time that specific strip has been linked in reply to a comment of mine. First time:

https://news.ycombinator.com/item?id=7538726


I think it's really interesting how a large chunk of people will reply to this question with "well of course", while another large chunk of people will see this question as proof that consciousness cannot be simulated.

(For what it's worth, I'm in the former camp. I think you'd need far more than trillions though.)


The first step is to answer a different question. What is consciousness? How do you determine whether or not a system is conscious? If you can answer this question the answer to your question is probably easy.


Good question. I enjoyed this exploration of that question (using cellular automata instead of crabs):

Muhlestein, M. (2013). Counterfactuals, Computation, and Consciousness. Cognitive Computation, 5(1), 99-105.

Free copy: http://muhlestein.com/consciousness/ccc.html


You only really need one crab to produce consciousness...


In cognitive science there is a dispute about the nature of cognition and which scientific tools to apply. Some say cognition is computation; others say it's just the behavior of a dynamical or connectionist system. A good summary of the aspects can be found in Fresco, N. (2012). The Explanatory Role of Computation in Cognitive Science. Minds & Machines, 22, 353-380.

The discussion is quite interesting, because computation is very hard to define. Understanding what makes a system compute can yield useful insight for understanding cognition.


I think people like Jerry Fodor define computation as the manipulation of logical symbols by some algorithm. The idea is that you use a standard functionalist analysis (see someone like Putnam '67) to analyze the inputs (X) and outputs (Y) of a cognitive system. So you have, schematically, something like X -> ? -> Y, where the "?" defines the project of cognitive neuroscience (i.e., figuring out the physical instantiation of the manipulation of intercalated proteins and neurotransmitters). So, in this picture, cognition is not a "state" that needs to be defined a priori but a "black-box" process that needs to be spelled out empirically. At bottom, stuff is continuous, not discrete.

But too much ink gets spilled over this philosophical issue. It seems much more fruitful to analyze the mind as a Turing machine (contentious) and apply computational complexity theory to define limits on human computation based on the hierarchy-- my two cents.


Re: Fodor, Folk Psychology, and the Language of Thought hypothesis, obligatory link to Paul Churchland's retort :)

"Eliminative Materialism and the Propositional Attitudes": http://commonsenseatheism.com/wp-content/uploads/2010/08/Chu...

Basically, a critique of the approach of formalizing thought processes as (first predicate, or multipredicate, or whatever kind of) logical computation (of symbols.) It's a good read even if one thinks that the overall debate is pretty fruitless.


Thanks, I'll give it a read! I don't have a definite stance on the topic, but I am aware that a lot of people are making a lot of points on this - indeed - very philosophical topic. I often lean back and enjoy the discourse; it is very good entertainment :) I am far from dismissing it as fruitless, but I can definitely understand why many cog-sci folk would think so (often depending on their background, really). I particularly liked what Tim van Gelder wrote on dynamicism, although I don't agree with it and he appears to have a poor view of the connectionist perspective.


Hmm, maybe it's high time I gave an honest re-read of http://plato.stanford.edu/entries/connectionism/ - last time I glanced at it, it gave many sparks and inspiration to delve deeper into things. And lots of good pointers. So I guess I can recommend doing the same thing. :)


I'm surprised the authors didn't cite the (in)famous "Do Dogs Know Calculus?" article [1] when they ask whether a "dog catching a stick" is an example of computation.

[1] PDF link: http://www.indiana.edu/~jkkteach/Q550


I was surprised to click on a "PDF link" and get the ugliest, most retina-scalding HTML page I have ever seen.


You've been 'fuchsiarolled'. That really does hurt.


My question:

When does a physical system reach a "halting state"?

This question has led me to some confusion when considering the possible isomorphism between physical systems and Turing machines.


The halting problem is solvable for a program which is known to use finite memory. It's only unsolvable on a Turing machine because Turing machines have infinite memory.
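A sketch of that decision procedure, assuming we can enumerate and hash the machine's complete state (the function names here are illustrative, not from any particular library):

```python
def halts(step, initial_state, is_halted):
    """Decide halting for a machine with finitely many states.

    step:      state -> next state (deterministic)
    is_halted: state -> bool

    Since the state space is finite, the run either reaches a halt
    state or must eventually revisit a state, at which point it is
    trapped in a cycle forever.
    """
    seen = set()
    state = initial_state
    while not is_halted(state):
        if state in seen:
            return False  # revisited a state: it loops forever
        seen.add(state)
        state = step(state)
    return True

# A 4-state counter that halts when it reaches 3:
assert halts(lambda s: (s + 1) % 4, 0, lambda s: s == 3) is True
# One that cycles through even states and never reaches 3:
assert halts(lambda s: (s + 2) % 4, 0, lambda s: s == 3) is False
```

The catch is cost: the `seen` set can grow to the size of the whole state space, so the decider needs strictly more memory than the machine it is judging, which foreshadows the oracle/deceiver discussion below in the thread.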


Right, but how do you define the halting state for a physical system? A stable state? No free energy left? Is it arbitrary?


It is up to the person claiming that the system can be interpreted as a Turing machine.

"If we view these parts of the beach as the tape, and all crabs being in a state described by these rules as corresponds to halting, then this crab population can be seen as a Turing machine!"


This question has been explored thoroughly by the engineers at http://www.multivax.com


Be careful with definitions (sets of necessary and sufficient conditions) to any concept... counterexamples lie around the corner.


Sure, you just have to execute it and check if the internal state matches one of the previously recorded states, in which case it contains a cycle and doesn't halt.

The subtle problem is that it's impossible to define such a halting "oracle" as a program running within the same machine that runs the programs you want to check. It's possible to define a "deceiver" program that uses the oracle itself to change its behavior.

    def deceiver(oracle):
        # Ask the oracle about ourselves, then do the opposite:
        # loop forever if it says we halt, halt if it says we loop.
        if oracle(deceiver, oracle):
            while True:
                continue
        return

    print oracle(deceiver, oracle)
This doesn't mean that an external observer can't tell whether running the oracle on the deceiver halts, but that it's impossible to write an oracle within that machine which will be a real oracle, since it will give a wrong answer for the deceiver program (and hence wouldn't be an oracle).

Or, put in other words, even on a machine with finite memory, it's impossible to write a program (the oracle) in that machine that determines whether any possible program on that machine halts, because "any" program includes the oracle itself.

But what does "define a program that runs within that machine" mean?

Let's cheat a little bit. What if we build a machine that records all its internal states somewhere and, when it detects that a program cannot possibly halt (because it reached a previously recorded state), asserts some IO line that causes the CPU to make it look like the oracle returns true?

    def oracle(program, input):
        # registerTrap only takes effect the first time it's executed;
        # afterwards it is a nop.
        registerTrap(on IO.cycleDetected, do nonLocalReturn(False))
        call program(input)
What did we achieve with this cheat? Now we can execute an oracle which will soon enter in a cycle executing oracle -> deceiver -> oracle -> .... and then the IO line will be asserted which will cause the oracle to abort the execution and return false (the program doesn't halt).

It can be argued that this oracle is non-deterministic, because it won't give any answer when it's executed by the oracle itself (i.e. it won't terminate; only the outer oracle terminates). But nowhere did we specify that the oracle had to be reentrant!

Note that we didn't include the IO.cycleDetected input as part of the program input, because it's not an input. In fact, the IO.cycleDetected bit can be seen as part of the machine state. It's a state that is modifiable by the program by introducing a cycle in all other states (except this bit). But by reaching this subset of states (all of the states which include the IO.cycleDetected bit set), the machine halts AND the oracle correctly computes whether ANY program halts, including deceivers which invoke the oracle itself.

The main trick is to introduce hidden (inaccessible) states in the system: the write-once trap handler and the "detected cycle" state. This is not a "pure" machine, but it can nevertheless be built.

But what does this mean? We showed that it's possible to create a machine which can execute a program which can tell if any possible program of that machine terminates.

Can we build a machine where this is impossible? Yes. We just have to give to every program the possibility to get to all possible states, including the ones that would stomp on any attempt by any kind of oracle to determine whether there has been a cycle.

Imagine a degenerate case of a machine allowing any program to just "reboot" the execution of the oracle. The oracle itself couldn't possibly know that it has been unwound and thus cannot ever return saying so.

But we can always discern this situation if we look from outside. But what if the top level oracle performs a simulation of the machine? Even if the deceiver reboots, the oracle could still detect cycles by checking the simulated state. However, the inner oracle cannot do the same thing, because it would require infinitely nested simulators and we have only finite memory.

So there will always be one layer which doesn't allow to execute a working oracle from within.


Well, at least our current computers do not have such state.

I don't see a problem, the halting state was always an abstraction. Only the mathematical Turing construction stopped after it reached a result. Computers go on and work on something else.


Our computers deal with input though. That means hardware interrupts which in a Turing machine are basically equivalent to someone stopping the machine, writing the current state as a series of symbols in a special part of the tape, then writing some more symbols in some other place, moving the head there, setting the machine in a "interrupt handling state 1" and resuming it. Thousands of times per second.


Furthermore, is time quantized? If it's not, are universal changes atomic, or do they propagate (at the speed of light)? That would mean a clock cycle of universal change takes (length of universe) / (speed of light).

I think that if time is quantized, the question becomes: are universal changes atomic locally, or globally?

Our perceptions clearly operate on universal change, so our sense of time does as well. These are quite deep questions.


Can't we define the universe as the computing system, the state of the universe at t=0 as the encoding of the problem into the computing system, and the state of the universe now as the decoding step? As far as I understand it, the framework presented doesn't explicitly permit the computing entity to be inside the computation.


I've thought this for a while now. The only reasonable definition of computation is in relation to other systems. It's good to see this popping up other places than inside my head :)



