March 4th, 2005

A sufficiently interesting problem

I've been pondering computability today and wanted to toss out this thought...

Let's consider some formal logical system of axioms (e.g. the axioms of mathematics). Let us consider this our language. Godel's Incompleteness Theorem says that such a formal language will always (provably) have a peculiar limitation. In essence, Godel states that any consistent, "sufficiently interesting" language will have some statement, expressible within that language, which is true yet cannot be proven from that language's axioms. (There is a rigorous definition of "sufficiently interesting" -- roughly, being able to express basic arithmetic -- which I will gloss over as "capable of making statements about itself" or, in programming argot, reflection.)
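
That reflection remark deserves a tiny detour. The cleanest programming demonstration of self-reference is a quine: a program whose only output is its own source code. A minimal sketch (Python, purely for illustration):

    # A quine: running it prints exactly its own source.
    # Self-reference of exactly this flavor powers Godel's
    # construction ("this statement is not provable").
    s = 's = %r\nprint(s %% s)'
    print(s % s)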

This is quite intriguing and, in ways, counter-intuitive. As Mathworld summarizes the closely related second theorem, "any formal system that is interesting enough to formulate its own consistency can prove its own consistency iff it is inconsistent." It sounds quite contradictory and, in fact, is usually demonstrated via proof by contradiction. So there are true mathematical statements that cannot, ever, be proven from mathematical axioms.


Now let us change problem spaces and move to computers. Computation theory is filled with references to Turing Machines (TMs). These are theoretical computers, designed for thought experiments, with very simple mechanics: a tape of symbols, a read/write head, and a table of transition rules. Yet they are theoretically equipotent with all other computers; that is, they are equally powerful in terms of their computational capabilities. (The reason we use "real" computers is that they are much, much more efficient than TMs.)
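
To make "very simple mechanics" concrete, here is a toy TM simulator in Python (a sketch of my own devising, not a standard textbook machine); this particular machine adds one to a binary number:

    def run_tm(tape, rules, state='start', pos=0, blank='_'):
        # Step the machine until it enters the 'halt' state.
        tape = dict(enumerate(tape))  # sparse, effectively infinite tape
        while state != 'halt':
            symbol = tape.get(pos, blank)
            state, write, move = rules[(state, symbol)]
            tape[pos] = write
            pos += 1 if move == 'R' else -1
        cells = sorted(tape)
        return ''.join(tape.get(i, blank)
                       for i in range(cells[0], cells[-1] + 1)).strip(blank)

    # Rules: (state, symbol read) -> (next state, symbol to write,
    # head move). Scan right past the number, then carry leftward.
    rules = {
        ('start', '0'): ('start', '0', 'R'),
        ('start', '1'): ('start', '1', 'R'),
        ('start', '_'): ('carry', '_', 'L'),
        ('carry', '1'): ('carry', '0', 'L'),
        ('carry', '0'): ('halt',  '1', 'L'),
        ('carry', '_'): ('halt',  '1', 'L'),
    }

    print(run_tm('1011', rules))  # prints '1100' (11 + 1 = 12)

That's the entire mechanism -- and the simplicity is the point.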

One question often examined is what sorts of problems lie within the realm of theoretical computer capabilities. Are there things computers can't ever do? This is the question of computability. It can be reduced to a simpler proposition: given a particular program and input, will a computer (eventually) produce a result, or will it be stuck computing forever? A decidable problem is one for which some program produces the correct answer, for every instance, in finite time.
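
For instance, primality testing is decidable: the trial-division sketch below answers every instance correctly and is guaranteed to finish (slowly, but surely), because d*d eventually exceeds n.

    def is_prime(n):
        # Trial division: a decider. It terminates on every
        # input and always answers correctly.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print([n for n in range(2, 30) if is_prime(n)])  # [2, 3, 5, 7, ...]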

But this distinction leads, in turn, to another observation. The question of whether a given program halts (i.e. will it ever yield an answer?) is not, itself, decidable. No algorithm can sort arbitrary programs into those that finish and those that loop forever. This is known as the Halting Problem. (So named because TMs which produce a final answer are said to "halt".) Note that any particular program does have definite behavior -- it either halts or it doesn't -- we simply cannot determine which, in general, via programmatic analysis.
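
The proof fits in a few lines of deliberately hypothetical Python. Suppose someone hands us halts(f), claimed to return True exactly when calling f() would eventually finish. Then we can write a program that defeats it:

    def halts(f):
        # The hypothetical decider. No body you could write here
        # makes it correct -- that is the argument's punchline.
        ...

    def paradox():
        if halts(paradox):   # "Will I halt?"
            while True:      # Oracle says yes? Then loop forever.
                pass
        # Oracle says no? Then halt immediately.

    # Whatever halts(paradox) answers, paradox() does the opposite.
    # By contradiction, no correct halts() can exist.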

We can thus connect the Halting Problem and Godel's Incompleteness Theorem. They are analogous proofs of undecidability within their respective problem spaces. In each, there are truths about the system that cannot be reached from within the system's own primitives. And both are established by proof by contradiction, using the same trick of self-reference.


Now the interesting question is what happens when we apply this to ourselves. The human brain can be seen as a computer, albeit a complex, massively parallel one. Unless we can show that the brain performs operations that are fundamentally inexpressible in current logic or programming languages, the brain is computationally on par with a Turing Machine.

My question is what is our neural equivalent of the Halting Problem/Incompleteness Theorem? Discuss. :)

Logical Axiom Set --- Godel's Incompleteness Theorem
=
Turing Machine --- Halting Problem
=
Human Brain --- ???



(If you like this sort of mental twist, I highly recommend Douglas Hofstadter's book Godel, Escher, Bach: An Eternal Golden Braid.)
  • Current Music
    Less Than Jake - Magnetic North