A sufficiently interesting problem [Mar. 4th, 2005|02:07 pm]
Nicodemus
[Current Mood |thoughtful]
[Current Music |Less Than Jake - Magnetic North]

I've been pondering computability today and wanted to toss out this thought...

Let's consider some formal logical system of axioms (e.g. mathematics). Let us consider this our language. Gödel's Incompleteness Theorem says that such a formal language will always (provably) have a peculiar limitation. In essence, Gödel states that any "sufficiently interesting" language will have some statement, expressible within that language, which is true yet cannot be proven from that language's axioms. (There is a rigorous definition of "sufficiently interesting" which I will gloss over as "capable of making statements about itself" or, in programming argot, reflection.)
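In the usual notation (glossing over the Gödel-numbering machinery, and writing Prov for the system's provability predicate), the trick is a self-referential sentence G that asserts its own unprovability. A sketch, not a rigorous statement:

    $G \;\leftrightarrow\; \neg\,\mathrm{Prov}(\ulcorner G \urcorner)$

If the system is consistent, it can prove neither G nor (given a mild extra technical assumption) its negation.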

This is quite intriguing and, in ways, counter-intuitive. As MathWorld summarizes Gödel's theorem, "any formal system that is interesting enough to formulate its own consistency can prove its own consistency iff it is inconsistent." It sounds quite contradictory and, in fact, is usually demonstrated via proof by contradiction. So there are true mathematical statements that cannot ever be proven from mathematics' axioms.


Now let us change problem spaces and move to computers. Computation theory is filled with references to Turing Machines. These are theoretical computers (designed for thought experiments) with very simple mechanics. Yet they are theoretically equipotent with all other computers; that is, they are equally powerful in terms of their computational capabilities. (The reason we use "real" computers is that they are much, much more efficient than TMs.)
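To make "very simple mechanics" concrete, here is a toy simulator in Python. This is my own throwaway encoding rather than any standard formulation, but it shows that the whole machine is just a tape, a head, a state, and a lookup table:

    def run_tm(transitions, tape, state="start", steps=1000):
        """transitions maps (state, symbol) -> (new_state, new_symbol, move),
        where move is -1 (left) or +1 (right). Returns the tape on halt."""
        tape = dict(enumerate(tape))  # sparse tape; blank cells read as "_"
        head = 0
        for _ in range(steps):        # step budget so the demo always returns
            if state == "halt":
                return "".join(tape[i] for i in sorted(tape))
            symbol = tape.get(head, "_")
            state, tape[head], move = transitions[(state, symbol)]
            head += move
        return None                   # didn't halt within the budget

    # Example machine: flip every bit, then halt at the first blank.
    flipper = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(run_tm(flipper, "01011"))   # -> 10100_

Everything else is just a bigger table.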

One problem often examined is what sorts of problems are within the realm of theoretical computer capabilities. Are there things computers can't ever do? This is the question of computability. It can be reduced to a simpler question: given a particular program, will a computer (eventually) produce a result, or be stuck computing forever? A problem is decidable if some program can always produce the correct answer in finite time.

But this distinction leads, in turn, to another observation. The question of whether a given program will halt (i.e. eventually yield an answer) is not, itself, computable. We cannot use any algorithm to classify our own programs. This is known as the Halting Problem. (So named because TMs which produce a final answer are said to "halt".) Note that a program might always halt or always loop -- it can have definite behavior -- we simply cannot determine this via programmatic analysis.
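The proof itself is short enough to sketch in Python, though it is necessarily pseudocode: halts() below is the hypothetical classifier, and the construction shows why nobody can ever write it.

    def paradox(program):
        # halts(program, input) is the assumed oracle that always answers
        # correctly. It cannot actually exist; that is the point.
        if halts(program, program):
            while True:   # oracle said "halts", so spite it by looping
                pass
        else:
            return        # oracle said "loops forever", so halt at once

    # Now ask: does paradox(paradox) halt? If halts() answers yes, paradox
    # loops; if it answers no, paradox halts. Either way halts() is wrong,
    # so no correct halts() can be written.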

We can thus connect the Halting Problem and Gödel's Incompleteness Theorem. They are analogous proofs of undecidability within their respective problem spaces. Each shows that there may be truths unreachable from within the language's own primitives. And both are established by proof by contradiction.


Now the interesting question is what happens when we apply this to ourselves. The human brain can be seen as a computer, albeit a complex parallel one. Unless we show that the brain can perform operations that are fundamentally inexpressible in current logic or programming languages, the brain is still on par with a Turing Machine.

My question is what is our neural equivalent of the Halting Problem/Incompleteness Theorem? Discuss. :)

Logical Axiom Set --- Gödel's Incompleteness Theorem
=
Turing Machine --- Halting Problem
=
Human Brain --- ???



(If you like this sort of mental twist, I highly recommend the book Gödel, Escher, Bach: An Eternal Golden Braid.)

Comments:
From: harlequeen
2005-03-04 10:44 pm (UTC)
Algorithmics - Harel
From: nicodemusrat
2005-03-04 11:08 pm (UTC)
Is that a book recommendation, then?
From: harlequeen
2005-03-04 11:11 pm (UTC)
A good introduction to computing theory, yes.

And quite fun to read, if you're a sad muppet like me.

:)
From: harlequeen
2005-03-04 10:45 pm (UTC)
We may come up with solutions, but not provably correct or complete ones.
From: shockwave77598
2005-03-04 10:58 pm (UTC)
Not every program has a final output and a HALT.

Consider the PID loop, for instance. It is crucial to many motion control systems as it responds quickly to commands, and the built-in integrator eliminates any errors brought about by external forces or mechanical non-idealities.

It runs over and over; hundreds of times a second, in fact. Each time it computes an output based on current position, desired position, and previous position. Its output feeds back into its inputs to drive the system to the desired position.
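A single iteration looks something like this in Python (toy gains; read_position() and drive_motor() are made up for illustration):

    def pid_step(setpoint, measured, state, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
        error = setpoint - measured
        state["integral"] += error * dt              # the built-in integrator
        derivative = (error - state["prev"]) / dt
        state["prev"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    state = {"integral": 0.0, "prev": 0.0}
    # In the controller this runs hundreds of times a second, forever:
    #   while True:
    #       drive_motor(pid_step(target, read_position(), state))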
From: nicodemusrat
2005-03-04 11:05 pm (UTC)
Of course. The issue is that inspecting a provided program and determining a priori whether it halts is a non-computable problem.

Indeed, I'm not saying that a program needs to halt to be useful. That's just the traditional way of framing the analysis.
From: harlequeen
2005-03-04 11:16 pm (UTC)
The halting problem undecidability proof, the proof that there are infinitely many primes, and the diagonalization proof that c > aleph-null are beautiful.
From: nicodemusrat
2005-03-04 11:33 pm (UTC)
I agree. It holds a place for me similar to the double-slit experiment in physics. It is unintuitive but, once you've seen it demonstrated, it is both magnificent and enigmatic.

Do you happen to know a site with a good presentation of the aleph-0 continuum proof? (Mathworld is rather terse on it.)
From: foobart
2005-03-04 11:26 pm (UTC)
The question of whether a given program will halt (i.e. eventually yield an answer) is not, itself, computable. We cannot use any algorithm to classify our own programs. This is known as the Halting Problem. (So named because TMs which produce a final answer are said to "halt".) Note that a program might always halt or always loop -- it can have definite behavior -- we simply cannot determine this via programmatic analysis.

<pedant>It is quite possible to write a TM program which is able to correctly classify many programs as halting or non-halting. It's just impossible to write one which will classify all possible programs in such a manner.</pedant>
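To illustrate (a sketch of my own, not any standard routine): simulate step by step and watch for a repeated machine state. Any program whose reachable state space is small enough to exhaust gets classified correctly; for the rest we honestly give up.

    def classify(step, initial_state, budget=100000):
        """step(state) returns the next state, or None once the program halts."""
        seen = set()
        state = initial_state
        for _ in range(budget):
            if state is None:
                return "halts"
            if state in seen:           # exact state repeated: provably loops
                return "loops forever"
            seen.add(state)
            state = step(state)
        return "don't know"             # the honest third answer

    print(classify(lambda n: n - 1 if n > 0 else None, 10))  # -> halts
    print(classify(lambda n: (n + 1) % 3, 0))                # -> loops forever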

Anyways, on your actual point, I'm not sure that the formalisms of predicate calculus or Turing machines apply to the brain. Logical systems are extremely reliant on discrete-value mathematics and, while I think the brain is very capable of simulating such things, I'm not entirely sure that you can say its domain is limited to such. It seems likely to me that the mysteries of the brain are inextricably linked to continuous processes in nature; that while it's possible for the brain to simulate discrete systems, and it's possible for discrete systems to simulate an approximation of continuous systems, it may not really be possible to call them equivalent.
From: nicodemusrat
2005-03-04 11:43 pm (UTC)
Your pedantic note is quite correct, of course. In order to keep the post from getting too unwieldy, I glossed over some (important) details.

My gut instinct is to agree with you, but let's see if we can kick it around a bit logically. :)

Assume that we are able to map the electro-chemical state of the brain at some moment in time. One could consider this the state for a state machine. Given arriving external stimuli and the laws of physics, it is (in principle) possible to determine the next state of the brain. (Or possible next states over the range of possible stimuli.) Given that the physical size of the brain is limited, the number of possible states is limited. This reduces our system to a Finite State Machine; FSMs are provably equivalent to TMs.

Now I can see several counterarguments:

(1) The state and processes of the brain are so subtle that we must take incredibly precise measurements in space and time. Assume that we can do so. But at some point, we encounter Heisenberg Uncertainty in physical matter. Will we get a sufficiently precise "state" snapshot of the brain before hitting this threshold? I wouldn't want to give betting odds on it. :)

(2) Somehow, the availability of new stimulus alters the situation and we are no longer able to prove the system to be equivalent to a TM. I'm not sure that this is, in fact, the case but I'd welcome opinions.
From: foobart
2005-03-05 11:32 am (UTC)
This reduces our system to a Finite State Machine; FSMs are provably equivalent to TMs.

Dr. Smith would be spinning in his grave if he were dead. But I'll leave off the pedant hat this time.

One way to look at my point is that it depends on Heisenberg. Perhaps a better analogy is finite element analysis, in engineering. In a mechanical system, you can usually do a reasonable job of determining how the system is going to behave by approximating the statics and dynamics of a sufficient number of small parts of the system.

Any attempt to scan the brain and simulate it as an FSM is going to rely on the same kind of process - a discretization of the current state. Whether or not the result is a fair approximation of the processes of the brain in question is anyone's guess. On the one hand, you might think that it will be like doing FEA on the brain, and the results will be good. On the other, you might think that the chaos inherent in the system will mean that you need a ridiculous amount of detail to get convergent predictions for even very short periods of time, and that longer-term stability in simulation is a hopeless task.

Another way to look at it is that any measurement is an approximation -- the real thing has infinite precision. Is the brain chaotic enough that infinitesimal differences in electrochemical state can result in macroscopic behavioural changes in the short term? I suspect the answer is yes. But that's just a guess.
From: foobart
2005-03-05 04:55 pm (UTC)
It occurred to me after writing this that there is definitely a :) missing after that first paragraph...
From: nicodemusrat
2005-03-05 05:00 pm (UTC)
FSMs are provably equivalent to TMs.

Okay, that was an embarrassing slip. I was trying to elide the reasoning that all FSMs can be represented by TMs (correct, yes?), so FSMs are provably less than or equal to TMs in terms of capability. Please do correct my details, since it has admittedly been too long since I really studied computation theory and I know you're more up on this stuff than I am. :)

Your description of finite element analysis (a term I didn't know) appears to be a much better wording of what I was going for with the capturing of physical state.


Another way to look at it is that any measurement is an approximation -- the real thing has infinite precision.

This is only a critique of our ability to measure to the necessary degree of precision until you reach Heisenberg uncertainty limits. Given that, I'm perfectly willing to admit that we may be talking about needing a level of detail that is unmeasurable in order to accurately simulate/predict the state of the brain. I'd agree with that stance, actually.

But the question left is whether this means that the brain has additional computational capabilities above and beyond a TM. Does this low-level interaction represent some fundamentally different type of logic? If so, is it possible to formalize it?
From: dour
2005-03-06 02:56 am (UTC)

Mmm, cogsci geekery!

The real thing may have infinite precision, but it's pretty clear that it operates on easily measurable timescales; it's been a long time since I brushed up on this, but IIRC the "recharge" time a neuron requires between firings is never faster than about 5ms. Cellular growth rates (and therefore rate of neural reconfiguration) are also well within our measurement capabilities. Neurotransmitter proliferation and reabsorption rates can't (literally) be any faster, and to top it all off there's a lot of redundancy—the precision of one neuron is not significant.

As such, I think the only real question about measuring the brain's activity is that its operation is not synchronous; each neuron may run at 200Hz or less, but catching the activity of all of them would require far higher chronometric precision. However, I would venture that the actual speed at which a neuron fires is also very slow in particle terms, and we only need four times that precision to catch every action.
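Back-of-the-envelope with the figures above, just for scale (my arithmetic, not a measurement):

    neuron_rate_hz = 200               # upper bound on firing rate, per above
    snapshot_hz = 4 * neuron_rate_hz   # the "four times that precision"
    print(1000.0 / snapshot_hz)        # -> 1.25 ms between snapshots

Millisecond-scale sampling is trivial next to the timescales physics instruments already handle.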

Reducing it to individual parts, I see nothing that cannot be perfectly simulated; therefore, I believe that the brain is Turing-equivalent.
From: kiltbear
2005-03-05 02:58 am (UTC)
Okay, totally out of my league here, but I'm chatting with a guy in a fur suit, so what the hell. ;)

It's tempting to think of the brain as a computing machine that takes input and comes out with a decision. I think the process is more of a recognition of patterns that get separated into familiarities and non-familiarities. Something familiar tends to cause an attractive state; something non-familiar causes a repellent, or at least neutral, reaction.

The amount of input is constant, and constantly changing. A decision to do or not do, to decide one way or the other, ends up being affected by items that would "logically" have nothing to do with the actual question being asked.

Why do I want broccoli today and not steamed spinach? How much of it is driven by a "desire" (put that in your equation), a negative childhood memory around eating creamed spinach, the fact that the doctor told me I need more iron in my diet, AND that it's raining, which has me cranky? For some reason, spinach is not a food to eat when I'm cranky. Go figure.

When I read your link describing the Turing Machine, the description was deceptively simple. It spends a lot of time talking about states, and moving from one to the other, but glosses over the "instructions telling the machine what action to perform if it is currently scanning a particular symbol, and what state to go into after performing this action."

The tricky part is understanding how to frame these questions. I was educated on how to build a simple computer from JK flip-flops on up, so I could possibly fathom a tree of decisions and cascaded TMs leading to answers for more complex questions, but my gut-level thinking is that the brain actually relies on "if-then-else" structures.

Whim and intuition "appear" to contradict such.

I'm stopping now, my head hurts, and I've probably made a fool of myself.
From: nicodemusrat
2005-03-05 05:10 pm (UTC)
Okay, totally out of my league here, but I'm chatting with a guy in a fur suit, so what the hell. ;)

Always glad to make someone's day a little more surreal! :) Actually, your comments are quite good.


A decision to ... decide one way or the other ends up being affected by items that would "logically" have nothing to do with the actual question being asked. ... Why do I want broccoli today and not steamed spinach? How much of it is driven by a "desire"

This points up the complexity of the brain and the different types of interactions there. The question is whether this is somehow reducible to logical axioms.

I would posit that since the brain is a strictly physical artifact (some religious views might not grant me that one), we can analyze it as a physical process. The electrochemical nature of it (and neurobiological research suggests that a lot of desires and moods can actually be linked to hormone chemicals!) would indicate that it is understandable within the framework of physics.

If we were able to measure the state of the brain at one instant, we could, in theory, predict its next state. This reduces the brain to a progression from one physical state to another. But as you point out, a constant amount of external input is arriving, so really the brain has a range of next states.

Current State + Specific Input => Next State

This, of course, starts to look an awful lot like the Turing Machine description. The "instructions" would be derivable from physics.
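As a cartoon (ridiculously simplified states, and borrowing your produce example):

    # A hypothetical sliver of the lookup table that physics would implement:
    next_state = {
        ("cranky", "rain"): "craves broccoli",
        ("cranky", "doctor says more iron"): "considers spinach",
        ("content", "rain"): "cranky",
    }
    state = "cranky"
    state = next_state[(state, "rain")]   # Current State + Input => Next State
    print(state)                          # -> craves broccoli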

Now this can all break down if we discover that the brain's level of detail is something that we are physically unable to measure (a very reasonable view, I think). See my comments higher up for more on that aspect.
From: p0lecat
2005-03-05 03:27 am (UTC)
My brain hurts
From: brokkentwolf
2005-03-05 07:29 am (UTC)
I stopped reading your post halfway into it. I stopped understanding it about 1/8th into it.

Right now I have a headache!

*Pet my belly!* (much simpler)
From: nicodemusrat
2005-03-05 05:11 pm (UTC)
Awww! *PETPETFUZZLE*
From: sayh
2005-03-05 07:45 am (UTC)
If you think too much about this, you'll think about it until you die. Given this, you have proven your theory: you have tried to compute a non-deterministic problem with your brain.

I bet a walk in the forest would be nicer instead...
From: nicodemusrat
2005-03-05 05:12 pm (UTC)
I think a good walk in the forest can solve more problems than people give it credit for. :)