A good introduction to computing theory, yes.
And quite fun to read, if you're a sad muppet like me.
We may come up with solutions, but they are neither provably correct nor complete.
Not every program has a final output and a HALT.
Consider the PID loop, for instance. It is crucial to many motion-control systems: it responds quickly to commands, and its built-in integrator eliminates errors brought about by external forces or mechanical imperfections.
It runs over and over; hundreds of times a second, in fact. Each time, it computes an output based on current position, desired position, and previous position. Its output feeds back on its inputs to drive the system to the desired position.
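That never-halting loop is easy to sketch. Here is a rough Python version, where the gains and the one-line "plant" model are invented purely for illustration:

```python
def pid_step(setpoint, measured, state, kp, ki, kd, dt):
    """One pass of a PID loop: proportional, integral, and derivative terms."""
    error = setpoint - measured
    integral = state["integral"] + error * dt        # accumulates to cancel steady-state error
    derivative = (error - state["prev_error"]) / dt  # reacts to the rate of change
    output = kp * error + ki * integral + kd * derivative
    return output, {"integral": integral, "prev_error": error}

# The loop runs over and over; it has no natural HALT.
state = {"integral": 0.0, "prev_error": 0.0}
position = 0.0
for _ in range(2000):  # capped only so this sketch terminates
    out, state = pid_step(1.0, position, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01)
    position += out * 0.01  # toy plant: velocity proportional to output
```

A real controller would spin until powered off; the `for` loop is a stand-in for "forever".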
Of course. The issue is that the question of whether we can inspect a provided program and determine if it halts a priori is a non-computable problem.
Indeed, I'm not saying that a program needs to halt to be useful. That's just the traditional way of framing the analysis.
The proof that the halting problem is undecidable, the proof that there are infinitely many primes, and the diagonalization proof that c > aleph-null are all beautiful.
I agree. It holds a similar place for me as the two-slit experiment in physics. It is unintuitive but, once you've seen it demonstrated, it is both magnificent and enigmatic.
Do you happen to know a site with a good presentation of the aleph-0 continuum proof? (Mathworld is rather terse on it.)
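Not a site, but the core of the diagonal argument is small enough to sketch in a few lines of Python (the sample table is arbitrary): given any claimed enumeration of infinite 0/1 sequences, flipping the diagonal yields a sequence the list must have missed.

```python
def diagonal_flip(table):
    """Given the first n entries (each truncated to n bits) of a claimed
    enumeration of infinite 0/1 sequences, build a sequence that differs
    from the i-th entry in its i-th place."""
    return [1 - table[i][i] for i in range(len(table))]

claimed = [          # any list of sequences you like; this one is made up
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
missing = diagonal_flip(claimed)
# missing disagrees with row i at position i, so it appears nowhere in the list.
```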
The question of whether a given program is computable (i.e. will it yield an answer) is not, itself, computable. We cannot use any algorithm to classify our own problems. This is known as the Halting Problem. (So named because TMs which produce a final answer are said to "halt".) Note that a program might always halt or always loop -- it can have definite behavior -- we simply cannot determine this via programmatic analysis.
<pedant>It is quite possible to write a TM program which is able to correctly classify many programs as halting or non-halting. It's just impossible to write one which will classify all possible programs in such a manner.</pedant>
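That pedantic note is easy to make concrete. A sketch of such a partial classifier in Python (representing "programs" as generators, one yield per step, which is my own framing here): run the program under a step budget and answer only when the budget suffices.

```python
def classify(program, budget=10_000):
    """Partial halting classifier: correct whenever it answers 'halts',
    but forced to say 'unknown' about anything that outlives the budget."""
    it = program()
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return "halts"   # definitely halts
    return "unknown"         # might halt later, might loop forever

def counts_to_ten():
    for i in range(10):
        yield i

def loops_forever():
    while True:
        yield

# classify(counts_to_ten) -> "halts"; classify(loops_forever) -> "unknown"
```

No finite budget turns "unknown" into a real answer for every program; that gap is exactly the halting problem.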
Anyways, on your actual point, I'm not sure that the formalisms of predicate calculus or Turing machines apply to the brain. Logical systems are extremely reliant on discrete-value mathematics, and, while I think the brain is very capable of simulating such things, I'm not entirely sure that you can say its domain is limited to them. It seems likely to me that the mysteries of the brain are inextricably linked to continuous processes in nature; that while it's possible for the brain to simulate discrete systems, and it's possible for discrete systems to simulate an approximation of continuous systems, it may not really be possible to call them equivalent.
Your pedantic note is quite correct, of course. In order to keep the post from getting too unwieldy, I glossed over some (important) details.
My gut instinct is to agree with you, but let's see if we can kick it around a bit logically. :)
Assume that we are able to map the electro-chemical state of the brain at some moment in time. One could consider this the state for a state machine. Given arriving external stimuli and the laws of physics, it is (in principle) possible to determine the next state of the brain. (Or possible next states over the range of possible stimuli.) Given that the physical size of the brain is limited, the number of possible states is limited. This reduces our system to a Finite State Machine; FSMs are provably equivalent to TMs.
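A toy version of that framing, with invented states and stimuli (a real brain would have astronomically many of each): the whole "next state" argument reduces to one lookup table.

```python
# Hypothetical, drastically simplified state table.
transitions = {
    ("resting", "light"): "alert",
    ("resting", "quiet"): "resting",
    ("alert",   "light"): "alert",
    ("alert",   "quiet"): "resting",
}

def next_state(state, stimulus):
    """Deterministic step: current state + arriving stimulus -> next state."""
    return transitions[(state, stimulus)]

state = "resting"
for stimulus in ["light", "light", "quiet"]:
    state = next_state(state, stimulus)
# state -> "resting"
```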
Now I can see several counterarguments:
(1) The state and processes of the brain are so subtle that we must take incredibly precise measurements in space and time. Assume that we can do so. But at some point, we encounter Heisenberg Uncertainty in physical matter. Will we get a sufficiently precise "state" snapshot of the brain before hitting this threshold? I wouldn't want to give betting odds on it. :)
(2) Somehow, the availability of new stimulus alters the situation and we are no longer able to prove the system to be equivalent to a TM. I'm not sure that this is, in fact, the case but I'd welcome opinions.
This reduces our system to a Finite State Machine; FSMs are provably equivalent to TMs.
Dr. Smith would be spinning in his grave if he were dead. But I'll leave off the pedant hat this time.
One way to look at my point is that it depends on Heisenberg. Perhaps a better analogy is finite element analysis, in engineering. In a mechanical system, you can usually do a reasonable job of determining how the system is going to behave by approximating the statics and dynamics of a sufficient number of small parts of the system.
Any attempt to scan the brain and simulate it as a FSM is going to rely on the same kind of process - a discretization of the current state. Whether or not the result is a fair approximation of the processes of the brain in question is anyone's guess. On the one hand, you might think that it will be like doing FEA on the brain, and the results will be good. On the other, you might think that the chaos inherent in the system will mean that you need a ridiculous amount of detail to get convergent predictions for even very short periods of time, and that longer-term stability in simulation is a hopeless task.
Another way to look at it is that any measurement is an approximation -- the real thing has infinite precision. Is the brain chaotic enough that infinitesimal differences in electrochemical state can result in macroscopic behavioural changes in the short term? I suspect the answer is yes. But that's just a guess.
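The standard toy model of that suspicion is the logistic map: two orbits starting a millionth apart stay in lockstep briefly, then diverge completely. A quick sketch (the starting points are arbitrary):

```python
def logistic_orbit(x0, steps, r=4.0):
    """Iterate x -> r*x*(1-x), the textbook example of a chaotic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.400000, 40)
b = logistic_orbit(0.400001, 40)   # differs in the sixth decimal place
# Within a few dozen steps the two orbits bear no resemblance to each other.
```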
It occurred to me after writing this that there is definitely a :) missing after that first paragraph...
FSMs are provably equivalent to TMs.
Okay, that was an embarrassing slip. I was trying to elide the reasoning that all FSMs can be represented in TMs (correct, yes?), so FSMs are provably less than or equal to TMs in terms of capability. Please do correct my details, since it has been admittedly too long since I really studied computation theory and I know you're more up on this stuff than I. :)
Your description of finite element analysis (a term I didn't know) appears to be a much better wording of what I was going for with the capturing of physical state.
Another way to look at it is that any measurement is an approximation -- the real thing has infinite precision.
This is only a critique of our ability to measure with the necessary degree of precision, at least until you reach Heisenberg uncertainty limits. Given that, I'm perfectly willing to admit that we may be talking about needing a level of detail that is unmeasurable in order to accurately simulate/predict the state of the brain. I'd agree with that stance, actually.
But the question left is whether this means that the brain has additional computational capabilities above and beyond a TM. Does this low-level interaction represent some fundamentally different type of logic? If so, is it possible to formalize it?
2005-03-06 02:56 am (UTC)
Mmm, cogsci geekery!
The real thing may have infinite precision, but it's pretty clear that it operates on easily measurable timescales; it's been a long time since I brushed up on this, but IIRC the "recharge" time a neuron requires between firings is never faster than about 5ms. Cellular growth rates (and therefore rate of neural reconfiguration) are also well within our measurement capabilities. Neurotransmitter proliferation and reabsorption rates can't (literally) be any faster, and to top it all off there's a lot of redundancy—the precision of one neuron is not significant.
As such, I think the only real question about measuring the brain's activities concerns the fact that its operation is not synchronous; each neuron may run at 200Hz or less, but catching the activity of all of them would require far higher chronometric precision. However, I would venture that the actual speed at which a neuron fires is also very slow in particle terms, and we only need four times that precision to catch every action.
Reducing it to individual parts, I see nothing that cannot be perfectly simulated; therefore, I believe that the brain is Turing-equivalent.
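A leaky integrate-and-fire sketch of that point, with the ~5ms refractory dead time mentioned above; every constant here is illustrative rather than measured:

```python
def simulate_neuron(input_current, dt=0.001, refractory=0.005,
                    tau=0.02, threshold=1.0, duration=1.0):
    """Leaky integrate-and-fire toy neuron: the membrane voltage leaks toward
    zero, input drives it up, a spike fires at threshold, and then a
    refractory dead time follows. Returns the spike count over `duration`."""
    v, spikes, dead = 0.0, 0, 0.0
    for _ in range(int(duration / dt)):
        if dead > 0:
            dead -= dt       # still recharging; ignore input
            continue
        v += dt * (-v / tau + input_current)
        if v >= threshold:
            spikes += 1
            v, dead = 0.0, refractory
    return spikes

# However hard you drive it, the 5ms dead time caps the rate near 200Hz.
```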
Okay, totally out of my league here, but I'm chatting with a guy in a fur suit, so what the hell. ;)
I hesitate to think of the brain as a computing machine that takes input and comes out with a decision. I think the process is more of a recognition of patterns that get separated into familiarities and non-familiarities. Something familiar tends to cause an attractive state; the non-familiar causes a repellent, or at least neutral, reaction.
The amount of input is constant, and constantly changing. A decision to do or not do, to decide one way or the other, ends up being affected by items that would "logically" not have anything to do with the actual questions being asked.
Why do I want broccoli today and not steamed spinach? How much of it is driven by a "desire" (put that in your equation), a negative childhood memory around eating creamed spinach, and the fact that the doctor told me I need more iron in my diet, AND that it's raining, which has me cranky? For some reason, spinach is not a food to eat when I'm cranky. Go figure.
When I read your link describing the Turing Machine, the description was deceptively simple. It spends a lot of time talking about states, and moving from one to the other, but glosses over the "instructions telling the machine what action to perform if it is currently scanning a particular symbol, and what state to go into after performing this action."
It's understanding how to frame these questions that's the hard part. I was educated on how to build a simple computer from JK flip-flops on up, so I could possibly fathom a tree of decisions and cascaded TMs leading to answers for more complex questions, but my gut feeling is that the brain actually relies on "if-then-else" structures.
Whim and intuition "appear" to contradict such.
I'm stopping now, my head hurts, and I've probably made a fool of myself.
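For reference, the JK flip-flop mentioned above has a next-state rule small enough to fit in a few lines (a sketch that ignores clock timing):

```python
def jk_next(j, k, q):
    """JK flip-flop output after a clock edge:
    J=0,K=0 hold; J=1,K=0 set; J=0,K=1 reset; J=1,K=1 toggle."""
    if j and not k:
        return 1        # set
    if k and not j:
        return 0        # reset
    if j and k:
        return 1 - q    # toggle
    return q            # hold

# jk_next(1, 1, 0) -> 1 (toggle); jk_next(0, 0, 1) -> 1 (hold)
```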
Okay, totally out of my league here, but I'm chatting with a guy in a fur suit, so what the hell. ;)
Always glad to make someone's day a little more surreal! :) Actually, your comments are quite good.
A decision to ... decide one way or the other ends up being affected by items that would "logically" not have anything to do with the actual questions being asked. ... Why do I want broccoli today and not steamed spinach? How much of it is driven by a "desire"
This points up the complexity of the brain and the different types of interactions there. The question is whether this is somehow reducible to logical axioms.
I would posit that since the brain is a strictly physical artifact (some religious views might not grant me that one), we can analyze it as a physical process. The electrochemical nature of it (and neurobiological research suggests that a lot of desires and moods can actually be linked to hormone chemicals!) would indicate that it is understandable within the framework of physics.
If we were able to measure the state of the brain at one instant, we could, in theory, predict its next state. This reduces the brain to a progression from one physical state to another. But as you point out, a constant amount of external input is arriving, so really the brain has a range of next states.
Current State + Specific Input => Next State
This, of course, starts to look an awful lot like the Turing Machine description. The "instructions" would be derivable from physics.
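A minimal sketch of exactly that correspondence in Python, with an invented instruction table that flips bits until it reads a blank:

```python
# Each entry: (current state, scanned symbol) -> (symbol to write, move, next state)
table = {
    ("flip", "0"): ("1", "R", "flip"),
    ("flip", "1"): ("0", "R", "flip"),
    ("flip", "_"): ("_", "R", "HALT"),   # blank cell: stop
}

def run(tape, state="flip", head=0):
    """Drive the machine: Current State + Specific Input => Next State."""
    while state != "HALT":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = table[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return tape

# run(list("0110_")) -> ['1', '0', '0', '1', '_']
```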
Now this can all break down if we discover that the brain's level of detail is something that we are physically unable to measure (a very reasonable view, I think). See my comments higher up for more on that aspect.
I stopped reading your post halfway into it. I stopped understanding it about 1/8th into it.
Right now I have a headache!
*Pet my belly!* (much simpler)
2005-03-05 07:45 am (UTC)
If you think too much about this, you'll think about it until you die. Given this, you will have proven your theory: you will have tried to compute a non-deterministic problem with your brain.
I bet a walk in the forest would be nicer instead...
I think a good walk in the forest can solve more problems than people give it credit for. :)