Monday, November 10, 2014

3rd and 4th post to philosophy group: brains=computers (control theory)

The operation of the brain is complex and far from completely understood, so it can't yet be modeled precisely with equations. But we do know it operates very much like a computer: it sends digital signals between elements that perform comparative operations on their digital inputs, like a complex version of the simple gates in CPUs. Nothing known about a neuron is beyond implementation with NAND gates. Any mental activity we can precisely define, we can implement on a computer and then give precise equations for cause and effect. For mental activity we can't define as precisely, we have no grounds to assume the ideas and equations we've learned from the precisely defined cases are incapable of implementing it.
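To make the universality claim concrete, here is a quick sketch, with Python standing in as notation, that builds the other basic gates out of nothing but a NAND primitive. Everything in it is illustrative rather than taken from any real circuit:

    # Every gate below is wired from NAND alone.
    def nand(a, b):
        return 0 if (a and b) else 1

    def not_(a):          # NOT is a NAND with its inputs tied together
        return nand(a, a)

    def and_(a, b):       # AND is just an inverted NAND
        return not_(nand(a, b))

    def or_(a, b):        # OR follows from De Morgan's law
        return nand(not_(a), not_(b))

    def xor(a, b):        # XOR takes four NANDs
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", and_(a, b), or_(a, b), xor(a, b))

The same kind of wiring, continued far enough, yields adders, memory, and a whole CPU, which is what I mean by saying nothing known about a neuron is beyond NAND gates.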

Our machines can see, smell, touch, and hear whatever elements of the environment we choose, think deeply about the consequences, and then react, moving whatever they need to move to change the environment they have sensed and achieve, as best they can, the goal we've defined for them. This is the field of "controls" in mechanical and electrical engineering, and it is precisely determined by equations. Going beyond this, there are A.I. programs that "run amok" on their own, with the programmer not knowing exactly how they achieved their goals. There is a common fear that some such machine will be let loose from a hacker's software or a future 3D-printer laboratory with the goal of reproducing itself. Or that, if no goals are defined, machines will evolve in a more ethical laboratory until that goal is selected for, and one escapes.

These machines can be programmed to learn things their programmers can't model or copy without access to all the changed memory bits inside the machine. By watching the results of their own actions and improving their own programming, they can learn to do things better than their programmers could program into them directly. This was already common in 1990, when I first learned about neural nets. Genetic algorithms can change even the design of the neural net itself over several generations.
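A toy version of that self-improvement, with a made-up task and constants, is a single artificial neuron that adjusts its own weights from the errors of its own outputs (here it teaches itself logical AND):

    import random

    random.seed(0)
    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = random.uniform(-1, 1)
    rate = 0.1

    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    for _ in range(20):                         # repeated trials
        for (x1, x2), target in examples:
            output = 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0
            error = target - output             # the result of its own action
            weights[0] += rate * error * x1     # it rewrites its own "program"
            weights[1] += rate * error * x2
            bias += rate * error

    for (x1, x2), _ in examples:
        print(x1, x2, "->", 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0)

The final weights are numbers nobody typed in; to "copy" the learned behavior you would have to read them out of the machine's memory, which is the point of the paragraph above.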

A steam engine governor is a much earlier example of a machine sensing the environment and adjusting the very thing it senses in order to achieve a set goal. When the rotation of the shaft got too slow or too fast, it let more or less steam through. It could react faster than a person, at much less cost than a person. This is the economic problem of our times: the computer replacing the need for brains, even programming brains.
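The whole governor fits in a few lines of feedback, sketched here with invented constants and a deliberately crude model of the engine:

    setpoint = 100.0      # desired shaft speed
    speed = 60.0          # measured shaft speed
    valve = 0.5           # steam valve opening, 0 to 1
    gain = 0.002          # how strongly the error moves the valve

    for step in range(50):
        error = setpoint - speed                          # too slow (+) or too fast (-)
        valve = min(1.0, max(0.0, valve + gain * error))  # let more or less steam through
        speed += 0.2 * (200.0 * valve - speed)            # crude engine response
        if step % 10 == 0:
            print(f"step {step:2d}: speed {speed:6.2f}, valve {valve:.3f}")

The shaft speed oscillates and then settles at the setpoint, with no person in the loop.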

Many thoughtful students first learning about engineering controls are immediately struck by the sensation that the feedback loop in a control system is where consciousness lies. It is the difference between where the machine senses it is and where it wants to be. The "amount" of consciousness is the "error" value, which is sent off to move the machine's "muscles". This implies a philosophy on the engineer's part that "consciousness is pain", but only because basic control systems are trying to regain a point of maximum profit that is known to be possible. It could equally be called "opportunity" in machines that redesign their own programming to gain more profit than their designers thought possible.

=========
You seem to be taking as an axiom that human thought is fundamentally different from the thinking of machines. My first paragraph explained why I would have to consider that a leap of faith. I was not speaking metaphorically when I applied human-thinking words to machines. The "want" of a thermostat appears to differ only quantitatively from the "want" of a brain. Since it is only 1 or 2 comparative operations, its complexity is (as a very rough estimate) only a millionth the capability of one neuron, which is in turn a 100-billionth of a brain. So I do not think most people would feel insulted if I claimed their want is not fundamentally different from a thermostat's... but only if I also stated that their marvelous brains are 100 quadrillion (10^6 x 10^11 = 10^17) times more complicated than a thermostat, and that we will never be able to conceive of what a 10^17-fold difference is except by math. We can conceive that we want a room to be warmer, and we can instill not only that want but the necessary resulting action into a machine.
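To be literal about "1 or 2 comparative operations," here is the entire want of a thermostat, with arbitrary setpoint and deadband values:

    def thermostat(temperature, heater_on, setpoint=20.0, deadband=0.5):
        if temperature < setpoint - deadband:   # comparison 1: too cold, want heat
            return True
        if temperature > setpoint + deadband:   # comparison 2: warm enough, want rest
            return False
        return heater_on                        # inside the band: keep doing what it's doing

Everything I attribute to the thermostat is in those two comparisons; everything beyond them is the 10^17-fold quantitative gap.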

The reason I have not read the rest of the book is that it holds as a foundational axiom something I find not only a leap of faith but well-nigh untenable given the preponderance of evidence to the contrary. A glance at the rest did not indicate that the axiom was abandoned or that the book could stand without it.

Engineers and programmers have not invented different words to distinguish the activity of their machines from that of their own brains. Programming is a transfer of a precise set of "wants" from the programmer's mind to the machine. I do not know of any programmer or engineer who would insist there is a qualitative difference.

The ability to reason logically was once considered the thing that separated humans from other animals, and the highest form of intellectual activity. I believe this was a primary motivation behind Boole's invention (or formalization) of digital logic around 1850. Has the measuring stick of the mind since been moved to vaguer (imprecisely definable) areas in order to keep a mystical idea of mind alive?

When I make a numerical estimate of the difference in complexity between a thermostat and a brain, I am being literal. There are various estimates of how many NAND or XOR logic gates are needed to implement a neuron. I believe a thermostat can be made to act as one of these "universal" gates, as they are called, since with wiring they are all that's needed to implement a complete Turing machine.
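As a sketch of what I mean, model a thermostat as a threshold switch over two heat sources, each contributing 0 or 1 units of heat; the setpoint value is invented for the illustration:

    def thermostat_nand(heat_a, heat_b, setpoint=1.5):
        temperature = heat_a + heat_b       # only both sources together exceed the setpoint
        return 0 if temperature > setpoint else 1

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", thermostat_nand(a, b))

Its input/output table is exactly NAND, so enough thermostats, wired together, could in principle compute anything a Turing machine can.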
