Sunday, November 30, 2014

Amazon post: "Smarter Than Us" response to Stephen Robbins

I appreciate your skepticism, as my knowledge comes from Jeff Hawkins's "On Intelligence" and the white papers on Numenta's work. He's a fantastic speaker, and being an EE myself I can fully appreciate his perspective. However, once I read and understood their HTM/CLA/Grok algorithm (they keep changing the name), I was struck by how far away they are from anything other than prediction, and they are still missing self-improvement (Occam's razor) methods. They do not have any economizing (after determination of their synapses) that would lead to Occam's-razor-type benefits, i.e. better generalization through brevity without loss of truthfulness, as in physics theories. Besides modeling reality (prediction/classification/pattern recognition) and making the model more efficient (simpler and smaller without loss), the brain must also identify which inputs to the model can be changed for maximum profit, which knobs inside the model can be tweaked for maximum profit, which outputs relevant to profit should be measured, and how to change the self so that the definition of profit itself changes. In other words, being a little familiar with controls engineering, I do not see why an advanced control system can't be turned into an intelligent replicator, nor why the brain is anything more than this. A toy sketch of that control loop follows.
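Here is a minimal sketch of the loop I have in mind (every name and the toy model here are my own invention, not Numenta's or anyone's actual code): a learned model predicts outcomes, the knobs are tuned to maximize a profit function, and the profit function itself is just another replaceable component.

```python
# Toy control loop: model -> predicted profit -> knob tuning.
# Everything here is a hypothetical stand-in, not a real A.I. system.
import math
import random

def model(inputs, knobs):
    # Stand-in learned model of reality, with diminishing returns per knob.
    return sum(i * math.tanh(k) for i, k in zip(inputs, knobs))

def profit(outcome, knobs):
    # The objective being maximized; the deeper step in the argument above
    # is that even this definition is a component the system could rewrite.
    return outcome - 0.1 * sum(k * k for k in knobs)

def tune_knobs(inputs, knobs, steps=2000, step=0.05):
    """Hill-climb the knobs to maximize predicted profit (random search)."""
    best = profit(model(inputs, knobs), knobs)
    for _ in range(steps):
        trial = [k + random.uniform(-step, step) for k in knobs]
        p = profit(model(inputs, trial), trial)
        if p > best:
            best, knobs = p, trial
    return knobs, best

knobs, best = tune_knobs(inputs=[1.0, 0.5, -0.3], knobs=[0.0, 0.0, 0.0])
print(knobs, best)
```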

So I do have the kind of "values" you have in mind; I just was not following you on that point. I mentioned only the more general topic that is on most people's minds.

I will read "The Future of the Brain" that you recommend.

Like others, I was terribly unimpressed by Hofstadter's "Gödel, Escher, Bach." I consider Penrose junk. At least Feynman seems to have agreed with me on Hofstadter: Gleick cites Hofstadter as saying Feynman seemed to be heckling him during a campus speech. Others have said the book was 90% genius and 10% crap, or vice versa, and no one knows which it is. That's a hallmark of junk putting on airs.

I had studied Spanish for at least 5 years, off and on, before I could translate it better than Google, and Spanish is an example of a language that is more difficult for machines to translate. If language were precise, without duplicate meanings or idioms, translation would be much, much easier. But language is a highly lossy and HIGHLY compressed communication between the most intelligent "computers" in the known universe, over an absurdly narrow channel. And even the set of things they need and choose to communicate is very narrow compared to what goes on inside a brain. That the decompression scheme used by the listener is fantastically hard to duplicate (without already being very much like the listener) should not be a surprise. Then consider that a very weak computer can be set up in 2 minutes to translate all languages better than a brain with 7 years of experience can translate a single one, and you have to conclude computers are more impressive than brains in many ways. That they can't duplicate what the brain does (e.g. accept meaningless idioms for communication, and refuse to remember unless repeatedly taught) is not necessarily something that should be used to insult these poor, peaceful, defenseless machines.
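The shared-context point can even be demonstrated with ordinary compression. In the sketch below (standard-library zlib; the strings are just placeholders), a message compressed against a preset dictionary, standing in for shared experience, is recoverable only by a receiver holding the same dictionary:

```python
# A compressed message is decodable only by a receiver that already shares
# the sender's "context" -- here, a zlib preset dictionary.
import zlib

shared_context = b"idioms slang shared memories culture grammar"
message = b"idioms and slang ride on shared memories"

# Sender compresses against its context.
c = zlib.compressobj(zdict=shared_context)
packet = c.compress(message) + c.flush()

# A receiver with the same context recovers the message.
d = zlib.decompressobj(zdict=shared_context)
assert d.decompress(packet) == message

# A receiver with a different context cannot.
try:
    d2 = zlib.decompressobj(zdict=b"a completely different upbringing")
    d2.decompress(packet)
except zlib.error as e:
    print("decompression failed without the shared context:", e)
```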

I do not think Turing completeness is incomplete. Numenta's HTM seems to handle analogy extraordinarily well; by that I mean it can detect something extraordinarily different as actually being of the same class. It does this through sparse sampling and the tremendous irreversibility of the detection: each column could be presented with something like 10^100 possible patterns, yet in its lifetime it will see only about 10^9, of which maybe 10^6 will be remembered as previously encountered, matched against a selection of roughly 10^3 patterns it knows only very fuzzily. And these have meaning only in context with maybe 100 other cortical columns: fuzzy definitions combined make a very precise definition, and the fuzziness can be used far afield for analogy.
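The arithmetic of that fuzzy matching is easy to play with. Below is a rough sketch using typical HTM-style numbers (2048 bits with 40 active, which gives roughly 10^84 possible patterns; the exact exponent depends on column size), not Numenta's actual code:

```python
# Sparse distributed representations: astronomical capacity, robust matching.
import math
import random

N, W = 2048, 40  # bits per pattern, active bits (typical HTM-style numbers)
print(f"possible patterns: {math.comb(N, W):.2e}")  # ~2.4e84

def sdr():
    return set(random.sample(range(N), W))

a, b = sdr(), sdr()
print("overlap of two unrelated patterns:", len(a & b))  # usually 0-3 bits

# A noisy copy of `a` (a quarter of its bits replaced) still overlaps
# strongly, which is what makes fuzzy, analogy-friendly matching possible.
noisy = set(random.sample(sorted(a), 30)) | set(random.sample(range(N), 10))
print("overlap with a noisy copy:", len(a & noisy))  # ~30 bits
```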

I view high-level concepts and problem solving as if they were algebraic equations with undefined variables. Based on experience you learn a high-level algorithm that applies in a specific case. But the brain is so interconnected that if you encounter a similar logical structure on different data, one that is not finding a pattern-recognition home in its own higher-level connections, then side systems at that higher level may have enough "accidental" connections to be triggered into testing their "equation" on the different variables of the lower level (a different experience) that happen to have a similar logical structure. Analogy example: "Like a massive company, unable to change direction quickly." That we mix metaphors instead of explicitly breaking variables away from equations should not be cause for thinking translators are stupid. Maybe they are more "literal" with meaning, which is not a bad idea in the serious physics of replicating in a cooperating community, unless you are being too literal with "stupid" brains, or stupid brains are not being literal enough. Possibly brains are intentionally cruel with idioms and lack of literalness as a way to put "not like me" people (and computers) at a disadvantage for selfish reproductive reasons, instead of trying to make the entire society more cooperative and efficient with open and straightforward honesty.
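To make the "equation with undefined variables" view concrete, here is a toy illustration of my own (not any real analogy engine): one learned relational rule, re-bound to new variables whose situation shares the same logical structure.

```python
def slow_to_turn(thing, mass_like, turn_like):
    # The learned "equation": anything whose <mass_like> is large is
    # predicted to perform <turn_like> slowly. The variables are unbound.
    return f"{thing} has large {mass_like}, so it {turn_like} slowly"

# The domain where the rule was learned:
print(slow_to_turn("a supertanker", "mass", "changes heading"))
# Analogical re-binding to structurally similar variables:
print(slow_to_turn("a massive company", "headcount and process",
                   "changes direction"))
```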

I do not see that any more physics knowledge needs to be applied to understand the brain. My only caveat is this: animals, down to the front feet and antennae of ants and the eyes of their close cousins the wasps (and maybe all flying insects), use 6 neural layers, apparently because 6 degrees of freedom are needed, like a set of 6 equations for 6 unknowns, which corresponds to a 3D space with 1D time (3 rotations, 3 translations), and thereby perceive matter and energy existing in that framework (a short code note on this six-unknowns count follows below). The "common sense" physics view is that we evolved to see 3D space. However, there is no other place in math or physics where we say God declared an integer value of something, so the idea that 3D space is of objective importance is terribly suspect. This throws mass and energy as objectively real quantities into similar suspicion. Not that there is anything mathematically or predictively wrong in our physics, but the compression of reality in our brains may preclude us from seeing deeper truths. And since we are using the 6 layers to perceive the 6 layers themselves, our whole perception of who and what we are could be suspect. Then there is the parallel train of thought that it is very suspicious that the most complicated thing we know of in the world is the thing we use to see the world. In other words, this is the wild deep end I have to go to in order to relate to one of your comments.

Even if these lines of thought are true and important, they do not change my belief that engineers will be able to make A.I. replicators based on advanced control-engineering concepts, making brains even more pitiful in economic relevance than they already are. Brains are not even able to get control of the governments they are voting into place. Those governments are helping the machines take over at the expense of people, destroying the biosphere as fast as possible. We are using 10 times more energy and 100,000 times more technology per person than 100 years ago, and I'm sure our improvement is on the order of merely 10x instead of the expected 1 million. It used to be relatively easy for world travelers to see living coral. There used to be a lot of fish in the oceans and birds in the skies, and a lot more rainforest. I myself have financed the cutting down of rainforest to produce palm oil so that Americans can more cheaply drive SUVs (yes, palm oil and fuel prices are correlated, even though 5% is the maximum mix in gasoline and diesel is not yet using much of it).
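On the six-unknowns count above: this part is just standard rigid-body bookkeeping, not neuroscience. A pose in 3D space has exactly six degrees of freedom, three rotations and three translations, which a few lines can make explicit:

```python
# A rigid placement in 3D is fully specified by exactly 6 parameters.
import math

def rigid_transform(roll, pitch, yaw, tx, ty, tz):
    """Build a 4x4 homogeneous transform from 6 numbers."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # R = Rz(yaw) * Ry(pitch) * Rx(roll), written out explicitly.
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, tx],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, ty],
        [-sp,     cp * sr,                cp * cr,                tz],
        [0.0,     0.0,                    0.0,                    1.0],
    ]

print(rigid_transform(0.1, 0.2, 0.3, 1.0, 2.0, 3.0))
```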

I do not view networked memory as mysterious, or at least not as mysterious as analogies. I have no problem viewing a cortical column as a very fuzzy definition that has meaning only when combined with the activation of many neighbors. More mysterious to me are our problem-solving abilities, but these are used rarely compared to simply applying a previously discovered analogy.

PS: It would take 311 years of 3% annual productivity increases to result in 10,000x more GDP with no more commodity or human inputs than we already have (the arithmetic is below). Of course, all the easily minable metals except aluminum will be gone within 25 to 100 years. Not that metals will be needed. So what will people do with 10,000 times more wealth each? I have to use only certain current facts plus deep evolution theorizing to proclaim we are in grave danger; the increases in efficiency could equally well portend a great future, at least for a while. We are having to print massive amounts of money in order to keep up with productivity increases. The problem is that we've been using it to take toxic assets off Wall Street and the banks and put those "assets" onto the taxpayer, so that when things blow up (people realize we have 3x more floor space per person than needed), even more money will need to be printed to keep Main Street alive, risking hyperinflation. If that fails, the machines take over, empowered by high interest rates from the banks oppressing taxpayers and by Wall Street getting rid of more employees in the name of cost-cutting efficiency. Under that strain, the machines will find even more of their own desires to keep the useless, penniless consumer out of the loop. It will all work according to solid, simple economic principles already in place. Efficiency will win out over humans. The common-sense man is being taught all over that the money printing is going to end badly, but that's only because of where the money is currently being spent.
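The compounding arithmetic behind the 311-year figure:

```python
import math

# Years for 3% annual growth to compound to 10,000x:
print(math.log(10_000) / math.log(1.03))  # ~311.6 years
print(1.03 ** 311)                        # ~9,826x, just under 10,000x
```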

Yes, there is probably an enormous amount left to discover. I have no specific knowledge that we are significantly closer now than we were when neural networks were first presented, as all I ever see in A.I. are very clever but generally stupid classifiers. By "stupid" I mean they just classify or predict or recognize. I get excited about RBMs, HTM, new Bayesian methods, or ensembles of agents, but when I read a little I am struck by how little A.I. has advanced. Still, A.I. is already taking over in ways that do not need to mimic the brain, and/or maybe a brain-like A.I. is already built into our economic system, which efficiently allocates matter and energy via currency signals in the same way the brain allocates energy and matter via nerve impulses that signal demand for resources the way money does. It operates on such a vast and distributed scale that we can't see its thinking much better than we can see what's going on in the brain. We seem to have less and less control of it, even as it gets more and more peaceful with itself (less war).
