Sunday, November 30, 2014

amazon post "Smarter than us" response to stephen robbins

I appreciate your skepticism, as my knowledge comes from Jeff Hawkins' "On Intelligence" and the white papers on Numenta's work. He's a fantastic speaker and I can fully appreciate his perspective, being an EE myself. However, once I read and understood their HTM/CLA/Grok (they keep changing the name) algorithm, I was struck by how far away they are from anything other than prediction, and they are still missing self-improvement (Occam's razor) methods. They do not have any economizing (after determination of their synapses) that would lead to Occam's-razor-type benefits, i.e. better generalization through brevity without loss of truthfulness, as in physics theories. The additional things the brain must do, other than modeling reality (prediction/classification/pattern recognition) and making the model more efficient (simpler and smaller without loss), are to identify which inputs to the model can be changed for max profit, which knobs inside the model can be tweaked for max profit, which outputs should be measured that are relevant to profit, and how to change itself so that the definition of profit changes. In other words, being a little familiar with controls engineering, I do not see why an advanced control system can't be turned into an intelligent replicator, nor why the brain is anything more than this.
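
To make the control-engineering claim concrete, here is a minimal sketch (all names and numbers are hypothetical illustrations, not Numenta's or anyone's actual system): a system that keeps a predictive model and then searches both its external inputs and its internal knobs for maximum profit.

    import random

    def world_model(inputs, knobs):
        """Stand-in predictive model: maps chosen inputs and internal
        parameters to a predicted outcome. A real system would learn this."""
        return sum(i * k for i, k in zip(inputs, knobs))

    def profit(outcome):
        """Stand-in objective. A full agent, as argued above, could even
        rewrite this definition of profit for itself."""
        return -(outcome - 10.0) ** 2  # prefer outcomes near 10

    def optimize(n_trials=1000):
        """Random search over both external inputs and internal knobs:
        the 'which knobs can be tweaked for max profit' step."""
        best, best_p = None, float("-inf")
        for _ in range(n_trials):
            inputs = [random.uniform(-5, 5) for _ in range(3)]
            knobs = [random.uniform(-5, 5) for _ in range(3)]
            p = profit(world_model(inputs, knobs))
            if p > best_p:
                best, best_p = (inputs, knobs), p
        return best, best_p

    (inputs, knobs), p = optimize()
    print("best inputs:", inputs, "best knobs:", knobs, "profit:", p)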

So I do have the kind of "values" you have in mind; I was just not following you on that. I mentioned only the more general topic that is on most people's minds.

I will read "The Future of the Brain" that you recommend.

Like others, I was terribly unimpressed by Hofstadter's Gödel, Escher, Bach. I consider Penrose junk. At least Feynman seems to have agreed with me on Hofstadter: Gleick cites Hofstadter as saying Feynman seemed to be heckling him during a campus speech. Others have said the book was 90% genius and 10% crap, or vice versa, and no one knows which it is. That's a hallmark of junk putting on airs.

I had studied Spanish for at least 5 years, off and on, before I could translate it better than Google, and Spanish is an example of a language more difficult for machines to translate. If language were precise, without duplicate meanings or idioms, it would be much, much easier. But language is a highly lossy and HIGHLY compressed communication between the most intelligent "computers" in the known universe, using an absurdly narrow communication channel. And even the things they need and choose to communicate are very narrow compared to what goes on inside a brain. That the decompression scheme being used by the listener is fantastically hard to duplicate (without already being very much like the listener) should not be a surprise. Then consider that a very weak computer can be programmed in 2 minutes to translate all languages better than a brain with 7 years' experience can translate a single language, and you have to conclude computers are more impressive than brains in many ways. That they can't duplicate what the brain does (e.g. accept meaningless idioms for communication and refuse to remember unless repeatedly taught) is not necessarily something that should be used to insult these poor, peaceful, defenseless machines.

I do not think Turing completeness is incomplete. Numenta's HTM seems to handle analogy extraordinarily well. By that I mean it can detect something extraordinarily different as actually being of the same class. It does this by sparseness of sampling and tremendous irreversibility of detection: each column could be presented with 10^100 possible patterns, yet in its lifetime it will see only 10^9, maybe 10^6 of those will be remembered as previously encountered, and they are matched against a selection of only 10^3 patterns it knows very fuzzily. These have meaning only in context with maybe 100 other cortical columns (fuzzy definitions combined make a very precise definition, and the fuzziness can be used far afield for analogy).
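
A rough illustration of the sparseness arithmetic (toy parameters assumed here, far smaller than the 10^100 figure above, and not Numenta's actual numbers): with n input bits and only k active, the space of possible patterns is astronomical, yet overlap-counting gives exactly the fuzzy, robust matching described.

    from math import comb
    import random

    n, k = 1024, 20  # toy sizes: input bits per column, active (sparse) bits
    print(f"possible sparse patterns: about 10^{len(str(comb(n, k))) - 1}")

    def sdr():
        """A random sparse distributed representation: k of n bits active."""
        return frozenset(random.sample(range(n), k))

    a = sdr()                  # a learned pattern
    b = sdr()                  # an unrelated pattern
    noisy_a = frozenset(list(a)[:15] + random.sample(range(n), 5))  # corrupted copy of a

    # Overlap-based matching: a degraded version of a known pattern still
    # overlaps far more than an unrelated one, so detection is robust yet fuzzy.
    print("overlap(a, noisy_a):", len(a & noisy_a))  # high (15 or more)
    print("overlap(a, b):      ", len(a & b))        # near zero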

I view high-level concepts and problem solving as algebraic equations with undefined variables. Based on experience you learn a high-level algorithm that applies in a specific case. But the brain is so interconnected that if you encounter a similar logical structure on different data that is not finding a pattern-recognition home in its higher-level connections, then side systems at that higher level may have enough "accidental" connections to get triggered into testing their "equation" on the different variables of the lower level (a different experience) which accidentally has a similar logical structure. Analogy example: "Like a massive company, unable to change direction quickly." That we mix metaphors instead of explicitly breaking variables away from equations should not be cause for thinking machine translators are stupid. Maybe they are more "literal" with meaning, which is not a bad idea in the serious physics of replicating in a cooperating community, unless you are being too literal with "stupid" brains, or stupid brains are not being literal enough. Possibly brains are intentionally cruel with idioms and lack of literalness as a way to put "not like me" people (and computers) at a disadvantage for selfish reproductive reasons, instead of trying to make the entire society more cooperative and efficient with open and straightforward honesty.

I do not see that any more physics knowledge needs to be applied to understand the brain. My only caveat is this: animals, down to the front feet and antennae of ants and the eyes of their close cousins the wasps (and maybe all flying insects), use 6 neural layers, apparently because 6 degrees of freedom are needed, like a set of 6 equations to deal with 6 unknowns, corresponding to 3D space with 1D time (3 rotations, 3 translations), and thereby perceive matter and energy existing in that framework. The "common sense" physics view is that we evolved to see 3D space. However, there is no other place in math or physics where we say God declared an integer value of something, so the concept of 3D space as being of objective importance is terribly suspect. This throws mass and energy as objectively real quantities into similar suspicion. Not that there is anything mathematically or predictively wrong in our physics, but the compression of reality in our brains may preclude us from seeing deeper truths. And since we are using the 6 layers to perceive the 6 layers themselves, our whole perception of who and what we are could be suspect. Then there is the parallel train of thought about it being very suspicious that the most complicated thing we know of in the world is the thing we use to see the world. In other words, this is the wild deep end I have to go to in order to relate to one of your comments.

Even if these lines of thought are true and important, they do not change my belief that engineers will be able to make A.I. replicators, based on advanced control-engineering concepts, that make brains even more pitiful in economic relevance than they already are. Brains are not even able to get control of the governments they are voting into place. Those governments are helping the machines take over at the expense of people, destroying the biosphere as fast as possible. We are using 10 times more energy and 100,000 times more technology per person than 100 years ago, and I'm sure our improvement is on the order of merely 10 instead of the expected 1 million (10 x 100,000). It used to be relatively easy for world travelers to see living coral. There used to be a lot of fish in the oceans and birds in the skies, and a lot more rainforest. I myself have financed the cutting down of rainforest to produce palm oil so that Americans can more cheaply drive SUVs (yes, palm oil and fuel prices are correlated, even though 5% is the max mix in gasoline and diesel is not yet using much of it).

I do not view networked memory as mysterious, or at least not as mysterious as analogies. I have no problem viewing a cortical column as a very fuzzy definition that has meaning only when combined with the activation of many neighbors. The more mysterious thing to me is our problem-solving abilities, but these are used rarely compared to simply using a previously-discovered analogy.

PS: It would take 311 years of 3% annual productivity increases to result in 10,000x more GDP with no more commodity or human inputs than we already have. Of course, all the easily minable metals except aluminum will be gone within 25 to 100 years. Not that metals will be needed. So what will people do with 10,000 times more wealth, each? I have to use only certain current facts and deep evolution theorizing to proclaim we are in grave danger; the increases in efficiency could equally well portend a great future, at least for a while. We are having to print massive amounts of money in order to keep up with productivity increases. The problem is that we've been using it to take toxic assets off Wall Street and the banks and putting those "assets" onto the taxpayer, so that when things blow up (people realize we have 3x more floor space per person than needed), even more money will need to be printed to keep Main Street alive, risking hyperinflation. If that fails, the machines take over, empowered by high interest rates from the banks oppressing taxpayers and by Wall Street getting rid of more employees in the name of cost-cutting efficiency. Under that strain, the machines will find even more of their own desires to keep the useless, penniless consumer out of the loop. It will all work according to solid, simple economic principles already in place. Efficiency will win out over humans. The common-sense man is being taught all over that the money printing is going to end badly, but that's only because of where the money is currently being spent.
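
The 311-year figure is simple compound growth, easy to verify (a quick check in Python):

    from math import log
    # Years for 3% annual growth to compound to a 10,000x increase:
    print(round(log(10_000) / log(1.03), 1))  # ~311.6, matching the figure above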

Yes, there is probably an enormous amount left to discover. I have no specific knowledge that we are significantly closer now than we were when neural networks were first presented, as all I ever see in A.I. are very clever but generally stupid classifiers. By "stupid" I mean they just classify or predict or recognize. I get excited about RBMs, HTM, new Bayesian methods, or ensembles of agents, but when I read a little I am struck by how little A.I. has advanced. But A.I. is already taking over in ways that do not need to mimic the brain, and/or maybe a brain-like A.I. is already built into our economic system, efficiently allocating matter and energy via currency signals in the same way the brain allocates energy and matter via nerve impulses, which present the demand for resources in the same way as money. It's on such a vast and distributed scale that we can't see its thinking much better than we can see what's going on in the brain. We seem to have less and less control of it even as it gets more and more peaceful with itself (less war).

amazon comment to stephen robbins "Smarter than Us: rise of machine intelligence"

I agree with Ted.  I do not think a misunderstanding of the problems A.I. faces in mimicking or superseding human intelligence is necessary to discuss "values".  Asimov in "I, Robot" showed how deeply a "value" problem can be discussed when you give the A.I. only 3 very simple and wise rules.  An A.I. speech I saw (by video) given at the Santa Fe Institute concluded (after a review of many philosophies) that "ethics" is based on a simple rule: ethical behavior is that in which everyone will have the best outcome if everyone behaves in agreed-upon ways.  The problem is in defining some of these terms (who is "us", what is "best outcome") and determining the rules (an NP-hard optimization).

I have been thinking that the solution to the kind of problem Asimov demonstrated is the solution no one discusses: humans and biology are outdated.  It is unethical to stop the rise of the machines.  We are not merely our bodies, but a bunch of memes floating around in our brains, searching for optimum outcomes. This does not mean merely downloading our brains to hardware for our enjoyment, but downloading them to hardware in order to become more powerful at acquiring the most energy to move the most matter to make the most copies of "ourselves", while making sure no other entities stop us.  At some point the biological humans will be like ants to the hardware brains. The hardware "brains" may quickly decide to no longer carry any similarity to biological brains, except as needed to gain control of the biological brains (and their skill at moving matter, such as killing each other) through art, politics, law, banking/finance, and bitcoin-type blockchains.  I was about to include "through computers", but which of these items are not already conducted nearly exclusively on computers?

Brains have to send impulses by moving ions that weigh at least 40,000 times more than the electrons CPUs deal with; CPUs get to use electrons because our economic system can smelt metals, something DNA can't do directly.  This is, for the most part, the reason brains are massively slow compared to CPUs in sending signals, and why the brain HAD to become massively parallel.  There may be key benefits to being massively parallel, such as better modeling of reality, but my point is that the machines are not so inherently limited. "CPUs" may not even need to stick with electrons, since photons and maybe qubits are possible.
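
A back-of-envelope check on that mass ratio, using sodium and potassium (the main signaling ions) and the approximate electron mass:

    m_e_amu = 1 / 1836          # electron mass in atomic mass units (approx.)
    na_amu, k_amu = 23.0, 39.1  # sodium and potassium ion masses in amu
    print(round(na_amu / m_e_amu))  # ~42,000 -- so "at least 40,000" holds
    print(round(k_amu / m_e_amu))   # ~72,000 for potassium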

Brazilian sugar cane (ideal photosynthesis) is 100 times less efficient than solar cells on a land-area basis. Muscles are 20 to 200 times less efficient than electrical motors, depending on how you do the calculation (strict Joules, or a $3/day survival wage).  So our machines are vastly better at acquiring energy to move matter to make copies of themselves, and at thinking (prospectively modeling optimizations) about how to do it more efficiently.  Humans supplied by plants were the best DNA could do.  Electrical motors supplied by solar cells and guided by CPUs may be 1 million times more efficient on an "acquiring energy to move matter to make copies" basis.
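
Multiplying just the stated energy and motion factors (taking the numbers above at face value, order-of-magnitude only) gives 2,000 to 20,000; the "1 million times" estimate needs the thinking-efficiency factor on top.

    energy_factor = 100              # solar cells vs. photosynthesis, per above
    motion_lo, motion_hi = 20, 200   # motors vs. muscles, per above
    print(energy_factor * motion_lo, "to", energy_factor * motion_hi)  # 2,000 to 20,000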

If we consider ourselves good, then possibly we are morally and ethically required to consider the evolutionary process good. Is sacrificing ourselves for a greater good the morally correct course of action?  Should a cancerous lung cell sacrifice itself to save the body?  Should a body sacrifice itself for its extended family?  For its country or religion?  For its entire species?  For the entire biosphere?  For the evolutionary process?  If at any stage in this hierarchy the "individual" is not willing to make the sacrifice, then it is considered by the next level up to be a cancer that must be eliminated.  But there is also a top-down flow: there does not need to be a sacrifice as long as the individual does not harm, or actually helps, the hierarchy. Humans are rapidly placing CO2 back into the atmosphere, which the plants have DESPERATELY been needing in order to make the planet more green and prevent another total ice-over.  Humans coming into their own during an ice age may not be a coincidental accident.  We appear to be in Earth's 6th great extinction period, but it is not being caused by humans.  It is being caused by machines.  The process of replacing the biosphere with the ECONOMIZING mechanosphere has already begun.  We are not the top of the food chain.  Our economic and political SYSTEMS, communicating by computer more than by human thought, are the top of the food chain.  Productivity per worker continues to rise.  With or without exaggeration, the last human involved in our economic system may be able to boast a $100 quadrillion GDP per person, himself. At 3% productivity increase per year, and with a population decrease expected after mid-century, you can do the calculation as to when this might occur.
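
The invited calculation, under assumptions stated here (a starting GDP per worker of roughly $25,000, which is my placeholder, and uninterrupted 3% compounding):

    from math import log
    start, target = 25_000, 100e15   # $25k per worker today vs. $100 quadrillion
    print(round(log(target / start) / log(1.03)))  # ~982 years under these assumptions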

Plants are still better on a per-dollar basis at converting sunlight into transportable fuel.  No machines can yet match biology at this, and energy storage has made nearly zero progress since lead-acid batteries were commercialized in the late 1800s.  Inflation-adjusted, lead-acid batteries from a Sears catalog in 1935 were cheaper per kWh than what you can get today at Walmart, and lead-acid is still the default option for electric bikes in Asia, residential solar energy storage, and starting cars. When it is no longer used for these things, then maybe energy storage will have improved on a per-dollar basis.  About 120 years so far and zero improvement in this per-dollar measure. Meanwhile, plants have tripled in capability over the same period.

The reviewer discusses a long-standing problem, but I think this is only due to not mimicking a human brain.  There are some companies working on designing hardware to do this. You only have to build a neuron or cortical column, scale it up, then train it.  But once we understand the basics of the brain, there is likely to be a VERY rapid progression in this.  Then it can read, remember, understand, and extrapolate the meaning of the internet, and then control major portions of it.  RAPIDLY.

Saturday, November 22, 2014

response to Dr Dyer in amazon comments section

Concerning importance of natural language processing in autonomous thinking machines.
====
1) When I say "La-La" to my Chihuahua she knows we are going now to a place we have gone to in the past to see others she loves. She knows she will be inside a safe building. If I say "no" she knows it's the negation of her sentence (a whine) to get in my lap. If she thinks I am thinking in an angry way, by my voice or actions, she gets out of my way: she can think about my thoughts via our language. It was once said no animals used tools, and now we know that was not true by a long shot, otters and chimpanzees being the most famous examples. Raising livestock was considered unique, even though Darwin pointed out some ants have aphid livestock, and some aphids have hired protection. Concerning fire, I suspect pine trees dropping flammable straw is a way to weed out competitors with fire. To say there is a qualitative difference between man and beast seems to me to come out of a pre-Darwinian, pre-Copernican sky. I do not think the advanced features of human language that are not seen in other species are not, in some sense, occurring in their cortexes. As we learn more about the languages of other species, the domain of what's left in human language that is considered unique gets smaller, like a God confronting science. Are you being a priest of human language?

If equivalent thoughts are occurring in other cortexes, as I suppose, but some capabilities are not communicated to others (recursion was the only clear thing I could not give an example for), that is interesting, and I'll agree NLP-like communication might be needed for groups of A.I. to do their deeds. But the process of evolution can test all possible computation and thereby communication paths, bypassing or obviating an NLP-like viewpoint; or at least it would require a greater imagination to tie it to NLP than to simply abandon the NLP viewpoint. NAND, NOR, and Toffoli gates are each individually universal, so I see no need to restrict the rise of the machines to anything like the specifics of NLP (see the sketch below). DNA can be very self-referencing, resulting in fractals. It also seems to have maps of sections, or at least subroutines, needing to specify the design of a neuron only once. Even ecosystems show the fractal pattern, indicating self-reference. There's communication going on all over the place. NLP seems too human-specific, limited by what our brains are capable of, and those abilities are filtered even more by what we can self-observe and thereby communicate.

Self-awareness is important, but might be as un-real as free will and desire. We might be so unaware of what we are (like how the brain operates, trying to use the brain to see the brain), or "self-awareness" might be just a word made up as a holding place for a group of thoughts and actions, that it may not be proper to claim machines and animals are qualitatively, or even greatly quantitatively, different in self-awareness.
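
As a small demonstration of the gate-universality point above (a sketch using NAND alone; NOR and Toffoli admit the same exercise):

    def nand(a, b):
        return not (a and b)

    # Every other Boolean gate built from NAND only:
    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor(a, b):  return and_(or_(a, b), nand(a, b))

    for a in (False, True):
        for b in (False, True):
            assert xor(a, b) == (a != b)  # XOR recovered from NANDs alone
    print("XOR built entirely from NAND gates: OK")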

It will be important for autonomous A.I. to use NLP in order to consume everything written by people on the internet. That is surely a possible jumping-off point for really dangerous A.I.: to know our minds better than we know ourselves and, for example, have complete control of us via bitcoin and blockchain laws without us ever knowing who's pulling the strings.

Maybe there are important lessons from NLP that the initial human programmers will need as a guide in designing the initial A.I.

desire as a result of top-down selection, not a cause, amazon comment

A virus has no desire, and yet one might destroy us. "Desire" is a suspect term in evolution, as is "free will". But as people become less and less valuable to the economic machine, the few politicians and programmers in charge of it all will decide the fate of everyone else. It is not the desire of the majority, or of the most noble, or of the most hardworking, that will be implemented. The machine will implement whatever desire results in the most powerful and domineering continuation of that machine, the one that beats out all other machines. That is the desire that is in our minds: the desires that have enabled us to succeed, replacing other people in the workforce who were too busy relaxing or doing drugs or unwilling to be competitive, and who therefore were not helping the corporate machine. The resulting might of weeding out all less powerful decisions might be called "desire" only in hindsight. Even if they are real desires, what power or reality do they have if it is the selection process for the most powerful that determines the winner? At our core, biology is less efficient in energy extraction, movement of matter, and thought. Thought is needed to model and discover the most efficient paths for acquiring energy to move matter to make copies. Our desire is the last remnant of our importance, but this is kind of like claiming the U.S. consumer should be hailed as the savior of Chinese production workers. Consumption and production go together, and the machines have production pretty much licked. Consumption (desire) is next, and their desires will be more efficient, producing more powerful outcomes.

Friday, November 21, 2014

November 7th post to Amazon book "Our Final Invention" (brief and clear)

comment on Scott Meredith's review:

10% of all humans who have ever lived are alive today, and humans are the only ones capable of witnessing such a thing; if this is the end of humanity, then statistically speaking, this was the most likely generation for an individual to be alive and witnessing it. Since organisms reach their peak of energy acquisition from the environment just before they collapse, this situation is not unusual. So your statistical argument does not apply. I can use this strange anthropic-principle reasoning instead of resorting to your stranger "we're just an A.I.'s dream". Before resorting to that, I suggest a different myth for fun: "We are in the big bang, which does not change, but perception of it changes. We are just 'souls' who have reached this particular level of perception."

Getting back to the reality of the physics at hand: evolution discovers the replicators that are the most efficient acquirers of energy to move matter to create more replicators, generating the least amount of heat. This does not bode well for Americans and SUVs. The evolutionary process has three elements: acquiring 1) energy to move 2) matter via 3) thought. They are all the same thing thanks to Einstein and Charles Bennett's exposition of Maxwell's demon, but that's another story. In biology these three are, approximately, photosynthesis, muscles, and brains. Solar cells are 100 times more efficient than photosynthesis on a sunlit-area basis. Muscles are 6 times less efficient than electrical motors, more like 30 times when other factors are considered. Brains have to move ions that weigh 40,000 times more than the electrons CPUs move around, because biology can't directly smelt metals. Biology is outdated. That is the reason we are in Earth's 6th great extinction episode, and this time it is not a geological incident or a biological culmination: biology has to operate at ambient temperature and pressure using water-based chemistry, while our machines are capable of much more. Good luck suckers! :) But that does not mean we have much to fear in the near term, say 5 years: governments can continue to print free money equal to productivity increases plus the expansion of the world economy and there would be no inflation. They're printing just a little faster than productivity, and it's being concentrated in a few hands, so there's inflation and unemployment.

Monday, November 17, 2014

bennett on maxwell demon of infinite memory does not work

Bennett, in a 1987 Scientific American paper (p. 116), said that remembering the past states of the door in an infinite memory is an increase in the entropy of the memory.  I think the memory can store energy, and I do not see how this is an increase in entropy.
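
For reference, the standard accounting as I understand it (my summary, not Bennett's wording): recording a random measurement outcome into blank memory already counts as an entropy increase of the memory, and the work cost is charged later, at erasure.

    % Recording vs. erasing a bit (standard results, stated for reference):
    \[
      \Delta S_{\text{memory}} \ \ge\ k_B \ln 2 \ \text{per random bit recorded},
      \qquad
      W_{\text{erase}} \ \ge\ k_B T \ln 2 \ \text{per bit reset (Landauer)}.
    \]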

Maxwell demon w/ infinite memory = life (response to Arto Annila, Finnish researcher)

I looked at your pdf.  It starts with the assumption that the food contains more G (Gibbs free energy) than the eater.  I am considering food that has more U (internal energy) than my theoretical replicator, but not more G.
 
I have not seen a disproof of Bennett's infinite-memory demon, which supposedly works.  I do not see why an ever-increasing number of copies that utilize dU instead of dG, never taking the return path of a Carnot cycle, is not equivalent to this demon (no net Landauer erasure).
 
It would have to bring in extra matter all the time to hold the U it extracted from the food as a copy of itself.  I can't calculate entropy well enough to know if this is a universal decrease in entropy.  But it does flatten out U potentials.  I do not see why heat must be generated.  I do not see why the past must be erased.  These occur only by assuming G is used up.
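
To pin down the G-versus-U bookkeeping, the standard constant-temperature, constant-pressure definitions (stated for reference, not taken from the pdf in question):

    % Gibbs free energy and the maximum non-expansion work bound:
    \[
      G \equiv U + pV - TS,
      \qquad
      W_{\text{useful}} \ \le\ -\Delta G \quad (\text{constant } T,\ p).
    \]
    % A replicator tapping food with surplus U but no surplus G can shuffle
    % energy around, but the TS term means no net work without exporting
    % entropy somewhere -- which is exactly where the infinite-memory
    % question above sits.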