The first 25% of this book is good. The last 3/4 seemed too obvious. My eyes were glazing over so often at all the obvious text that I may have missed a few gems. It was like "if this obvious possibility occurs, then this obvious outcome will occur, or we can try these obvious workarounds".
Here are notes I made while reading it:
The first chapter has a good summary of achievement milestones in A.I., especially games; a good intuitive introduction to Bayesian networks; and mentions of unexpected solutions discovered, proving theorems from Principia Mathematica and others, checking circuit layouts, and stock market effects (the crash and the flash crash). It does some calculations to wildly estimate the solution space evolution has searched to come up with smart brains, and some wild estimations as to what it might take to replicate the procedure.
Bertrand Russell and John von Neumann advocated using nuclear weapons, or at least force, to prevent countries besides the U.S. from acquiring nuclear weapons.
A moderately good intelligence can acquire greater skills as needed. It only needs to be fast enough at discovering and implementing skills as they are needed. His example: generalized "geeky" learning skills should be able to acquire a specific skill like social skills, if and when needed. Conversely, social skills might enable access to resources that lead to acquiring geeky skills. Any increase in any type of intelligence will speed up the arrival of other and greater intelligence. For example, breeding better human brains will enable better programmers for A.I.
Mentions von Neumann probes (interstellar replicators).
He mentions that a Dyson sphere would be capable of capturing the roughly 10^26 watts our sun emits.
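A quick back-of-envelope check of that figure (a sketch; the solar-constant and Earth-orbit values are standard physics numbers, not from the book): a sphere enclosing the sun at Earth's orbital radius intercepts the sun's entire output.

```python
import math

# Solar constant: power per square meter arriving at Earth's distance.
SOLAR_CONSTANT_W_PER_M2 = 1361.0
AU_METERS = 1.496e11  # Earth-sun distance

# A Dyson sphere at 1 AU captures this flux over its whole surface area.
luminosity_watts = SOLAR_CONSTANT_W_PER_M2 * 4 * math.pi * AU_METERS**2

print(f"Solar output ~ {luminosity_watts:.1e} W")  # ~3.8e26 W, i.e. order 10^26
```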
He mentions the brain has a lot of dedicated circuits, which is relevant to the importance of computers solving algorithms very efficiently and thereby reducing the need for brains. Personal thought on this: in an economic system, this means brains are freed to do other things, or maybe they become oversupplied and thereby devalued. The "other" work people do must be more valuable in the sense of making the system more powerful at replacing other systems; otherwise, systems that opt not to employ or carry the burden of people will be more efficient and overtake the societies that still place value on people. This assumes there are no strong trade barriers to protect people-based societies, and that the machine-based societies are kind enough not to take them over for their resources (e.g. Europe and the U.S. extracting oil from Arab countries, metals from South America, and "free" labor from Asia).
We only need to create a hardware brain equal to a mouse's, and it should be capable of being expanded, or of improving itself, to whatever a larger piece of hardware is capable of, far beyond human brains.
He does not seem to be aware of length contraction in the direction of travel when considering space travel limitations. There is no hard limit for a traveler whose initial weight is low enough that he can pick up enough energy in his travels to keep accelerating: at 99.999% the speed of light, the next galaxy is about 223 times closer to him than when he began the acceleration, so a galaxy a million light years away contracts to under 4,500 light years in his frame. This applies even within the Hubble volume, which he mentions as the observable-universe limit on our expansion.
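The contraction factor here is just the Lorentz gamma; a minimal sketch of the arithmetic (the velocity and distance are the example values above):

```python
import math

beta = 0.99999  # speed as a fraction of c, from the example above
gamma = 1 / math.sqrt(1 - beta**2)  # Lorentz factor = contraction factor

distance_ly = 1_000_000              # a galaxy a million light years away
contracted_ly = distance_ly / gamma  # distance in the traveler's frame

print(f"gamma = {gamma:.1f}")  # ~223.6
print(f"contracted distance = {contracted_ly:.0f} light years")
```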
He says brains have a volume limit of 0.11 m^3 based on an 11 m/s (?) transmission speed, whereas speed-of-light computers could be the size of a dwarf planet. He's mixing apples and oranges. This is a hard limit only for synchronous computing, where a clock tells all circuits to change states and every part needs to "be on the same page." Parallel computing does not have this limit, and external memory is cached from the hard drive to overcome this limitation. Although local regions of a brain are "on the same page" to a degree in order to form concepts, it's not the same kind of hard limit as in typical synchronous computing.
email to the author, Nick:
You're mixing apples and oranges when saying brains have a size limit. This is a hard limit only for synchronous computing, where a clock tells all circuits to change states and every part needs to "be on the same page". Parallel computing does not have this limit, and even in synchronous computing, memory is cached from the hard drive to help alleviate it. Although local regions of a brain are "on the same page" to a degree in order to form concepts, it's not the same kind of hard limit as in typical synchronous computing.
Also, signals in standard wire communication travel at near-light speed; it's only the gates that slow things down. But the fact that electrons are about 40,000 times lighter than sodium and potassium ions does matter. Energy to move matter is a function of velocity^2, so it's a big effect, especially for heat. But I believe the fatty matter (myelin) surrounding nerves alleviates the need for ion channel movement over long nerve distances. Synapses require whole molecules, so they are a great bottleneck, just as transistors have capacitance requiring a lot of electrons to effect the switch, which was the bottleneck in terms of heat in computers and used to limit switching speed. They can't go to much higher switching speeds than about 10 GHz and remain synchronous, because speed of light / 10 GHz = 0.03 meters, getting uncomfortably close to the distance across a CPU, especially given the routing and capacitance delays.
To be clear, electrons and ions have to be moved only enough to transmit the signal, so there is a brief acceleration and deceleration (resulting in my v^2) when a pulse comes by, and the ions move orthogonal to the pulse direction rather than with it. Even more fundamental than the mass of the thing being moved (which results in slow speeds, which results in massively parallel computing in the brain) is that DNA can't smelt metals directly.
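That 0.03 m figure is easy to verify; a sketch of the synchronous-distance limit (one clock period of light travel, ignoring the routing and capacitance delays mentioned above):

```python
clock_hz = 10e9    # 10 GHz switching speed
c = 299_792_458.0  # speed of light, m/s

# Farthest apart two parts of a chip can be and still see the same
# clock edge within one period.
max_sync_distance_m = c / clock_hz

print(f"{max_sync_distance_m * 100:.1f} cm")  # ~3.0 cm
```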
We're up to 256 parallel wires in 4-core, 64-bit "serial" computing, and if we go to slower switching speeds (lower heat) and thereby 3D stacking (thanks to the lower heat), 1 million parallel wires seems feasible even using just 1 core.
=============
comment to migedy
How many people are needed to create an artificial intelligence that can expand itself uncontrollably, replacing the biosphere? None. We already have global free trade economics discovering the most efficient outcome and using the resulting wealth to control governments and people with the help of actors, lobbyists, lawyers, bankers, programmers, etc. Computers just continually decrease the number of people needed at each step, and if you take it to the limit, then you have zero people. I know you think that's absurd and maddening because a counterargument is very difficult, but I do not need to push the unavoidable math against people's feelings.
Replacing the biosphere has just been a side effect. The primary goal has been to get rid of expensive workers, and its success has been an astonishing 150-year run and is nearly complete. Someday, it will learn how to expand without all those inefficient desires of consumers. Those darned consumers have the audacity to desire things that are hard for machines to make by themselves, things that don't even help other machines to make more machines. How dare they! Evolution occurs at all scales, all the time, with or without intelligence or desire. Genes and memes are just the memory bits of the winning programs discovered, not the source of the intelligence. A.I. is not the threat. Basic evolution and biology's inefficiency are the danger. Threats and survivors do not need what we consider intelligence any more than viruses that have been around forever. You can call our world-wide economic machine intelligent and with desire if you want, even having desires outside of what humans want like destruction of the Amazon and coral, but it's not necessary.
Water-based evolution is doomed. The new genes are being cast in molten metals and metalloids (like silicon) and carbon that does not use or want H or O, except for long-term energy storage.