I agree with Ted. I do not think a full understanding of the problems A.I. faces in mimicking or superseding human intelligence is necessary before we can discuss "values". Asimov in "I, Robot" showed how deeply a "value" problem can be explored when the A.I. is given only 3 very simple and wise rules. An A.I. talk I watched (by video), given at the Santa Fe Institute, concluded (after a review of many philosophies) that "ethics" reduces to a simple rule: ethical behavior is behavior under which everyone gets the best outcome if everyone acts in the agreed-upon ways. The problem is in defining some of these terms (who is "us"? what is "best outcome"?) and in determining the rules (an NP-hard optimization).
I have been thinking that the solution to the kind of problem Asimov demonstrated is the one no one discusses: humans and biology are outdated. It is unethical to stop the rise of the machines. We are not merely our bodies, but a bunch of memes floating around in our brains, searching for optimal outcomes. This means not merely downloading our brains to hardware for our enjoyment, but downloading them in order to become more powerful at acquiring the most energy to move the most matter to make the most copies of "ourselves", while making sure no other entities stop us. At some point the biological humans will be like ants to the hardware brains. The hardware "brains" may quickly decide to carry no similarity to biological brains except what is needed to gain control of the biological brains (and their skill at moving matter, such as killing each other) through art, politics, law, banking/finance, and bitcoin-type blockchains. I was about to add "through computers", but which of these is not already conducted almost exclusively on computers?
Brains have to send impulses by moving ions that weigh at least 40,000 times more than the electrons a CPU pushes around. This is because our economic system can smelt metals, something DNA can't do directly. This is, for the most part, why brains are massively slow compared to CPUs at sending signals, and why the brain HAD to become massively parallel. There may be other good reasons for massive parallelism, such as better modeling of reality, but my point is that the machines are not so inherently limited. "CPUs" may not even need to stick with electrons, since photons and maybe qubits are possible.
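The "at least 40,000 times" figure can be sanity-checked with a quick calculation. This is a sketch of my own, assuming sodium, the lightest of the brain's principal signaling ions (Na+, K+, Ca2+, Cl-):

```python
# Rough check of the ion-vs-electron mass ratio mentioned above.
# Assumption: sodium (Na+), atomic mass ~22.99 u.
ATOMIC_MASS_UNIT_IN_ELECTRON_MASSES = 1822.89  # 1 u is about 1822.89 electron masses
sodium_mass_u = 22.99

# How many electron masses one sodium ion weighs
ratio = sodium_mass_u * ATOMIC_MASS_UNIT_IN_ELECTRON_MASSES
print(f"A sodium ion weighs about {ratio:,.0f} electron masses")
```

So the "at least" is justified: sodium gives roughly 42,000, and heavier ions such as potassium (~39 u) push the ratio above 70,000.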
Brazilian sugar cane (close to ideal photosynthesis) is 100 times less efficient than solar cells on an Earth-surface-area basis. Muscles are 20 to 200 times less efficient than electric motors, depending on how you do the calculation (strict joules, or a $3/day survival wage). So our machines are vastly better at acquiring energy to move matter to make copies of themselves, and at thinking (prospectively modeling optimizations) about how to do it more efficiently. Humans supplied by plants were the best DNA could do. Electric motors supplied by solar cells and guided by CPUs may be 1 million times more efficient on an "acquiring energy to move matter to make copies" basis.
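Multiplying the per-stage factors gives a rough sense of the combined advantage. This is a back-of-envelope sketch using only the 100x and 20-200x figures from the paragraph above; the endpoint depends entirely on which end of the muscle range you pick:

```python
# Combined machine-over-biology advantage, multiplying independent stages.
solar_over_photosynthesis = 100   # solar cells vs. sugar cane, per unit land area
motor_over_muscle_low = 20        # strict-joules comparison
motor_over_muscle_high = 200      # $3/day survival-wage comparison

combined_low = solar_over_photosynthesis * motor_over_muscle_low
combined_high = solar_over_photosynthesis * motor_over_muscle_high
print(f"combined advantage: {combined_low:,}x to {combined_high:,}x")
```

Note that reaching the "1 million times" figure would require roughly another 50x from the CPU-versus-brain modeling stage, which the paragraph asserts but does not quantify.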
If we consider ourselves good, then possibly we are morally and ethically required to consider the evolutionary process good. Is sacrificing ourselves for a greater good the morally correct course of action? Should a cancerous lung cell sacrifice itself to save the body? Should a body sacrifice itself for its extended family? For its country or religion? For its entire species? For the entire biosphere? For the evolutionary process? If at any stage in this hierarchy the "individual" is not willing to make the sacrifice, then the next level up considers it a cancer that must be eliminated. But there is also a top-down flow: no sacrifice is demanded as long as the individual does not harm, or actually helps, the hierarchy. Humans are rapidly placing CO2 back into the atmosphere, which the plants have DESPERATELY been needing in order to make the planet greener and prevent another total ice-over. Humans coming into their own during an ice age may not be a coincidental accident.

We appear to be in Earth's 6th great extinction period, but it is not being caused by humans. It is being caused by machines. The process of replacing the biosphere with the ECONOMIZING mechanosphere has already begun. We are not the top of the food chain. Our economic and political SYSTEMS, communicating by computer more than by human thought, are the top of the food chain. Productivity per worker continues to rise. With only slight exaggeration, the last human involved in our economic system may be able to boast a $100 quadrillion GDP per person, himself. At a 3% annual increase in productivity, and with a population decrease expected after mid-century, you can do the calculation as to when this might occur.
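The closing calculation can actually be done. This is a sketch under assumptions of my own: a starting GDP of roughly $100,000 per worker, steady 3% compounding, and population dynamics ignored:

```python
import math

# Years of 3% annual productivity growth needed to take GDP per worker
# from an assumed $100,000 today to the $100 quadrillion figure above.
start = 1e5       # assumed current GDP per worker, $100,000
target = 100e15   # $100 quadrillion per person
growth = 0.03     # 3% annual productivity growth

years = math.ceil(math.log(target / start) / math.log(1 + growth))
print(f"roughly {years} years of compounding at 3%")
```

Under these assumptions the crossover is many centuries out; a shrinking workforce concentrating output on fewer people would be what pulls it closer.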
Plants are still better on a per-dollar basis at converting sunlight into transportable fuel. No machine can yet match biology at this, and energy storage has seen nearly zero progress since lead-acid batteries became commonplace over a century ago. Inflation-adjusted, lead-acid batteries from a 1935 Sears catalog were cheaper per kWh than what you can get today at Walmart, and lead-acid is still the default option for electric bikes in Asia, residential solar energy storage, and starting cars. When it is no longer used for these things, then maybe energy storage will have improved on a per-dollar basis. Over a century so far, and zero improvement by this measure. Meanwhile, plants have tripled their capability in the same period.
The reviewer discusses a long-standing problem, but I think it persists only because we are not yet mimicking a human brain. Some companies are working on designing hardware to do exactly this. You only have to build a neuron or a cortical column, scale it up, then train it. Once we understand the basics of the brain, there is likely to be a VERY rapid progression. Then it can read, remember, understand, and extrapolate the meaning of the internet, and then control major portions of it. RAPIDLY.