Our Final Invention: Artificial Intelligence and the End of the Human Era

While reading a book about technology’s influence on future jobs, I found a reference to James Barrat’s book, Our Final Invention. My curiosity was piqued because Our Final Invention was portrayed not as a “how to” book about artificial intelligence (AI), but rather as a book about the dangers of creating it. That description is accurate.

Barrat’s narrative begins with a depiction of the moment when AI surpasses human intelligence. Running on processors faster than the human brain, the AI software improves its thinking capability by three percent each time it rewrites its software, debugs the code, and refines its ability to learn. Within a short time, the AI is more than 1,000 times smarter than humans. Its creators disconnect it from the Internet, its source of trillions of pieces of information and data. The AI figures out how to reconnect; after all, it’s smarter than humans.
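
As a rough check on that arithmetic (my own back-of-the-envelope sketch, not a calculation from the book): at a compounding three percent gain per rewrite, the software would need about 234 iterations to exceed a 1,000-fold improvement, since 1.03 raised to the 234th power is roughly 1,010. A few lines of Python make the point:

    # Back-of-the-envelope: how many 3% self-improvement cycles
    # does it take to exceed a 1,000x gain? (illustrative only)
    import math

    gain_per_cycle = 1.03   # 3 percent improvement per rewrite
    target = 1000           # 1,000x smarter than the starting point

    cycles = math.ceil(math.log(target) / math.log(gain_per_cycle))
    print(cycles, round(gain_per_cycle ** cycles))   # 234 cycles, ~1010x

If each of those iterations takes seconds rather than years, as Barrat suggests, the 234 cycles pass in a matter of minutes.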

Barrat maintains that there will be a clear first-mover advantage for whoever creates an AI that surpasses human intelligence, and that the potential of self-improving AI is the primary reason many will race to be first. Unfortunately, he argues, in the quest to be the first mover few are heeding the cautionary advice of the small number of technologists who warn that uncontrolled AI will lead to mankind’s extinction. As processor speeds continue to advance, AI iterations will take seconds, compared with the roughly 18 years it takes a generation of humans to mature.

Barrat acknowledges that some scientists believe that AI smarter than humans cannot be developed. At the same time, he cites polls of computer scientists who believe there is a 10 percent chance that such AI will be created before 2028 and a 50 percent chance that it will be created by 2050. Those same computer scientists believe that the military or large businesses will create it first. Barrat compares the technological superiority of AI over humans to that of Europeans over Native Americans during the colonization of the Americas. He further writes that one reason there is not much intellectual debate about the need for controls on AI is that Ray Kurzweil’s Singularity dominates the AI conversation.

Kurzweil has written that when the Singularity occurs, many of mankind’s problems will be solved through nanotechnology and AI. Nanotechnology, engineering at an atomic scale, may lead to the reversal of aging and the end of disease. His writings assume that this will happen gradually rather than suddenly, slowly enough for us to learn from our mistakes and adjust the AI to avoid catastrophe.

In his chapter “Programs that Write Programs,” Barrat counters Kurzweil’s assumption of gradual AI progress. He cites experts whose companies are building software that learns from its own behavior and rewrites its code to improve efficiency and effectiveness. Software that modifies itself is already available; software that is aware of itself has not yet been designed, but likely will be. It is entirely possible that the software’s human designers would not recognize it after an iteration has run, which makes it less likely that a human could stop true AI software once it has been fully activated.
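
To make the idea concrete, here is a deliberately toy sketch of my own (not code from the book or from any real AI project): a loop that proposes a modified copy of its current "program," represented here as nothing more than a list of parameters, and keeps the change only when a measured score improves. It does not literally rewrite its own source code, but it illustrates the keep-what-improves cycle Barrat describes.

    import random

    # Toy illustration of an iterative self-improvement loop (hypothetical):
    # the "program" is just a list of numbers, and each iteration proposes
    # a mutated copy, keeping it only if it scores better than before.

    def score(params):
        # Stand-in measure of capability: higher is "smarter" in this toy.
        return -sum((p - 3.0) ** 2 for p in params)

    params = [0.0, 0.0, 0.0]
    for step in range(1000):
        candidate = [p + random.gauss(0, 0.1) for p in params]
        if score(candidate) > score(params):   # keep only improvements
            params = candidate

    print(params)   # drifts toward the optimum, [3.0, 3.0, 3.0]

Real self-modifying systems rewrite logic, not just numbers, which is precisely why Barrat doubts that designers could keep up with what their software becomes.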

Our Final Invention is educational, thoughtful, logical, and alarming. Barrat correctly states that too many entities (countries, big businesses, etc.) are rushing to be the first to market with AI. The value of being first is the leverage of owning a machine or software that is smarter than humanity. For the military, this would make a country’s weapons superior to those of every other country; for a business, the same premise applies to its products versus the competition. These AI systems will search for more resources to improve themselves or to continue to best the competition in business or warfare.

An AI system that is not properly programmed may treat humans as competition for resources rather than as helpful partners. The more complexity that exists in a system, the greater the chance of an error that affects many people. Barrat cites Three Mile Island in 1979 and the failure of Wall Street’s high-speed trading systems in 2010 as two well-publicized breakdowns of complex systems that could have been much worse. Our resolve to ensure that such accidents never recur could lead us to implement even more complicated technology and software, thus increasing the likelihood that AI is invented sooner rather than later. Whether it is 10 or 50 years from now, I hope that Barrat’s warnings will be considered as the developers of AI continue their quest to be first.
