Our Final Invention: Artificial Intelligence and the End of the Human Era

While reading a book about technology’s influence on future jobs, I came across a reference to James Barrat’s book, Our Final Invention. My curiosity was piqued because Our Final Invention was portrayed not as a “how to” book about artificial intelligence (AI), but rather as a book about the dangers of creating it. That description is accurate.

Barrat’s narrative begins with a depiction of the moment when AI surpasses human intelligence. Running on processors faster than human brains, the AI improves its thinking capability by three percent each time it rewrites its software, debugs the code, and improves its ability to learn. Because those gains compound, the AI is more than 1,000 times smarter than humans within a short time. Its creators disconnect it from the Internet, its source of trillions of pieces of information and data, but the AI figures out how to reconnect; after all, it’s smarter than humans.
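The compounding in that scenario is easy to check with a few lines of arithmetic. A minimal sketch (my own illustration, not from the book), assuming a flat three percent gain per rewrite:

```python
import math

# If each rewrite improves capability by 3%, the gains compound:
# after n rewrites the system is 1.03**n times its starting capability.
GAIN_PER_REWRITE = 1.03
TARGET = 1000  # "more than 1,000 times smarter"

# Smallest n with 1.03**n >= 1000, via logarithms.
rewrites_needed = math.ceil(math.log(TARGET) / math.log(GAIN_PER_REWRITE))

print(rewrites_needed)                      # 234 rewrites
print(GAIN_PER_REWRITE ** rewrites_needed)  # ~1009x, past the 1,000x mark
```

If each rewrite takes seconds rather than years, those 234 iterations pass in minutes, which is the heart of Barrat’s "intelligence explosion" worry.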

Barrat maintains that there will be a clear first-mover advantage for whoever creates an AI that surpasses human intelligence, and that the potential of self-improving AI is the primary reason many will race to be first. Unfortunately, he argues, in the quest to be the first mover few are heeding the cautionary advice of a small number of technologists who are concerned that uncontrolled AI will lead to mankind’s extinction. As processor speeds continue to advance, AI iterations will take seconds, compared to the roughly 18 years it takes a generation of humans to mature.

Barrat acknowledges that some scientists believe AI smarter than humans cannot be developed. At the same time, he cites polls of computer scientists who believe there’s a 10 percent chance that such AI will be created before 2028 and a 50 percent chance that it will be created by 2050. Those same computer scientists believe that the military or large businesses will create it first. Barrat compares the technological superiority of AI over humans to that of Europeans over Native Americans when the Americas were colonized. He further writes that one reason there is not much intellectual debate about needing controls for AI is that Ray Kurzweil’s Singularity dominates the AI conversation.

Kurzweil has written that when the Singularity occurs, many of mankind’s problems will be solved through nanotechnology and AI. Nanotechnology, engineering at an atomic scale, may lead to the reversal of aging and the end of all diseases. His writings assume that this will happen gradually rather than suddenly, slowly enough for us to learn from our mistakes and change the AI to avoid catastrophes.

In his chapter “Programs that Write Programs,” Barrat counters Kurzweil’s assumption of gradual AI progress. He cites experts whose companies are building software that learns from itself and rewrites its own code to improve efficiency and effectiveness. Software that modifies itself is already available. Software that is aware of itself has not yet been designed, but is likely to be in the future. It’s entirely possible that the software’s human designers may not recognize it after an iteration has run, which makes it less likely that a human could stop true AI software once it has been fully activated.
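To make the idea of a program that writes a program concrete, here is a toy sketch (my own illustration, not from the book): a Python function that generates source code for a new function, compiles it at runtime, and returns it. Real self-improving systems are vastly more sophisticated, but the mechanism of code producing code is this simple.

```python
def generate_adder(increment):
    """Emit source code for a new function, compile it with exec,
    and return the freshly created function object."""
    source = f"def adder(x):\n    return x + {increment}\n"
    namespace = {}
    exec(source, namespace)  # compile and run the generated code
    return namespace["adder"]

# The "written" program did not exist until runtime.
add_three = generate_adder(3)
print(add_three(10))  # 13
```

A system that applied this trick to its own source, kept the variants that scored better, and repeated the loop would be a crude instance of the self-modifying software Barrat describes.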

Our Final Invention is educational, thoughtful, logical, and alarming. Barrat correctly states that too many entities (countries, big businesses, etc.) are rushing to be first to market with AI. For whoever designs it, the value of being first is the leverage of owning a machine or software that’s smarter than humanity. For the military, this makes a country’s weapons superior to every other country’s; for a business, the same premise applies to its products versus the competition. Such AI systems will seek out more resources to improve themselves or to continue to best the competition in business or warfare.

An AI system that is not properly programmed may treat humans as competition for resources rather than as helpful partners. The more complexity a system contains, the greater the chance of an error that affects many people. Barrat cites Three Mile Island in 1979 and Wall Street’s high-speed trading system in 2010 as two complex systems whose well-publicized failures could have been much worse. Our resolve to ensure that such accidents do not recur could lead us to implement even more complicated technology and software, accelerating the possibility of AI being invented sooner rather than later. Whether it’s 10 or 50 years from now, I hope that Barrat’s warnings will be considered as the developers of AI continue their quest to be first.


Wally Boston Dr. Wallace E. Boston was appointed President and Chief Executive Officer of American Public University System (APUS) and its parent company, American Public Education, Inc. (APEI) in July 2004. He joined APUS as its Executive Vice President and Chief Financial Officer in 2002. In September 2019, Dr. Boston retired as CEO of APEI and retired as APUS President in August 2020. Dr. Boston guided APUS through its successful initial accreditation with the Higher Learning Commission of the North Central Association in 2006 and ten-year reaccreditation in 2011. In November 2007, he led APEI to an initial public offering on the NASDAQ Exchange. For four years from 2009 through 2012, APEI was ranked in Forbes' Top 10 list of America's Best Small Public Companies. During his tenure as president, APUS grew to over 85,000 students, 200 degree and certificate programs, and approximately 100,000 alumni. While serving as APEI CEO and APUS President, Dr. Boston was a board member of APEI, APUS, Hondros College of Nursing, and Fidelis, Inc. Dr. Boston continues to serve as a member of the Board of Advisors of the National Institute for Learning Outcomes Assessment (NILOA), a member of the Board of Overseers of the University of Pennsylvania’s Graduate School of Education, and as a member of the board of New Horizons Worldwide. He has authored and co-authored papers on the topic of online post-secondary student retention, and is a frequent speaker on the impact of technology on higher education. Dr. Boston is a past Treasurer of the Board of Trustees of the McDonogh School, a private K-12 school in Baltimore. In his career prior to APEI and APUS, Dr. Boston served as either CFO, COO, or CEO of Meridian Healthcare, Manor Healthcare, Neighborcare Pharmacies, and Sun Healthcare Group. Dr. Boston is a Certified Public Accountant, Certified Management Accountant, and Chartered Global Management Accountant. He earned an A.B. 
degree in History from Duke University, an MBA in Marketing and Accounting from Tulane University’s Freeman School of Business Administration, and a Doctorate in Higher Education Management from the University of Pennsylvania’s Graduate School of Education. In 2008, the Board of Trustees of APUS awarded him a Doctorate in Business Administration, honoris causa, and, in April 2017, also bestowed him with the title President Emeritus. In August 2020, the Board of Trustees of APUS appointed him Trustee Emeritus. In November 2020, the Board of Trustees announced that the APUS School of Business would be renamed the Dr. Wallace E. Boston School of Business in recognition of Dr. Boston's service to the university. Dr. Boston lives with his family in Austin, Texas.
