Pause Giant AI Experiments?

The Future of Life Institute (FLI) issued an open letter this week calling for a six-month pause on giant AI experiments, specifically the training of systems more powerful than GPT-4. The letter's original signatories write that despite many endorsements of the Asilomar AI Principles, AI development is on the verge of getting out of control.

The first of the Asilomar AI Principles is that the goal of AI research should be to create not undirected intelligence but beneficial intelligence. The FLI open letter states that the planning and management needed to ensure that undirected intelligence does not occur are not in place. Recent releases from AI labs, the letter argues, indicate a “race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Because recently released AI systems are becoming human-competitive at general tasks, the letter's signatories write that we must ask ourselves the following questions:

  1. Should we let machines flood our information channels with propaganda and untruth?
  2. Should we automate away all the jobs, including the fulfilling ones?
  3. Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us?
  4. Should we risk loss of control of our civilization?

Decisions regarding these questions should not be delegated to unelected tech leaders, the FLI open letter's authors write. They call on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4…If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

FLI states that “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” The letter adds that AI developers must work with policymakers to accelerate the development of robust AI governance systems.

The open letter received coverage from most major news services, including CNN, NPR, BBC, CBS, and Reuters. The Reuters article quoted critics of the FLI who argued that Elon Musk and his associates prioritized apocalyptic scenarios over concerns such as racist or sexist biases.

The most extreme response I read was an opinion piece in Time arguing that pausing AI development is not enough; we need to shut it down. Its author, Eliezer Yudkowsky, a U.S. decision theorist who leads research at the Machine Intelligence Research Institute, writes that he refrained from signing the letter because it understates the seriousness of the situation and asks for too little to solve it.

Yudkowsky’s opinion piece reminds me of Our Final Invention by James Barrat, a book I read and reviewed in April 2016. Barrat surfaced many of the issues raised in both the FLI letter and the Yudkowsky piece. One point he made I will always remember: an AI system that is not properly programmed may regard humans as competition for resources rather than as helpful partners.

The UK has had an ethics framework for “automated and algorithmic decision-making” for a few years. Its most recent revision was published in May 2021.

Perhaps coincidentally, perhaps not, the UK Department for Science, Innovation and Technology announced the release of a white paper on the regulation of AI at the same time as the FLI open letter. The government has asked industry to provide “views through a supporting consultation.”

I had the good fortune to first meet Oxford University professor Helen Margetts in 2010, when she was director of the Oxford Internet Institute. She is currently director of the Public Policy Programme at The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. My discussions with her about her research on the ethics of artificial intelligence design and implementation were enlightening and furthered my understanding of the major issues in the field.

The U.S. has failed to keep pace with AI developments from an overall policy perspective. Individual agencies, such as those in the intelligence community, have done a better job of issuing policies. A recent New York Times article provides an overview of where we are (or aren’t). This is sad given that our tech industry’s investment in AI technology exceeds the UK’s.

Is the U.S. asleep at the switch? We have several federal agencies that study artificial intelligence; in fact, the National Science Foundation announced the creation of seven AI institutes in August 2020. Perhaps because these initiatives are scattered rather than centralized, their research activities, outcomes, and recommendations receive less press coverage than those in the U.K.

I consider myself more of a realist than an alarmist. There is no way the nations and corporations of the world will agree to the moratorium proposed by FLI. In this rapidly advancing field, six months may be the gap between U.S. and Chinese technology, and our leaders would consider it ridiculous to relinquish that lead.

At the same time, it’s up to those of us who choose to use AI to call for transparency in the ethics decisions behind the AI technology we use and deploy. For educators, it’s even more important to understand the capabilities of AI so we can prepare our students for life in a world where AI is incorporated into many daily activities of employment, living, and learning. Now is not the time to be a Luddite. No one can possibly keep track of all the AI initiatives underway, so I’ve decided to focus on generative AI and its impact on education and the workforce. Individually, our platforms may be small, but collectively we can build a stronger voice.
