AI Education Frameworks – Australia vs. U.S.

I recently read an article published in the Guardian about Australia’s education ministers formally backing a national framework that will guide the use of AI in public schools.

Australia’s federal education minister, Jason Clare, was quoted stating that ChatGPT was “not going away” and ChatGPT as a tool was becoming as vital as a “calculator or the internet.” Mr. Clare added that public schools need “to learn how to use it because private schools are using it, kids are using it to do their homework, and educators are playing catch-up.”

Guardian reporter Caitlin Cassidy wrote that Australia’s education sector initially banned the usage of ChatGPT in public schools. The primary concerns of education leaders were information privacy and the tool’s potential to increase plagiarism.

Despite banning ChatGPT, a taskforce of federal, state, and territory education ministers wasn't sitting on its hands. Over the past few months, the taskforce met with major education product vendors and learned that approximately 90% plan to incorporate AI into their existing product suites within the next year or two.

Ms. Cassidy cited UNESCO's 2023 global education monitoring report, Technology in Education: A Tool on Whose Terms?, which called for "urgent governance and regulation of technology in education lest it replace in-person, teacher-led instruction."

The UNESCO report warned that countries need to set their own terms for how technology is designed and used in education, particularly given the pace at which new AI developments are being deployed. Children need to be taught to live both with and without technology. Technology should support, but never replace, human interactions in teaching.

A spokesperson for Australia’s Department of Education stated that early research demonstrated that AI can provide intelligent tutoring, more personalization, targeted learning materials, and assist with the education of at-risk students. At the same time, the taskforce recognizes that it will be critical to upskill teachers to introduce AI applications.

The teacher upskilling issue led to working group discussions about "what can be done at the school level, what can be done at the system level, and what can be done nationally." The article did not specify whether the national upskilling would include courses conducted online or courses developed for delivery in person in schools.

All the Australian recommendations, particularly the teacher upskilling, appear to align with the UNESCO report's recommendation that technology should support but not replace humans. The framework has gone through several drafts and is available online, but it has not yet been finalized.

Once finalized, the Framework will be reviewed within 12 months of publication and every 12 months thereafter. There are six core elements:

  1. Teaching and learning – Generative AI tools are used to enhance teaching and learning.
  2. Human and social wellbeing – Generative AI tools are used to benefit all members of the school community.
  3. Transparency – Students, teachers, and schools understand how generative AI tools work, and when and how these tools are impacting them.
  4. Fairness – Generative AI tools are used in ways that are accessible, fair, and respectful.
  5. Accountability – Generative AI tools are used in ways that are open to challenge and retain human agency and accountability for decisions.
  6. Privacy and security – Students and others using generative AI tools have their privacy and data protected.

After reading the Guardian article and the draft Framework, I decided to read the latest guidance on AI in the United States.

In the U.S., K-12 education regulations have largely been delegated to the states. The U.S. Department of Education’s Office of Educational Technology (OET) conducted four listening sessions in June and August of 2022 with more than 700 attendees about the use of AI in Education.

After those listening sessions were completed, the OET issued a report in May 2023, Artificial Intelligence and the Future of Teaching and Learning. The purpose of the report is to provide insights and recommendations for AI utilization in schools. Unlike the Australian framework, the report does not focus on a specific AI tool or technique.

Constituents participating in the OET’s listening sessions articulated three reasons to address AI now:

  1. AI may enable achieving educational priorities in better ways, at scale, and with lower costs.
  2. Urgency and importance arise through awareness of system-level risks and anxiety about potential future risks.
  3. Urgency arises because of the scale of possible unintended or unexpected consequences.

The OET cited the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights as well as a recent European Union publication, Ethical guidelines for artificial intelligence (AI) and data in teaching and learning for Educators as sources for ethical policies and guidelines for AI applications.

The drafters of the OET report wrote that two overarching questions guided their work in the report:

  1. What is our collective vision of a desirable and achievable educational system that leverages automation to advance learning while protecting and centering human agency?
  2. How and on what timeline will we be ready with necessary guidelines and guardrails, as well as convincing evidence of positive impacts, so that constituents can ethically and equitably implement this vision widely?

The OET wrote that four key foundations frame the themes for the report. These are:

  1. Center People (Parents, Educators, and Students)
  2. Advance Equity
  3. Ensure Safety, Ethics, and Effectiveness
  4. Promote Transparency

The report continues by elaborating a definition of AI and addressing learning, teaching, assessment, and research and development. After those sections, the report concludes with seven recommendations. These are:

  1. Emphasize Humans in the Loop – “the human is fully aware and fully in control, but their burden is less, and their effort is multiplied by a complementary technological enhancement.”
  2. Align AI Models to a Shared Vision for Education – “policy and decision makers must use their power to align priorities, educational strategies, and technology adoption decisions to place the educational needs of students ahead of the excitement about emerging AI capabilities.”
  3. Design Using Modern Learning Principles – “we must harness AI’s ability to sense and build upon learner strengths.”
  4. Prioritize Strengthening Trust – “Trust needs to incorporate safety, usability, and efficacy.”
  5. Inform and Involve Educators – “involving practitioners includes support for teacher professional learning, opportunities for hands-on experiences, and teachers and students as co-designers.”
  6. Focus R&D on Addressing Context and Enhancing Trust and Safety – “we recommend attention to ‘context’ as a means for expressing the multiple dimensions that must be considered when elaborating the phrase ‘for whom and under what conditions.’”
  7. Develop Education-Specific Guidelines and Guardrails – local leaders may find it difficult to make informed decisions about the deployment of AI if data privacy, security, bias, transparency, and accountability issues are not taken into consideration.

The report concludes by reminding readers that the Department will incorporate these AI recommendations into its 2024 update of the National Education Technology Plan (NETP). The NETP was last updated in 2016.

It's not surprising to me that the governments of Australia and the U.S. have attempted to issue frameworks for utilizing artificial intelligence in education. Each of the documents I reviewed cites the rapidly changing environment and discussions with education technology vendors indicating plans to incorporate AI into their products.

Australia’s draft framework currently addresses ChatGPT and other large language models (LLMs), but steers away from a broader AI technology field. The U.S. report does not limit its review to LLMs.

Another difference between the proposed frameworks is that Australia intends to update its framework 12 months after its publication and every 12 months afterwards. The U.S. plans to include the framework in its upcoming 2024 revision of the National Education Technology Plan, a document that was last updated in 2016. There is no promise or guarantee for future updates. That appears to be a failure to answer the second overarching question of “how and on what timeline will we be ready?”

I like Australia's approach. ChatGPT and other LLMs have been available to the public since late 2022 and early 2023. ChatGPT is reportedly used by well over 100 million people. I don't know how many more people use other LLMs like Google's Bard.

No one has ever accused educators of practicing the “Ready, Fire, Aim” philosophy. At the same time, the frequently quoted phrase “a mind is a terrible thing to waste” comes to mind as most applicable in a time of rapid adoption of a technology that appears to be able to make a difference in teaching, learning, and the workforce.

The U.S. economy may be ranked number one in the world, but our students scored only slightly better than average in the OECD's 2018 Programme for International Student Assessment (PISA), which measured 79 countries and economies.

Education and workforce development are vital to maintaining our global, regional, and local economic competitiveness. Taking years to implement regulations and guidelines on a broad basis is not in our best interest.

Australia’s education minister said that public schools need “to learn how to use it because private schools are using it, kids are using it to do their homework, and educators are playing catch-up.” Let’s hope the U.S. wises up and applies the same attitude to the review and utilization of AI in education.
