March 2024 AI Update: Transformations in Education, Regulation, and Industry Insights

I continue to be amazed by the frequency of articles about the latest Artificial Intelligence (AI) products and applications. There are far too many Generative AI and LLM products to report on individually. The market will likely settle out with users choosing products from the large tech companies (Google, Microsoft, Meta) or from the small companies that manage to gain market share.

I posted my last AI update on January 26. I have attempted to organize the news into the same primary categories as before, including AI in Teaching, Regulating AI, AI Threats, AI Research, the Future of Work, and Webinars and Conferences, among others.

AI Books

Wharton professor Ethan Mollick announced that he has written Co-Intelligence, a book about living and working with AI. The book will be available on April 2. Anyone interested in pre-ordering a copy can do so through the link above. In his announcement, Professor Mollick acknowledged that he used AI to provide him with ideas for edits as well as a few of the summaries of papers that he reviewed and cited in the book.


AI in Teaching

Lauren Wagner reported for The74 that University of Nebraska-Lincoln researchers found that emailed progress updates helped undergraduates in STEM courses stay on track rather than drop or fail.

The researchers trained an AI model on the homework, test scores, and final grades of 537 students in a computer science course. In a 2019 class, they tested the model on 65 undergraduates taking the same course. Half received automated emails six, nine, and twelve weeks into the semester with the AI model’s projection of their prospects for success: good, fair, prone-to-risk, or at-risk of failing.

The other half received one message that said, “unable to make a projection.” At the end of the semester, 91% of the first group passed the course versus 73% of the second group, and 86% of the first group reported increasing their effort after seeing the AI model’s forecast.
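The article doesn’t describe the model’s internals. As a purely hypothetical sketch of how such an early-warning system could work, here is a minimal Python example that assumes a logistic-regression classifier over homework and exam features; the four labels and their cutoffs are illustrative, not taken from the study:

```python
# Hypothetical sketch of an early-warning model like the one described above.
# Assumes scikit-learn; features, thresholds, and toy data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [homework average, midterm score] per student,
# with 1 = passed the course, 0 = did not.
X_train = np.array([[0.95, 88], [0.40, 51], [0.75, 70], [0.20, 35], [0.85, 92]])
y_train = np.array([1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

def project(prob: float) -> str:
    """Map a pass probability onto the study's four labels (cutoffs are made up)."""
    if prob >= 0.85:
        return "good"
    if prob >= 0.60:
        return "fair"
    if prob >= 0.40:
        return "prone-to-risk"
    return "at-risk"

# Projection for a student with a 55% homework average and a 62 on the midterm.
prob = model.predict_proba(np.array([[0.55, 62]]))[0, 1]
print(project(prob))
```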

The study’s results helped the researchers obtain a grant from the National Science Foundation to develop a smartphone app called Messages From a Future You. The model will be upgraded by also gathering information from each student about their personality, life, and classroom experiences.

The app is meant to support STEM students through difficult classes. The researchers hope it will explain what is causing poor performance and suggest remedies.

The University of Texas at San Antonio announced the creation of a new college dedicated to AI, cybersecurity, computing, and data science. The college is slated to launch in the fall of 2025. UTSA’s president said that while the university is concentrating its capacity in one college, it still needs to make sure all students understand AI’s capabilities.

AI is Coming to Teacher Prep. Here’s What That Looks Like is the title of a recent Education Week article. According to the article, the Relay Graduate School of Education “is developing several AI-driven simulators that will give prospective teachers a chance to practice interacting with students – before they actually set foot in a classroom.”

The simulator will be piloted before being formally included in the curriculum. The first lesson was developed by Wharton Interactive for Relay.


Regulating AI

On Wednesday, the European Union (EU) passed a major set of regulatory ground rules to govern artificial intelligence. The EU AI Act sorts AI systems into four categories of risk: unacceptable, high, medium, and low. The regulations will go into effect at the end of May.

Banned outright are biometric categorization based on sensitive characteristics, social scoring, and AI that manipulates behavior or exploits vulnerabilities. High-risk AI in infrastructure, education, and employment will carry strict obligations such as risk assessment, transparency, and human oversight.

An attorney with an international law firm stated that “considering the pace of change in the technology, a further complication could be that the EU AI Act quickly becomes outdated, especially considering the timeframes for implementation.”

Brookings’ Norman Eisen, Nicol Turner Lee, and Samara Angel wrote an article recommending eight best practices on AI for state election officials. The authors note that their recommendations are not intended to replace federal regulations; since few regulations related to AI are in place, they opted to make these recommendations ahead of the fall elections. The eight recommendations are:

  • Dialogue with voters and the public around potential challenges of AI upfront to present benefits and mitigate risks.
  • Ensure that humans are always in the loop when it comes to AI-generated content and tools around election matters.
  • Evaluate AI tools continuously throughout their development, from their design to integration and operation in electoral processes, and place scrutiny on the procurement of any product/service that relies on AI.
  • Develop a review and feedback process for AI tools and information campaigns that is updated regularly and disseminated to voters and other stakeholders.
  • Train staff to use AI responsibly.
  • Seek collaboration from a broad range of stakeholders in developing approaches to AI.
  • Test for and mitigate potential AI dangers prior to launching AI tools and services and, when issues emerge, step back to interrogate the problems.
  • Apply focused oversight on generative AI, especially election-related AI chatbots that can serve to discourage and, in some instances, disenfranchise voters.

AI Threats

A Will Oremus article in The Washington Post outlined the discussions among U.S. lawmakers about how to regulate deepfakes. While political fake videos might be presumed to gain lawmakers’ attention, a recent House committee meeting discussed the use of AI tools to generate non-consensual nude images and child sex abuse material.

At the center of the debate, according to Mr. Oremus, is what Congress can do about deepfakes “without running afoul of the First Amendment.” The tech industry will fight legislation that seeks to hold it responsible, pushing instead for prosecution of individual “bad actors.”

Bills attempting to address some of these situations on a limited basis have so far made little progress.

A Chinese national accused of stealing AI secrets from Google was indicted last week. Leon Ding was arrested and charged with four counts of trade secret theft. In one year, Ding copied 500 unique files containing confidential information from Google’s network to his personal Google cloud account. He copied data from source files into Apple Notes to avoid detection.

3 Pillars of AI

Jeff Bullas wrote an interesting post titled The Holy Trinity of AI: 3 Forces Powering Artificial Intelligence.

In his article, Pillar 1 is software. He writes that OpenAI’s GPT-4 runs on an estimated 1.7 trillion parameters “with the intelligence and brute force of the Large Language Models (LLMs). The more parameters a model has, the more complex and expressive it can be and the more data it can handle. The more data it has, the smarter it gets.” I like this short explanation.
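To make “parameters” concrete, here is a small illustration that counts the learnable weights in a toy two-layer network. PyTorch is my choice for the sketch, not something from Bullas’s article:

```python
# Counting learnable parameters in a tiny network, to put the
# "1.7 trillion parameters" figure in perspective.
import torch.nn as nn

tiny_model = nn.Sequential(
    nn.Linear(512, 2048),  # 512*2048 weights + 2048 biases
    nn.ReLU(),
    nn.Linear(2048, 512),  # 2048*512 weights + 512 biases
)

n_params = sum(p.numel() for p in tiny_model.parameters())
print(f"{n_params:,} parameters")  # 2,099,712 -- roughly 800,000x smaller than 1.7 trillion
```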

Pillar 2 is hardware. Graphics Processing Units (GPUs) were first designed for gaming computers, but their ability to process data in parallel at enormous speed makes them the supercomputers behind AI. Nvidia is the dominant player.
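A minimal sketch of why that matters, assuming PyTorch and a CUDA-capable machine: the same matrix multiplication at the heart of an LLM can run on the CPU or be dispatched across thousands of GPU cores in parallel.

```python
# The core operation of an LLM is the matrix multiply; a GPU executes it
# across thousands of cores at once. Assumes PyTorch with CUDA available.
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

c_cpu = a @ b  # runs on the CPU

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()  # move the data to the GPU
    c_gpu = a_gpu @ b_gpu              # same operation, massively parallelized
```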

Pillar 3 is data. Data is the lifeblood of AI. Who has the data? Nations and international corporations with access to the data of billions of people. Google collects 39 data points about each user, Twitter 24, Amazon 23, and Facebook 14.

Bullas has an infographic with many global data facts. Among these: 5.4 billion people are online, there are 1.9 billion websites, 500 million tweets and 183 billion emails are sent daily, and total world data is predicted to reach 175 zettabytes (ZB) by 2025.

A point not made by Bullas is the importance of organizing this data in a format that ensures its usefulness, accuracy, and relevance for analytics. The larger the dataset, the more important it is that the data is “clean.”
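As a hypothetical illustration of what “clean” means in practice, here are a few common steps using pandas (the library, fields, and values are my own, not Bullas’s):

```python
# Typical cleaning steps: drop records missing key fields, normalize
# inconsistent formatting, and remove duplicates. Data is made up.
import pandas as pd

df = pd.DataFrame({
    "user":    ["alice", "alice", "BOB", None],
    "country": ["US",    "US",    "us",  "DE"],
})

clean = (
    df.dropna(subset=["user"])          # remove records missing a key field
      .assign(user=lambda d: d["user"].str.lower(),
              country=lambda d: d["country"].str.upper())
      .drop_duplicates()                # collapse exact duplicates
)
print(clean)  # two tidy rows remain
```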

ChatGPT and LLM News

Elizabeth Kolbert wrote an article for The New Yorker titled The Obscene Energy Demands of A.I. In the U.S., data centers now account for about four percent of electricity consumption, and that demand is expected to increase by 50% by 2026.

It’s estimated that ChatGPT is responding to 200 million requests per day and consuming more than half a million kilowatt-hours of electricity daily. OpenAI’s CEO, Sam Altman, has stated that a breakthrough in generating electricity, like fusion reactors, is needed if these AI advances are to continue.
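A quick back-of-the-envelope calculation (my arithmetic, not Kolbert’s) shows what those two figures imply per request:

```python
# Energy per request implied by the two figures quoted above.
requests_per_day = 200_000_000
kwh_per_day = 500_000

wh_per_request = kwh_per_day * 1_000 / requests_per_day
print(f"{wh_per_request:.1f} Wh per request")  # 2.5 Wh -- several times a conventional web search
```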

AI Productivity Tools

Anthropic announced the release of Claude 3 Haiku. Haiku claims advanced vision capabilities, “allowing it to process and analyze visual input such as charts, graphs, and photos.” This feature opens possibilities for enterprises that rely on visual data. The Claude 3 suite focuses on speed and security for corporate enterprises.
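As a sketch of what exercising those vision capabilities might look like through Anthropic’s Messages API (the request shape follows Anthropic’s published Python SDK; the file name and prompt are placeholders):

```python
# Sending an image to Claude 3 Haiku via Anthropic's Messages API.
# Assumes ANTHROPIC_API_KEY is set; "chart.png" is a placeholder file.
import base64
import anthropic

client = anthropic.Anthropic()

with open("chart.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": image_data}},
            {"type": "text", "text": "Summarize the trend shown in this chart."},
        ],
    }],
)
print(message.content[0].text)
```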

Beautiful.ai launched its team version of AI-powered presentation software. The product features allow companies to maintain their brand, automate design, and collaborate.

AI Research

Alex Kantrowitz reported on the findings of a Gartner study predicting that AI chatbots will reduce search engine traffic by 25% by 2026. Kantrowitz was skeptical of the prediction until he had a conversation with Gartner VP Alan Antin. There are roughly 8 billion internet searches per day, and people are already using ChatGPT for answers instead of searching.

Gartner’s forecast is based on the probability that there will be multiple entry points to the internet, as well as the probability that a key player like Apple, with its 1.8 billion iPhones, could change the way people search from its devices. Given that 2026 is only two years away, it will be interesting to see whether Gartner revises its forecast up or down.

A new research paper titled Algorithmic progress in language models indicates that LLMs are improving faster than processors improve under Moore’s Law, which doubles performance roughly every 24 months: the compute required to reach a given level of language-model performance has been halving every five to 14 months.
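In multiplier terms (my arithmetic, using the paper’s halving-time range):

```python
# Compute-efficiency gain over a fixed window at a given halving time.
def effective_gain(months: float, halving_months: float) -> float:
    return 2 ** (months / halving_months)

print(effective_gain(24, 24))  # Moore's Law over two years: 2x
print(effective_gain(24, 14))  # slow end of the paper's range: ~3.3x
print(effective_gain(24, 5))   # fast end of the paper's range: ~28x
```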

Podcasts, Video Interviews, and Opinion Pieces

Nido Qubein, president of High Point University, wrote an opinion piece for Higher Ed Dive titled How universities can prepare graduates for an AI-driven world. Mr. Qubein argues that using AI to write essays and research papers is a small part of AI’s impact on higher ed.

He says colleges should answer a bigger question: How can colleges and universities help students succeed and lead in a complex, AI-powered world after graduation?

Mr. Qubein believes that the solution is to teach life skills to college students. Students should learn to be resilient, self-reliant, compassionate, and capable of sound judgment. Colleges need to deliver more experiences sooner for every major and every type of student pursuing a degree.

According to Mr. Qubein, people skills are in high demand and are harder to develop than technical skills. AI can guide the technical aspects of something, but humans are required to frame it with empathy and wisdom. Most business leaders are hesitant to hire new college graduates who lack emotional intelligence.

New technologies mean new routines, new economic realities, new opportunities, and the sunset of others. Mr. Qubein writes that college graduates need the ability to embrace the positive and ride out the difficulties that come with disruption. Prioritizing students’ critical life skills should be a college’s most important mission.

AI Webinars and Conferences

The U.S. Patent and Trademark Office (USPTO) Artificial Intelligence (AI) and Emerging Technologies (ET) Partnership will hold a free public symposium on intellectual property (IP) and AI. The event will take place virtually and in-person at Loyola Law School, Loyola Marymount University on March 27, from 10 a.m. to 3 p.m. Pacific time. Attendance is limited.

The symposium will feature panel discussions by experts in the field of patent, trademark, and copyright law that focus on:

    1. A comparison of copyright and patent law approaches to the type and level of human contribution needed to satisfy authorship and inventorship requirements;
    2. Ongoing copyright litigation involving generative AI; and
    3. A discussion of laws and policy considerations surrounding name, image, and likeness (NIL) issues, including the intersection of NIL and generative AI.

Powering the Future: How AI is Revolutionizing Finance is being offered by Fast Company and Inc. Magazine on March 20 from 2 to 3 p.m. Eastern time. The session will explore how AI is powering the future of finance by reducing time, effort, and costs while allowing executives to focus on more strategic, value-added work.

The University of Pennsylvania’s Wharton School created a free series of webinars for entrepreneurs to learn about AI. The webinars are offered under the AI Horizons brand by the AI at Wharton program. Replays of the webinars are available on YouTube.

AI and Libraries: Applications, Implications, and Possibilities is the title of a free web conference hosted by Library20.com on Thursday, March 21, from 12–3 p.m. Pacific time. There are approximately 20 topics for the three-hour session. The organizers received so many topic proposals that a Part II will be hosted on April 18.

Future of Work in AI

Chief Executive published an article about the takeaways from a recent board forum where two public company CEOs discussed how to make use of the latest in AI. There were three key recommendations that surfaced:

  1. Train your people now. One of the CEOs offered a series of generative AI training programs to his 65,000 employees. The training was NOT mandatory, yet more than 70 percent of employees took it within the first three weeks.
  2. Develop critical thinking as the critical skill now. The ability to have enough broad knowledge and common sense to look at what the AI is saying or creating and know if it is real will be essential.
  3. Get your leadership thinking about responsible AI. Ensure that AI initiatives align with the company’s core values and ethical standards. Promote a culture of continuous learning and adaptability. Leverage AI for competitive advantage while mitigating risks.

A new paper was issued by the American Enterprise Institute (AEI) titled The Age of Uncertainty – and Opportunity: Work in the Age of AI. The paper’s authors, Brent Orrell and David Veldran, reviewed 10 years of research on AI’s potential and actual impacts on employment trends and demand for skills in the labor market.

The authors organize the development of AI between 2010 and today into three major periods. From 2010 to 2016, the development of neural networks dominated AI. From 2016 to 2022, new neural network models set new benchmarks in natural language processing. From 2022 to the present, generative AI models and tools have proliferated.

The paper has several useful charts and graphs indicating the characteristics of jobs that will thrive under AI. I recommend reading the full report.

Orrell and Veldran have four recommendations for policymakers. These are:

  1. Integrate Technical and Noncognitive Skill Development,
  2. Emphasize Flexibility in Retraining,
  3. Improve Training Guidance, and
  4. Empower Workers.

Dexter Tilo wrote a story for Human Resources Director titled Will technology really lead to job losses? The article cites a recently issued paper from Randstad Singapore arguing that embracing and leveraging AI in the workplace is imperative. Employees who are willing to upskill and learn to use AI can turn their fears about AI into an opportunity.

Final Thoughts

I’m no longer amazed by the creative application of AI to tasks and assignments in the workplace. This latest batch of AI-related news seems to be more about regulation (the EU’s AI Act) and infrastructure (electricity consumption).

I’ll likely take the time to review the EU AI Act once its final version is available. I continue to look for actions taken by U.S. lawmakers. With the 2024 elections occurring this fall and no Congressional action on AI “deepfakes,” it’s no wonder that Brookings issued guidelines for state election officials.
