
Are You Ready for the Artificial Intelligence Revolution?

Artificial intelligence (AI) is everywhere. Already a billion-dollar industry, Forbes Technology Council estimates it will be worth USD 15.7 trillion by 2030. This growth will have a significant effect on all sectors, including higher education. Responding to new market demands and opportunities, higher education institutions are already adopting AI to innovate and cut costs. Admissions and enrollment processes increasingly rely on AI systems, particularly in the form of chatbots. Virtual personal assistants now help students with timetables, course selections, and extracurricular activities. Automated teaching assistants are freeing up valuable time for academics and adding value to the learning experience.

AI systems are reducing staff costs, increasing efficiency, and paving the way for market-leading innovation. Higher education institutions at the front of this race will be the market leaders of tomorrow; those who join late may never be able to catch up.

We all know what is at stake. In 2016, the Washington Post reported that university and college enrollment was continuing to drop across the sector.* More generally, numerous academic reports and newspaper stories have predicted structural changes to the labor force, with many of today’s jobs being filled by robotic and AI-driven technology in the near future.** As a sector with a high number of employees, higher education will certainly need to adapt radically to this restructuring. So it is easy to see why some call the mainstream adoption of AI nothing short of a revolution, and it is a revolution that no higher education institution can afford to miss.

However, new technology also brings new challenges of ethics and regulatory compliance. With the introduction of AI on campus and in the lecture hall, there are growing concerns about unforeseen side effects. First, AI may lead to unintended privacy invasions. A student may disclose mental health issues to a chatbot or virtual assistant that should never be recorded and processed electronically. Combining data about a student’s academic progress, extracurricular activities, and location may reveal sensitive information that otherwise would not come to light. There is also real potential for inadvertent discrimination. Predictive analytics may be used to help capture and nurture a student’s academic interest, but it may also be used to dissuade a student from pursuing their full potential or to limit their opportunities.

A fundamental characteristic of AI, which makes it so powerful, is that it is “self-taught.” In other words, an AI algorithm does not simply perform its coded task; instead, it devises its own optimal path to achieve a pre-set result. It is this “cognitive intelligence” that makes AI so powerful and mysterious and makes the outcome of AI processing ultimately unforeseeable. Thus, before a higher education institution adopts AI, it needs to ask several questions. How will the institution ensure that the AI processing and outcome will comply with pertinent federal and state laws, such as the Family Educational Rights and Privacy Act of 1974 (FERPA) and data breach notification statutes? How will the institution ensure that the system does not use or combine factors that may inadvertently lead to discrimination? How will the institution safeguard its students, staff, and itself against the possibility of “rogue” AI going horribly wrong?

These are some of the broader questions explored in The Tambellini Group’s upcoming report, How Can Higher Education Prepare for the AI Revolution? By giving a practical, high-level introduction to AI and its possible application in higher education, the report charts legal and ethical conundrums now faced on campuses around the world. Moreover, the arrival of AI in higher education raises broader ethical dilemmas. What is the role of the educator? Can social interactive learning be facilitated through robots? Is it better to let a student talk through their issues with a sympathetic automated counselor than an empathetic flesh-and-blood counselor? How can we retain control of the AI and ensure that it does not control us?

There is no doubt that AI will be a boon for higher education. AI can make admissions more efficient. It can optimize operations, assist lecturers, support students, and lead to numerous exciting pedagogical methods and curricula. Powerful algorithms will enhance the experience of every single student and enable institutions to thrive. Yet, these systems come at a cost. They are expensive to develop, implement, and maintain, and there are potential threats to the civil liberties of both staff and students.

The world is changing more rapidly than ever before. A higher education institution with ambitions to maintain its reputation and grow for the future cannot fail to join in the revolution. But as the French journalist Jacques Mallet du Pan wrote in 1793, “the revolution devours its own children.” How can your institution join in and benefit from the AI revolution while making sure that you are not exposed to such risk—reputational, financial, and legal—that you may end up being eaten in the process?

Writing in the Washington Post, Jeffrey Selingo astutely observed: “For a sector that has been around since before the founding of this country, tradition is perhaps the biggest barrier to change in higher education.” AI presents new, exciting opportunities and prospects for those higher education institutions that dare to be ambitiously forward-thinking. But the revolution is not only for those who dare to join. The AI revolution is already having a profound impact on the sector overall. For many higher education institutions, the right AI strategy may even be the key to staying afloat in a competitive marketplace. Our report offers advice on how to navigate those choppy waters.

* Jeffrey J. Selingo, “The Coming Era of Consolidation Among Colleges and Universities,” The Washington Post, September 7, 2016.

** E.g., Mary C. Lacity and Leslie P. Willcocks, “A New Approach to Automating Services,” MIT Sloan Management Review, Fall 2016.

Columnist: Ann Kristin Glenster - Guest Columnist
Ann Kristin Glenster, Lead Consultant at Glenrox Consultancy and PhD Candidate in Law at the University of Cambridge, is a legal expert on the intersection between emerging technologies (e.g., big data, artificial intelligence, cloud computing, smart technologies) and privacy, data protection, cybersecurity, and intellectual property law. She has taught courses in information and technology law, cybersecurity and privacy law, data protection, and philosophy. Specializing in data privacy, Ms. Glenster has authored several research reports on the GDPR, as well as articles on privacy law and technology in the US.
CATEGORIES: Technology Leadership