Generative artificial intelligence (AI) is simultaneously going to make some instruction better than ever and some worse than it has ever been—and you might not be able to spot the difference.
It’s no secret that generative AI is here to stay, and it’s no surprise that learning management system (LMS) companies are rolling out new generative AI-based functionality as quickly as possible to be first to market and capitalize on this moment of innovation. D2L has announced generative AI content generation, including the creation of formative assessments and tonal feedback for video presentations in Brightspace. Anthology is releasing generative AI to create learning modules and content based on inputs such as course title, description, and outcomes, as well as rubric creation in Blackboard. Instructure has announced generative AI functionality to create templates that are more engaging, as well as AI-based tutoring for writing assignments for Canvas.
I believe the vendors are doing exactly what they should, and there is evidence that they’re taking concerns about AI seriously as they aim to release offerings that responsibly use the technology. However, I think there is a looming dichotomy of instructional quality as this technology starts being adopted at institutions around the world.
As a former professor, dean, and graduate student, and now as an industry analyst and advisor, I have had an extensive front-row seat to the range in quality of faculty and institutions. Here are the two extremes I have seen. On one end of the spectrum, there are amazing faculty who are deeply invested—not only in their subject expertise, but also in the art of teaching and learning. They are constantly refining their craft and always looking for innovative ways to enhance the experience for their students. On the other end of the spectrum, there are faculty who are much less concerned about or invested in teaching and learning, often for several different reasons.
I think many institutions and faculty will initially experiment with generative AI capabilities in the LMS, and I believe it is inevitable that the technology will become a ubiquitous part of teaching and learning. In general, I believe AI is going to improve teaching and learning and allow for customized learning to finally happen at scale in a way that has only been theoretical to this point. However, it’s also going to exacerbate the existing extremes: The top-performing faculty will continue to enhance their courses and student experience, while the low-performing faculty may invest even less in their courses.
The problem is compounded because it might not be immediately obvious that the underperforming faculty are worse than before. On the surface, they may appear to be more up to date and innovative, but if they lacked the desire, time, or discipline before they had AI, it’s unlikely they will expend much effort with it, and they may pose a higher risk of using generative AI irresponsibly in their courses.
Generative AI is far from perfect. Consider AI hallucinations, for example. Poor prompt engineering (the practice of phrasing inputs so the model returns accurate, relevant responses) or limitations in the technology itself can lead AI to fabricate information outright, with no awareness or indication to the user that it is doing so. AI experts and engineers are aware of this issue, and efforts are constantly being made to curb the risk, but it still happens.
All the LMS vendors releasing AI-generated content functionality are doing so with the important caveat that it remains ultimately the responsibility of a human to vet AI-generated information and use it responsibly. In fact, we’re also seeing companion analytics and reports in some products, such as D2L Brightspace and Blackboard Learn by Anthology, that let institutional administrators document how much AI is being used and to what extent faculty are reviewing and editing its output before publishing it to students.
Institutions should embrace this technology, but they also need to train faculty and instructional designers to use it responsibly. They also need to double down on their evaluation and reporting efforts to ensure faculty are adequately vetting AI-generated information and using it to supplement their efforts, not replace them. Failure to do so will water down teaching and learning and risk taking a step backward under the guise of innovation.
I want to leave you with one thought: Are we headed toward a future where faculty are using generative AI to teach courses and students are using generative AI to take them? Who—or, more importantly, what—is learning here again?
© Copyright 2023, The Tambellini Group. All Rights Reserved.