Tuesday, July 30, 2024

Okay, so... AI and eLearning

 A) Artificial Intelligence will improve everything. 

B) Artificial Intelligence will ruin everything.

 C) No one has a clue as to what AI will do to our field. 

Pick one and defend it. 

Or not. Here's the thing about Artificial Intelligence. It is always guided by human intelligence, and unless or until the Singularity occurs, it always will be. That means that wherever AI shows up doing something, somebody somewhere had that thing in mind. And they spent a lot of time and effort to make sure it would succeed. 

I'm not an engineer, much less an AI engineer, but the principles are not all that mysterious. People have invented a way to make machines, which in AI terms means software, that can be trained to do things that were previously possible only for humans. And once you train the software, it can often do those things better (i.e., more consistently, more precisely, and with fewer coffee breaks) than human beings. But it takes smart people with a particular set of skills and a specific purpose in mind to accomplish this.

And data. It takes lots of good, clean data to train a machine to produce human-like outputs. The best results come from fields whose records have to be scrupulously accurate, like medicine. But that kind of data is hard for developers to get their hands on; there are laws about such things. Those who have managed it have built some very robust AI systems for diagnosis. More often, systems rely on abundant free data, which has to be cleaned up first: think of scraping the Internet for written examples of English language usage.
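For the curious, here is a rough picture of what that cleanup can look like. This is a toy sketch in Python; every filter and threshold in it is an invented stand-in for the kind of work real pipelines do, not anyone's actual system.

```python
import re

def clean_web_text(raw_documents):
    """Toy cleanup pass for web-scraped training text.

    The filters below are illustrative stand-ins: strip leftover
    markup, drop fragments, and remove exact duplicates.
    """
    seen = set()
    cleaned = []
    for doc in raw_documents:
        text = re.sub(r"<[^>]+>", " ", doc)       # strip leftover HTML tags
        text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
        if len(text.split()) < 20:                # too short to be useful prose
            continue
        letters = sum(c.isalpha() for c in text)
        if letters / max(len(text), 1) < 0.6:     # mostly symbols or boilerplate
            continue
        if text in seen:                          # exact-duplicate removal
            continue
        seen.add(text)
        cleaned.append(text)
    return cleaned
```

Real pipelines add language detection, quality scoring, near-duplicate detection, and much more; the point is simply that "free" data is never free of labor.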

And that brings us to ChatGPT. This system uses a boatload of... let's just call it unpristine data. ChatGPT had to be carefully trained, and it has to be continually retrained, to generate the desired result. For ChatGPT, the desired result is text that looks, sounds, and feels like it was written by an articulate human being. And that's it. At its root, ChatGPT is what AI engineers call, in their own highly technical terms, a "people pleaser." Its fundamental purpose is not accuracy. It succeeds when the people who read its output are happy with it. "Is this good?" Yes or no. "What would make it better?" People have to answer these questions to keep retraining it.
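If it helps to picture that feedback loop, here is a toy sketch. It models only the data-gathering side, a pile of human yes/no judgments, and the names are all made up; real systems use far more elaborate machinery (what engineers call reinforcement learning from human feedback).

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    """Toy model of the 'people pleaser' loop: collect human yes/no
    judgments on outputs, then hand them back as retraining signal.
    Purely illustrative; not how any real system is implemented.
    """
    ratings: list = field(default_factory=list)

    def record(self, prompt: str, response: str, pleased: bool) -> None:
        # "Is this good?" Yes or no. That is the whole question.
        self.ratings.append(
            {"prompt": prompt, "response": response, "pleased": pleased}
        )

    def retraining_batch(self):
        # Responses people liked become positive examples;
        # the rest tell the model what to steer away from.
        liked = [r for r in self.ratings if r["pleased"]]
        disliked = [r for r in self.ratings if not r["pleased"]]
        return liked, disliked
```

Notice what is absent: nothing in that loop asks whether a response was true. Only whether it pleased.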

Google's search algorithm, by way of contrast, is designed for accuracy. Twitter's search, meanwhile, is designed to serve up posts that provoke. X wants your clicks, likes, and retweets. It doesn't care whether you like what you get; it may be better if you don't. And it is not concerned at all with reflecting reality.
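That difference in purpose is easy to caricature in code. The two ranking functions below are hypothetical, and the signals (match_score, predicted_outrage, and so on) are invented for illustration; the point is only that whatever objective you sort by determines what surfaces.

```python
def rank_by_relevance(results):
    # Accuracy-minded objective: surface what best matches the query.
    # 'match_score' and 'source_quality' are invented signals.
    return sorted(
        results,
        key=lambda r: r["match_score"] * r["source_quality"],
        reverse=True,
    )

def rank_by_engagement(results):
    # Engagement-minded objective: surface what you will react to.
    # Nothing here asks whether a post reflects reality.
    return sorted(
        results,
        key=lambda r: r["predicted_clicks"] + 2 * r["predicted_outrage"],
        reverse=True,
    )
```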

So where is all this leading us in EdTech, in online learning, eLearning, digital education? It actually gives us reason for hope. Look at the academic integrity front. For the most part, it will not be students bent on cheating who develop deep-learning neural networks; it will be their professors. In the never-ending game of cat and mouse, the cat has the resources.

More good news on the learning front comes from none other than Sal Khan of Khan Academy. He has the feel-good EdTech AI story of the hour:

[Embedded video: Sal Khan on YouTube]


Carefully designed AI can do a great job in support of teachers and learners. It just takes smart people with a particular set of skills and a higher purpose in mind to accomplish it. 


 
