Is Artificial Intelligence the Key to Personalized Education? – Smithsonian

Posted: May 9, 2017 at 3:31 pm

For Joseph Qualls, it all started with video games.

That got him messing around with an AI program, and ultimately led to a PhD in electrical and computer engineering from the University of Memphis. Soon after, he started his own company, called RenderMatrix, which focused on using AI to help people make decisions.

Much of the company's work has been with the Defense Department, particularly during the wars in Iraq and Afghanistan, when the military was at the cutting edge in the use of sensors and seeing how AI could be used to help train soldiers to function in a hostile, unfamiliar environment.

Qualls is now a clinical assistant professor and researcher at the University of Idaho's College of Engineering, and he hasn't lost any of his fascination with the potential of AI to change many aspects of modern life. While the military has been at the leading edge in applying AI, where machines learn by recognizing patterns, classifying data, and adjusting to the mistakes they make, the corporate world is now pushing hard to catch up. The technology has made fewer inroads in education, but Qualls believes it's only a matter of time before AI becomes a big part of how children learn.

It's often seen as being a key component of the concept of personalized education, where each student follows a unique mini-curriculum based on his or her particular interests and abilities. AI, the thinking goes, can not only help children zero in on areas where they're most likely to succeed, but also will, based on data from thousands of other students, help teachers shape the most effective way for individual students to learn.

Smithsonian.com recently talked to Qualls about how AI could profoundly affect education, and also some of the big challenges it faces.

So, how do you see artificial intelligence affecting how kids learn?

People have already heard about personalized medicine. That's driven by AI. Well, the same sort of thing is going to happen with personalized education. I don't think you're going to see it as much at the university level. But I do see people starting to interact with AI when they're very young. It could be in the form of a teddy bear that begins to build a profile of you, and that profile can help guide how you learn throughout your life. From the profile, the AI could help build a better educational experience. That's really where I think this is going to go over the next 10 to 20 years.

You have a very young daughter. How would you foresee AI affecting her education?

It's interesting because people think of them as two completely different fields, but AI and psychology are inherently linked now. Where the AI comes in is that it will start to analyze the psychology of humans. And I'll throw a wrench in here. Psychology is also starting to analyze the psychology of AI. Most of the projects I work on now have a full-blown psychology team, and they're asking questions like 'Why did the AI make this decision?'

But getting back to my daughter. What AI would start doing is trying to figure out her psychology profile. It's not static; it will change over time. But as it sees how she's going to change, the AI could make predictions based on data from my daughter, but also from about 10,000 other girls her same age, with the same background. And it begins to look at things like 'Are you really an artist, or are you more mathematically inclined?'
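To make the idea of comparing one student with 10,000 similar students concrete, here is a minimal, hypothetical sketch of one way such a prediction could work: score how closely a student's profile matches peer profiles and take the most common outcome among the nearest matches. The feature names and numbers are invented for illustration and are not drawn from Qualls's systems.

```python
# Hypothetical sketch: predict a student's likely strength by looking at
# the outcomes of the most similar peer profiles (a nearest-neighbor idea).
# All feature names and numbers here are invented for illustration.
from math import dist

# Each profile: (hours_drawing_per_week, puzzle_score, reading_level)
peer_profiles = [
    ((6.0, 55, 4.1), "arts"),
    ((1.5, 90, 4.8), "math"),
    ((2.0, 85, 5.0), "math"),
    ((7.5, 60, 4.3), "arts"),
]

def predict_inclination(student, k=3):
    """Return the majority label among the k most similar peer profiles."""
    nearest = sorted(peer_profiles, key=lambda p: dist(student, p[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(predict_inclination((5.5, 58, 4.2)))  # "arts" for this toy data
```

The risk Qualls goes on to describe follows directly from a setup like this: the prediction is only as good as the peer data and the features someone chose to record.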

It can be a very complex system. This is really pie-in-the-sky artificial intelligence. It's really about trying to understand who you are as an individual and how you change over time.

More and more AI-based systems will become available over the coming years, giving my daughter faster access to a far better education than any we ever had. My daughter will be exposed to ideas faster, and at her personalized pace, always keeping her engaged and allowing her to indirectly influence her own education.

What concerns might you have about using AI to personalize education?

The biggest issue facing artificial intelligence right now is the question of 'Why did the AI make a decision?' AI can make mistakes. It can miss the bigger picture. In terms of a student, an AI may decide that a student does not have a mathematical aptitude and never begin exposing that student to higher math concepts. That could pigeonhole them into an area where they might not excel. Interestingly enough, this is a massive problem in traditional education. Students are left behind or are not happy with the outcome after university. Something was lost.

Personalized education will require many different disciplines working together to solve many issues like the one above. The problem we have now in research and academia is the lack of collaborative research on AI across multiple fields: science, engineering, medicine, the arts. Truly powerful AI will require all disciplines working together.

So, AI can make mistakes?

It can be wrong. We know humans make mistakes. We're not used to AI making mistakes.

We have a hard enough time telling people why the AI made a certain decision. Now we have to try to explain why AI made a mistake. You really get down to the guts of it. AI is just a probability statistics machine.

Say it tells me my child has a tendency to be very mathematically oriented, but she also shows an aptitude for drawing. Based on the data it has, the machine applies a weight to certain things about this person. And we really can't explain why it does what it does. That's why I'm always telling people that we have to build this system in a way that it doesn't box a person in.
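As a hedged illustration of what "the machine applies a weight to certain things" can look like, here is a toy Python sketch of a probability statistics machine: weighted traits pushed through a logistic function to produce a probability. The trait names and weights are assumptions invented for this example; the numbers produce an answer without explaining themselves, which is the opacity Qualls is describing.

```python
# Toy sketch of a "probability statistics machine": weighted traits pushed
# through a logistic function to yield a probability. Traits and weights
# are invented for illustration only.
import math

# Hypothetical learned weights for "mathematically inclined"
weights = {"puzzle_score": 0.04, "hours_drawing": -0.15, "bias": -2.0}

def p_math_inclined(puzzle_score, hours_drawing):
    """Weighted sum of traits mapped to a probability between 0 and 1."""
    z = (weights["puzzle_score"] * puzzle_score
         + weights["hours_drawing"] * hours_drawing
         + weights["bias"])
    return 1 / (1 + math.exp(-z))

# The model outputs a number, but nothing in it says why drawing time is
# weighted negatively; acting on that number is the "box a person in" risk.
print(round(p_math_inclined(puzzle_score=85, hours_drawing=6.0), 2))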

If you go back to what we were doing for the military, we were trying to be able to analyze whether a person was a threat to a soldier out in the field. Say one person is carrying an AK-47 and another is carrying a rake. What's the difference in their risk?

That seems pretty simple. But you have to ask deeper questions. Whats the likelihood of the guy carrying the rake becoming a terrorist? You have to start looking at family backgrounds, etc.

So, you still have to ask the question, 'What if the AI's wrong?' That's the biggest issue facing AI everywhere.

How big a challenge is that?

One of the great engineering challenges now is reverse engineering the human brain. You get in and then you see just how complex the brain is. As engineers, when we look at the mechanics of it, we start to realize that there is no AI system that even comes close to the human brain and what it can do.

We're looking at the human brain and asking why humans make the decisions they do, to see if that can help us understand why AI makes a decision based on a probability matrix. And we're still no closer.

Actually, what drives reverse engineering of the brain and the personalization of AI is not research in academia; it's more the lawyers coming in and asking 'Why is the AI making these decisions?' because they don't want to get sued.

In the past year, on most of the projects I've worked on, we've had one or two lawyers, along with psychologists, on the team. More people are asking questions like 'What's the ethics behind that?' Another big question that gets asked is 'Who's liable?'

Does that concern you?

The greatest part of AI research now is that people are asking the question 'Why?' Before, that question was relegated to the academic halls of computer science. Now, AI research is branching out to all domains and disciplines. This excites me greatly. The more people involved in AI research and development, the better chance we have at alleviating our concerns and, more importantly, our fears.

Getting back to personalized education. How does this affect teachers?

With education, what's going to happen is that you're still going to have monitoring. You're going to have teachers who will be monitoring data. They'll become more like data scientists who understand the AI and can evaluate the data about how students are learning.

You're going to need someone who's an expert watching the data and watching the student. There will need to be a human in the loop for some time, maybe for at least 20 years. But I could be completely wrong. Technology moves so fast these days.

It really is a fascinating time in the AI world, and I think it's only going to accelerate. We've gone from programming machines to do things to letting the machines figure out what to do. That changes everything. I certainly understand the concerns that people have about AI. But when people push a lot of those fears, it tends to drive people away. You start to lose research opportunities.

It should be more about pushing a dialogue about how AI is going to change things. What are the issues? And, how are we going to push forward?
