Latest AI That Learns On-The-Fly Is Raising Serious Concerns, Including For Self-Driving Cars – Forbes

Posted: December 18, 2019 at 8:44 pm

AI Machine Learning is being debated due to the "update problem" of adaptiveness.

Humans typically learn new things on-the-fly.

Let's use jigsaw puzzles to explore the learning process.

Imagine that you are asked to solve a jigsaw puzzle and you've not previously had the time nor inclination to solve jigsaw puzzles (yes, there are some people that swear they will never do a jigsaw puzzle, as though it is beneath them or otherwise a useless use of their mind).

Upon dumping out onto the table all the pieces from the box, you likely turn all the pieces right side up and do a quick visual scan of the pieces and the picture shown on the box of what you are trying to solve for.

Most people are self-motivated to try and put all the pieces together as efficiently as they can, meaning that it would be unusual for someone to purposely find pieces that fit together and yet not put them together. Reasonable people would be aiming to increasingly build toward solving the jigsaw puzzle and strive to do so in a relatively efficient manner.

A young child is bound to just jump into the task and pick pieces at random, trying to fit them together, even if the colors don't match and even if the shapes don't connect with each other. After a bit of time doing this, most children gradually realize that they ought to be looking to connect pieces that fit together and that also match in color as depicted on the overall picture being solved for.

All right, you've had a while to solve the jigsaw puzzle and let's assume you were able to do so.

Did you learn anything in the process of solving the jigsaw puzzle, especially something that might be applied to doing additional jigsaw puzzles later on?

Perhaps you figured out that there are some pieces that are at the edge of the puzzle. Those pieces are easy to find since they have a square edge. Furthermore, you might also divine that if you put together all the edges first, you'll have an outline of the solved puzzle and can build within that outline.

It seems like a smart idea.

To recap, you cleverly noticed a pattern among the pieces, namely that there were some with a straight or squared edge. Based on that pattern, you took an additional mental step and decided that you could likely do the edge of the puzzle with less effort than the rest of the puzzle, plus by completing the overall edge it would seem to further your efforts toward completing the rest of the puzzle.

Maybe you figured this out while doing the puzzle and opted to try the approach right away, rather than simply mentally filing the discovered technique away to use on a later occasion.

I next give you a second jigsaw puzzle.

What do you do?

You might decide to use your newfound technique and proceed ahead by doing the edges first.

Suppose though that I've played a bit of a trick and given you a so-called edgeless jigsaw puzzle. An edgeless version is one that doesn't have a straight or square edge to the puzzle and instead the edges are merely everyday pieces that appear to be perpetually unconnected.

If you are insistent on trying to first find all the straight or square-edged pieces, you'll be quite disappointed and frustrated, having to then abandon the edge-first algorithm that you've devised.

Some edgeless puzzles go further by having some pieces within the body of the puzzle that have square or straight edges, thereby possibly fooling you into believing that those pieces are for the true edge of the jigsaw.

Overall, here's what happened as you learned to do jigsaw puzzles.

You likely started by doing things in a somewhat random way, especially for the first jigsaw, finding pieces that fit together and assembling portions or chunks of the jigsaw. While doing so, you noticed that some pieces appeared to be edges and so you came up with the notion that doing the edges was a keen way to more efficiently solve the puzzle. You might have employed this discovery right away, while in the act of solving the puzzle.

When you were given the second jigsaw, you tried to apply your lesson learned from the first one, but it didn't hold true.

Turns out that the edge approach doesn't always work, though you perhaps did not realize this limitation upon initial discovery of the tactic.

As this quick example showcases, learning can occur in the act of performing a task and might well be helpful for future performances of the task.

Meanwhile, what you've learned during a given task won't necessarily be applicable in future tasks, and could at times confuse you or make you less efficient, since you might be determined to apply something you've learned even though it no longer applies in other situations.

Adaptive Versus Lock-down While Learning

Learning that occurs on-the-fly is considered adaptive, implying that you are adapting as you go along.

In contrast, if you aren't aiming to learn on-the-fly, you can try to lock out the learning process and seek to proceed without doing any learning. This kind of lock-down of the learning process involves inhibiting any learning and making use of only what has previously been learned.
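
To make the distinction a bit more tangible, here is a minimal sketch in Python of the two modes (the class and method names are merely illustrative assumptions, not any particular product's code):

```python
# Minimal sketch of adaptive versus lock-down learning (illustrative only).
class Learner:
    def __init__(self, adaptive: bool):
        self.adaptive = adaptive
        self.lessons = {}  # patterns learned so far, e.g. "edge pieces first"

    def act(self, situation: str) -> str:
        # Rely on whatever has previously been learned.
        return self.lessons.get(situation, "experiment")

    def observe(self, situation: str, what_worked: str) -> None:
        if self.adaptive:
            # Adaptive: fold the new experience into future behavior right away.
            self.lessons[situation] = what_worked
        else:
            # Lock-down: behavior stays frozen; the experience is not applied
            # (in practice it might be logged for humans to review later).
            pass
```

In the adaptive mode the system folds new experiences into its future behavior immediately, whereas in the lock-down mode the behavior stays frozen until someone deliberately changes it.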

Voila, now it's time to discuss Artificial Intelligence (AI).

Today's AI systems have seemingly gotten pretty good at a number of human-like tasks (though quite constrained tasks), partially as a result of advances in Machine Learning (ML).

Machine Learning involves the computer system seeking to find patterns and then leveraging those patterns for boosting the performance of the AI.

An AI developer usually opts to try out different kinds of Machine Learning methods when they are putting together an AI system (see my piece on ensemble ML) and typically settles on a specific ML that they will then embed into their AI system.

A looming issue that society is gradually uncovering involves whether AI Machine Learning should be adaptive as it performs its efforts, or whether it is better to lock-down the adaptiveness while the ML is undertaking a task.

Let's consider why this is an important point.

Such a concern has been especially raised in the MedTech space, involving AI-based medical devices and systems that are being used in medicine and healthcare.

Suppose that an inventor creates a new medical device that examines blood samples and, while using AI, tries to make predictions about the health of the patient that provided the blood.

Usually, such devices would require federal regulatory approval before they could be placed into the marketplace for usage.

If this medical device is making use of AI Machine Learning, it implies that the system could be using adaptive techniques and therefore will try to improve its predictive capability while examining blood samples.

Any federal agency that initially tested the medical device to try and ensure that it was reliable and accurate prior to it being released would have done so at a point in time prior to those adaptive acts that are going to occur while the AI ML is in everyday use.

Thus, the medical device using AI ML is going to inevitably change what it does, likely veering outside the realm of what the agency thought it was approving.

On the downside, the ML is potentially going to learn things that aren't necessarily applicable, and yet not realize that those aspects are not always relevant and proceed to falsely assess a given blood sample (recall the story of believing that doing the edge of a jigsaw can be done by simply finding the straight or squared pieces, which didn't turn out to be a valid approach in all cases).

On the upside, the ML might be identifying valuable nuances by being adaptive and self-improving at assessing blood samples, boosting what it does and enhancing patient care.

Yes, some argue, there is that chance of the upside, but when making potentially life-or-death assessments, do we want an AI Machine Learning algorithm being unleashed such that it could adapt in ways that aren't desirable and might, in fact, be downright dangerous?

That's the rub.

Some assert that the adaptive aspects should not be allowed to adjust what the AI system does on-the-fly, and should instead, in a lock-down mode, merely collect and identify potential changes that would then be inspected and approved by a human, such as the AI developers that put together the system.

Furthermore, in a regulatory situation, the AI developers would need to go back to the regulatory agency and propose the AI system as a newly updated version, getting agency approval before those adaptations were used in the real-world acts of the system.
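
In rough terms, that proposal amounts to a workflow like the sketch below, in which learned adjustments are merely queued as candidates until humans (and, where required, a regulator) sign off on a new version; all of the names here are hypothetical.

```python
# Hypothetical sketch: in lock-down, adaptations are proposed, not applied,
# until a human (and, where required, a regulator) approves a new version.
class LockedDownModel:
    def __init__(self):
        self.version = "1.0-approved"
        self.rules = {}             # behavior currently in force
        self.pending_changes = []   # candidate adaptations awaiting review

    def propose_change(self, situation: str, new_rule: str) -> None:
        # Learned adjustments are only recorded here, never used directly.
        self.pending_changes.append((situation, new_rule))

    def apply_approved(self, approved_changes, new_version: str) -> None:
        # Called only after the AI developers have inspected the changes and,
        # in a regulated setting, the agency has signed off on the new version.
        for situation, new_rule in approved_changes:
            self.rules[situation] = new_rule
        self.version = new_version
        self.pending_changes.clear()
```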

This thorny question about adaptiveness running free or being locked down is often called the "update problem" and is raising quite a debate.

In case you think the answer is simple (always lock-down), unfortunately, life is not always so easy.

Those that don't want the lock-down are apt to say that doing so will hamstring the AI Machine Learning, which presumably has the advantage of being able to self-adjust and get better as it undertakes its efforts.

If you force the AI ML to perform in a lock-down manner, you might as well toss out the AI ML since it no longer is free to adjust and enhance what it does.

Trying to find a suitable middle ground, some suggest that there could be guardrails that serve to keep the AI ML from going too far astray.

By putting boundaries or limits on the kinds of adjustments or adaptiveness, you could maybe get the best of both worlds, namely a form of adaptive capability that furthers the system and yet keeps it within a suitable range that won't cause the system to seemingly become unsavory.
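
As a loose illustration of such a guardrail, consider clipping any on-the-fly adjustment so it cannot stray far from the approved baseline; the threshold value below is a made-up placeholder, not a recommended setting.

```python
# Illustrative guardrail: permit small on-the-fly adjustments, but clip any
# update that would push a parameter too far from its approved baseline.
MAX_DRIFT = 0.1  # hypothetical bound on deviation from the approved value

def guarded_update(approved_value: float, proposed_value: float) -> float:
    lower = approved_value - MAX_DRIFT
    upper = approved_value + MAX_DRIFT
    return max(lower, min(upper, proposed_value))

# An adaptation wanting to jump a weight from 0.4 all the way to 0.9 is
# instead held to the sanctioned range.
print(guarded_update(0.4, 0.9))  # 0.5, the upper bound of the allowed range
```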

The U.S. Food and Drug Administration (FDA) has sketched a regulatory framework for AI ML and medical devices (see link here) and is seeking input on this update problem debate.

Overall, this element of AI ML is still up for debate across all areas of application, not just the medical domain, and brings to the forefront the trade-offs involved in deploying AI ML systems.

Here's an interesting question: Do we want true self-driving cars to be able to utilize AI Machine Learning in an adaptive manner or in a lock-down manner?

It's kind of a trick question or at least a tricky question.

Let's unpack the matter.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so I'm not going to include them in this discussion about AI ML (though for clarification, Level 2 and Level 3 could indeed have AI ML involved in their systems and thus this discussion overall is relevant even to semi-autonomous cars).

For semi-autonomous cars, it is equally important that I mention a disturbing aspect that's been arising, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And The Update Problem

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

The AI driving software is developed, tested, and loaded into the on-board computer processors that are in the driverless car. To allow for the AI software to be updated over time, the driverless car has an OTA (Over-The-Air) electronic communication capability.

When the AI developers decide it's time to do an update, they will push out the latest version of the AI driving software to the vehicle. Usually, this happens while the self-driving car is parked, say in your garage, perhaps charging up if it's an EV, and the OTA then takes place.

Right now, it is rare for the OTA updating to occur while the car is in motion, though there are efforts underway for enabling OTA of that nature (there is controversy about doing so, see link here).

Not only can updates be pushed into the driverless car, but the OTA capability can also be used to retrieve data from the self-driving car. For example, the sensors on the self-driving car will have collected lots of images, video, and radar and LIDAR data during a driving journey. This data could be sent up to the cloud being used by the automaker or self-driving tech firm.
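
In sketch form, an OTA cycle might look something like the following; every class, method, and field name here is a hypothetical stand-in purely for illustration, not any automaker's actual interface.

```python
# Rough sketch of an OTA cycle (all names are hypothetical stand-ins).
class Vehicle:
    def __init__(self):
        self.software_version = "2.3"

    def is_parked_and_idle(self) -> bool:
        return True  # e.g. sitting in the garage, perhaps charging

    def collect_sensor_logs(self) -> list:
        return ["camera frames", "radar sweeps", "LIDAR point clouds"]

    def install(self, version: str) -> None:
        self.software_version = version

def ota_cycle(vehicle: Vehicle, cloud_uploads: list, latest_approved: str) -> None:
    # Pull: send sensor data gathered during recent journeys up to the cloud.
    cloud_uploads.extend(vehicle.collect_sensor_logs())

    # Push: only install new AI driving software while the car is safely parked.
    if vehicle.is_parked_and_idle() and latest_approved != vehicle.software_version:
        vehicle.install(latest_approved)

car, cloud = Vehicle(), []
ota_cycle(car, cloud, latest_approved="2.4")
print(car.software_version)  # 2.4
```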

We are ready now to discuss the AI Machine Learning topic as it relates to adaptiveness versus lock-down in the use case of self-driving cars.

Should the AI ML that's on-board the driverless car be allowed to update itself, being adaptive, or should updates only be performed via OTA from the cloud, based presumably on the latest changes instituted and approved by the AI developers?

This might seem rather abstract, so let's use a simple example to illuminate the matter.

Consider the instance of a driverless car that encounters a dog in the roadway.

Perhaps the AI ML on-board the self-driving car detects the dog and opts to honk the horn of the car to try and prod the dog to get out of the way. Let's pretend that the horn honking succeeds and the dog scampers away.

In an adaptive mode, the AI ML might adjust its behavior to now include the lesson that honking the horn is successful at prompting an animal to get off the road.

Suppose a while later, there's a cat in the road. The AI system opts to honk the horn, and the cat scurries away (though that cat is mighty steamed!).

So far, this horn honking seems to be working out well.

The next day, there's a moose in the roadway.

The AI system honks the horn, since doing so worked previously, and the AI assumes that the moose is going to run away.

Oops, it turns out that the moose, having been startled by the horn, opts to charge toward the driverless car, deciding that it should take on the menacing mechanical beast.

Now, I realize this example is a bit contrived, but I'm trying to quickly illustrate that the AI ML of an adaptive style could adjust in a manner that won't necessarily be right in all cases (again, recall the earlier jigsaw story).
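
For those who prefer code, here's a deliberately simplistic sketch of how such an adaptive over-generalization could arise; the names and logic are hypothetical, and a real driving system would be vastly more intricate.

```python
# Deliberately simplistic sketch of how an adaptive policy over-generalizes.
lessons = {}  # maps a broad situation to the maneuver the AI now trusts

def choose_action(animal: str) -> str:
    # The flaw: the lesson is keyed on "animal in road" generally, so the
    # specific animal (dog, cat, or moose) never factors into the decision.
    return lessons.get("animal_in_road", "slow_and_stop")

def record_outcome(action: str, success: bool) -> None:
    if success:
        lessons["animal_in_road"] = action  # e.g. "honk_horn" after the dog

# The dog (and later the cat) scampers off when honked at, so honking
# becomes the stored rule...
record_outcome("honk_horn", success=True)
# ...and is then applied unchanged when the moose shows up.
print(choose_action("moose"))  # honk_horn
```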

Rather than the on-board AI ML adjusting, perhaps it would be safer to keep it in lock-down.

But, you say, the on-board AI will be forever in a static state and not be improving.

Well, recall that there's the OTA capability of updating.

Presumably, the driverless car could have provided the data about the initial instance of the dog and the horn honking up to the cloud, and the AI developers might have studied the matter. Then, upon carefully adjusting the AI system, the AI developers might, later on, push the latest animal avoidance routine down into the driverless car.

The point being that there is an open question about whether we want to have multi-ton life-or-death cars on our roadways that are being run by AI that is able to adjust itself, or whether we want the on-board AI to be on lock-down and only allow updates via OTA (which presumably would be explicitly derived and approved via human hands and minds).

That's the crux of the update problem for driverless cars.

Conclusion

There is a plethora of trade-offs involved in the self-driving car adaptiveness dilemma.

If a self-driving car isn't adjusting on-the-fly, it might not cope well with any new situations that crop up and will perhaps fail to make an urgent choice appropriately. Having to wait maybe hours, days, or weeks to get an OTA update might prolong the time that the AI continues to be unable to adequately handle certain roadway situations.

Human drivers adapt on-the-fly, and if we are seeking to have the AI driving system be as good as or possibly better than human drivers, wouldn't we want and need to have the AI ML be adaptive on-the-fly?

Can suitable system-related guardrails be put in place to keep the AI ML from adapting in some kind of wild or untoward manner?

Though we commonly deride human drivers for their flaws and foibles, the ability of humans to learn and adjust their behavior is quite a marvel, one that continues to be somewhat elusive when it comes to achieving the same in AI and Machine Learning.

Some believe that we need to solve the jigsaw puzzle of the human mind and how it works before we'll have AI ML that's of any top form.

This isnt a mere edge problem and instead sits at the core of achieving true AI.
