The Way We Train AI is Fundamentally Flawed

It's no secret that machine-learning models tuned and tweaked to near-perfect performance in the lab often fail in real settings. This is typically put down to a mismatch between the data the AI was trained and tested on and the data it encounters in the world, a problem known as data shift. For example, an AI trained to spot signs of disease in high-quality medical images will struggle with blurry or cropped images captured by a cheap camera in a busy clinic.

Now a group of 40 researchers across seven different teams at Google have identified another major cause of the common failure of machine-learning models. Called underspecification, it could be an even bigger problem than data shift. "We are asking more of machine-learning models than we are able to guarantee with our current approach," says Alex D'Amour, who led the study.
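The idea is that a standard training pipeline can produce many models that all look equally good on held-out test data, yet behave very differently once conditions change. Below is a minimal sketch of that effect, assuming scikit-learn is available; the synthetic dataset, seeds, and noise scale are illustrative choices, not taken from the Google study.

```python
# Minimal sketch of underspecification: two identically configured models,
# differing only in random seed, reach near-identical held-out accuracy
# yet can diverge on distribution-shifted inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulated data shift: the same test points corrupted by measurement noise
# (noise scale is an arbitrary illustrative choice).
rng = np.random.default_rng(0)
X_shifted = X_test + rng.normal(scale=2.0, size=X_test.shape)

# Same architecture, same data, same hyperparameters -- only the seed differs.
models = [
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                  random_state=seed).fit(X_train, y_train)
    for seed in (1, 2)
]

for i, m in enumerate(models):
    print(f"model {i}: in-distribution accuracy = {m.score(X_test, y_test):.3f}")

# The training procedure does not pin down a single solution: both models
# pass the same test, but they need not agree once the data shifts.
disagreement = np.mean(models[0].predict(X_shifted) != models[1].predict(X_shifted))
print(f"prediction disagreement under shift: {disagreement:.3f}")
```

Standard testing, in other words, cannot tell these models apart, so the pipeline gives no guarantee about which behavior you deploy.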

