It’s Time to Recalibrate Our AI Expectations – InformationWeek

Posted: May 28, 2022 at 8:34 pm

I can still remember the first time I used Alexa.

It’s crystal clear in my head: I said, “Alexa, play ‘Take On Me’,” and a few seconds later A-ha’s synthy drums kicked in. It was one of the few times in my life that artificial intelligence has left me speechless.

It was a real-world version of the technology I grew up seeing in countless science fiction movies and TV shows. Giving that command, I felt like Captain Kirk talking to the computers on the USS Enterprise.

This was in 2013. I recall wondering where the technology would be in another five or 10 years -- which is roughly where we are right now. I imagined myself having full-blown conversations with personal AI assistants and giving complex voice instructions to my computer. That all seemed achievable, even probable. After all, technology advances exponentially.

With the benefit of hindsight, I can see that I was too optimistic. We’re still a long way from bona fide human-to-AI conversation.

Human imaginations always outpace technology. What I can imagine in a minute takes a decade to become something tangible. Left unchecked, our perceptions race away from facts. Every so often, we have to recalibrate our expectations.

We need to swap out our science fiction dreams for technological fact.

How Advanced is AI -- Really?

For as long as I can remember, people have claimed that fully self-driving cars are just over the horizon. Tesla, Toyota, General Motors, and Google all promised us self-driving cars by the end of 2020, but we’re still waiting. The technology seems to always be just out of reach.

It’s the same in most other industries.

Take cloud communication. People have long dreamed of autonomous AI agents that handle the bulk of contact center communication. Some have even promised they’re on the way. But like building an autonomous car, crafting an artificial agent is a big challenge. I have no doubt that we can get there, just that it will take more time than expected.

Think about two small parts: speech recognition (transcribing speech into text) and natural language processing (understanding text and spoken word).

Today, technology transcribes calls instantaneously, with far better accuracy than I could manage if I had to become a stenographer for a day. And natural language processing technology for enterprises is good, too. It can analyze transcripts and provide some basic understanding of topics, questions, sentiment, action items, and so on.

But what AI can’t do just yet is understand what a conversation is actually about. A system can transcribe a conversation about puppies, pull out questions about breeds, and highlight an unanswered question about Labrador veterinary care. But it doesn’t know what a Labrador is or what a flea treatment entails. It doesn’t even know what a dog is. Is that kind of tragic, and a little creepy? Sure, but it’s also true.
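To make that distinction concrete, here is a deliberately naive sketch (my own toy illustration, not any vendor’s actual pipeline) of how a system can flag questions and “unanswered” questions in a transcript using shallow surface cues, with zero understanding of dogs, breeds, or flea treatments:

```python
# Toy illustration: surface-level cues can extract "questions" from a
# transcript without any understanding of what the conversation is about.
transcript = [
    ("Alice", "What breeds are easiest to train?"),
    ("Bob", "Labs and poodles, usually."),
    ("Alice", "How often does a Labrador need flea treatment?"),
    ("Bob", "Let me check on that and get back to you."),
]

def find_questions(turns):
    """Return (speaker, text, answered) triples using shallow cues only."""
    results = []
    for i, (speaker, text) in enumerate(turns):
        if text.rstrip().endswith("?"):
            # Crude heuristic: the question counts as "answered" if the next
            # turn exists and is not a deferral. Real systems use richer
            # models, but the principle -- pattern over meaning -- is similar.
            follow_up = turns[i + 1][1] if i + 1 < len(turns) else ""
            answered = bool(follow_up) and "get back to you" not in follow_up.lower()
            results.append((speaker, text, answered))
    return results

for speaker, text, answered in find_questions(transcript):
    status = "answered" if answered else "unanswered"
    print(f"{speaker}: {text} [{status}]")
```

The heuristic happily flags the Labrador question as unanswered, yet nothing in the code knows what a Labrador is. That gap between pattern-matching and understanding is exactly the point.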

Today’s AI systems are great for simple, repeatable functions. Because they perform those functions so well, they can give a false impression of their potential. The leap from simple function to fully autonomous agent or self-driving car is a chasm. I feel confident saying that we won’t see a fully autonomous smart agent replacing a human agent in the next five to ten years.

There’s a gap between what we believe AI can do and what it’s capable of in the real world. It’s up to companies to fix the discrepancy. Because if we let rumor run wild, it’ll undermine all the breakthroughs we have made.

It’s tempting to tweak the truth and embellish functionality, especially when it comes to something as opaque as AI. But a lot of companies do just that. According to venture firm MMC, four in 10 European startups classified as AI companies don’t use AI technology in a way that’s material to their business. In a lot of cases, their AI powers things like chatbots or fraud prevention. Both are useful applications, but they’re more of an auxiliary service than a central selling point.

Small embellishments or overpromises probably help in the short term. A company can generate media buzz, win over some customers, and pad its bottom line. But after people start using their product, those small wins turn into big losses.

When you overpromise and underdeliver, people get frustrated. They complain. They cancel. They bad-mouth your company to their network. I know that’s true because I’ve been that consumer.

In the mid-1990s, I was captivated by an ad for a speech-to-text program. They promised the whole science fiction experience: speaking out loud, giving voice commands, and perfect transcription. It sounded amazing, so I downloaded the program and spent 60 hours training it on my voice. Prep work done, I sat down to narrate a college essay.

Let’s just say it failed to live up to any semblance of expectations.

It missed commands, transcribed poorly, and was far more frustrating than just writing my college papers with a pen, paper, and Bic Wite-Out. It was all hype and no substance. I ditched the tool and never came back. It’s only now, decades later -- and with the development of personal assistants -- that I’m finally coming back to voice commands.

Here’s the wild part: There’s no regulation around this whatsoever. Companies have to read cautionary tales like this and decide to regulate themselves. For those leaders and organizations willing to hold themselves accountable, there are some basic rules.

First, businesses should be upfront about how they source their training data. Companies like Google and Facebook have rightly caught flak for being cagey about their data-gathering methods. Where does it come from? Is it representative? How do you manipulate it after collection?

If you’re an AI practitioner, or you’re part of the go-to-market team for an AI product, you need to be open. There’s nothing sensitive about sharing that process. What happens when you tell your competitors how you find your data? Nothing. Owning the data is the important bit, not your data-gathering process.

Second, be clear about how you’re using that data. Data is the lifeblood of AI systems. It’s what makes them work, so there’s no sidestepping the question. When you’re upfront, people are usually happy to opt into sharing their anonymized data in a collective pool, especially when you tell them it’s to help improve the product.
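As a minimal sketch of that opt-in pattern (my own illustration, not any particular vendor’s pipeline), a product might replace the user’s identifier with a salted one-way hash before a record ever reaches a shared training pool:

```python
import hashlib

def anonymize_record(user_id: str, transcript: str, salt: str) -> dict:
    """Replace the user ID with a salted one-way hash before pooling.

    Salting prevents trivially re-identifying users by hashing guessed IDs.
    (Real anonymization would also scrub names, account numbers, and other
    identifiers from the transcript text itself; that step is omitted here.)
    """
    digest = hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()
    return {"user": digest[:16], "text": transcript}

# Records added to the shared pool carry only the pseudonymous hash.
pool = [
    anonymize_record("alice@example.com", "How do I reset my router?", salt="s3cret"),
    anonymize_record("bob@example.com", "My invoice looks wrong.", salt="s3cret"),
]
for record in pool:
    print(record["user"], "->", record["text"])
```

The hash is deterministic, so the same user’s records can still be grouped for model training, but the raw identifier never leaves the product.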

Last, describe your AI products accurately and honestly. Be upfront about what you can do and, when it’s appropriate, what you can’t. You might lose an inch to your competitors in the short term, but ethical companies stand to win out in the long term. They’ll retain happy customers, enjoy sustainable growth, and blow past organizations playing fast and loose with the truth.

The human imagination is a brilliant thing. But we can’t let it rewrite our technological reality. By all means, imagine, daydream, and ponder. Think up dozens of new AI applications and products. Use those ideas to fuel your work.

But don’t let your ideas write checks your technology can’t cash.
