Human-Centered AI, An In-Depth Study Of The Current State Of The Artificial Intelligence Concept – Forbes

Posted: February 17, 2022 at 8:28 am


Human-Centered Artificial Intelligence (HCAI) is a concept that puts human usage of, and access to, AI technology at the forefront. To me, it seems to stand in opposition to the data-driven vision of some pundits, though it is possible to differentiate between goals and development. Human-Centered AI, by Ben Shneiderman, is an excellent introduction to the concepts of HCAI. Be aware, though, that this isn't a breezy, short book aimed at quick review. This is a business school textbook. Management interested in governance and control should focus on part four of the book, discussed towards the end of this article. That section should be a must-read, even if you skim the rest.

That point is important so as not to surprise people. The book's audience is business personnel and students who want a strong introduction to the issues of HCAI, presenting concepts that should then be drilled down into practice. It is for upper and middle management in the CIO, CTO, R&D and other more technical realms of an organization. The text is 376 pages in a font smaller than the usual business book. Given that content, this review will remain at a higher level than many of the book reviews in this column.

There's an important thread running through the book. The author differentiates two research lenses: science and innovation. The science approach is focused on what is possible from a technical view; why it is being done doesn't matter. On the other hand, Ben Shneiderman points to the innovation view, that of understanding how a technology can provide innovation in the real world. HCAI is driven from the innovation perspective.

As much as I like this book, it isn't perfect. The big problem early on is chapter four, and that chapter should be skimmed or skipped. In it, the author presents another academic technologist's view that the AI revolution is similar to the industrial revolution, and he makes the same mistake many do in claiming jobs won't be lost. The industrial revolution took people from farms and crafts into simple shop floors. Over generations, those shop floors became more complex, but it was a stepwise advancement. Artificial intelligence isn't that. It will take over jobs with no similar positions to fill. The gap between many of the disappearing jobs and the remaining ones is much larger than during the industrial revolution.

He also states that automation lowers cost and improves quality. The first, yes. The second is very arguable. That, however, is a discussion for another day.

Back to what I like. Chapter eight focuses on the author's two-dimensional view of human and automation controls and how they overlap. There are some excellent examples.

Chapter 12 is key to understanding the science vs. innovation views mentioned above. While the discussion runs through the book, this chapter focuses on it in a clear way. It also discusses why the innovation view requires understanding and explainability of AI systems.

Social robots are described and discussed in detail in chapter 16. While it is a good survey of options, I do think the author misses one critical point. He points out that surveys over the years show people interested in anthropomorphic robots, with the feedback implying those robots aren't yet good enough. Then he states, in softer words, the opinion that they will never be good enough. Too many opinions over the years have claimed that because AI hasn't yet reached point X, it will never reach point X. That's a stretch.

The same chapter points to what the author describes as supertools, functional devices, as the alternative. They avoid the anthropomorphic trap, creating usable devices with responses that are accepted. They are useful and clearly a segment of devices that will remain; but chatbot research has also shown improving technology, creating more acceptance as long as people know they are talking with a chatbot.

For business managers and government employees who wish to better understand the organizational impact of AI at multiple levels, part four is the meat of the matter. The author defines four levels of governance:

Software development

Corporate policy

Industry & trade standards

Governmental regulations

While the entire book is a good overview of HCAI, much of it is aimed at a mid-tier management, and even development manager, level of focus. The explanation of the four levels, however, is important to all levels in companies, industry and government. The types of governance aren't independent, and people must be aware of how they integrate. For instance, if software developers aren't paying attention to social demands, they won't be prepared for government regulation that could lay waste to expenditures of time and money. In the opposite direction, there is not much use in creating industry standards and government regulations that aren't directly applicable to the technology.

That means government officials hiring people who can better educate them on what's possible. Long before the recent hearings on social media, and even before the famous statement by Senator Ted Stevens about the internet being a series of tubes, legislators have wanted to help citizens but have not understood the implications of the technology and how best to address it.

It also means that corporate policy is critical, as it must be the bridge between development and the real world. Human-centered AI is an important concept. This book is a heavy introduction, and many parts of it will be useful to different audiences. Students, in academia and business, can read it all, but it is also valuable to management who need both to understand how to better direct AI development and to require appropriate AI to solve market and social challenges.
