How AI will earn your trust – JAXenter

Posted: April 9, 2020 at 6:27 pm

In the world of applying AI to IT Operations, one of the major enterprise concerns is a lack of trust in the technology. This tends to be an emotional rather than an intellectual response. When I evaluate the sources of distrust in relation to IT Ops, I can narrow them down to four specific causes.

The algorithms used in AIOps are fairly complex, even for an audience with a background in computer science. The way in which these algorithms are constructed and deployed is not covered in academia. Modern AI is mathematically intensive, and many IT practitioners haven't even seen this kind of mathematics before. The algorithms sit outside the knowledge base of today's professional developers and IT operators.

SEE ALSO: 3 global manufacturing brands at the forefront of AI and ML

When you analyse the specific types of mathematics used in popular AI-based algorithms deployed in an IT operations context, the maths is basically intractable. What is going on inside the algorithms cannot be teased out or reverse engineered. The mathematics generates patterns whose sources cannot be determined, due to the very nature of the algorithms themselves.

For example, an algorithm might tell you that a number of CPUs have passed a usage threshold of 90%, which will result in end user response time degrading. The implicit instruction is therefore to offload the usage of some servers. In this situation, executive decision makers will want to know why the algorithm indicates there is an issue. If you were using an expert system, it could go back and show you all the invoked rules until you arrived back at the original premise. It's almost like doing a logical inference in reverse. The fact that you can trace it backwards lends credibility and validates the conclusion.
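To make the contrast concrete, here is a minimal sketch of how an expert system can justify a conclusion by walking its rule chain in reverse. The rule names and thresholds are hypothetical illustrations, not taken from any real AIOps product.

```python
# Hypothetical rule base: each conclusion maps to (rule text, premises).
RULES = {
    "offload_servers": ("degraded response time implies offloading load", ["response_degrading"]),
    "response_degrading": ("CPU over threshold implies slower responses", ["high_cpu"]),
    "high_cpu": ("cpu_usage > 90% on several servers", ["cpu_usage_observed"]),
}

def explain(conclusion, depth=0, trace=None):
    """Run the inference chain in reverse, collecting each step."""
    if trace is None:
        trace = []
    rule, premises = RULES.get(conclusion, ("observed fact", []))
    trace.append("  " * depth + f"{conclusion}: {rule}")
    for premise in premises:
        explain(premise, depth + 1, trace)
    return trace

for line in explain("offload_servers"):
    print(line)
```

Every step in the printed trace is a rule a human can inspect, which is exactly the audit trail that modern statistical AI cannot produce.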

What happens in the case of AI is that things get mixed up and switched around, which means the links from the conclusion back to the original premise are broken. Even enormous computing power doesn't help, as the algorithm loses track of its previous steps. You're left with a general description of the algorithm and the start and end data, but no way to link these things together. You can't run it in reverse. It's intractable. This generates further distrust, which lives on a deeper level. It's not just about being unfamiliar with the mathematical logic.

Let's look at the way AI has been marketed since its inception in the late 1950s. The general marketing theme has been that AI is trying to create a human mind; when this is translated into a professional context, people view it as a threat to their jobs. This notion has been resented for a long time. Scepticism is rife, but it is often a tactic used to preserve livelihoods.

The way AI has been marketed as an intellectual goal and a meaningful business endeavour lends credibility to that concern. This is when scepticism starts to shade into genuine distrust: not only is this a technology that may not work, it is also my personal enemy.

IT Operations, of all the various enterprise disciplines, is constantly threatened with cost cutting and role reduction. This isn't just paranoia; there's a lot of justification behind the fear.

IT Operations has had a number of bouts with commercialized AI, which first emerged in the final days of the Cold War, when a lot of code was repackaged and sold to IT Ops as a plausible use case. Many of the people who are now in senior enterprise positions were among the first wave of people excited about AI and what it could achieve. Unfortunately, AI didn't initially deliver on expectations, so for these people AI is not something new; it is a false promise. In many IT Operations circles there is therefore a bad memory of previous hype: a historical reason for scepticism which is unique to the IT Ops world.

These are my four reasons why enterprises don't trust AIOps, and AI in general. Yet despite these four concerns, the use of AI-based algorithms in an IT Operations context is inevitable.

Cast your mind back to Gartner's very influential 2001 definition of big data: the 3Vs. The 3Vs (volume, variety and velocity) are three defining properties or dimensions of big data: volume refers to the amount of data, variety to the number of types of data and velocity to the speed of data processing. At the time the definition was very valuable and made a lot of sense.

The one thing Gartner missed is the issue of dimensionality, i.e. how many attributes a dataset has. A traditional data item has maybe four or five attributes. If you have millions of these items, each with a few attributes, you can store them in a database, and it is fairly straightforward to search on key values and run analytics to obtain answers from the data.

However, when you're dealing with high dimensions and a data item that has a thousand or a million attributes, suddenly your traditional statistical techniques don't work. Your traditional search methods become ungainly. It becomes impossible to formulate a query.
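A small illustration (my own, not from any specific AIOps dataset) of why distance-based search degrades in high dimensions: for randomly distributed points, the gap between the nearest and farthest neighbour shrinks as dimensionality grows, so a "find the closest match" query becomes less and less discriminating.

```python
import math
import random

def distance_contrast(dim, n_points=200, seed=0):
    """Return the farthest/nearest distance ratio from one query point
    to a cloud of random points in the unit hypercube."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = []
    for _ in range(n_points):
        point = [rng.random() for _ in range(dim)]
        dists.append(math.dist(query, point))
    return max(dists) / min(dists)

# In 2 dimensions the nearest point is far closer than the farthest;
# in 1,000 dimensions all points sit at nearly the same distance.
for dim in (2, 10, 1000):
    print(dim, round(distance_contrast(dim), 2))
```

As the ratio approaches 1, "nearest" stops meaning anything, which is one concrete sense in which traditional search and statistics break down.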

As our systems become more volatile and dynamic, we are unintentionally multiplying data items and attributes, which leads me on to AI. Almost all of the AI techniques developed to date are attempts to handle high dimensional data structures and collapse them into a smaller number of manageable attributes.
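One standard way to collapse a high dimensional item into a handful of attributes is a random projection, which approximately preserves distances between points (the Johnson-Lindenstrauss idea). This is a stdlib-only sketch for illustration; real AIOps pipelines would use a tuned library implementation.

```python
import math
import random

def random_projection(vector, out_dim, seed=42):
    """Project a len(vector)-dimensional point down to out_dim attributes
    using a seeded random Gaussian matrix, scaled to roughly preserve norms."""
    rng = random.Random(seed)
    in_dim = len(vector)
    scale = 1.0 / math.sqrt(out_dim)
    projected = []
    for _ in range(out_dim):
        row = [rng.gauss(0.0, 1.0) for _ in range(in_dim)]
        projected.append(scale * sum(r * v for r, v in zip(row, vector)))
    return projected

# A data item with 1,000 attributes becomes a 10-attribute summary.
item = [random.Random(1).random() for _ in range(1000)]
summary = random_projection(item, out_dim=10)
print(len(summary))
```

The fixed seed makes the projection deterministic, so every data item is mapped by the same matrix and the resulting low dimensional summaries remain comparable to each other.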

When you go to the leading universities, you see fewer standalone courses on Machine Learning and more that embed Machine Learning topics into courses on high dimensional probability and statistics. What's happening is that Machine Learning per se is starting to resemble practically oriented bootcamps, while the study of AI is now more focussed on understanding probability, geometry and statistics in relation to high dimensions.

How did we end up here? The brain uses algorithms to process high dimensional data, reduces it to low dimensional attributes, then processes those and arrives at a conclusion. This is the path AI has taken. Try to codify what the brain is doing, and you end up realizing that what you're actually doing is high dimensional probability and statistics.

SEE ALSO: Facebook AIs Demucs teaches AI to hear in a more human-like way

I can see discussions about AI being repositioned around high dimensional data, which will provide a much clearer vision of what we are trying to achieve. In terms of IT operations, there will soon be an acknowledgement that modern IT systems contain not only high volume, high velocity and high variety data, but also high dimensional datasets. To cope with this, we're going to need high dimensional probability and statistics, and to model the data in high dimensional geometry. This is why AIOps is inevitable.
