Will artificial intelligence empower us or overpower us? – Investment Week

In 2019, in an article for German publication 'Trends im Asset Management', an Invesco fund manager participated in a Q&A about the future of investment.

The very first question went to the heart of a shift that is reshaping this industry and many others: "How much time do you have left before the machines take over?"

Frivolous though the question was, its implication was undeniably relevant. The story of artificial intelligence (AI) and machine learning (ML) has long been defined by alternating bouts of optimism and dread, with the prospect of change encouraging eager expectation and the actual realisation of change prompting an alarm firmly rooted in fears of human obsolescence.


Of course, there is nothing unique about such reactions. They have been sparked by many transformative innovations in many settings through many ages.

Disruption invariably brings winners and losers, and it is this inevitability that demands the weighing of manifest benefits against negative externalities.

For several decades now the most efficient way to handle vast amounts of data has been to feed them into a machine.

Even relatively unsophisticated algorithms can make sense of huge masses of information millions of times more quickly than a human can.

Computers assist us in avoiding knee-jerk reactions, in distinguishing empiricism from quirk and coincidence and in applying a more systematic, rules-defined approach.
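To see what a "systematic, rules-defined approach" might look like in practice, consider a minimal sketch in Python. Everything here is invented purely for illustration - a textbook moving-average crossover rule applied to a synthetic random walk - and describes no actual investment process:

```python
import random

# Generate synthetic daily "prices" - a random walk standing in for
# the huge data sets a real system would ingest.
random.seed(42)
prices = [100.0]
for _ in range(1_000_000):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

def moving_average(series, window):
    """Trailing moving average, computed incrementally in O(n)."""
    out, total = [], 0.0
    for i, x in enumerate(series):
        total += x
        if i >= window:
            total -= series[i - window]
        out.append(total / min(i + 1, window))
    return out

# The "rule": flag days where the 20-day average crosses above the
# 100-day average. No emotion, no knee-jerk reaction - the same
# criterion is applied identically to every one of a million points.
fast, slow = moving_average(prices, 20), moving_average(prices, 100)
signals = [i for i in range(1, len(prices))
           if fast[i] > slow[i] and fast[i - 1] <= slow[i - 1]]

print(f"Scanned {len(prices):,} observations, found {len(signals):,} crossovers")
```

The rule itself is deliberately crude; the point is that a machine applies it consistently across a volume of data no human could ever inspect.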


But does this mean that they are fundamentally superior? As we continue to wrestle with both the scope and the limitations of AI and ML, asset managers are facing a challenging balancing act - so in which direction, if any, are the scales likely to tip?

Asset managers are still coming to terms with the pros and cons of AI and ML. So are their clients. So is academia. So are regulators and policymakers.

This much was clear from a recent Cambridge Judge Business School conference, which Invesco was privileged to co-host at The Royal Society.

The event explored where the AI/ML phenomenon stands at present and where it might lead in the years to come.

Scholars, industry figures and other stakeholders addressed topics such as "black box" concerns, regulatory and ethical issues - especially in relation to data privacy - and the broader "promise and pitfalls" of AI and ML.

The liveliest session was a discussion about human-AI/ML interaction. Some contributors cautioned that the AI/ML revolution has barely begun, while others predicted that it will turn out to be far less impactful than widely envisaged.


Tellingly, everyone stressed the importance of a prudent combination of human and machine inputs.

This is an essential point, because the history of innovation and disruption in any field has repeatedly demonstrated that meaningful progress lies in a blend of what we know works well and what we know can be improved.

The relationship between humans and machines, whether in the sphere of asset management or elsewhere, presents a fascinating and recurring test of this rule of thumb.

As a report of the conference observed, it is worth recalling what is now thought of as the first-ever study of AI. Conducted more than 70 years ago by computer scientist and cryptanalyst Alan Turing, the most celebrated of World War II's codebreakers, it set out to determine whether machines can do what humans can.

The focus has long since shifted to what machines can do that humans cannot, yet it is also vital to recognise the opposite - that is, what humans can do that machines cannot.

Science fiction has traditionally portrayed ostensibly omniscient machines as fatally flawed. Arguably the best-known example is 2001's HAL 9000, which was forcibly disconnected after descending into what author Arthur C Clarke dubbed a "Hofstadter-Möbius loop".

Driven by "big data" and algorithms rather than by pseudo-human emotions, today's computers might not exhibit HAL's murderous tendencies; yet science-fact is increasingly evidencing their fallibility.

Google Flu Trends offers an interesting illustration: launched in 2008 and quietly retired in 2015, it proved incapable of fulfilling its intended task - predicting flu epidemics - because neither the data suppliers nor the algorithms understood flu well enough to make the concept work.

Similarly, consider the idea of predictive policing. In the US, as well as causing controversy around the reinforcement of biases, this algorithm-driven approach has performed well in relation to incidents of drug dealing, assault and gang violence but has proven much less useful in tackling crimes of passion and homicides.

Why? A reasonable explanation is that commonplace crimes produce a data set that is too enormous for a human to make sense of but which is perfectly suited to the power of algorithmic analysis.

Conversely, because they are mercifully rare, other crimes generate insufficient data for a machine to exploit - but they can still benefit from a seasoned police officer's intuition and experience.

In other words, there is a place for a computer's impartial, hyper-processed extrapolations; and there is also a place for a human's subjective, hard-won expertise.
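The data-volume point can be sketched numerically. In the illustrative Python simulation below - the event rate and sample sizes are invented, not real crime statistics - a naive frequency estimate is tight when observations are abundant and wildly unreliable when they are scarce, which is precisely the regime in which human judgment still earns its keep:

```python
import random
import statistics

random.seed(7)

def estimate_error(true_rate, n_observed, trials=500):
    """Average relative error of a naive frequency estimate of
    true_rate when only n_observed samples are available to learn from."""
    errors = []
    for _ in range(trials):
        hits = sum(random.random() < true_rate for _ in range(n_observed))
        estimate = hits / n_observed
        errors.append(abs(estimate - true_rate) / true_rate)
    return statistics.mean(errors)

# A "commonplace" pattern observed 10,000 times versus a "rare" one
# observed only 20 times, both with the same underlying 5% rate.
for label, n in [("abundant data (n=10,000)", 10_000),
                 ("scarce data (n=20)", 20)]:
    err = estimate_error(true_rate=0.05, n_observed=n)
    print(f"{label}: mean relative error of the estimate is about {err:.0%}")
```

With 10,000 observations the estimate lands within a few per cent of the truth; with 20 it is routinely out by a large multiple - no failing of the algorithm, simply too little data for any statistical method to grip.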

The truth is that intelligence comes in many forms, not all of them the exclusive preserve of algorithms.

As philosopher AJ Ayer once remarked: "It's much easier to imagine a machine creating works of art than appreciating them."

Irving John Good was a contemporary of Turing at Bletchley Park, the top-secret home of Britain's wartime cryptanalysts.

In 1965 he wrote: "The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

Today, with technology advancing at an unprecedented rate, the threat of computers outstripping their creators has arguably never seemed more realistic.

The World Economic Forum has officially acknowledged "the singularity" - the hypothetical point at which machine intelligence surpasses that of humans - as one of the most pressing issues around AI and ML.

Tesla chief executive Elon Musk has even posited that machines could render humans useless unless efforts to merge the two are dramatically stepped up.

In the absence of a "high-bandwidth interface to the brain", he says, the proliferation of computers "smarter than the smartest human on Earth" could end life as we know it. The prophecies of doom will no doubt keep coming.

Yet Invesco's aforementioned fund manager felt able to answer Trends im Asset Management's opening question thus: "I'm not worried about that."

What makes him so confident? Why is he so sure that AI and ML will not railroad their flesh-and-bones counterparts into redundancy?

His conviction, which we would all do well to share, stems from a belief that technology is not here to overpower us: it is here to empower us.

As long as we remember this crucial caveat, AI and ML should help augment the decision-making, performance and services that asset managers can offer.

If we appreciate what we do well and what computers and their algorithms do well - and, by extension, if we accept what each does better than the other - then the outcomes, overall, should be immensely positive.

The fact is that we have an amazing opportunity to utilise the best of both worlds - human and machine - to deliver investment solutions of unprecedented effectiveness.

Henning Stein is global head of thought leadership at Invesco
