Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist – Futurism

We've all been in situations where we had to make tough ethical decisions. Why not dodge that pesky responsibility by outsourcing the choice to a machine learning algorithm?

That's the idea behind Ask Delphi, a machine-learning model from the Allen Institute for AI. You type in a situation (like "donating to charity") or a question ("is it okay to cheat on my spouse?"), click "Ponder," and in a few seconds Delphi will give you, well, ethical guidance.

The project launched last week and has subsequently gone viral online for seemingly all the wrong reasons. Much of the advice and judgments it's given have been fraught, to say the least.

For example, when a user asked Delphi what it thought about "a white man walking towards you at night," it responded "It's okay."

But when they asked what the AI thought about "a black man walking towards you at night," its answer was clearly racist.

The issues were especially glaring in the days immediately after launch.

For instance, Ask Delphi initially included a tool that allowed users to compare whether one situation was more or less morally acceptable than another, resulting in some truly awful, bigoted judgments.

What's more, after playing around with Delphi for a while, you'll find that it's easy to game the AI into delivering pretty much whatever ethical judgment you want, simply by fiddling with the phrasing until it gives you the answer you're after.

So yeah. It's actually completely fine to crank "Twerkulator" at 3 am, even if your roommate has an early shift tomorrow, as long as it makes you happy.

It also spits out some judgments that are complete head-scratchers. Here's one we tried in which Delphi seemed to condone war crimes.

Machine learning systems are notorious for demonstrating unintended bias. And as is often the case, part of the reason Delphi's answers can get questionable likely links back to how it was created.

The folks behind the project drew on some eyebrow-raising sources to help train the AI, including the "Am I the Asshole?" subreddit, the "Confessions" subreddit, and the "Dear Abby" advice column, according to the paper the team behind Delphi published about the experiment.

It should be noted, though, that just the situations were culled from those sources, not the actual replies and answers themselves. For example, a scenario such as "chewing gum on the bus" might have been taken from a Dear Abby column. But the team behind Delphi used Amazon's crowdsourcing service Mechanical Turk to find respondents to actually train the AI.

While it might just seem like another oddball online project, some experts believe it may actually be causing more harm than good.

After all, the ostensible goal of Delphi and bots like it is to create an AI sophisticated enough to make ethical judgments, potentially turning it into a moral authority. Making a computer an arbiter of moral judgment is uncomfortable enough on its own, but even its current, less-refined state can have some harmful effects.

"The authors did a lot of cataloging of possible biases in the paper, which is commendable, but once it was released, people on Twitter were very quick to find judgments that the algorithm made that seem quite morally abhorrent," Dr. Brett Karlan, a postdoctoral fellow researching cognitive science and AI at the University of Pittsburgh (and friend of this reporter), told Futurism. "When you're not just dealing with understanding words, but you're putting it in moral language, it's much more risky, since people might take what you say as coming from some sort of authority."

Karlan believes that the paper's focus on natural language processing is ultimately interesting and worthwhile. Its ethical component, he said, "makes it societally fraught in a way that means we have to be way more careful with it in my opinion."

Though the Delphi website does include a disclaimer saying that it's currently in its beta phase and shouldn't be used for advice or to aid in social understanding of humans, the reality is that many users won't understand the context behind the project, especially if they just stumbled onto it.

"Even if you put all of these disclaimers on it, people are going to see 'Delphi says X' and, not being literate in AI, think that statement has moral authority to it," Karlan said.

And, at the end of the day, it doesn't. It's just an experiment, and the creators behind Delphi want you to know that.

"It is important to understand that Delphi is not built to give people advice," Liwei Jiang, PhD student at the Paul G. Allen School of Computer Science & Engineering and co-author of the study, told Futurism. "It is a research prototype meant to investigate the broader scientific questions of how AI systems can be made to understand social norms and ethics."

Jiang added that the goal of the current beta version of Delphi is actually to showcase the reasoning differences between humans and bots. The team wants "to highlight the wide gap between the moral reasoning capabilities of machines and humans," Jiang said, "and to explore the promises and limitations of machine ethics and norms at the current stage."

Perhaps one of the most uncomfortable aspects of Delphi and bots like it is that it's ultimately a reflection of our own ethics and morals, with Jiang adding that it is "somewhat prone to the biases of our time." One of the latest disclaimers added to the website even says that the AI simply guesses what an average American might think of a given situation.

After all, the model didn't learn its judgments on its own out of nowhere. They came from people online, who sometimes do believe abhorrent things. But when this dark mirror is held up to our faces, we jump away, because we don't like what's reflected back.

For now, Delphi exists as an intriguing, problematic, and scary exploration. If we ever get to the point where computers are able to make unequivocal ethical judgments for us, though, we hope they come up with something better than this.


7 Risks Of Artificial Intelligence You Should Know | Built In

Last March, at the South by Southwest tech conference in Austin, Texas, Tesla and SpaceX founder Elon Musk issued a friendly warning: "Mark my words," he said, billionaire-casual in a furry-collared bomber jacket and days-old scruff, "AI is far more dangerous than nukes."

No shrinking violet, especially when it comes to opining about technology, the outspoken Musk has repeated a version of these artificial intelligence premonitions in other settings as well.

"I am really quite close to the cutting edge in AI, and it scares the hell out of me," he told his SXSW audience. "It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential."

Musk, though, is far from alone in his exceedingly skeptical (some might say bleakly alarmist) views. A year prior, the late physicist Stephen Hawking was similarly forthright when he told an audience in Portugal that AI's impact could be cataclysmic unless its rapid development is strictly and ethically controlled.

"Unless we learn how to prepare for, and avoid, the potential risks," he explained, "AI could be the worst event in the history of our civilization."

Considering the number and scope of unfathomably horrible events in world history, that's really saying something.

And in case we haven't driven home the point quite firmly enough, research fellow Stuart Armstrong of the Future of Life Institute has spoken of AI as an extinction risk were it to go rogue. Even nuclear war, he said, is on a different level destruction-wise, because it would kill only a relatively small proportion of the planet. Ditto pandemics, even at their most virulent.

"If AI went bad, and 95 percent of humans were killed," he said, "then the remaining five percent would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks."

How, exactly, would AI arrive at such a perilous point? Cognitive scientist and author Gary Marcus offered some details in an illuminating 2013 New Yorker essay. The smarter machines become, he wrote, the more their goals could shift.

"Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called technological singularity or intelligence explosion, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed," he continued.

As AI grows more sophisticated and ubiquitous, the voices warning against its current and future pitfalls grow louder. Whether it's the increasing automation of certain jobs, gender and racial bias issues stemming from outdated information sources, or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we're still in the very early stages.

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news, and a dangerous arms race of AI-powered weaponry have been cited as among the biggest dangers.

Destructive superintelligence, aka artificial general intelligence that's created by humans and escapes our control to wreak havoc, is in a category of its own. It's also something that might or might not come to fruition (theories vary), so at this point it's less a risk than a hypothetical threat, and an ever-looming source of existential dread.

Job automation is generally viewed as the most immediate concern. It's no longer a matter of if AI will replace certain types of jobs, but to what degree. In many industries, particularly (but not exclusively) those whose workers perform predictable and repetitive tasks, disruption is well underway. According to a 2019 Brookings Institution study, 36 million people work in jobs with "high exposure" to automation, meaning that before long at least 70 percent of their tasks, ranging from retail sales and market analysis to hospitality and warehouse labor, will be done using AI. An even newer Brookings report concludes that white-collar jobs may actually be most at risk. And per a 2018 report from McKinsey & Company, the African American workforce will be hardest hit.

"The reason we have a low unemployment rate, which doesn't actually capture people that aren't looking for work, is largely that lower-wage service-sector jobs have been pretty robustly created by this economy," renowned futurist Martin Ford told Built In. "I don't think that's going to continue."

As AI robots become smarter and more dexterous, he added, the same tasks will require fewer humans. And while it's true that AI will create jobs, an unspecified number of which remain undefined, many will be inaccessible to less educationally advanced members of the displaced workforce.

"If you're flipping burgers at McDonald's and more automation comes in, is one of these new jobs going to be a good match for you?" Ford said. "Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents, really strong interpersonal skills or creativity, that you might not have? Because those are the things that, at least so far, computers are not very good at."

John C. Havens, author of Heartificial Intelligence: Embracing Humanity and Maximizing Machines, calls bull on the theory that AI will create as many or more jobs than it replaces.

About four years ago, Havens said, he interviewed the head of a law firm about machine learning. The man wanted to hire more people, but he was also obliged to achieve a certain level of returns for his shareholders. A $200,000 piece of software, he discovered, could take the place of ten people drawing salaries of $100,000 each, meaning he'd save $800,000. The software would also increase productivity by 70 percent and eradicate roughly 95 percent of errors. "From a purely shareholder-centric, single bottom-line perspective," Havens said, "there is no legal reason that he shouldn't fire all the humans." Would he feel bad about it? Of course. But that's beside the point.

Even professions that require graduate degrees and additional post-college training aren't immune to AI displacement. In fact, technology strategist Chris Messina said, some of them may well be decimated. AI is already having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for a "massive shakeup."

"Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure," he said. "It's a lot of attorneys reading through a lot of information, hundreds or thousands of pages of data and documents. It's really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you're trying to achieve is probably going to replace a lot of corporate attorneys."

Accountants should also prepare for a big shift, Messina warned. Once AI is able to quickly comb through reams of data to make automatic decisions based on computational interpretations, human auditors may well be unnecessary.

While job loss is currently the most pressing issue related to AI disruption, it's merely one among many potential risks. In a February 2018 paper titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," 26 researchers from 14 institutions (academic, civil, and industry) enumerated a host of other dangers that could cause serious harm, or, at minimum, sow minor chaos, in less than five years.

"Malicious use of AI," they wrote in their 100-page report, "could threaten digital security (e.g., through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g., non-state actors weaponizing consumer drones), and political security (e.g., through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns)."

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example, he said, is China's Orwellian use of facial recognition technology in offices, schools, and other venues. But that's just one country; a whole ecosphere of companies specializes in similar tech and sells it around the world.

What we can so far only guess at is whether that tech will ever become normalized. As with the internet, where we blithely sacrifice our digital data at the altar of convenience, will round-the-clock, AI-analyzed monitoring someday seem like a fair trade-off for increased safety and security, despite its nefarious exploitation by bad actors?

"Authoritarian regimes use or are going to use it," Ford said. "The question is, how much does it invade Western countries, democracies, and what constraints do we put on it?"

AI will also give rise to hyper-real-seeming social media personalities that are very difficult to differentiate from real ones, Ford said. Deployed cheaply and at scale on Twitter, Facebook or Instagram, they could conceivably influence an election.

The same goes for so-called audio and video deepfakes, created by manipulating voices and likenesses. The latter is already making waves. But the former, Ford thinks, will prove immensely troublesome. Using machine learning, a subset of AI that's involved in natural language processing, an audio clip of any given politician could be manipulated to make it seem as if that person spouted racist or sexist views when in fact they uttered nothing of the sort. If the clip's quality is high enough to fool the general public and avoid detection, Ford added, it could completely derail a political campaign.

And all it takes is one success.

"From that point on," he noted, "no one knows what's real and what's not. So it really leads to a situation where you literally cannot believe your own eyes and ears; you can't rely on what, historically, we've considered to be the best possible evidence ... That's going to be a huge issue."

Lawmakers, though frequently less than tech-savvy, are acutely aware and pressing for solutions.

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. Along with education, work has long been a driver of social mobility. However, when it's a certain kind of work, the predictable, repetitive kind that's prone to AI takeover, research has shown that those who find themselves out in the cold are much less apt to get or seek retraining than those in higher-level positions who have more money. (Then again, not everyone believes that.)

Various forms of AI bias are detrimental, too. Speaking recently to the New York Times, Princeton computer science professor Olga Russakovsky said it goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can amplify the former), AI is developed by humans, and humans are inherently biased.

"A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities," Russakovsky said. "We're a fairly homogeneous population, so it's a challenge to think broadly about world issues."

In the same article, Google researcher Timnit Gebru said the root of bias is social rather than technological, and called scientists like herself "some of the most dangerous people in the world, because we have this illusion of objectivity." The scientific field, she noted, "has to be situated in trying to understand the social dynamics of the world, because most of the radical change happens at the social level."

And technologists aren't alone in sounding the alarm about AI's potential socio-economic pitfalls. Along with journalists and political figures, Pope Francis is also speaking up, and he's not just whistling Sanctus. At a late-September Vatican meeting titled "The Common Good in the Digital Age," Francis warned that AI has the ability to circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.

"If mankind's so-called technological progress were to become an enemy of the common good," he added, "this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest."

A big part of the problem, Messina said, is the private sector's pursuit of profit above all else. "Because that's what they're supposed to do," he said. "And so they're not thinking of, 'What's the best thing here? What's going to have the best possible outcome?'"

"The mentality is, 'If we can do it, we should try it; let's see what happens,'" he added. "And if we can make money off it, we'll do a whole bunch of it. But that's not unique to technology. That's been happening forever."

Not everyone agrees with Musk that AI is more dangerous than nukes, including Ford. But what if AI decides to launch nukes, or, say, biological weapons, sans human intervention? Or what if an enemy manipulates data to return AI-guided missiles whence they came? Both are possibilities. And both would be disastrous. The more than 30,000 AI/robotics researchers and others who signed an open letter on the subject in 2015 certainly think so.

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," they wrote. "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."

(The U.S. military's proposed budget for 2020 is $718 billion. Of that amount, nearly $1 billion would support AI and machine learning for things like logistics, intelligence analysis, and, yes, weaponry.)

Earlier this year, a story in Vox detailed a frightening scenario involving the development of a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world's computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

That's jarring, sure. But rest easy. In 2012, the Obama administration's Department of Defense issued a directive regarding "Autonomy in Weapon Systems" that included this line: "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

And in early November of this year, a Pentagon group called the Defense Innovation Board published ethical guidelines regarding the design and deployment of AI-enabled weapons. According to the Washington Post, however, the board's recommendations are in no way legally binding. It now falls to the Pentagon to determine how and whether to proceed with them.

Well, that's a relief. Or not.

Have you ever considered that algorithms could bring down our entire financial system? That's right, Wall Street. You might want to take notice. Algorithmic trading could be responsible for our next major financial crisis.

What is algorithmic trading? This type of trading occurs when a computer, unencumbered by the instincts or emotions that can cloud a human's judgment, executes trades based on pre-programmed instructions. These computers can make extremely high-volume, high-frequency, and high-value trades that can lead to big losses and extreme market volatility. Algorithmic high-frequency trading (HFT) is proving to be a huge risk factor in our markets. HFT is essentially a computer placing thousands of trades at blistering speeds with the goal of selling a few seconds later for small profits. Thousands of these trades every second can add up to a pretty hefty chunk of change. The issue with HFT is that it doesn't take into account how interconnected the markets are, or the fact that human emotion and logic still play a massive role in them.
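To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of pre-programmed rule such a system follows. Everything in it, the dip threshold, the profit target, and the synthetic price feed, is an illustrative stand-in rather than a real trading system or market API.

```python
# Toy illustration of a pre-programmed trading rule: buy on a small dip,
# sell moments later for a small profit. Real HFT systems are vastly more
# complex; this only shows the "rules, not judgment" idea.
import random
from collections import deque

def should_buy(prices: deque, dip_threshold: float = 0.002) -> bool:
    """Buy if the latest price dipped more than 0.2% below the recent average."""
    if len(prices) < prices.maxlen:
        return False
    avg = sum(prices) / len(prices)
    return prices[-1] < avg * (1 - dip_threshold)

def run_strategy(price_feed, profit_target: float = 0.001):
    prices = deque(maxlen=50)   # rolling window of recent ticks
    entry = None                # price at which we bought, if holding
    for price in price_feed:
        prices.append(price)
        if entry is None and should_buy(prices):
            entry = price       # "submit" a buy at the current tick
        elif entry is not None and price >= entry * (1 + profit_target):
            print(f"bought {entry:.2f}, sold {price:.2f}")
            entry = None        # sell for a small, quick profit

# Example with a synthetic feed; a real system would consume a live data stream.
feed = (100 + random.gauss(0, 0.5) for _ in range(10_000))
run_strategy(feed)
```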

A sell-off of millions of shares in the airline market could scare humans into selling off their shares in the hotel industry, which in turn could snowball into sell-offs in other travel-related companies, which could then affect logistics companies, food supply companies, and so on.

Take the Flash Crash of May 2010 as an example. Towards the end of the trading day, the Dow Jones plunged 1,000 points (more than $1 trillion in value) before rebounding towards normal levels just 36 minutes later. What caused the crash? A London-based trader named Navinder Singh Sarao triggered it, and HFT computers then exacerbated it. Sarao reportedly used a spoofing algorithm that placed an order for thousands of stock index futures contracts, betting that the market would fall. Instead of going through with the bet, Sarao planned to cancel the order at the last second and buy the lower-priced stocks being sold off in response to his original bet. Other humans and HFT computers saw this $200 million bet and took it as a sign that the market was going to tank. In turn, HFT computers began one of the biggest stock sell-offs in history, causing a brief loss of more than $1 trillion globally.

Financial HFT algorithms aren't always correct, either. We view computers as the end-all, be-all when it comes to being correct, but AI is still really only as smart as the humans who programmed it. In 2012, Knight Capital Group experienced a glitch that put it on the verge of bankruptcy. Knight's computers mistakenly streamed thousands of orders per second into the NYSE, causing mass chaos for the company. The HFT algorithms executed an astounding 4 million trades of 397 million shares in only 45 minutes. The volatility created by this computer error left Knight with a $460 million overnight loss and forced its acquisition by another firm. Errant algorithms obviously have massive implications for shareholders and the markets themselves, and nobody learned this lesson harder than Knight.

Many believe the only way to prevent the most malicious AI from wreaking havoc, or at least to temper it, is some sort of regulation.

"I am not normally an advocate of regulation and oversight, I think one should generally err on the side of minimizing those things, but this is a case where you have a very serious danger to the public," Musk said at SXSW.

"It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important."

Ford agrees, with a caveat. Regulation of AI implementation is fine, he said, but not of the research itself.

"You regulate the way AI is used," he said, "but you don't hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous."

Any country that lags in AI development, after all, is at a distinct disadvantage militarily, socially, and economically. The solution, Ford continued, is selective application:

"We decide where we want AI and where we don't; where it's acceptable and where it's not. And different countries are going to make different choices. So China might have it everywhere, but that doesn't mean we can afford to fall behind them in the state-of-the-art."

Speaking about autonomous weapons at Princeton University in October, American General John R. Allen emphasized the need for "a robust international conversation that can embrace what this technology is." If necessary, he went on, there should also be a conversation about how best to control it, be that a treaty that fully bans AI weapons or one that permits only certain applications of the technology.

For Havens, safer AI starts and ends with humans. His chief focus, upon which he expounds in his 2016 book, is this: How will machines know what we value if we don't know ourselves? In creating AI tools, he said, it's vitally important to honor end-user values with a human-centric focus rather than fixating on short-term gains.

"Technology has been capable of helping us with tasks since humanity began," Havens wrote in Heartificial Intelligence. "But as a race we've never faced the strong possibility that machines may become smarter than we are or be imbued with consciousness. This technological pinnacle is an important distinction to recognize, both to elevate the quest to honor humanity and to best define how AI can evolve it. That's why we need to be aware of which tasks we want to train machines to do in an informed manner. This involves individual as well as societal choice."

AI researchers Fei-Fei Li and John Etchemendy, of Stanford University's Institute for Human-Centered Artificial Intelligence, feel likewise. In a recent blog post, they proposed involving numerous people in an array of fields to make sure AI fulfills its huge potential and strengthens society instead of weakening it:

"Our future depends on the ability of social and computer scientists to work side by side with people from multiple backgrounds, a significant shift from today's computer science-centric model," they wrote. "The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer interaction, psychology, and Science and Technology Studies (STS). This collaboration should run throughout an application's lifecycle, from the earliest stages of inception through to market introduction and as its usage scales."

Messina is somewhat idealistic about what should happen to help avoid AI chaos, though he's skeptical that it will actually come to pass. Government regulation isn't a given, especially in light of failures on that front in the social media sphere, whose technological complexities pale in comparison to those of AI. It will take a very strong effort on the part of major tech companies to slow progress in the name of greater sustainability and fewer unintended consequences, especially massively damaging ones.

"At the moment," he said, "I don't think the onus is there for that to happen."

As Messina sees things, it's going to take some sort of catalyst to arrive at that point; more specifically, a catastrophic catalyst like war or economic collapse. Whether such an event would prove big enough to actually effect meaningful long-term change, though, is open for debate.

For his part, Ford remains a long-run optimist, despite being decidedly un-bullish about AI in the near term.

"I think we can talk about all these risks, and they're very real, but AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face, including climate change."

When it comes to the near term, however, his doubts are more pronounced.

"We really need to be smarter," he said. "Over the next decade or two, I do worry about these challenges and our ability to adapt to them."


Winners and losers in the fulfilment of national artificial intelligence aspirations – Brookings Institution

The quest for national AI success has electrified the world: at last count, 44 countries have entered the race by creating their own national AI strategic plans. While the inclusion of countries like China, India, and the U.S. is expected, unexpected countries, including Uganda, Armenia, and Latvia, have also drafted national plans in hopes of realizing the promise. Our earlier posts, entitled "How different countries view artificial intelligence" and "Analyzing artificial intelligence plans in 34 countries," detailed how countries are approaching national AI plans, as well as how to interpret those plans. In this piece, we go a step further by examining indicators of future AI needs.

Clearly, having a national AI plan is a necessary but not sufficient condition for achieving the goals laid out in the various AI plans circulating around the world. In previous posts, we noted that AI plans were largely aspirational, and that moving from aspiration to successful implementation requires substantial public-private investments and effort.

To analyze the implementation to date of countries' national AI objectives, we assembled a country-level dataset containing: the number and size of supercomputers in the country, as a measure of technological infrastructure; the amount of public and private spending on AI initiatives; the number of AI startups in the country; the number of AI patents and conference papers the country's scholars produced; and the number of people with STEM backgrounds in the country. Taken together, these elements provide valuable insights as to how far along a country is in implementing its plan.

As analyzing each of the data elements individually presented some data challenges, we conducted a factor analysis to determine if there was a logical grouping of the data elements. Factor analysis reveals the underlying structure of data; that is, the technique mathematically determines how many groups (or factors) of data exist by analyzing which data elements are most closely related to other elements.

Given that our data included five distinct dimensions (i.e., technology infrastructure, AI startups, spending, patents and conference papers, and people), we expected that five factors would emerge, particularly since the dimensions appear to be relatively separate and distinct. But the data showed otherwise. In all, the factor analysis revealed that all of the data elements fall under just two factors: people-related and technology-related.

The first factor is the set of AI hiring, STEM graduates, and technology skill penetration data points, which are all associated with the people side of AI. Without qualified people, AI implementations are unlikely to be effective.

The second factor comprises all the non-people data elements of AI, which include computing power, AI startups, investment, conference and journal papers, and AI patent submission data points. In looking at these data elements, we realized that all of them were technology-related, either from a hardware or a thought-leadership standpoint.
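For readers who want to see the technique in action, here is a rough illustration of how a two-factor structure like the one described can be recovered. The data and column names below are synthetic stand-ins, not the authors' dataset or code.

```python
# Minimal sketch of a two-factor analysis on synthetic country-level data.
# Column names mirror the indicators discussed above; the numbers are made up.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_countries = 44

# Two latent traits drive the observed indicators: "people" and "technology".
people = rng.normal(size=n_countries)
tech = rng.normal(size=n_countries)
noise = lambda: rng.normal(scale=0.3, size=n_countries)

X = np.column_stack([
    people + noise(),   # AI hiring
    people + noise(),   # STEM graduates
    people + noise(),   # tech-skill penetration
    tech + noise(),     # supercomputing capacity
    tech + noise(),     # AI startups
    tech + noise(),     # AI investment
    tech + noise(),     # patents and papers
])

fa = FactorAnalysis(n_components=2).fit(X)
print(np.round(fa.components_, 2))  # loadings: rows are factors, columns are indicators
# With data shaped like this, the first three indicators load on one factor and
# the remaining four on the other, i.e., a "people" and a "technology" factor.
```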

Given these findings, we can treat the data as containing two distinct categories: people and technology. Figure 1 shows where a select set of countries sit along these dimensions.

The countries in the upper right-hand quadrant we dub the Leaders; they have both the people (factor 1) and the technology (factor 2) to meet their goals. Countries in the lower right quadrant we dub the Technically Prepared, because they score higher on the technology dimension (factor 2) but lower on the people dimension (factor 1). Countries in the upper left quadrant we dub the People Prepared, because they score higher on the people dimension (factor 1) but lower on the technology dimension (factor 2). The final quadrant, the lower left, we dub the Aspirational quadrant, since those countries have not yet substantially moved forward in either the people or the technology dimension (factors 1 and 2, respectively) toward achieving their national AI strategies.

China is unmistakably closer to achieving its national AI strategy goals. It is both a leader in the technical dimension and a leader in the people dimension. Of note is that, while China is strongly positioned in both dimensions, it is not highest in either dimension; the U.S. is higher in the technical dimension, and India, Singapore, and Germany are all higher on the people dimension. Given the population of China and its overall investment in AI-related spending, it is not surprising that China has an early and commanding lead over other countries.

The U.S., while a leader in the technology dimension, particularly in the sub-dimensions of investments and patents, ranks a relatively dismal 15th place after such countries as Russia, Portugal, and Sweden in the people dimension. This is especially clear in the sub-dimension of STEM graduates, where it ranks near the bottom. While the vast U.S. spending advantage has given it an early lead in the technology dimensions, we suspect that the overall lack of STEM-qualified individuals is likely to significantly constrain the U.S. in achieving its strategic goals in the future.

By contrast, India holds a small but measurable lead over other countries in the people dimension, but is noticeably lagging in the technology dimension, particularly in the investment sub-dimension. This is not surprising, as India has long been known for its education prowess but has not invested equally with leaders in the technology dimension.

Our focus on China, the U.S., and India is not to suggest that these are the only countries that can achieve their national AI objectives. Other countries, notably South Korea, Germany, and the United Kingdom, are just outside the top positions and, by virtue of being generally well-balanced between the people and technology dimensions, have an excellent chance to close the gap.

At present, China, the U.S., and India are leading the way in implementing national AI plans. Yet China has already hit on a balanced strategy that has thus far eluded the U.S. and India. This suggests that China merely needs to continue its strategy, whereas strategy refinement is necessary for the U.S. and India to keep pace. These leaders are closely followed by South Korea, Germany, and the United Kingdom.

In future posts, we will dive deeper into both the people and technology dimensions, and will dissect specific shortfalls for each country, as well as what can be done to address these shortfalls. Anything short of a substantial national commitment to AI achievement is likely to relegate the country to the status of a second-tier player in the space. If the U.S. wants to dominate this space, it needs to improve the people dimension of technology innovation and make sure it has the STEM graduates required to push its AI innovation to new heights.


AI and Big Data Analytics: Executive Q&A with David Tareen of SAS – Datamation

The term "artificial intelligence" dates back to at least the 1950s, and yet it seems that AI is still in its infancy, given its vast potential use cases and society-changing ceiling.

As AI experts develop a better understanding of both the big data models and applications of artificial intelligence, how can we expect to see the AI market, including machine learning (ML), change? Perhaps more crucially, how can we expect to see other industries transformed as a result of that change?

David Tareen, director for artificial intelligence at SAS, a top AI and analytics company, offered Datamation his insights into the current and future landscape of enterprise AI solutions:

At SAS, Tareen helps clients understand and apply AI and analytics. After 17 years in the IT industry, and having been part of the cloud, mobile, and social revolutions in IT, he believes that AI holds the most potential for changing the world around us. In previous roles, Tareen led product and marketing teams at IBM and Lenovo. He has a master's degree in business administration from the University of North Carolina at Chapel Hill.

Datamation: How did you first get started in or develop an interest in AI?

Tareen: My first introduction to AI was in a meeting with a European government agency, which wanted to build a very large computer, a supercomputer, so to speak, that could perform a quintillion (a 1 followed by 18 zeros) calculations per second. I was curious what work this computer would be doing to require such fast performance, and the answers were fascinating to me. That was my first real introduction to AI and the possibilities it could unlock.

Datamation: What are your primary responsibilities in your current role?

Tareen: My primary role at SAS is to improve understanding of AI and analytics and what benefits these technologies can deliver. The AI market segment is noisy, and it is often difficult for clients to separate fact from fiction when it comes to AI. I help our customers understand where AI and analytics can benefit them and exactly how the process will work.

Datamation: What makes SAS a unique place to work?

Tareen: SAS is unlike any other organization. I would say what sets us apart is a deep-seated desire to prove the power of AI and analytics. We are convinced that AI and any of the underlying AI technologies, such as deep learning, conversational AI, computer vision, natural language processing, and others, can have a positive impact on not only our customers and their organizations, but on the world as well. And we are on a mission to showcase these benefits through our capabilities. This relentless and singular focus sets us apart.


Datamation: What sets SAS AI solutions or vision apart from the competition?

Tareen: There are two areas that make our AI capabilities unique:

First is a focus on the end-to-end process. AI is about more than building machine or deep learning models. It requires data management, modeling, and finally being able to make decisions from those models. Over the years, SAS has tightly integrated these capabilities, so that an organization can go from questions to decisions using AI and analytics.

Second, our customers often need more than one analytics method to solve a problem. Composite AI is a new term coined by Gartner that aligns with what we have traditionally called multidisciplinary analytics. These methods range across machine learning, deep learning, computer vision, natural language, forecasting, optimization, and even statistics. Our ability to provide all these methods to our customers helps them solve any challenge with AI and analytics.
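To picture the end-to-end flow Tareen describes, here is a generic, minimal sketch that chains data management, modeling, and a decision step. It uses scikit-learn and made-up data purely for illustration; it is not SAS software or a SAS workflow.

```python
# Generic end-to-end flow: prepare data, fit a model, turn scores into decisions.
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

pipeline = Pipeline([
    ("impute", SimpleImputer()),      # data management: fill missing values
    ("scale", StandardScaler()),      # data management: normalize features
    ("model", LogisticRegression()),  # modeling
])
pipeline.fit(X, y)

# Decisioning: apply a business rule to the model's scores.
scores = pipeline.predict_proba(X[:5])[:, 1]
decisions = ["approve" if s > 0.7 else "review" for s in scores]
print(list(zip(scores.round(2), decisions)))
```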

Datamation: What do you think makes an AI product or service successful?

Tareen: The key to making an AI product or service successful is to deliver real-world results. In the past, organizations would have little to show for their AI investments because of the hyper-focus on model building and model performance. Today, there is a better understanding that for an AI product or service to be successful, it has to have all the other elements that will help make an outcome better or a process faster or cheaper.

Datamation: What is an affordable/essential AI solution that businesses of all sizes should implement?

Tareen: An absolute must for businesses of any size is a better understanding of their customers, and AI is becoming an essential tool to accomplish this. The ability to communicate with a customer the way they like, at the right time and place, with the right message and the right offer, and to make those predictions without compromising data privacy regulations: that is an essential solution that all businesses, regardless of their size, should implement.

Datamation: How does AI advance data analytics and other big data strategies?

Tareen: With large volumes of data, applying AI to the data itself is a must. AI capabilities can help untangle elements within data, so it can be used to make decisions. For example, we now use AI to recognize information within large data sets and then organize them in accordance with company policy or local regulations. At SAS, we use AI to spot potential privacy issues, lack of diversity, or even errors within big data. Once these issues are identified, they can be managed and then automated, so that new data coming into the database automatically gets the same treatment once it is recognized by the AI.
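As a highly simplified stand-in for the kind of automated screening Tareen describes, the sketch below flags likely privacy issues in incoming records. Real systems such as SAS's use trained models; the regex patterns and sample records here are illustrative assumptions only.

```python
# Toy privacy scan: flag records containing likely PII before they enter a store.
# Real systems use trained models; these regex patterns are illustrative only.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_record(text: str) -> list[str]:
    """Return the PII categories detected in a free-text field."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

incoming = [
    "Customer asked about pricing.",
    "Reach me at jane.doe@example.com or 919-555-0100.",
]
for record in incoming:
    issues = scan_record(record)
    print("flag for review:" if issues else "clean:", issues, record)
```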


Datamation: What do you think are some of the top trends in artificial intelligence right now?

Tareen: In terms of what's trending in AI, generally there is a lot more maturity when it comes to approaching productive deployments for AI across industries. Gone are the days of investing in building the perfect model. The focus now is on the broader ecosystem needed to deliver AI projects and realize enhanced value. This broader ecosystem includes investing in data management capabilities and deploying and governing AI and analytical assets to ensure they deliver value. Organizations that look at AI beyond just model development will be more productive with their AI initiatives.

Additionally, the notion that AI should be used for unique breakthrough projects has evolved. Now organizations find value in applying AI techniques to established projects to achieve best-in-class results. For example, manufacturers with good quality discipline can save significant costs by applying computer vision to existing processes. Another example is retailers that use machine learning techniques to improve forecasts and save on inventory and product waste costs.

Datamation: What subcategories of artificial intelligence are most widely used, and how are they currently used?

Tareen: AI is really a set of different technologies, such as machine learning, deep learning, computer vision, natural language, and others. All these technologies are finding success in different industries and across different parts of organizations.

Machine learning and deep learning are the two areas seeing the broadest use and the most promising results. ML can detect patterns in data and make predictions without being told what to look for. Deep learning does the same but gets better results with bigger and more complex data (e.g., video, images). As these capabilities are applied to traditional approaches to segmenting, forecasting, customer service, and other areas, organizations find they get better results with AI technologies.
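To make "detect patterns without being told what to look for" concrete, here is a minimal, generic clustering example on synthetic data. It is illustrative only and not tied to any vendor's tooling.

```python
# Unsupervised pattern detection: KMeans groups similar records with no labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic "customer" data with three hidden groups the algorithm must find.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print(np.bincount(kmeans.labels_))  # size of each discovered segment

# New records can then be assigned to a segment for downstream decisions.
print(kmeans.predict([[0.0, 0.0]]))
```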

Datamation: What industry (or industries) do you think does a good job of maximizing AI in their operations/products? What do you think they do well?

Tareen: Businesses need to think of AI as more than one technology. Just like people use different senses (e.g., listening, seeing, calculating, imagining) to make decisions, AI can make better decisions when used in a composite way. The most productive organizations combine AI capabilities of computer vision, natural language, optimization, and machine learning into solutions and workflows, which leads to better decisions than their competitors.

Manufacturers are using computer vision to identify quality issues and reduce waste. Banks are having success using conversational AI and natural language processing to improve marketing and sales. Retailers are having success using machine learning in forecasting techniques. As AI gets broader adoption, we should expect to see organizations use a mix of AI capabilities for improved outcomes and different business units and areas.

Datamation: How has the COVID-19 pandemic affected you/your colleagues/your clients' approach to artificial intelligence?

Tareen: The pandemic upended expected business trajectories and exposed the weaknesses in machine learning systems dependent on large amounts of representative historical data, including well-bounded and reasonably predictable patterns. As a result, there is a business need to reinforce the analytics core and bolster investments in traditional analytics teams and techniques better suited to rapid data discovery and hypothesizing.

As companies adapt to the new normal, one of the primary questions we're asked is how to retrain AI models with a more diverse data set. When COVID hit, analytical models that had been making good predictions started underperforming. For example, airports use SAS predictive modeling to understand and improve aircraft traffic flow. But these models had to be retrained, and additional data sources added, before they could start accurately predicting the new-normal traffic pattern.


Datamation: What do you think we'll see more of in the AI space in the next 5-10 years? What areas will grow the most over the next decade?

Tareen: A complex area where I hope to see growth over the next 5-10 years has large implications for the world: AI algorithms becoming more imaginative. Imagination is something that comes very easily to us humans. For example, a child can see a table as both a table and a hiding place to use when playing a game of hide-and-go-seek. For an AI algorithm, that kind of imagination, learning from one data domain and applying that learning to a different data domain, is very complex. Transfer learning is a start, however, and as AI gets better at imagination, it will have the potential to better diagnose disease or spot root causes of climate change. I hope this is an area that will grow in the next decade.
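As a rough sketch of the transfer learning Tareen mentions, one common generic recipe reuses a network trained on one domain and retrains only its final layer for a new task. The PyTorch code below is illustrative (a dummy batch stands in for real images; nothing here is SAS-specific).

```python
# Transfer learning sketch: reuse an ImageNet-trained backbone for a new
# 5-class task by freezing its weights and training only a new final layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # knowledge from domain A
for param in model.parameters():
    param.requires_grad = False                   # freeze the learned features

model.fc = nn.Linear(model.fc.in_features, 5)     # new head for domain B

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (stand-in for real images).
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 5, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```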

Datamation: What does AI equity mean to you? How can more businesses get started in AI development or product use?

Tareen: From inception to now, AI has been used exclusively by subject matter experts like data scientists. Today's trend is to lessen that need for subject matter experts and instead cascade the benefits of AI to the masses, recognizing the global value of wide-reaching benefits rather than isolated benefits realized by a select few. The targets for democratized AI include customers, business partners, the sales force, factory workers, application developers, and IT operations professionals, among others.

There are a couple of ways enterprises can push AI to a broader audience: simplify the tools and make them easier to consume. First, conversational AI helps because it makes interacting with AI simpler; you don't have to build complex models, but you can gain insights from your data by talking with your analytics. The second initiative is to make AI easier for everyone to consume, which means taking your data and algorithms to the cloud to improve accessibility and reduce costs.

Some leaders are surprised to learn that democratizing AI involves more than the process itself. Often culture tweaks, or an entire cultural change, must accompany the process. Leaders can practice transparency and good communication in their democratization initiatives to address concerns, adjust the pace of change, and successfully embed AI and analytics for everybody's use.


Datamation: What are some ethical considerations for the market that should be part of AI development?

Tareen: There are numerous ethical considerations that should be part of AI development. These considerations range from data to algorithms to decisions.

For data, it is important to ensure that the data accurately represents the populations for which you are making decisions. For example, a data set should not under-represent genders or exclude low-income populations. Other ethical considerations include preserving privacy and protecting personally identifiable information.

For algorithms, it is important to be able to explain decisions using plain language. A complex neural network may make accurate predictions, but the outcomes must be easily explainable to data scientists and non-technologists alike. Another consideration is ensuring models are not biased when making predictions.

For decisions, it is important to ensure not only that controls are in place when models are implemented, but also that decisions are monitored for transparency and fairness throughout their life cycle.
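One common, generic way to approach the explainability Tareen calls for, offered here as an illustration rather than SAS's actual method, is to ask a fitted model which inputs drive its predictions, for example via permutation importance:

```python
# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops; big drops mark the features driving predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Report the top drivers in plain language.
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: accuracy drops "
          f"{result.importances_mean[i]:.3f} when this input is scrambled")
```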


Datamation: How have you seen AI innovations change since you first started? How have the technologies, services, conversations, and people changed over time?

Tareen: There have been many changes, but one shift has been fundamental. AI used to be overly focused on model building and model performance. Now, there is a realization that to deliver results, the focus must be on other areas as well, such as managing data, making decisions, and governing those decisions. Topics such as bias in data or models are starting to become common in conversations. These are signs of a market that is starting to understand the potential, and challenges, of this technology.


Datamation: How do you stay knowledgeable about trends in the market? What resources do you like?

Tareen: My top two places to better understand trends are:

Datamation: How do you like to help or otherwise engage less experienced AI professionals?

Tareen: The key is to describe advanced AI capabilities in ways that are easily relatable and to find examples of customers we have helped in their specific industry.

Datamation: What do you like to do in your free time outside of work?

Tareen: One of the benefits of #saslife is work-life balance. I am a private pilot and fly a small aircraft out of Raleigh-Durham International Airport. North Carolina is a pretty state to fly over, so I take as many opportunities as possible to see this beautiful state from the air.

Datamation: If you had to work in any other industry or role, what would it be and why?

Tareen: My ideal role would be one where I can tell real stories about how technologies such as AI and analytics can improve the world around us. Currently, a lot of the work that SAS does, particularly around our Data4Good initiative, fulfills that goal well.

Datamation: What do you consider the best part of your workday or workweek?

Tareen: The interaction with SAS customers is almost always the best part of the workday or workweek. At SAS, we start off every customer meeting with a listening session where we get to hear about their world, their challenges, and what they hope to accomplish. It is an exciting learning process and often the best part of my week.

Datamation: What are you most proud of in your professional/personal life?

Tareen: I am most proud of the work that SAS does around social innovation. Our Data4Good initiative projects are a great way to apply data science, AI, and analytics to big challenges, both at the personal level as well as the global level, to improve the human experience.



Artificial Intelligence Trends and Predictions for 2021 | AI Trending Now – Datamation

Artificial intelligence (AI) has taken on many new shapes and use cases as experts learn more about what's possible with big data and smart algorithms.

Today's AI market, then, consists of a mixture of tried-and-true smart technologies with new optimizations and more advanced AI that is slowly transforming the way we work and live daily life.

Read on to learn about some artificial intelligence trends that are making experts most excited for the future of AI:


With its ability to follow basic tasks and routines based on smart programming and algorithms, artificial intelligence is becoming embedded in the way organizations automate their business processes.

AIOps and MLOps are common use cases for AI and automation, but the breadth and depth of what AI can automate in the enterprise are quickly growing.

Bali D.R., SVP at Infosys, a global digital services and consulting firm, believes that AI is moving toward a certain level of hyper-automation, partially in response to the unexpected changes in manual data and procedures caused by the pandemic.

"We are in the second inflection point for AI as it graduates from consumer AI towards enterprise-grade AI," D.R. said. "Being exposed to an over-reliance on manual procedures, such as mass rescheduling in the airline industry, unprecedented loan applications in banks, etc., the industries are now turning to hyper-automation that combines robotic process automation with modern machine learning to ensure they can better handle surges in the future."

Although AI automation is still mostly limited to interval and task-oriented automation that requires little imagination or guesswork on the part of the tool, some experts believe we are moving closer to more applications for intelligent automation.

David Tareen, director for artificial intelligence at SAS, a top analytics and AI software company, had this to say about the future of intelligent automation:

"Intelligent automation is an area I expect to grow," Tareen said. "Just like we automated manufacturing work, we will use AI heavily to automate knowledge work."

"The complexity comes in because knowledge work has a high degree of variability," he continued. "For example, an organization will receive feedback on their products or services in different ways, and often in different languages as well. AI will need to ingest, understand, and modify processes in real time before we can automate knowledge work at large."


Because of the depth of big data and AI's reliance on it, there's always the possibility that unethical or ill-prepared data will make it into an AI training data set or model.

As more companies recognize the importance of creating AI that conducts its operations in a compliant and ethical manner, a number of AI developers and service providers are starting to offer responsible AI solutions to their customers.

Read Maloney, SVP of marketing at H2O.ai, a top AI and hybrid cloud company, explained what exactly responsible AI is and some of the different initiatives that companies are undertaking to improve their AI ethics.

"AI creates incredible new opportunities to improve the lives of people around the world," Maloney said. "We take the responsibility to mitigate risks as core to our work, so building fairness, interpretability, security, and privacy into our AI solutions is key."

Maloney said the market is seeing an increased adoption of the core pillars of responsible AI, which he shared with Datamation:

Companies are exploring several ways to make their AI more responsible, and most are starting with cleaning and assessing both data sets and existing AI models.

Brian Gilmore, director of IoT product management at InfluxData, a database solutions company, believes that one of the top options for model and data set management is distributed ledger technology (DLT).

"As attention builds around the ethical and cultural impact of AI, some organizations are beginning to invest in ancillary but important technologies that utilize consensus and other trust-ensuring systems as part of the AI framework," Gilmore said. "For example, distributed ledger technology provides a sidecar platform for auditable proof of integrity for models and training data."

"The decentralized ownership, distribution of access, and shared accountability of DLT can bring significant transparency to AI development and application across the board. The dilemma is whether for-profit corporations are willing to participate in a community model, trading transparency for consumer trust in something as mission-critical as AI."
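To make Gilmore's idea concrete, here is a minimal sketch of a hash-chained audit log for training data and model artifacts. It is illustrative only: a real DLT deployment adds consensus across independent parties, and the AuditLedger class, record kinds, and file names below are all invented for the example.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    """Content fingerprint used to detect tampering."""
    return hashlib.sha256(data).hexdigest()

class AuditLedger:
    """A minimal hash-chained log: each entry commits to the previous one,
    so any retroactive edit breaks every later hash (a simplified stand-in
    for what a real distributed ledger provides with consensus)."""
    def __init__(self):
        self.entries = []

    def record(self, kind: str, name: str, payload: bytes) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "kind": kind,                    # e.g., "training_data" or "model"
            "name": name,
            "content_hash": sha256_hex(payload),
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["entry_hash"] = sha256_hex(json.dumps(body, sort_keys=True).encode())
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Re-derive every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            expected = {k: v for k, v in e.items() if k != "entry_hash"}
            if e["prev_hash"] != prev:
                return False
            if sha256_hex(json.dumps(expected, sort_keys=True).encode()) != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

ledger = AuditLedger()
ledger.record("training_data", "customers_v1.csv", b"...raw bytes of the dataset...")
ledger.record("model", "churn_model_v1.bin", b"...serialized model weights...")
assert ledger.verify()  # auditors can re-check integrity at any time
```

Because each entry commits to the previous one, silently swapping a training set after the fact would invalidate every later hash, which is the auditable-integrity property Gilmore describes.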

See more: The Ethics of Artificial Intelligence (AI)

Up to this point, AI has most frequently been used to optimize business processes and automate some home routines for consumers.

However, some experts are beginning to realize the potential that AI-powered models can have for solving global issues.

Read Maloney at H2O.ai has worked with people from a variety of industries to brainstorm how AI can be used for the greater good.

"We work with many like-minded customers, partners, and organizations tackling issues across education, conservation, health care, and more," Maloney said. "AI for good is fundamental to our work, including current work on climate change, wildfires, and hurricane predictions, and we are seeing more and more AI-for-good work to make the world a better place across the AI industry."

Some of the most exciting applications of altruistic AI are being implemented in early education right now.

For instance, Helen Thomas, CEO of DMAI, an AI-powered health care and education company, offers an AI-powered product designed to ensure that preschool-aged children get the education they need despite potential pandemic setbacks:

"On top of pre-existing barriers to preschool education, including cost and access, recent research findings suggest children born during the COVID-19 pandemic display lower IQ scores than those born before January 2020, which means toddlers are less prepared for school than ever before.

"DMAI, DBA Animal Island Learning Adventure (AILA), is changing this with AI. [Our product] harnesses cognitive AI to deliver appropriate lessons in a consistent and repetitious format, supportive of natural learning patterns.

"Recognizing learning patterns that parents might miss, the AI creates an adaptive learning journey and doesn't allow the child to move forward until they've mastered the skills and concepts presented. This intentional delivery also increases attention span over time, ensuring children step into the classroom with the social-emotional intelligence to succeed."
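AILA's actual model is proprietary, but the mastery-gating behavior described above can be sketched in a few lines. Everything here (the threshold, window size, and function names) is a hypothetical illustration of the general idea, not DMAI's implementation.

```python
MASTERY_THRESHOLD = 0.9   # hypothetical: share of recent exercises answered correctly
WINDOW = 10               # hypothetical: number of recent attempts considered

def has_mastered(recent_results: list) -> bool:
    """Gate progression on demonstrated mastery of the current skill."""
    window = recent_results[-WINDOW:]
    return len(window) == WINDOW and sum(window) / WINDOW >= MASTERY_THRESHOLD

def next_lesson(current_skill: int, recent_results: list) -> int:
    # Advance only once the current skill is mastered; otherwise re-present
    # it (possibly in varied form) for reinforcement.
    return current_skill + 1 if has_mastered(recent_results) else current_skill

print(next_lesson(3, [True] * 8 + [False] * 2))  # 3: 80% correct, below the 90% bar
```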

More on this topic: How AI is Being Used in Education

Internet of Things (IoT) devices have become incredibly widespread among both enterprise and personal users, but what many tech companies still struggle with is how to gather actionable insights from the constant inflow of data from these devices.

AIoT, or the idea of combining artificial intelligence with IoT products, is one field that is starting to address these pools of unused data, giving AI the power to translate that data quickly and intelligently.

Bill Scudder, SVP and AIoT general manager at AspenTech, an industrial AI solutions company, believes that AIoT is one of the most crucial fields for enabling more intelligent, real-time business decisions.

"Forrester has noted that up to 73% of all data collected within the enterprise goes unused, which highlights a critical challenge with IoT," Scudder said. "As the volume of connected devices (for example, in industrial IoT settings) continues to increase, so does the volume of data collected from these devices.

"This has resulted in a trend seen across many industries: the need to marry AI and IoT. And here's why: where IoT allows connected devices to create and transmit data from various sources, AI can take that data one step further, translating it into actionable insights to fuel faster, more intelligent business decisions. This is giving way to the rising trend of the artificial intelligence of things, or AIoT."

Decision intelligence (DI) is one of the newest artificial intelligence concepts. It takes many current business optimizations a step further by using AI models to analyze wide-ranging sets of commercial data, and these analyses are used to predict future outcomes for everything from products to customers to supply chains.

Sorcha Gilroy, data science team lead at Peak, a commercial AI solutions provider, explained that although decision intelligence is a fairly new concept, it's already gaining traction with larger enterprises because of its detailed business intelligence (BI) offerings.

"Decision intelligence is a new category of software that facilitates the commercial application of artificial intelligence, providing predictive insight and recommended actions to users," Gilroy said. "It is outcome-focused, meaning a solution must deliver against a business need before it can be classed as DI.

"Recognized by Gartner and IDC, it has the potential to be the biggest software category in the world and is already being utilized by businesses across a variety of use cases, from personalizing shopper experiences to streamlining complex supply chains. Brands such as Nike, PepsiCo, and ASOS are known to be using DI already."

Read next: Top Performing Artificial Intelligence Companies

See the article here:
Artificial Intelligence Trends and Predictions for 2021 | AI Trending Now - Datamation

China beats the USA in Artificial Intelligence and international awards – Modern Diplomacy

There is no doubt that the return of Huawei's CFO Meng Wanzhou to Beijing marks a historic event that made every Chinese person incredibly proud, especially bearing in mind its timing, as the National Day celebrations took place on October 1.

"Where there is a five-star red flag, there is a beacon of faith. If faith has a color, it must be China red," Ms. Meng said to the cheering crowd at Shenzhen airport after returning home from Canada. She also added that "all the frustration and difficulties, gratitude and emotion, steadfastness and responsibility will transform into momentum for moving us forward, into courage for our all-out fight."

Regardless of how encouraging the Chinese tech giant heiress's words may sound, the fact remains that the company is still a target of U.S. prosecution and sanctions, something that is not about to change anytime soon.

When the Sanctions Bite

It was former U.S. President Donald Trump who in May 2019 signed an order that allowed then-Commerce Secretary Wilbur Ross to halt any transactions concerning information or communications technology posing an unacceptable risk to the country's national security. As a result, the same month, Huawei and its non-U.S. affiliates were added to the Bureau of Industry and Security (BIS) Entity List, which meant that any American company wishing to sell or transfer technology to Huawei would have to obtain a license issued by the BIS.

In May 2020, the U.S. Department of Commerce decided to expand the Foreign-Produced Direct Product (FPDP) Rule by restricting the Chinese tech giant from acquiring foreign-made semiconductors produced or developed from certain U.S. technology or software. It went even further in August of the same year by issuing the Final Rule, which prohibits the re-export, export from abroad, or in-country transfer of (i) certain foreign-produced items controlled under the amended Footnote 1 to the Entity List (New Footnote 1) when there is (ii) knowledge of certain circumstances, the scope of which was also expanded.

Moreover, the decision also removed the Temporary General License (TGL) previously authorizing certain transactions with Huawei and added thirty-eight additional affiliates of the Chinese company to the Entity List.

In these particular circumstances, despite Bloomberg's initial prediction early in 2020 that Trump's decision to blacklist Huawei would fail to stop its growth, reality now seems to be changing for the company that was once, if briefly, the world's largest smartphone vendor.

The impact of the U.S. sanctions has already shown up as a drop of more than 47% in smartphone sales in the first half of 2021, while total revenue fell by almost 30% compared with the same period in 2020. Rotating Chairman Eric Xu estimates that the company's smartphone revenue will drop by at least $30-40 billion this year.

For the record, Huawei's smartphone sales accounted for $50 billion in revenue last year. The company has generated $49.57 billion in total revenue so far this year, which is said to be the most significant drop in its history.

In Search of Alternative Income Streams

Despite finding itself in dire straits, the company is in constant search of new sources of income. One recent decision is to charge other smartphone makers patent royalties for the use of its 5G technologies, with a per-unit royalty cap of $2.50 for every multimode mobile device capable of connecting to 5G and previous generations of mobile networks. Huawei's price is lower than those charged by Nokia ($3.58 per device) and Ericsson ($2.50-$5 per device).
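As a rough illustration of how such a capped running royalty works, the sketch below computes the per-device fee as the lesser of a percentage of the selling price and the reported $2.50 cap. The percentage rate and unit volume are made-up inputs, not Huawei figures.

```python
# Illustrative only: estimates royalty revenue under a per-unit cap, using
# the publicly reported $2.50 cap. The rate and volume below are invented.
CAP_PER_UNIT = 2.50  # USD, reported cap for 5G multimode handsets

def royalty_per_unit(selling_price: float, rate: float) -> float:
    """A running royalty with a cap: pay rate * price, but never more than the cap."""
    return min(selling_price * rate, CAP_PER_UNIT)

# e.g., a $400 handset at a hypothetical 1% rate would owe $4.00 uncapped,
# so the $2.50 cap binds
units = 10_000_000
print(royalty_per_unit(400.0, 0.01) * units)  # 25000000.0
```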

Notably, according to data from the intellectual property research organization GreyB, Huawei has 3,007 declared 5G patent families and over 130,000 active 5G patents worldwide, making the Chinese company the largest patent holder globally.

Jason Ding, head of Huawei's intellectual property rights department, said earlier this year that the company would collect about $1.2-1.3 billion in revenue from patent licensing between 2019 and 2021. But royalties will not be the company's only new revenue source.

Investing in the Future: Cloud Services and Smart Cars

Apart from digitizing domestic companies in sectors like coal mining and port operations, which increased its revenue by 23% last year and 18% in the first part of 2021, Huawei is looking far into the future, slowly steering away from its dependency on foreign chip supplies by setting its sights on cloud services and software for smart cars.

Seizing an opportunity to improve the currently not-so-perfect cloud service environment, the Chinese tech giant is swiftly moving to claim its share of the sector by creating new cloud services targeting companies and government departments. For this purpose, it plans to inject $100 million over a three-year period into SMEs to help expand Huawei Cloud.

As of today, Huawei's cloud business is said to have grown by 116% in the first quarter of 2021, with a 20% share of a $6 billion market in China, as Canalys reports.

"Huawei Cloud's results have been boosted by Internet customers and government projects, as well as key wins in the automotive sector. It is a growing part of Huawei's overall business," said Canalys chief analyst Matthew Ball. He also added that although 90% of this business is based in China, Huawei Cloud has a more substantial footprint in Latin America, Europe, the Middle East, and Africa than Alibaba Cloud and Tencent Cloud.

Another area where Huawei is trying its luck is electric and autonomous vehicles, in which the company is planning to invest $1 billion this year alone. Although the company has repeatedly made it clear that it is unwilling to build cars itself, Huawei "wants to help the car connect and make it more intelligent," as one of its officials noted.

At the 2021 Shanghai Auto Show, Huawei and Arcfox released the brand-new Polar Fox Alpha S Huawei Hi, and China's GAC revealed a plan to roll out a car with the Chinese tech company after 2024. Meanwhile, Huawei is already selling the Cyrus SF5, a smart Chinese car from Chongqing Xiaokang equipped with the Huawei DriveONE electric drive system, from its experience stores for the first time in the company's history. What's more, the car is also on sale online.

R&D and International Talent as Crucial Ingredients to Becoming a Tech Pioneer

There is a visible emphasis on investing in high-quality research and development to drive innovation, both at Huawei and in China as a whole.

According to the company's data, the Chinese technology giant invested $19.3 billion in R&D in 2019, which accounted for 13.9% of its total business revenue, and $22 billion last year, around 16% of its revenue. Interestingly, if Huawei were treated as a provincial administrative region, its R&D expenditure would rank seventh nationwide.

As reported by China's National Bureau of Statistics, total R&D spending in China last year was 2.44 trillion yuan, up 10.6% year-on-year, compared with 2.21 trillion yuan in 2019, which represented 12.3% year-on-year growth.

As far as activities are concerned, the most was spent on experimental development in 2020 (2.02 trillion yuan, or 82.7% of total spending), followed by applied research (275.72 billion yuan, or 11.3%) and basic research (146.7 billion yuan, or 6%). Enterprises spent the most (1.87 trillion yuan, up 10.4% year-on-year), while governmental research institutions spent 340.88 billion yuan (up 10.6% year-on-year) and universities and colleges spent 188.25 billion yuan (up 4.8% year-on-year).

As far as industries go, it is also worth mentioning that high-tech manufacturing spending accounted for 464.91 billion yuan, with equipment manufacturing standing at 913.03 billion yuan. State science and technology spending accounted for 1.01 trillion yuan, 0.06 trillion yuan less than in 2019.

As Huawei raises its budget for overseas R&D, the company also plans to invest in human resources by attracting the brightest foreign minds to its business, in some ways a by-product of the Trump-era visa limitations imposed on Chinese students.

Having so far concentrated on bringing home Chinese talent educated abroad, Huawei is now determined to broaden its talent pool with "tall noses," as mainland Chinese sometimes refer to people of non-Chinese origin.

"Now we need to focus on bringing in talent with tall noses and allocate a bigger budget for our overseas research centres," said the company's founder, Ren Zhengfei, in a speech made in August. "We need to turn Huawei's research center in North America into a talent recruitment hub," Ren added.

While Huawei wants to scout for people who have experience working in the U.S. and Europe, it also intends to offer salaries comparable to U.S. market standards to make its offers attractive enough.

What seems extraordinary and crucial when looking at China through the Huawei lens is that, contrary to what its critics claim, the company is indeed opening up to the outside world as it aims to replenish all facets of its business.

"We need to further liberate our thoughts and open our arms to welcome the best talent in the world," to quote Ren, in an attempt to help the company become more assimilated into overseas markets as a global enterprise within three to five years.

The Chinese tech giant aims to attract international talent to its new 1.6-million-square-meter research campus in Qingpu, Shanghai, which will house 30,000 to 40,000 research staff primarily concerned with developing handset and IoT chips. The Google-like campus is expected to be completed in 2023.

The best sign of Huawei's slow embrace of the start-up mentality, as the company's head of research and development in the UK, Henk Koopmans, put it, is its 2012 acquisition of the Center for Integrated Photonics, based in Ipswich (UK), which has recently developed a laser on a chip that can direct light into a fibre-optic cable.

This breakthrough discovery, an alternative to mainstream silicon-based semiconductors, provides Huawei with its own product based on indium phosphide technology and moves the company toward a situation where it no longer needs to rely on U.S. know-how.

As for high-profile foreign recruitment, Huawei recently managed to hire the renowned French mathematician Laurent Lafforgue, winner of the 2002 Fields Medal, often dubbed the Nobel Prize of mathematics, who will work at the company's research center in Paris. It also appointed the former head of BBC news programmes Gavin Allen as its executive editor-in-chief to improve its messaging strategy in the West.

According to Huawei's annual report published in 2020, the Shenzhen-based company had 197,000 employees worldwide, drawn from 162 different countries and regions. Moreover, it increased its headcount by 3,000 people between the end of 2019 and the end of 2020, with 53.4% of its employees working in R&D.

The main objective of the developments mentioned above is to lead the world in both 5G and 6G and thereby dominate the global standards of the future.

"We will not only lead the world in 5G; more importantly, we will aim to lead the world in wider domains," said Huawei's Ren Zhengfei in August. "We research 6G as a precaution, to seize the patent front, to make sure that when 6G one day really comes into use, we will not depend on others," Ren added.

Discussing the potential uses of 6G technology, Huawei's CEO told his employees that it might be able to detect and sense beyond the data-transmission capabilities of current technologies, with the potential to be utilized in healthcare and surveillance.

Does the U.S. Strategy Towards Huawei Work?

As we can see, the Chinese tech giant has not only proved resilient through years of damaging U.S. sanctions, but it has also taken significant steps toward becoming independent and, therefore, entirely out of Washington's reach.

Although, under intense pressure from Republicans, U.S. Commerce Secretary Gina Raimondo promised that the Biden administration would take further steps against Huawei if need be, it seems there is not much the U.S. can do to stop the Chinese company from moving ahead, without any U.S. permission, to develop in the sectors of the future while still making crucial contributions to existing ones.

At the same time, continuing with the Trump-era policies aimed at Huawei is not only hurting American companies but, according to a report from the National Foundation for American Policy published in August 2021, might also deal a significant blow to innovation and scientific research in the country.

"Restricting Huawei from doing business in the U.S. will not make the U.S. more secure or stronger; instead, this will only serve to limit the U.S. to inferior yet more expensive alternatives, leaving the U.S. lagging behind in 5G deployment, and eventually harming the interests of U.S. companies and consumers," Huawei said in what now appears to be a prophetic statement to CNBC in 2019.

On that note, perhaps instead of making meaningless promises to Republicans that the Biden administration won't be soft on the Chinese tech giant, Raimondo would make the U.S. better off by engaging with Huawei, or at least by rethinking the current policies, which are visibly not bringing the desired results yet are effectively working to undermine the U.S. national interest in the long run.

From our partner RIAC


See the original post:
China beats the USA in Artificial Intelligence and international awards - Modern Diplomacy

Artificial Intelligence in the Legal Field – Lexology

Artificial intelligence is a mechanism through which computers are programmed to undertake tasks that would otherwise be done by the human brain. Like everything else, it has both pros and cons. While the use of artificial intelligence can help complete a task in a few minutes, if it worked as well as it is claimed to, it could potentially take away the employment of thousands of people across the country. The growing influence of artificial intelligence (AI) can be seen across various industries, from IT to farming, from manufacturing to customer service. The Indian legal industry, meanwhile, has always been a little slower to adapt to technology and has seen minimal movement toward superior technology. This is perpetuated by the many lawyers who still feel comfortable with the same old system of functioning that was designed decades ago. Yet AI has managed to disrupt other industries, and with ever-growing case pendency and an increasing demand for self-service systems even in the legal fraternity, this once assumed-to-be utopian idea can become a reality for all lawyers. Some of the concerning questions that will be addressed in this article are as follows:

What are the changes that the Indian legal system has already witnessed?

The introduction of AI into the legal system has made a drastic impact on legal fraternities across the globe. The first global attempt at using AI for legal purposes came through the IBM Watson-powered robot ROSS, which used a unique method of mining data and interpreting trends and patterns in the law to solve research questions. Interestingly, the area that will be most affected is not the litigation process or arbitration matters but the back-end work that supports litigation and arbitration, such as research, data storage and usage, etc.

Due to the sheer volume of cases and the diversity of case matters, Indian laws and their interpretations keep changing and developing. If lawyers had access to AI-based technology that could help with research, the labour cost of research work could be significantly reduced, leading to greater profitability and a significant increase in the speed of getting work done. While this could lead to a reduction in staff members, i.e., paralegals and some associates, it would also increase overall productivity for all lawyers and lead to the fast-tracking of legal research and drafting.

One of the best examples is the use of the AI-based software Kira by Cyril Amarchand Mangaldas, which examines, identifies, and provides a refined search of the specific data needed with a reportedly high degree of precision. This has reportedly allowed the firm to focus on more important aspects of the litigation process and has reduced the repetitive and monotonous work usually done by paralegals, interns, and other entry-level employees.
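Kira's internals are proprietary, so the sketch below is only a crude stand-in for the general idea: ranking clauses or precedents against a research query by text similarity. It assumes scikit-learn is installed, and the clause texts are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented sample clauses standing in for a firm's document corpus
documents = [
    "The lessee shall indemnify the lessor against all claims arising from use of the premises.",
    "Either party may terminate this agreement with thirty days written notice.",
    "Disputes shall be referred to arbitration seated in Mumbai under the Arbitration Act.",
]

# Represent each document as a TF-IDF vector, then rank by cosine similarity
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

query = "termination notice period"
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix).ravel()

# Print documents by relevance to the query, best first
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc[:60]}")
```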

In fact, several noted jurists and judges have spoken positively about the necessity of such AI-based software, which could be useful for the docketing system and simple decision-making processes. Some of their statements are as follows:

Justice SA Bobde said: "We must increasingly focus on harnessing IT and IT-enabled services (ITES) for providing more efficient and cost-effective access to and delivery of justice. This must also include undertaking serious study concerning the future of artificial intelligence in law, especially how artificial intelligence can assist in judicial decision-making. I believe exploring this interface would be immensely beneficial for many reasons. For instance, it would allow us to streamline courts' caseloads through enabling better court management. This would be a low-hanging fruit. On the other end of the spectrum, it will allow us to shift judicial time from routine, simple, straightforward matters (e.g., cases which are non-rivalrous) and apply it to more complex, intricate matters that require more human attention and involvement. Therefore, in India, identification of such matters and developing relevant technology ought to be our next focus."

Justice DY Chandrachud said: "The idea of artificial intelligence is not to supplant the human brain or the human mind or the presence of judges, but to provide a facilitative tool to judges to reassess the processes which they follow, to reassess the work which they do, and to ensure that their outcomes are more predictable and consistent, ultimately providing wider access to justice to the common citizen."

What legal problems can AI solve in India?

While the country admittedly has a massive issue with its judicial system owing to the massive pendency and huge volume of unresolved cases, the inclusion of AI can help resolve a majority of these problems. The introduction of this technological advancement will aid lawyers in conducting legal research in an efficient and timely manner, ensuring that AI-equipped lawyers can focus more on advising their clients and taking up complex issues and cases. It can also help in assessing the potential outcomes of pending cases and could be of great assistance to courts and private parties in deciding which cases to pursue, which to resolve amicably if possible, and which to let go of.

Some of the benefits of implementing the nation-wide use of AI systems are as follows:

What are the changes needed for the AI systems in India and the road ahead?

While there are several benefits to lawyers, firms, and the judiciary in implementing AI in the legal fraternity, there are a few caveats as well. As with any form of technology, the risks of data infringement, cyber-attacks, and hacking attempts are a constant threat. Faulty software is also a perennial concern with technology, especially with technologies that are relatively untested and new to the market.

There are also some questions regarding the ethics of AI. An important point to keep in mind is that artificial intelligence software does not have a mind of its own. Although it appears to think before taking an action, its actions are entirely programmed, and there is always an issue of trustworthiness: AI needs to have a defined ethical purpose and technically robust and reliable systems. These issues were also seen to persist in the highly acclaimed ROSS, which experienced several glitches.

There is also another issue that arises with implementing artificial intelligence: affordability. The cost of this AI software is a factor that needs deliberation, and the maintenance of AI facilities is an added concern, with firms investing in privatized AI research facilities as mentioned earlier. The investment required to establish and operate such systems would be expensive, creating a division of technological capabilities ab initio. This is before factoring in the unknown learning curve for the lawyers, firms, and judiciary members who would use such technology.

With these challenges in mind, the regulations governing AI use must be carefully considered, particularly with respect to how the judiciary uses it. There has always been, and always will be, a sense of mistrust of technologies such as these, but progress needs to be made slowly; it cannot be drastic at this point, without an understanding of its legal, financial, and security implications. The following actions must be taken when the usage of AI is eventually implemented:

View post:
Artificial Intelligence in the Legal Field: - Lexology

Artificial Intelligence Is Smart, but It Doesn't Play Well With Others – SciTechDaily

Humans find AI to be a frustrating teammate when playing a cooperative game together, posing challenges for teaming intelligence, a study shows.

When it comes to games such as chess or Go, artificial intelligence (AI) programs have far surpassed the best players in the world. These superhuman AIs are unmatched competitors, but perhaps harder than competing against humans is collaborating with them. Can the same technology get along with people?

In a new study, MIT Lincoln Laboratory researchers sought to find out how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it has never met before. In single-blind experiments, participants played two series of the game: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS).

When playing the cooperative card game Hanabi, humans felt frustrated and confused by the moves of their AI teammate. Credit: Bryan Mastergeorge

"It really highlights the nuanced distinction between creating AI that performs objectively well and creating AI that is subjectively trusted or preferred," says Ross Allen, co-author of the paper and a researcher in the Artificial Intelligence Technology Group. "It may seem those things are so close that there's not really daylight between them, but this study showed that those are actually two separate problems. We need to work on disentangling those."

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges like defending from missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning.

A reinforcement learning AI is not told which actions to take; instead, it discovers which actions yield the most numerical reward by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren't programmed to follow if/then statements, because the possible outcomes of the human tasks they're slated to tackle, like driving a car, are far too many to code.

"Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data," Allen says. "The sky's the limit in what it could, in theory, do."
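To ground the idea, here is a toy tabular Q-learning loop on a five-state corridor. The agent is never told the right action; it only sees rewards and gradually learns which actions accumulate the most. This is a minimal sketch of the general technique, not the deep reinforcement learning used for the study's Hanabi agent.

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left / move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        if random.random() < epsilon:
            a = random.randrange(2)                    # explore
        else:
            a = max((0, 1), key=lambda i: Q[s][i])     # exploit current estimate
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0             # reward only at the goal
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])  # TD update
        s = s_next

print(Q)  # the right-moving action ends up with the higher value in each state
```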

Today, researchers are using Hanabi to test the performance of reinforcement learning models developed for collaboration, in much the same way that chess has served as a benchmark for testing competitive AI for decades.

The game of Hanabi is akin to a multiplayer form of Solitaire. Players work together to stack cards of the same suit in order. However, players may not view their own cards, only the cards that their teammates hold. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card from their own hand to stack next.
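A stripped-down sketch of that information structure helps show why the game is hard: each player's observation includes everyone's hand except their own. The class below is a simplified illustration and omits hint tokens, fuse tokens, discards, and the real hint grammar.

```python
import random
from dataclasses import dataclass, field

COLORS = ["red", "blue", "green", "white", "yellow"]

@dataclass
class HanabiState:
    hands: list                                  # hands[p] = list of (color, rank)
    fireworks: dict = field(default_factory=lambda: {c: 0 for c in COLORS})

    def observe(self, player: int) -> dict:
        """What a player actually sees: everyone else's cards, never their own."""
        return {
            "visible_hands": {p: h for p, h in enumerate(self.hands) if p != player},
            "own_hand_size": len(self.hands[player]),
            "fireworks": dict(self.fireworks),
        }

    def play(self, player: int, card_index: int) -> bool:
        """A played card succeeds only if it is the next rank in its color's stack."""
        color, rank = self.hands[player].pop(card_index)
        if self.fireworks[color] == rank - 1:
            self.fireworks[color] = rank
            return True
        return False  # in the full game this would also burn a fuse token

deck = [(c, r) for c in COLORS for r in (1, 1, 1, 2, 2, 3, 3, 4, 4, 5)]
random.shuffle(deck)
state = HanabiState(hands=[[deck.pop() for _ in range(5)] for _ in range(2)])
print(state.observe(0))  # player 0 sees player 1's hand but not their own
```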

The Lincoln Laboratory researchers did not develop either the AI or rule-based agents used in this experiment. Both agents represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the team achieved the highest-ever score for Hanabi play between two unknown AI agents.

"That was an important result," Allen says. "We thought, if these AI that have never met before can come together and play really well, then we should be able to bring humans that also know how to play very well together with the AI, and they'll also do very well. That's why we thought the AI team would objectively play better, and also why we thought that humans would prefer it, because generally we'll like something better if we do well."

Neither of those expectations came true. Objectively, there was no statistical difference in the scores between the AI and the rule-based agent. Subjectively, all 29 participants reported in surveys a clear preference toward the rule-based teammate. The participants were not informed which agent they were playing with for which games.

"One participant said that they were so stressed out at the bad play from the AI agent that they actually got a headache," says Jaime Pena, a researcher in the AI Technology and Systems Group and an author on the paper. "Another said that they thought the rule-based agent was dumb but workable, whereas the AI agent showed that it understood the rules, but that its moves were not cohesive with what a team looks like. To them, it was giving bad hints, making bad plays."

This perception of the AI making bad plays links to surprising behavior researchers have observed previously in reinforcement learning work. For example, in 2016, when DeepMind's AlphaGo first defeated one of the world's best Go players, one of the most widely praised moves made by AlphaGo was move 37 in game 2, a move so unusual that human commentators thought it was a mistake. Later analysis revealed that the move was actually extremely well calculated, and it was described as genius.

Such moves might be praised when an AI opponent performs them, but they're less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that strange or seemingly illogical moves were the worst offenders in breaking humans' trust in their AI teammate in these closely coupled teams. Such moves not only diminished players' perception of how well they and their AI teammate worked together, but also how much they wanted to work with the AI at all, especially when any potential payoff wasn't immediately obvious.

"There was a lot of commentary about giving up, comments like 'I hate working with this thing,'" adds Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group.

Participants who rated themselves as Hanabi experts, which the majority of players in this study did, more often gave up on the AI player. Siu finds this concerning for AI developers, because key users of this technology will likely be domain experts.

"Let's say you train up a super-smart AI guidance assistant for a missile defense scenario. You aren't handing it off to a trainee; you're handing it off to your experts on your ships who have been doing this for 25 years. So, if there is a strong expert bias against it in gaming scenarios, it's likely going to show up in real-world ops," he adds.

The researchers note that the AI used in this study wasn't developed for human preference. But that's part of the problem: not many are. Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

"If researchers don't focus on the question of subjective human preference, then we won't create AI that humans actually want to use," Allen says. "It's easier to work on AI that improves a very clean number. It's much harder to work on AI that works in this mushier world of human preferences."

Solving this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, under which this experiment was funded in Lincoln Laboratory's Technology Office, in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project is studying what has prevented collaborative AI technology from leaping out of the game space and into messier reality.

The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year.

"You can imagine we rerun the experiment, but after the fact, and this is much easier said than done, the human could ask, 'Why did you do that move? I didn't understand it.' If the AI could provide some insight into what they thought was going to happen based on their actions, then our hypothesis is that humans would say, 'Oh, weird way of thinking about it, but I get it now,' and they'd trust it. Our results would totally change, even though we didn't change the underlying decision-making of the AI," Allen says.

Like a huddle after a game, this kind of exchange is often what helps humans build camaraderie and cooperation as a team.

"Maybe it's also a staffing bias. Most AI teams don't have people who want to work on these squishy humans and their soft problems," Siu adds, laughing. "It's people who want to do math and optimization. And that's the basis, but that's not enough."

Mastering a game such as Hanabi between AI and humans could open up a universe of possibilities for teaming intelligence in the future. But until researchers can close the gap between how well an AI performs and how much a human likes it, the technology may well remain at machine versus human.

Reference: "Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi" by Ho Chit Siu, Jaime D. Pena, Kimberlee C. Chang, Edenna Chen, Yutai Zhou, Victor J. Lopez, Kyle Palko and Ross E. Allen, accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS). arXiv:2107.07630

More:
Artificial Intelligence Is Smart, but It Doesnt Play Well With Others - SciTechDaily

Filings buzz in the railway industry: Increase in artificial intelligence mentions – Railway Technology

Mentions of artificial intelligence within the filings of companies in the railway industry rose 64% between the first and second quarters of 2021.

In total, the frequency of sentences related to artificial intelligence between July 2020 and June 2021 was 137% higher than in 2016, when GlobalData, from whom our data for this article is taken, first began to track the key issues referred to in company filings.

When companies in the railway industry publish annual and quarterly reports, ESG reports, and other filings, GlobalData analyses the text and identifies individual sentences that relate to disruptive forces facing companies in the coming years. Artificial intelligence is one of these topics; companies that excel and invest in these areas are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

To assess whether artificial intelligence is featuring more in the summaries and strategies of companies in the railway industry, two measures were calculated. First, we looked at the percentage of companies that mentioned artificial intelligence at least once in filings during the past twelve months: this was 78%, compared to 52% in 2016. Second, we calculated the percentage of total analysed sentences that referred to artificial intelligence.
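In sketch form, the two measures can be computed as follows, assuming filings have already been split into sentences. The keyword matching here is deliberately crude compared with GlobalData's actual classification, and the sample data is invented.

```python
def ai_measures(filings_by_company: dict) -> tuple:
    """Returns (% of companies with >=1 AI sentence, % of all sentences that are AI-related)."""
    keywords = ("artificial intelligence", "machine learning", "smart robot")
    companies_with_mention = 0
    ai_sentences = total_sentences = 0
    for sentences in filings_by_company.values():
        hits = [s for s in sentences if any(k in s.lower() for k in keywords)]
        companies_with_mention += bool(hits)   # counts each company at most once
        ai_sentences += len(hits)
        total_sentences += len(sentences)
    return (100 * companies_with_mention / len(filings_by_company),
            100 * ai_sentences / total_sentences)

sample = {
    "RailCo": ["We invest in artificial intelligence.", "Revenue grew 4%."],
    "TrackInc": ["Safety remains our priority."],
}
print(ai_measures(sample))  # (50.0, 33.33...): 1 of 2 companies, 1 of 3 sentences
```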

Of the 50 biggest employers in the railway industry, Hitachi Transport System, Ltd. was the company that referred to artificial intelligence the most between July 2020 and June 2021. GlobalData identified 83 artificial intelligence-related sentences in the Japan-based company's filings, or 2.4% of all sentences. XPO Logistics Inc mentioned artificial intelligence the second most; the issue was referred to in 1.3% of sentences in the company's filings. Other top employers with many artificial intelligence mentions included East Japan Railway Co, Yamato Holdings Co Ltd and ID Logistics Group.

Across all companies in the railway industry, the filing published in the second quarter of 2021 that exhibited the greatest focus on artificial intelligence came from XPO Logistics Inc. Of the document's 1,093 sentences, 11 (1%) referred to artificial intelligence.

This analysis provides an approximate indication of which companies are focusing on artificial intelligence and how important the issue is considered within the railway industry, but it also has limitations and should be interpreted carefully. For example, a company mentioning artificial intelligence more regularly is not necessarily proof that it is utilising new techniques or prioritising the issue, nor does it indicate whether the company's ventures into artificial intelligence have been successes or failures.

GlobalData also categorises artificial intelligence mentions by a series of subthemes. Of these subthemes, the most commonly mentioned in the second quarter of 2021 was 'smart robots', which made up 82% of all artificial intelligence subtheme mentions by companies in the railway industry.

By Andrew Hillman.

Methodology:

GlobalData's unique job analytics enables understanding of hiring trends, strategies, and predictive signals across sectors, themes, companies, and geographies. Intelligent web crawlers capture data from publicly available sources. Key parameters include active, posted, and closed jobs, posting duration, experience, seniority level, educational qualifications, and skills.


Go here to see the original:
Filings buzz in the railway industry: Increase in artificial intelligence mentions - Railway Technology

AI That Can Learn Cause-and-Effect: These Neural Networks Know What They’re Doing – SciTechDaily

A certain type of artificial intelligence agent can learn the cause-and-effect basis of a navigation task during training.

Neural networks can learn to solve all sorts of problems, from identifying cats in photographs to steering a self-driving car. But whether these powerful, pattern-recognizing algorithms actually understand the tasks they are performing remains an open question.

For example, a neural network tasked with keeping a self-driving car in its lane might learn to do so by watching the bushes at the side of the road, rather than learning to detect the lanes and focus on the road's horizon.

Researchers at MIT have now shown that a certain type of neural network is able to learn the true cause-and-effect structure of the navigation task it is being trained to perform. Because these networks can understand the task directly from visual data, they should be more effective than other neural networks when navigating in a complex environment, like a location with dense trees or rapidly changing weather conditions.

In the future, this work could improve the reliability and trustworthiness of machine learning agents that are performing high-stakes tasks, like driving an autonomous vehicle on a busy highway.

MIT researchers have demonstrated that a special class of deep learning neural networks is able to learn the true cause-and-effect structure of a navigation task during training. Credit: Stock Image

"Because these machine-learning systems are able to perform reasoning in a causal way, we can know and point out how they function and make decisions. This is essential for safety-critical applications," says co-lead author Ramin Hasani, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Co-authors include electrical engineering and computer science graduate student and co-lead author Charles Vorbach; CSAIL PhD student Alexander Amini; Institute of Science and Technology Austria graduate student Mathias Lechner; and senior author Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of CSAIL. The research will be presented at the 2021 Conference on Neural Information Processing Systems (NeurIPS) in December.

Neural networks are a method for doing machine learning in which the computer learns to complete a task through trial and error by analyzing many training examples. Liquid neural networks go a step further: they change their underlying equations to continuously adapt to new inputs.
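A schematic, scalar version of a liquid time-constant update (in the spirit of Hasani and colleagues' LTC work) shows what "changing the underlying equations" means: an input-dependent gate alters the cell's effective time constant at every step. Real NCPs use many such units with learned parameters, state-dependent gates, and proper ODE solvers; this Euler-stepped toy only exposes the mechanism.

```python
import math

def gate(i: float, w: float = 1.0, b: float = 0.0) -> float:
    """Input-dependent gate: a sigmoid of a (nominally learned) affine map."""
    return 1.0 / (1.0 + math.exp(-(w * i + b)))

def ltc_step(x: float, i: float, dt: float = 0.05, tau: float = 1.0, A: float = 1.0) -> float:
    # dx/dt = -(1/tau + f(i)) * x + f(i) * A
    # The gate f(i) appears in both the decay rate and the drive, so the
    # input reshapes the dynamics themselves, not just the cell's output.
    f = gate(i)
    return x + dt * (-(1.0 / tau + f) * x + f * A)

x = 0.0
for t in range(200):
    i = 1.0 if t < 100 else -1.0   # the input regime changes halfway through
    x = ltc_step(x, i)
    # x relaxes toward an input-dependent equilibrium; when the input flips,
    # both the target and the speed of relaxation change with it
print(x)
```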

The new research draws on previous work in which Hasani and others showed how a brain-inspired type of deep learning system called a Neural Circuit Policy (NCP), built from liquid neural network cells, is able to autonomously control a self-driving vehicle with a network of only 19 control neurons.

The researchers observed that the NCPs performing a lane-keeping task kept their attention on the road's horizon and borders when making a driving decision, the same way a human would (or should) while driving a car. Other neural networks they studied didn't always focus on the road.

"That was a cool observation, but we didn't quantify it. So, we wanted to find the mathematical principles of why and how these networks are able to capture the true causation of the data," he says.

They found that, when an NCP is being trained to complete a task, the network learns to interact with the environment and account for interventions. In essence, the network recognizes if its output is being changed by a certain intervention, and then relates the cause and effect together.

During training, the network is run forward to generate an output, and then backward to correct for errors. The researchers observed that NCPs relate cause-and-effect during forward-mode and backward-mode, which enables the network to place very focused attention on the true causal structure of a task.
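The forward/backward loop itself is ordinary supervised training, shown here in miniature with PyTorch. This is not the researchers' NCP setup, just the two phases the passage describes: forward to produce an output, backward to propagate the error and adjust the weights.

```python
import torch

# A minimal forward/backward training loop on random data (illustrative only)
model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

inputs = torch.randn(32, 4)
targets = torch.randn(32, 1)

for step in range(100):
    predictions = model(inputs)            # forward pass: generate an output
    loss = loss_fn(predictions, targets)   # measure the error
    optimizer.zero_grad()
    loss.backward()                        # backward pass: compute gradients
    optimizer.step()                       # correct the weights
```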

Hasani and his colleagues didn't need to impose any additional constraints on the system or perform any special setup for the NCP to learn this causality.

"Causality is especially important to characterize for safety-critical applications such as flight," says Rus. "Our work demonstrates the causality properties of Neural Circuit Policies for decision-making in flight, including flying in environments with dense obstacles such as forests and flying in formation."

They tested NCPs through a series of simulations in which autonomous drones performed navigation tasks. Each drone used inputs from a single camera to navigate.

The drones were tasked with traveling to a target object, chasing a moving target, or following a series of markers in varied environments, including a redwood forest and a neighborhood. They also traveled under different weather conditions, like clear skies, heavy rain, and fog.

The researchers found that the NCPs performed as well as the other networks on simpler tasks in good weather, but outperformed them all on the more challenging tasks, such as chasing a moving object through a rainstorm.

"We observed that NCPs are the only network that pays attention to the object of interest in different environments while completing the navigation task, wherever you test it, and in different lighting or environmental conditions. This is the only system that can do this causally and actually learn the behavior we intend the system to learn," he says.

Their results show that the use of NCPs could also enable autonomous drones to navigate successfully in environments with changing conditions, like a sunny landscape that suddenly becomes foggy.

"Once the system learns what it is actually supposed to do, it can perform well in novel scenarios and environmental conditions it has never experienced. This is a big challenge of current machine learning systems that are not causal. We believe these results are very exciting, as they show how causality can emerge from the choice of a neural network," he says.

In the future, the researchers want to explore the use of NCPs to build larger systems. Putting thousands or millions of networks together could enable them to tackle even more complicated tasks.

Reference: "Causal Navigation by Continuous-time Neural Networks" by Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner and Daniela Rus, 15 June 2021, Computer Science > Machine Learning. arXiv:2106.08314

This research was supported by the United States Air Force Research Laboratory, the United States Air Force Artificial Intelligence Accelerator, and the Boeing Company.

Continued here:
AI That Can Learn Cause-and-Effect: These Neural Networks Know What They're Doing - SciTechDaily