Winners and losers in the fulfilment of national artificial intelligence aspirations – Brookings Institution

The quest for national AI success has electrified the world; at last count, 44 countries have entered the race by creating their own national AI strategic plans. While the inclusion of countries like China, India, and the U.S. is expected, less expected countries, including Uganda, Armenia, and Latvia, have also drafted national plans in hopes of realizing AI's promise. Our earlier posts, entitled How different countries view artificial intelligence and Analyzing artificial intelligence plans in 34 countries, detailed how countries are approaching national AI plans, as well as how to interpret those plans. In this piece, we go a step further by examining indicators of future AI needs.

Clearly, having a national AI plan is a necessary but not sufficient condition for achieving its goals. In previous posts, we noted that AI plans were largely aspirational, and that moving from aspiration to successful implementation requires substantial public and private investment and effort.

To analyze the implementation to date of countries' national AI objectives, we assembled a country-level dataset containing: the number and size of supercomputers in the country as a measure of technological infrastructure, the amount of public and private spending on AI initiatives, the number of AI startups in the country, the number of AI patents and conference papers the country's scholars produced, and the number of people with STEM backgrounds in the country. Taken together, these elements provide valuable insights into how far along a country is in implementing its plan.

As analyzing each of the data elements individually presented some data challenges, we conducted a factor analysis to determine if there was a logical grouping of the data elements. Factor analysis reveals the underlying structure of data; that is, the technique mathematically determines how many groups (or factors) of data exist by analyzing which data elements are most closely related to other elements.
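To make the technique concrete, here is a minimal sketch of how such a factor analysis might be run in Python, fitting the two factors the analysis ultimately found. The file name and column names are placeholders, not the authors' actual dataset.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

# Hypothetical country-level dataset: one row per country, one column per indicator.
df = pd.read_csv("ai_indicators.csv")
features = ["supercomputers", "ai_spending", "ai_startups", "ai_patents",
            "conference_papers", "stem_graduates", "ai_hiring", "skill_penetration"]

# Standardize so indicators measured on very different scales are comparable.
X = StandardScaler().fit_transform(df[features])

# Fit a two-factor model; the loadings show which indicators group together.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
loadings = pd.DataFrame(fa.components_.T, index=features,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))

Indicators with large absolute loadings on the same column belong to the same factor, which is how the people-related and technology-related groupings described below emerge.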

Given that our data included five distinct dimensions (i.e., technology infrastructure, AI startups, spending, patents and conference papers, and people), we expected that five factors would emerge, particularly since the dimensions appear to be relatively separate and distinct. But the data showed otherwise. In all, the factor analysis revealed that all of the data elements fall under just two factors: people-related and technology-related.

The first factor is the set of AI hiring, STEM graduates, and technology skill penetration data points, which are all associated with the people side of AI. Without qualified people, AI implementations are unlikely to be effective.

The second factor comprises all the non-people data elements of AI: computing power, AI startups, investment, conference and journal papers, and AI patent submissions. In looking at these elements, we realized that all of them are technology-related, from either a hardware or a thought-leadership standpoint.

Given these findings, we can treat the data as containing two distinct categories: people and technology. Figure 1 shows where a select set of countries sit along these dimensions.

The countries in the upper right-hand corner we dub Leaders; they have both the people (factor 1) and the technology (factor 2) to meet their goals. Countries in the lower right quadrant we dub Technically Prepared, because they score higher on the technology dimension (factor 2) but lower on the people dimension (factor 1). Countries in the upper left quadrant we dub People Prepared, because they score higher on the people dimension (factor 1) but lower on the technology dimension (factor 2). The final quadrant, the lower left, we dub the Aspirational quadrant, since those countries have not yet substantially moved forward on either the people or the technology dimension (factors 1 and 2, respectively) in achieving their national AI strategies.

China is unmistakably closer to achieving its national AI strategy goals. It is both a leader in the technical dimension and a leader in the people dimension. Of note is that, while China is strongly positioned in both dimensions, it is not highest in either dimension; the U.S. is higher in the technical dimension, and India, Singapore, and Germany are all higher on the people dimension. Given the population of China and its overall investment in AI-related spending, it is not surprising that China has an early and commanding lead over other countries.

The U.S., while a leader in the technology dimension, particularly in the sub-dimensions of investments and patents, ranks a dismal 15th in the people dimension, behind countries such as Russia, Portugal, and Sweden. This is especially clear in the sub-dimension of STEM graduates, where it ranks near the bottom. While the vast U.S. spending advantage has given it an early lead in the technology dimension, we suspect that the overall lack of STEM-qualified individuals is likely to significantly constrain the U.S. in achieving its strategic goals in the future.

By contrast, India holds a small but measurable lead over other countries in the people dimension, but is noticeably lagging in the technology dimension, particularly in the investment sub-dimension. This is not surprising, as India has long been known for its education prowess but has not invested equally with leaders in the technology dimension.

Our focus on China, the U.S., and India is not to suggest that these are the only countries that can achieve their national AI objectives. Other countries, notably South Korea, Germany, and the United Kingdom, are just outside the top positions and, by virtue of being generally well-balanced between the people and technology dimensions, have an excellent chance to close the gap.

At present, China, the U.S., and India are leading the way in implementing national AI plans. Yet China has already hit on a balanced strategy that has thus far eluded the U.S. and India. This suggests that China merely needs to continue its current strategy, whereas the U.S. and India must refine theirs to keep pace. These leaders are closely followed by South Korea, Germany, and the United Kingdom.

In future posts, we will dive deeper into both the people and technology dimensions, and will dissect specific shortfalls for each country, as well as what can be done to address these shortfalls. Anything short of a substantial national commitment to AI achievement is likely to relegate the country to the status of a second-tier player in the space. If the U.S. wants to dominate this space, it needs to improve the people dimension of technology innovation and make sure it has the STEM graduates required to push its AI innovation to new heights.


China beats the USA in Artificial Intelligence and international awards – Modern Diplomacy

There is no doubt that the return of Huawei's CFO Meng Wanzhou to Beijing marks a historic event for the entire country, one that made every Chinese person incredibly proud, especially given its timing, as the National Day celebrations took place on October 1.

"Where there is a five-star red flag, there is a beacon of faith. If faith has a color, it must be China red," Ms. Meng said to the cheering crowd at Shenzhen airport after returning home from Canada. She added that "all the frustration and difficulties, gratitude and emotion, steadfastness and responsibility will transform into momentum for moving us forward, into courage for our all-out fight."

Regardless of how encouraging the Chinese tech giant heiress's words may sound, the fact remains that the company is still a target of U.S. prosecution and sanctions, something that is not about to change anytime soon.

When the Sanctions Bite

It was former U.S. President Donald Trump who in May 2019 signed an order allowing then-Commerce Secretary Wilbur Ross to halt any transactions involving information or communications technology that posed an unacceptable risk to the country's national security. As a result, the same month, Huawei and its non-U.S. affiliates were added to the Bureau of Industry and Security (BIS) Entity List, which meant that any American companies wishing to sell or transfer technology to the company would have to obtain a licence issued by the BIS.

In May 2020, the U.S. Department of Commerce expanded the Foreign-Produced Direct Product (FDP) Rule, restricting the Chinese tech giant from acquiring foreign-made semiconductors produced or developed from certain U.S. technology or software. It went even further in August of the same year by issuing the Final Rule, which prohibits the re-export, export from abroad, or transfer (in-country) of (i) certain foreign-produced items controlled under the amended footnote 1 to the Entity List (New Footnote 1) when there is (ii) knowledge of certain circumstances, the scope of which was also expanded.

Moreover, the decision also removed the Temporary General License (TGL) previously authorizing certain transactions with Huawei and added thirty-eight additional affiliates of the Chinese company to the Entity List.

In these particular circumstances, despite Bloomberg's prediction early in 2020 that Trump's decision to blacklist Huawei would fail to stop its growth, the current reality does seem to be changing for the company that was, once and briefly, the world's largest smartphone vendor.

The impact of the U.S. sanctions has already resulted in a drop in smartphone sales of more than 47% in the first half of 2021, while total revenue fell by almost 30% compared with the same period in 2020. As estimated by rotating Chairman Eric Xu, the company's revenue from smartphone sales will drop by at least $30-40 billion this year.

For the record, Huawei's smartphone sales accounted for $50 billion in revenue last year. The company has generated $49.57 billion in revenue in total so far, which is said to be the most significant drop in its history.

In Search of Alternative Income Streams

Despite finding itself in dire straits, the company is in constant search of new sources of income. It recently decided to charge patent royalties from other smartphone makers for the use of its 5G technologies, with a per-unit royalty cap of $2.50 for every multimode mobile device capable of connecting to 5G and previous generations of mobile networks. Huawei's rate is lower than those charged by Nokia ($3.58 per device) and Ericsson ($2.50-$5 per device).

Notably, according to data from the intellectual property research organization GreyB, Huawei has 3,007 declared 5G patent families and over 130,000 5G active patents worldwide, making the Chinese company the largest patent holder globally.

Jason Ding, head of Huawei's intellectual property rights department, said earlier this year that the company would collect about $1.2-1.3 billion in revenue from patent licensing between 2019 and 2021. But royalties will not be the only revenue source for the company.
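As a rough back-of-the-envelope check, purely illustrative and assuming every licensed device paid the full cap, those figures would imply on the order of half a billion licensed devices over the period:

# Illustrative arithmetic only; real licensing terms vary by deal.
royalty_cap = 2.50          # USD cap per 5G multimode device
licensing_revenue = 1.25e9  # midpoint of the $1.2-1.3 billion 2019-2021 estimate

implied_devices = licensing_revenue / royalty_cap
print(f"~{implied_devices / 1e6:.0f} million devices")  # ~500 million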

Investing in the Future: Cloud Services and Smart Cars

Apart from digitizing domestic companies in sectors like coal mining and port operations, which increased its revenue by 23% last year and 18% in the first part of 2021, Huawei is looking far into the future, slowly steering away from its dependency on foreign chip supplies by setting its sights on cloud services and software for smart cars.

Seizing an opportunity to improve the currently not-so-perfect cloud service environment, the Chinese tech giant is swiftly moving to claim its share of the sector by creating new cloud services targeting companies and government departments. For this purpose, it plans to inject $100 million into SMEs over a three-year period to expand Huawei Cloud.

As of today, Huawei's cloud business is reported to have grown by 116% in the first quarter of 2021, with a 20% share of a $6 billion market in China, as Canalys reports.

"Huawei Cloud's results have been boosted by Internet customers and government projects, as well as key wins in the automotive sector. It is a growing part of Huawei's overall business," said Matthew Ball, a chief analyst at Canalys. He added that although 90% of this business is based in China, Huawei Cloud has a more substantial footprint in Latin America, Europe, the Middle East, and Africa than Alibaba Cloud and Tencent Cloud.

Another area where Huawei is trying its luck is electric and autonomous vehicles, in which the company is planning to invest $1 billion this year alone. Although the company has repeatedly made it clear that it is unwilling to build cars itself, Huawei wants to "help the car connect and make it more intelligent," as one of its officials noted.

During the 2021 Shanghai Auto Show, Huawei and Arcfox Polar Fox released the brand-new Polar Fox Alpha S Huawei Hi, and China's GAC revealed a plan to roll out a car with the Chinese tech company after 2024. Meanwhile, Huawei is already selling the Cyrus SF5, a smart Chinese car from Chongqing Xiaokang equipped with the Huawei DriveONE electric drive system, from its experience stores for the first time in the company's history. What's more, the car is also on sale online.

R&D and International Talent as Crucial Ingredients to Becoming a Tech Pioneer

There is a visible emphasis on investing in high-quality research and development to drive innovation, both at Huawei and in China as a whole.

According to the company's data, the Chinese technology giant invested $19.3 billion in R&D in 2019, which accounted for 13.9% of its total business revenue, and $22 billion last year, around 16% of its revenue. Interestingly, if Huawei were treated as a provincial administrative region, its R&D expenditure would rank seventh nationwide.

As reported by China's National Bureau of Statistics, total R&D spending in China was 2.44 trillion yuan last year, up 10.6% year-on-year, after 2.21 trillion yuan in 2019, which was up 12.3% year-on-year.

As far as activities are concerned, the most was spent in 2020 on experimental development (2.02 trillion yuan, or 82.7% of total spending), followed by applied research (275.72 billion yuan, or 11.3%) and basic research (146.7 billion yuan, or 6%). Enterprises spent the most (1.87 trillion yuan, up 10.4% year-on-year), while governmental research institutions spent 340.88 billion yuan (up 10.6% year-on-year) and universities and colleges 188.25 billion yuan (up 4.8% year-on-year).

As far as industries go, it is also worth mentioning that high-tech manufacturing R&D spending accounted for 464.91 billion yuan, with equipment manufacturing standing at 913.03 billion yuan. State science and technology spending accounted for 1.01 trillion yuan, 60 billion yuan less than in 2019.

As Huawei raises its budget for overseas R&D, the company also plans to invest in human resources by attracting the brightest foreign minds into its business, in some ways a by-product of the Trump-era visa limitations imposed on Chinese students.

Having so far concentrated on recruiting Chinese talent educated abroad, Huawei is now determined to broaden its talent pool with "tall noses," as mainland Chinese sometimes refer to people of non-Chinese origin.

"Now we need to focus on bringing in talent with tall noses and allocate a bigger budget for our overseas research centres," said the company's founder Ren Zhengfei in a speech made in August. "We need to turn Huawei's research center in North America into a talent recruitment hub," Ren added.

While Huawei wants to scout for those with experience working in the U.S. and Europe, it intends to offer salaries comparable to U.S. market standards to make its offers attractive enough.

What seems extraordinary and crucial when looking at China through the Huawei lens is that, to the chagrin of its critics, the company is indeed opening up to the outside world as it aims to replenish every facet of its business.

"We need to further liberate our thoughts and open our arms to welcome the best talent in the world," to quote Ren, in an attempt to help the company become more assimilated in overseas markets as a global enterprise in three to five years.

The Chinese tech giant aims to attract international talent to its new 1.6 million square meter research campus in Qingpu, Shanghai, which will house 30,000 to 40,000 research staff primarily working on handset and IoT chips. The Google-like campus is expected to be completed in 2023.

The best sign of Huawei's slow embrace of the start-up mentality, as the company's head of research and development in the UK, Henk Koopmans, put it, is its 2012 acquisition of the Ipswich-based (UK) Center for Integrated Photonics, which has recently developed a laser on a chip that can direct light into a fibre-optic cable.

This breakthrough, which creates an alternative to mainstream silicon-based semiconductors, provides Huawei with its own product based on indium phosphide technology and brings the company closer to no longer needing to rely on U.S. know-how.

As for high-profile foreign recruitments, Huawei has recently managed to hire the renowned French mathematician Laurent Lafforgue, winner of the 2002 Fields Medal, often dubbed the Nobel Prize of mathematics, who will work at the company's research center in Paris. It has also appointed the former head of BBC news programmes, Gavin Allen, as its executive editor-in-chief to improve its messaging strategy in the West.

According to Huawei's annual report published in 2020, the Shenzhen-based company had 197,000 employees worldwide, drawn from 162 different countries and regions. Moreover, it increased its headcount by 3,000 people between the end of 2019 and the end of 2020, with 53.4% of its employees working in R&D.

The main objective of the developments mentioned above is to lead the world in both 5G and 6G and thereby dominate the global standards of the future.

"We will not only lead the world in 5G; more importantly, we will aim to lead the world in wider domains," said Huawei's Ren Zhengfei in August. "We research 6G as a precaution, to seize the patent front, to make sure that when 6G one day really comes into use, we will not depend on others," Ren added.

Discussing the potential uses of 6G technology, Huawei's CEO told his employees that it might be able to detect and sense, going beyond the higher data-transmission capabilities of current technologies, with potential applications in healthcare and surveillance.

Does the U.S. Strategy Towards Huawei Work?

As we can see, the Chinese tech giant has not only proved resilient through years of harmful U.S. sanctions, but has also made significant strides toward becoming independent and, therefore, entirely out of Washington's punitive reach.

Although, under intense pressure from Republicans, U.S. Commerce Secretary Gina Raimondo promised that the Biden administration would take further steps against Huawei if need be, it seems there is not much the U.S. can do to stop the Chinese company from moving ahead, without any U.S. permission, to develop in the sectors of the future while still making crucial contributions to existing ones.

At the same time, continuing the Trump-era policies aimed at Huawei is not only hurting American companies but, according to a report from the National Foundation for American Policy published in August 2021, might also deal a significant blow to innovation and scientific research in the country.

"Restricting Huawei from doing business in the U.S. will not make the U.S. more secure or stronger; instead, this will only serve to limit the U.S. to inferior yet more expensive alternatives, leaving the U.S. lagging behind in 5G deployment, and eventually harming the interests of U.S. companies and consumers," Huawei said in what now appears to be a prophetic statement to CNBC in 2019.

On that note, perhaps instead of making empty promises to Republicans that the Biden administration would not be soft on the Chinese tech giant, Raimondo would make the U.S. better off by engaging with Huawei, or at least by rethinking the current policies, which are visibly not bringing the desired results and are effectively undermining the U.S. national interest in the long run.

From our partner RIAC


Artificial Intelligence in the Legal Field – Lexology

Artificial Intelligence is a mechanism through which computers are programmed to undertake tasks that would otherwise be done by the human brain. Like everything else, it has its pros and cons. While the use of Artificial Intelligence can help complete a task in a few minutes, if it worked as well as it is deemed to, it could also take away the employment of thousands of people across the country. The growing influence of Artificial Intelligence (AI) can be seen across various industries, from IT to farming, from manufacturing to customer service. The Indian legal industry, meanwhile, has always been slower to adapt to technology and has seen minimal adoption of superior technology, perpetuated by the many lawyers who still feel comfortable with the same old system of functioning that was designed decades ago. AI has managed to disrupt other industries, and with ever-growing pendency and increasing demand for self-service systems even in the legal fraternity, this once assumed-to-be utopian idea can become a reality for all lawyers. Some of the questions that will be addressed in this article are as follows:

What are the changes that the Indian legal system has already witnessed?

The introduction of AI into the legal system has made a drastic impact on legal fraternities across the globe. The first global attempt at using AI for legal purposes came through ROSS, an IBM Watson-powered robot that used a unique method of mining data and interpreting trends and patterns in the law to solve research questions. Interestingly, the area that will be most affected is not the litigation process or arbitration matters, but in fact the back-end work for litigation and arbitration, such as research, data storage and usage, etc.

Due to the sheer volume of cases and the diversity of case matters, Indian laws and their interpretations keep changing and developing. If lawyers had access to AI-based technology that could help with research, the labour cost of research work could be significantly reduced, improving profitability and significantly increasing the speed of getting work done. While this could lead to a reduction in staff, i.e., paralegals and some associates, it would also increase overall productivity for all lawyers and fast-track legal research and drafting.

One of the best examples is the use of the AI-based software Kira by Cyril Amarchand Mangaldas, which examines, identifies and provides a refined search on the specific data needed, with a reportedly high degree of precision. This has reportedly allowed the firm to focus on more important aspects of the litigation process and has reduced the repetitive and monotonous work usually done by paralegals, interns and other entry-level employees.

In fact, several noted jurists and judges have spoken favourably about the necessity of such AI-based software, which could be useful for the docketing system and simple decision-making processes. Some of the statements made by these eminent personalities are as follows:

Justice SA Bobde had said: "We must increasingly focus on harnessing IT and IT enabled services (ITES) for providing more efficient and cost-effective access to and delivery of justice. This must also include undertaking serious study concerning the future of Artificial Intelligence in law, especially how Artificial Intelligence can assist in judicial decision making. I believe exploring this interface would be immensely beneficial for many reasons. For instance, it would allow us to streamline courts' caseloads through enabling better court management. This would be a low hanging fruit. On the other end of the spectrum, it will allow us to shift the judicial time from routine-simple-straightforward matters (e.g. cases which are non-rivalrous) and apply them to more complex-intricate matters that require more human attention and involvement. Therefore, in India identification of such matters and developing relevant technology ought to be our next focus."

Justice DY Chandrachud said: "The idea of Artificial Intelligence is not to supplant the human brain or the human mind or the presence of judges but to provide a facilitative tool to judges to reassess the processes which they follow, to reassess the work which they do and to ensure that their outcomes are more predictable and consistent and ultimately provide wider access to justice to the common citizens."

What legal problems can AI solve in India?

While the country admittedly has a massive issue with its judicial system owing to the huge volume of pending and unresolved cases, the inclusion of AI can help resolve a majority of these problems. Technological advancement will aid lawyers in conducting legal research efficiently and in a timely manner, allowing AI-equipped lawyers to focus more on advising their clients and taking up complex issues and cases. AI can also help assess the potential outcome of pending cases, which could be of great assistance to courts and private parties in deciding which cases to pursue, which to resolve amicably if possible, and which to let go of.

Some of the benefits of implementing the nation-wide use of AI systems are as follows:

What are the changes needed for the AI systems in India and the road ahead?

While there are several benefits to lawyers, firms and the judiciary in bringing AI into the legal fraternity, there are a few caveats as well. As with any form of technology, the risks of data infringement, cyber-attacks and hacking attempts are a constant threat. Software errors are also a recurring concern, especially for technologies that are relatively new and untested in the market.

There are also questions regarding the ethics of an AI. An important point to keep in mind is that Artificial Intelligence software does not have a mind of its own. Although such systems do process information before taking an action, their actions are entirely programmed, and there is always an issue of trustworthiness, as AI needs a defined ethical purpose along with technically robust and reliable systems. These issues were also seen in the highly acclaimed ROSS, which experienced several glitches.

There is another issue that arises with implementing Artificial Intelligence: affordability. The maintenance of AI facilities is an added concern, with firms investing in privatised AI research facilities as mentioned earlier. The investment required to establish and operate such systems would be expensive, creating a divide in technological capabilities ab initio. This is before factoring in the unknown learning curve for the lawyers, firms and members of the judiciary who would use such technology.

With these challenges in mind, regulations on the use of AI must be carefully considered, particularly with respect to how the judiciary uses it. There has always been, and will always be, a degree of mistrust in technologies such as these, but progress needs to be made gradually rather than drastically at this point, and not without understanding the legal, financial and security implications. The following actions must be taken when the usage of AI is eventually implemented:


Filings buzz in the railway industry: Increase in artificial intelligence mentions – Railway Technology

Mentions of artificial intelligence within the filings of companies in the railway industry rose 64% between the first and second quarters of 2021.

In total, the frequency of sentences related to artificial intelligence between July 2020 and June 2021 was 137% higher than in 2016, when GlobalData, from whom our data for this article is taken, first began to track the key issues referred to in company filings.

When companies in the railway industry publish annual and quarterly reports, ESG reports and other filings, GlobalData analyses the text and identifies individual sentences that relate to disruptive forces facing companies in the coming years. Artificial intelligence is one of these topics - companies that excel and invest in these areas are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

To assess whether artificial intelligence is featuring more in the summaries and strategies of companies in the railway industry, two measures were calculated. Firstly, we looked at the percentage of companies which have mentioned artificial intelligence at least once in filings during the past twelve months - this was 78% compared to 52% in 2016. Secondly, we calculated the percentage of total analysed sentences that referred to artificial intelligence.
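A minimal sketch of how those two measures could be computed is below; the data structure and keyword matching are assumptions for illustration, not GlobalData's actual pipeline.

from typing import Dict, List, Tuple

def ai_mention_measures(filings: Dict[str, List[str]]) -> Tuple[float, float]:
    """filings maps a company name to the analysed sentences from its filings."""
    keyword = "artificial intelligence"
    # Measure 1: share of companies mentioning the topic at least once.
    mentioned = [any(keyword in s.lower() for s in sents)
                 for sents in filings.values()]
    # Measure 2: share of all analysed sentences that mention the topic.
    total_sentences = sum(len(sents) for sents in filings.values())
    ai_sentences = sum(1 for sents in filings.values()
                       for s in sents if keyword in s.lower())
    pct_companies = 100.0 * sum(mentioned) / len(filings)
    pct_sentences = 100.0 * ai_sentences / total_sentences
    return pct_companies, pct_sentences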

Of the 50 biggest employers in the railway industry, Hitachi Transport System, Ltd. was the company which referred to artificial intelligence the most between July 2020 and June 2021. GlobalData identified 83 artificial intelligence-related sentences in the Japan-based company's filings - 2.4% of all sentences. XPO Logistics Inc mentioned artificial intelligence the second most - the issue was referred to in 1.3% of sentences in the company's filings. Other top employers with high artificial intelligence mentions included East Japan Railway Co, Yamato Holdings Co Ltd and ID Logistics Group.

Across all companies in the railway industry the filing published in the second quarter of 2021 which exhibited the greatest focus on artificial intelligence came from XPO Logistics Inc. Of the document's 1,093 sentences, 11 (1%) referred to artificial intelligence.

This analysis provides an approximate indication of which companies are focusing on artificial intelligence and how important the issue is considered within the railway industry, but it also has limitations and should be interpreted carefully. For example, a company mentioning artificial intelligence more regularly is not necessarily proof that they are utilising new techniques or prioritising the issue, nor does it indicate whether the company's ventures into artificial intelligence have been successes or failures.

GlobalData also categorises artificial intelligence mentions by a series of subthemes. Of these subthemes, the most commonly referred to topic in the second quarter of 2021 was 'smart robots', which made up 82% of all artificial intelligence subtheme mentions by companies in the railway industry.

By Andrew Hillman.

Methodology:

GlobalData's unique job analytics enables understanding of hiring trends, strategies, and predictive signals across sectors, themes, companies, and geographies. Intelligent web crawlers capture data from publicly available sources. Key parameters include active, posted and closed jobs, posting duration, experience, seniority level, educational qualifications and skills.



Artificial Intelligence Is Smart, but It Doesnt Play Well With Others – SciTechDaily

Humans find AI to be a frustrating teammate when playing a cooperative game together, posing challenges for teaming intelligence, study shows.

When it comes to games such as chess or Go, artificial intelligence (AI) programs have far surpassed the best players in the world. These superhuman AIs are unmatched competitors, but collaborating with humans may be harder than competing against them. Can the same technology get along with people?

In a new study, MIT Lincoln Laboratory researchers sought to find out how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it has never met before. In single-blind experiments, participants played two series of the game: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS).

When playing the cooperative card game Hanabi, humans felt frustrated and confused by the moves of their AI teammate. Credit: Bryan Mastergeorge

"It really highlights the nuanced distinction between creating AI that performs objectively well and creating AI that is subjectively trusted or preferred," says Ross Allen, co-author of the paper and a researcher in the Artificial Intelligence Technology Group. "It may seem those things are so close that there's not really daylight between them, but this study showed that those are actually two separate problems. We need to work on disentangling those."

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges like defending from missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning.

A reinforcement learning AI is not told which actions to take; instead, it discovers which actions yield the most numerical reward by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren't programmed to follow if/then statements, because the possible outcomes of the human tasks they're slated to tackle, like driving a car, are far too many to code.

"Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data," Allen says. "The sky's the limit in what it could, in theory, do."
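For readers unfamiliar with the technique, here is a generic tabular Q-learning sketch of the trial-and-error idea described above. It assumes a Gym-style environment interface and is only an illustration of the method, not the agent used in the study.

import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    # Assumes env exposes num_actions, reset() -> state,
    # and step(action) -> (next_state, reward, done).
    q = defaultdict(lambda: [0.0] * env.num_actions)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore occasionally; otherwise exploit the best-known action.
            if random.random() < epsilon:
                action = random.randrange(env.num_actions)
            else:
                action = max(range(env.num_actions), key=lambda a: q[state][a])
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward reward plus discounted future value.
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q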

Today, researchers are using Hanabi to test the performance of reinforcement learning models developed for collaboration, in much the same way that chess has served as a benchmark for testing competitive AI for decades.

The game of Hanabi is akin to a multiplayer form of Solitaire. Players work together to stack cards of the same suit in order. However, players may not view their own cards, only the cards that their teammates hold. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card from their own hand to stack next.
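The defining twist, that each player sees every hand but their own, can be sketched in a few lines; this is a simplified illustration of the information structure, not a full implementation of the rules.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Card = Tuple[str, int]  # (suit, rank)

@dataclass
class HanabiState:
    hands: List[List[Card]]  # one hand per player
    fireworks: Dict[str, int] = field(default_factory=dict)  # suit -> top rank stacked
    hint_tokens: int = 8     # hints are a strictly limited resource

    def view_for(self, player: int) -> List[List[Card]]:
        # Players may not view their own cards, only their teammates' hands.
        return [hand for i, hand in enumerate(self.hands) if i != player]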

The Lincoln Laboratory researchers did not develop either the AI or rule-based agents used in this experiment. Both agents represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the team achieved the highest-ever score for Hanabi play between two unknown AI agents.

"That was an important result," Allen says. "We thought, if these AI that have never met before can come together and play really well, then we should be able to bring humans that also know how to play very well together with the AI, and they'll also do very well. That's why we thought the AI team would objectively play better, and also why we thought that humans would prefer it, because generally we'll like something better if we do well."

Neither of those expectations came true. Objectively, there was no statistical difference in the scores between the AI and the rule-based agent. Subjectively, all 29 participants reported in surveys a clear preference toward the rule-based teammate. The participants were not informed which agent they were playing with for which games.

"One participant said that they were so stressed out at the bad play from the AI agent that they actually got a headache," says Jaime Pena, a researcher in the AI Technology and Systems Group and an author on the paper. "Another said that they thought the rule-based agent was dumb but workable, whereas the AI agent showed that it understood the rules, but that its moves were not cohesive with what a team looks like. To them, it was giving bad hints, making bad plays."

This perception of the AI making bad plays links to surprising behavior researchers have observed previously in reinforcement learning work. For example, in 2016, when DeepMind's AlphaGo first defeated one of the world's best Go players, one of the most widely praised moves made by AlphaGo was move 37 in game 2, a move so unusual that human commentators thought it was a mistake. Later analysis revealed that the move was actually extremely well-calculated, and was described as "genius."

Such moves might be praised when an AI opponent performs them, but they're less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that strange or seemingly illogical moves were the worst offenders in breaking humans' trust in their AI teammate in these closely coupled teams. Such moves not only diminished players' perception of how well they and their AI teammate worked together, but also how much they wanted to work with the AI at all, especially when any potential payoff wasn't immediately obvious.

"There was a lot of commentary about giving up, comments like 'I hate working with this thing,'" adds Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group.

Participants who rated themselves as Hanabi experts, which the majority of players in this study did, more often gave up on the AI player. Siu finds this concerning for AI developers, because key users of this technology will likely be domain experts.

"Let's say you train up a super-smart AI guidance assistant for a missile defense scenario. You aren't handing it off to a trainee; you're handing it off to your experts on your ships who have been doing this for 25 years. So, if there is a strong expert bias against it in gaming scenarios, it's likely going to show up in real-world ops," he adds.

The researchers note that the AI used in this study wasn't developed for human preference. But that's part of the problem: not many are. Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

"If researchers don't focus on the question of subjective human preference, then we won't create AI that humans actually want to use," Allen says. "It's easier to work on AI that improves a very clean number. It's much harder to work on AI that works in this mushier world of human preferences."

Solving this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, under which this experiment was funded in Lincoln Laboratory's Technology Office, in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project is studying what has prevented collaborative AI technology from leaping out of the game space and into messier reality.

The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year.

"You can imagine we rerun the experiment, but after the fact, and this is much easier said than done, the human could ask, 'Why did you do that move, I didn't understand it?' If the AI could provide some insight into what they thought was going to happen based on their actions, then our hypothesis is that humans would say, 'Oh, weird way of thinking about it, but I get it now,' and they'd trust it. Our results would totally change, even though we didn't change the underlying decision-making of the AI," Allen says.

Like a huddle after a game, this kind of exchange is often what helps humans build camaraderie and cooperation as a team.

"Maybe it's also a staffing bias. Most AI teams don't have people who want to work on these squishy humans and their soft problems," Siu adds, laughing. "It's people who want to do math and optimization. And that's the basis, but that's not enough."

Mastering a game such as Hanabi between AI and humans could open up a universe of possibilities for teaming intelligence in the future. But until researchers can close the gap between how well an AI performs and how much a human likes it, the technology may well remain at machine versus human.

Reference: "Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi" by Ho Chit Siu, Jaime D. Pena, Kimberlee C. Chang, Edenna Chen, Yutai Zhou, Victor J. Lopez, Kyle Palko and Ross E. Allen, accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS). arXiv:2107.07630


AI That Can Learn Cause-and-Effect: These Neural Networks Know What They’re Doing – SciTechDaily

A certain type of artificial intelligence agent can learn the cause-and-effect basis of a navigation task during training.

Neural networks can learn to solve all sorts of problems, from identifying cats in photographs to steering a self-driving car. But whether these powerful, pattern-recognizing algorithms actually understand the tasks they are performing remains an open question.

For example, a neural network tasked with keeping a self-driving car in its lane might learn to do so by watching the bushes at the side of the road, rather than learning to detect the lanes and focus on the road's horizon.

Researchers at MIT have now shown that a certain type of neural network is able to learn the true cause-and-effect structure of the navigation task it is being trained to perform. Because these networks can understand the task directly from visual data, they should be more effective than other neural networks when navigating in a complex environment, like a location with dense trees or rapidly changing weather conditions.

In the future, this work could improve the reliability and trustworthiness of machine learning agents that are performing high-stakes tasks, like driving an autonomous vehicle on a busy highway.

MIT researchers have demonstrated that a special class of deep learning neural networks is able to learn the true cause-and-effect structure of a navigation task during training. Credit: Stock Image

"Because these machine-learning systems are able to perform reasoning in a causal way, we can know and point out how they function and make decisions. This is essential for safety-critical applications," says co-lead author Ramin Hasani, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Co-authors include electrical engineering and computer science graduate student and co-lead author Charles Vorbach; CSAIL PhD student Alexander Amini; Institute of Science and Technology Austria graduate student Mathias Lechner; and senior author Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of CSAIL. The research will be presented at the 2021 Conference on Neural Information Processing Systems (NeurIPS) in December.

Neural networks are a method for doing machine learning in which the computer learns to complete a task through trial-and-error by analyzing many training examples. And liquid neural networks change their underlying equations to continuously adapt to new inputs.
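As a rough illustration of the "changing equations" idea, here is a hedged sketch of a liquid time-constant style update, loosely in the spirit of Hasani and colleagues' work; the published models differ in detail, so treat this as a conceptual sketch rather than the paper's formulation.

import numpy as np

def liquid_cell_step(x, inputs, W, U, b, tau, A, dt=0.01):
    # One explicit-Euler step of dx/dt = -(1/tau + f) * x + f * A.
    # f depends on both the state and the current input, so the cell's
    # effective time constant shifts as new inputs arrive.
    f = np.tanh(W @ x + U @ inputs + b)
    dx = -(1.0 / tau + f) * x + f * A  # A is the target each unit is pulled toward
    return x + dt * dx

Because f re-weights the decay term at every step, the dynamics themselves adapt to the input stream, which is what distinguishes these cells from a fixed recurrent update.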

The new research draws on previous work in which Hasani and others showed how a brain-inspired type of deep learning system called a Neural Circuit Policy (NCP), built by liquid neural network cells, is able to autonomously control a self-driving vehicle, with a network of only 19 control neurons.

The researchers observed that the NCPs performing a lane-keeping task kept their attention on the road's horizon and borders when making a driving decision, the same way a human would (or should) while driving a car. Other neural networks they studied didn't always focus on the road.

"That was a cool observation, but we didn't quantify it. So, we wanted to find the mathematical principles of why and how these networks are able to capture the true causation of the data," he says.

They found that, when an NCP is being trained to complete a task, the network learns to interact with the environment and account for interventions. In essence, the network recognizes if its output is being changed by a certain intervention, and then relates the cause and effect together.

During training, the network is run forward to generate an output, and then backward to correct for errors. The researchers observed that NCPs relate cause-and-effect during forward-mode and backward-mode, which enables the network to place very focused attention on the true causal structure of a task.

Hasani and his colleagues didn't need to impose any additional constraints on the system or perform any special setup for the NCP to learn this causality.

"Causality is especially important to characterize for safety-critical applications such as flight," says Rus. "Our work demonstrates the causality properties of Neural Circuit Policies for decision-making in flight, including flying in environments with dense obstacles such as forests and flying in formation."

They tested NCPs through a series of simulations in which autonomous drones performed navigation tasks. Each drone used inputs from a single camera to navigate.

The drones were tasked with traveling to a target object, chasing a moving target, or following a series of markers in varied environments, including a redwood forest and a neighborhood. They also traveled under different weather conditions, like clear skies, heavy rain, and fog.

The researchers found that the NCPs performed as well as the other networks on simpler tasks in good weather, but outperformed them all on the more challenging tasks, such as chasing a moving object through a rainstorm.

"We observed that NCPs are the only networks that pay attention to the object of interest in different environments while completing the navigation task, wherever you test them, and in different lighting or environmental conditions. This is the only system that can do this causally and actually learn the behavior we intend the system to learn," he says.

Their results show that the use of NCPs could also enable autonomous drones to navigate successfully in environments with changing conditions, like a sunny landscape that suddenly becomes foggy.

"Once the system learns what it is actually supposed to do, it can perform well in novel scenarios and environmental conditions it has never experienced. This is a big challenge of current machine learning systems that are not causal. We believe these results are very exciting, as they show how causality can emerge from the choice of a neural network," he says.

In the future, the researchers want to explore the use of NCPs to build larger systems. Putting thousands or millions of networks together could enable them to tackle even more complicated tasks.

Reference: "Causal Navigation by Continuous-time Neural Networks" by Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner and Daniela Rus, 15 June 2021, Computer Science > Machine Learning. arXiv:2106.08314

This research was supported by the United States Air Force Research Laboratory, the United States Air Force Artificial Intelligence Accelerator, and the Boeing Company.


Is it True that the USA Has Already Lost the Artificial Intelligence Battle with China? – BBN Times

China is overtaking the U.S. in artificial intelligence (AI), setting off alarm bells on the other side of the Pacific as the world's two largest economies are battling for world supremacy.

Artificial intelligence is widely used in a range of industries and greatly affects a nation's competitiveness and security.

The United States of America is losing artificial intelligence supremacy to China.

The increasing importance of information in military affairs and warfare is making digital technology and its applications, such as analytics, AI, and augmented reality, indispensable to future conflicts.

It is fascinating and at the same time scary to see what the future of war may look like, and how devastating the aftermath can be.

Artificial intelligence weapons can attack with increased speed and precision compared to existing military weapons.

China's global share of research papers in the field of AI vaulted from 4.26% (1,086) in 1997 to 27.68% (37,343) in 2017, surpassing every other country in the world, including the U.S., a position it continues to hold.

Beijing also consistently files more AI patents than any other country. As of March 2019, the number of Chinese AI firms had reached 1,189, second only to the U.S., which has more than 2,000 active AI firms. These firms focus more on speech (e.g., speech recognition, speech synthesis) and vision (e.g., image recognition, video recognition) than their overseas counterparts.

China is also very active in weaponizing artificial intelligence, machine learning, and deep learning technology.

China's military applications of AI include unmanned intelligent combat systems, enhanced battlefield situational awareness and decision-making, multi-domain offense and defense, and advanced training, simulation, and wargaming practices.

As an example, the launch in August of a nuclear-capable rocket that circled the globe took US intelligence by surprise.


China recently tested a nuclear-capable manoeuvrable missile, and Russia and the US have their own programmes. China's large population gives it advantages in generating and utilizing big data, and its decades-long effort to promote technology and engineering gives it a rich supply of high-quality computer scientists and engineers.

US National Intelligence has recently reported that a superpower needs to lead in five key technologies (source: Forbes).

Beijing has won the artificial intelligence battle with Washington and is heading towards global dominance because of its technological advances.

China is likely to dominate many of the key emerging technologies, particularly artificial intelligence, synthetic biology and genetics within a decade.

The country has a vibrant market that is receptive to these new AI-based products, and Chinese firms are relatively fast in bringing AI products and services to the market.

Chinese consumers are also fast in adopting such products and services. As such, the environment supports rapid refinement of AI technologies and AI-powered products.

Beijing's market is conducive to the adoption and improvement of artificial intelligence.


How Will Health Care Regulators Address Artificial Intelligence? – The Regulatory Review

Policymakers around the world are developing guidelines for use of artificial intelligence in health care.

Baymax, the robotic health aide and unlikely hero from the movie Big Hero 6, is an adorable cartoon character and an outlandish vision of a high-tech future. But underlying Baymax's character is the very realistic concept of an artificial intelligence (AI) system applied to health care.

As AI technology advances, how will regulators encourage innovation while protecting patient safety?

AI does not have a precise definition, but the term generally describes machines that have the capacity to process and respond to stimulation in a manner similar to human thought processes. Many industriessuch as the military, academia, and health carerely on AI today.

For decades, health care professionals have used AI to increase efficiency and enhance the quality of patient care. For example, radiologists employ AI to identify signs of certain diseases in medical imaging. Tech companies are also partnering with health care providers to develop AI-based predictive models to increase the accuracy of diagnoses. A recent study applied AI to predict COVID-19 based on self-reported symptoms.

In the wake of the COVID-19 pandemic and the rise of telemedicine, experts predict that AI technology will continue to be used to prevent and treat illness and will become more prevalent in the health care industry.

The use of AI in health care may improve patient care, but it also raises issues of data privacy and health equity. Although the health care sector is heavily regulated, no regulations target the use of AI in health care settings. Several countries and organizations, including the United States, have proposed regulations addressing the use of AI in health care, but no regulations have been adopted.

Even beyond the context of health care, policymakers have only begun to develop rules for the use of AI. Some existing data privacy laws and industry-specific regulations do apply to the use of AI, but no country has enacted AI-specific regulations. In January 2021, the European Union released its proposal for the first regulatory framework for the use of AI. The proposal establishes a procedure for new AI products entering the market and imposes heightened standards for applications of AI that are considered high risk.

The EU's suggested framework provides some examples of high-risk applications of AI related to health care, such as the use of AI to triage emergency aid. Although the EU's proposal does not focus on the health care industry in particular, experts predict that the EU regulations will serve as a framework for future, more specific guidelines.

The EU's proposal strikes a balance between ensuring the safety and security of the AI market and continuing to promote innovation and investment in AI. These competing values also appear in U.S. proposals to address AI in health care. Both the U.S. Food and Drug Administration (FDA) and the U.S. Department of Health and Human Services (HHS) more broadly have begun to develop guidelines on the use of AI in the health industry.

In 2019, FDA published a discussion paper outlining a proposed regulatory framework for modifications to AI-based software as a medical device (SaMD). FDA defines AI-based SaMD as software intended to treat, diagnose, cure, mitigate, or prevent disease. In the discussion paper, FDA asserts its commitment to ensuring that AI-based SaMD will deliver safe and effective software functionality that improves the quality of care that patients receive. FDA outlines the regulatory approval cycle for AI-based SaMD, which requires a holistic evaluation of both the product and its maker.

Earlier this year, FDA released an action plan for the regulation of AI-based SaMD that reaffirmed its commitment to encourage the development of AI best practices. HHS has also announced its strategy for the regulation of AI applied in health care settings. As with FDA and the EU, HHS balances the health and well-being of patients with the continued innovation of AI technology.

The United States is not alone in its attempt to monitor and govern the use of AI in health care. Countries such as China, Japan, and South Korea have also released guidelines and proposals seeking to ensure patient safety. In June 2021, the World Health Organization (WHO) issued a report on the use of AI in health care and offered six guiding principles for AI regulation: protecting autonomy; promoting safety; ensuring transparency; fostering responsibility; ensuring equity; and promoting sustainable AI.

Scholars are also discussing the use of AI in health care. Some experts have urged policymakers to develop AI systems designed to advance health equity. Others warn that algorithmic bias and unequal data collection in AI can exacerbate existing health inequalities. Experts argue that, to mitigate the risk of discriminatory AI practices, policymakers should consider the unintended consequences of the use of AI.

For example, AI systems must be trained to recognize patterns in data, and the training data may reflect historical discrimination. One study showed that women are less likely to receive certain treatments than men even though they are more likely to need them. Similarly biased data would train an AI system to perpetuate this pattern of discrimination. Health care regulators must address the need to protect patients from potential inequalities without discouraging the development of life-saving innovation in AI.
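
A small sketch shows how this happens mechanically. In the synthetic example below, all data and coefficients are made up for illustration: historical treatment decisions under-treat women at equal clinical need, and a model fit to those labels reproduces the gap.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical illustration: synthetic records in which past treatment
# decisions were biased against women at equal clinical need.
rng = np.random.default_rng(0)
n = 10_000
is_female = rng.integers(0, 2, n)                           # 1 = female, 0 = male
need = rng.normal(loc=is_female * 0.3, scale=1.0, size=n)   # women slightly higher need

# Historical labels: treatment tracked need, but the -1.2 term encodes
# the assumed bias (women treated less often at the same level of need).
logit = 1.5 * need - 1.2 * is_female
treated = rng.random(n) < 1 / (1 + np.exp(-logit))

# A model trained on these labels learns the bias as if it were signal.
model = LogisticRegression().fit(np.column_stack([need, is_female]), treated)

for g, name in [(0, "men"), (1, "women")]:
    mask = is_female == g
    rate = model.predict(np.column_stack([need[mask], is_female[mask]])).mean()
    print(f"predicted treatment rate for {name}: {rate:.2%}")

Despite women's higher average need, the model recommends treatment for them less often, because the historical labels, not clinical need alone, define what it learns.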

As the use of AI becomes more prominent in health care, regulators in the United States and elsewhere find themselves considering more robust regulations to ensure quality of care.

Read more here:
How Will Health Care Regulators Address Artificial Intelligence? - The Regulatory Review

UC adopts recommendations for the responsible use of Artificial Intelligence – Preuss School Ucsd


The University of California Presidential Working Group on Artificial Intelligence was launched in 2020 by University of California President Michael V. Drake and former UC President Janet Napolitano to assist UC in determining a set of responsible principles to guide procurement, development, implementation, and monitoring of artificial intelligence (AI) in UC operations.

To support these goals, the working group developed a set of UC Responsible AI Principles and explored four high-risk application areas: health, human resources, policing, and student experience. The working group has published a final report that explores current and future applications of AI in these areas and provides recommendations for how to operationalize the UC Responsible AI Principles. The report concludes with overarching recommendations to help guide UC's strategy for determining whether and how to responsibly implement AI in its operations.

Camille Nebeker, Ed.D., associate professor with appointments in the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science and the Design Lab, and two researchers in the Department of Computer Science and Engineering, Nadia Heninger, Ph.D., associate professor whose work focuses on cryptography and security, and Lawrence Saul, Ph.D., professor whose research interests are machine learning and data analysis, were members of the working group.

"The use of artificial intelligence within the UC campuses cuts across human resources, procurement, policing, student experience and healthcare. We, as an organization, did not have guiding principles to support responsible decision-making around AI," said Nebeker, who co-founded and directs the Research Center for Optimal Digital Ethics in Health at UC San Diego, a multidisciplinary group that conducts research and provides education to support ethical digital health study practices.

The UC Presidential Working Group on AI has met over the past year to develop principles to advance responsible practices specific to the selection, implementation and management of AI systems.

With universities increasingly turning to AI-enabled tools to support greater efficiency and effectiveness, UC is setting an important precedent as one of the first universities, and the largest public university system, to develop governance processes for the responsible use of AI. More info is available on the UC Newsroom.

View post:
UC adopts recommendations for the responsible use of Artificial Intelligence - Preuss School Ucsd

Artificial Intelligence project aims to improve standards and development of AI systems – University of Birmingham

A new project has been launched in partnership with the University of Birmingham that aims to address racial and ethnic health inequalities using artificial intelligence (AI).

STANDING Together, led by University Hospitals Birmingham NHS Foundation Trust (UHB), aims to develop standards for the datasets that AI systems use, to ensure they are diverse, inclusive and work across all demographic groups. The resulting standards will help regulators, commissioners, policymakers and health data institutions assess whether AI systems are underpinned by datasets that represent everyone and don't leave underrepresented or minority groups behind.
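
The project's standards are still to be written, but a dataset audit of the kind they imply can be sketched as follows. The column name, reference population shares, and underrepresentation threshold here are all hypothetical, chosen only to illustrate the idea of checking a dataset's demographic coverage against a reference population.

import pandas as pd

# Hypothetical sketch of a dataset-representation audit: compare each
# demographic group's share of the data against a reference population
# and flag groups that fall below a chosen threshold.
def representation_report(df: pd.DataFrame, group_col: str,
                          reference: dict, tolerance: float = 0.5) -> pd.DataFrame:
    """Flag groups whose dataset share falls below `tolerance` times
    their share of the reference population (thresholds are illustrative)."""
    shares = df[group_col].value_counts(normalize=True)
    rows = []
    for group, ref_share in reference.items():
        data_share = shares.get(group, 0.0)
        rows.append({
            "group": group,
            "dataset_share": round(data_share, 3),
            "population_share": ref_share,
            "underrepresented": data_share < tolerance * ref_share,
        })
    return pd.DataFrame(rows)

# Illustrative use with made-up figures:
df = pd.DataFrame({"ethnicity": ["A"] * 900 + ["B"] * 80 + ["C"] * 20})
print(representation_report(df, "ethnicity",
                            reference={"A": 0.70, "B": 0.20, "C": 0.10}))

A real standard would go further, for example by also requiring per-group performance reporting, but even this simple check makes underrepresentation visible before a model is trained.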

Xiao Liu, Clinical Researcher in Artificial Intelligence and Digital Healthcare at the University of Birmingham and UHB, and STANDING Together project co-leader, said: "We're looking forward to starting work on our project, and developing standards that we hope will improve the use of AI both in the UK and around the world. We believe AI has enormous potential to improve patient care, but through our earlier work on producing AI guidelines, we also know that there is still lots of work to do to make sure AI is a success story for all patients. Through the STANDING Together project, we will work to ensure AI benefits all patients and not just the majority."

NHSX's NHS AI Lab, the NIHR, and the Health Foundation have awarded a total of £1.4m to four projects, including STANDING Together. The other organisations working with UHB and the University of Birmingham on STANDING Together are the Massachusetts Institute of Technology, Health Data Research UK, Oxford University Hospitals NHS Foundation Trust, and The Hospital for Sick Children (SickKids) in Toronto.

The NHS AI Lab introduced the AI Ethics Initiative to support research and practical interventions that complement existing efforts to validate, evaluate and regulate AI-driven technologies in health and care, with a focus on countering health inequalities. Today's announcement is the result of the Initiative's partnership with The Health Foundation on a research competition, enabled by NIHR, to understand and enable opportunities to use AI to address inequalities and to optimise datasets and improve AI development, testing and deployment.

Brhmie Balaram, Head of AI Research and Ethics at NHSX, said: "We're excited to support innovative projects that demonstrate the power of applying AI to address some of our most pressing challenges; in this case, we're keen to prove that AI can potentially be used to close gaps in minority ethnic health outcomes. Artificial intelligence has the potential to revolutionise care for patients, and we are committed to ensuring that this potential is realised for all patients by accounting for the health needs of diverse communities."

Dr Indra Joshi, Director of the NHS AI Lab at NHSX, added: "As we strive to ensure NHS patients are amongst the first in the world to benefit from leading AI, we also have a responsibility to ensure those technologies don't exacerbate existing health inequalities. These projects will ensure the NHS can deploy safe and ethical Artificial Intelligence tools that meet the needs of minority communities and help our workforce deliver patient-centred and inclusive care to all."

Excerpt from:
Artificial Intelligence project aims to improve standards and development of AI systems - University of Birmingham