AI and Big Data Analytics: Executive Q&A with David Tareen of SAS – Datamation

The term artificial intelligence dates back to at least the 1950s, and yet, it seems that AI is still in its infancy, given its vast potential use cases and society-changing ceiling.

As AI experts develop a better understanding of both the big data models and applications of artificial intelligence, how can we expect to see the AI market, including machine learning (ML), change? Perhaps more crucially, how can we expect to see other industries transformed as a result of that change?

David Tareen, director for artificial intelligence at SAS, a top AI and analytics company, offered Datamation his insights into the current and future landscape of enterprise AI solutions:

At SAS, Tareen helps clients understand and apply AI and analytics. After 17 years in the IT industry and having been part of the cloud, mobile, and social revolutions in IT, he believes that AI holds the most potential for changing the world around us. In previous roles, Tareen led product and marketing teams at IBM and Lenovo. He has a master's degree in business administration from the University of North Carolina at Chapel Hill.

Datamation: How did you first get started in or develop an interest in AI?

Tareen: My first introduction to AI was in a meeting with a European government agency, which wanted to build a very large computer (a supercomputer, so to speak) that could perform a quintillion (1 followed by 18 zeros) calculations per second. I was curious what work this computer would be doing to require such fast performance, and the answers were fascinating to me. That was my first real introduction to AI and the possibilities it could unlock.

Datamation: What are your primary responsibilities in your current role?

Tareen: My primary role at SAS is to improve understanding of AI and analytics and what benefits these technologies can deliver. The AI market segment is noisy, and it is often difficult for clients to separate fact from fiction when it comes to AI. I help our customers understand where AI and analytics can benefit them and exactly how the process will work.

Datamation: What makes SAS a unique place to work?

Tareen: SAS is unlike any other organization. I would say what sets us apart is a deep-seated desire to prove the power of AI and analytics. We are convinced that AI and any of the underlying AI technologies, such as deep learning, conversational AI, computer vision, natural language processing, and others, can have a positive impact on not only our customers and their organizations, but on the world as well. And we are on a mission to showcase these benefits through our capabilities. This relentless and singular focus sets us apart.

More on analytics: Data Analytics Market Review

Datamation: What sets SAS AI solutions or vision apart from the competition?

Tareen: There are two areas that make our AI capabilities unique:

First is a focus on the end-to-end process. AI is about more than building machine learning or deep learning models. It requires data management, modeling, and finally being able to make decisions from those models. Over the years, SAS has tightly integrated these capabilities, so that an organization can go from questions to decisions using AI and analytics.

Second, our customers often need more than one analytics method to solve a problem. Composite AI is a new term coined by Gartner that aligns with what we have traditionally called multidisciplinary analytics. These methods include machine learning, deep learning, computer vision, natural language, forecasting, optimization, and even statistics. Our ability to provide all of these methods helps our customers solve any challenge with AI and analytics.

Datamation: What do you think makes an AI product or service successful?

Tareen: The key to making an AI product or service successful is to deliver real-world results. In the past, organizations would have little to show for their AI investments because of the hyper-focus on model building and model performance. Today, there is a better understanding that for an AI product or service to be successful, it has to have all the other elements that will help make an outcome better or a process faster or cheaper.

Datamation: What is an affordable/essential AI solution that businesses of all sizes should implement?

Tareen: An absolute must for businesses of any size is a better understanding of their customers, and AI is becoming an essential tool to accomplish this. The ability to communicate with a customer the way they like, at the right time and place, with the right message and the right offer, while making those predictions without running afoul of data privacy regulations: that is an essential solution all businesses, regardless of size, should implement.

Datamation: How does AI advance data analytics and other big data strategies?

Tareen: With large volumes of data, applying AI to the data itself is a must. AI capabilities can help untangle elements within data, so it can be used to make decisions. For example, we now use AI to recognize information within large data sets and then organize them in accordance with company policy or local regulations. At SAS, we use AI to spot potential privacy issues, lack of diversity, or even errors within big data. Once these issues are identified, they can be managed and then automated, so that new data coming into the database will automatically get the same treatment as it is recognized by AI.
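As a concrete illustration of the idea of automatically recognizing and treating sensitive information in incoming data, here is a minimal, hypothetical sketch of rule-based PII flagging and masking. It is not SAS's implementation (production systems combine trained models with rules); the patterns and function names are invented for this example.

```python
import re

# Hypothetical patterns for two common PII types; a production system
# would pair trained models with rules, not rely on regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_pii(record: str) -> set:
    """Return the PII categories detected in a free-text record."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(record)}

def treat(record: str) -> str:
    """Mask detected PII so new data automatically gets the same treatment."""
    for pat in PII_PATTERNS.values():
        record = pat.sub("[REDACTED]", record)
    return record

record = "Contact jane@example.com or 919-555-1234"
print(sorted(flag_pii(record)))  # ['email', 'phone']
print(treat(record))             # Contact [REDACTED] or [REDACTED]
```

Once records are flagged this way, the same `treat` step can be applied automatically to every new record entering the database, which is the automation pattern described above.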

Also read: Artificial Intelligence Market

Datamation: What do you think are some of the top trends in artificial intelligence right now?

Tareen: In terms of what's trending in AI, generally there is a lot more maturity when it comes to approaching productive deployments for AI across industries. Gone are the days of investing in building the perfect model. The focus now is on the broader ecosystem needed to deliver AI projects and realize enhanced value. This broader ecosystem includes investing in data management capabilities and deploying and governing AI and analytical assets to ensure they deliver value. Organizations that look at AI beyond just model development will be more productive with their AI initiatives.

Additionally, the notion that AI should be used for unique breakthrough projects has evolved. Now organizations find value in applying AI techniques to established projects to achieve best-in-class results. For example, manufacturers with good quality discipline can save significant costs by applying computer vision to existing processes. Another example is retailers that use machine learning techniques to improve forecasts and save on inventory and product waste costs.

Datamation: What subcategories of artificial intelligence are most widely used, and how are they currently used?

Tareen: AI is really a set of different technologies, such as machine learning, deep learning, computer vision, natural language, and others. All these technologies are finding success in different industries and across different parts of organizations.

Machine learning and deep learning are the two areas seeing the broadest use with the most promising results. ML can detect patterns in data and make predictions without being told what to look for. Deep learning does the same but gets better results with bigger and more complex data (e.g., video, images). As these capabilities are applied to traditional approaches to segmenting, forecasting, customer service, and other areas, organizations find they get better results with AI technologies.
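The phrase "detect patterns without being told what to look for" describes unsupervised learning. As a heavily simplified, hypothetical illustration (not SAS software; the data and function names are invented), a one-dimensional k-means pass can separate customer spend into low and high segments on its own:

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster 1-D values into k groups (centroids seeded at min/max, so k=2)."""
    centroids = [min(values), max(values)]
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Monthly spend of seven customers: the algorithm separates low and
# high spenders with no labels provided.
spend = [12, 15, 14, 200, 210, 190, 11]
centroids, groups = kmeans_1d(spend)
print(centroids)  # [13.0, 200.0]
```

A segmentation like this is the simplest version of the customer-segmenting use case mentioned above; real deployments cluster many features at once rather than a single spend column.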

Datamation: What industry (or industries) do you think does a good job of maximizing AI in their operations/products? What do you think they do well?

Tareen: Businesses need to think of AI as more than one technology. Just like people use different senses (e.g., listening, seeing, calculating, imagining) to make decisions, AI can make better decisions when used in a composite way. The most productive organizations combine AI capabilities of computer vision, natural language, optimization, and machine learning into solutions and workflows, which leads to better decisions than their competitors.

Manufacturers are using computer vision to identify quality issues and reduce waste. Banks are having success using conversational AI and natural language processing to improve marketing and sales. Retailers are having success using machine learning in forecasting techniques. As AI gains broader adoption, we should expect to see organizations use a mix of AI capabilities for improved outcomes across different business units and areas.

Datamation: How has the COVID-19 pandemic affected you/your colleagues/your clients' approach to artificial intelligence?

Tareen: The pandemic upended expected business trajectories and exposed the weaknesses in machine learning systems dependent on large amounts of representative historical data, including well-bounded and reasonably predictable patterns. As a result, there is a business need to reinforce the analytics core and bolster investments in traditional analytics teams and techniques better suited to rapid data discovery and hypothesizing.

As companies adapt to the new normal, one of the primary questions we're asked is how to retrain AI models with a more diverse data set. When COVID hit, the analytical models making good predictions started underperforming. For example, airports use SAS predictive modeling to understand and improve aircraft traffic flow. However, these models had to be retrained and additional data sources added before the models could start accurately predicting the new normal traffic pattern.

More on this topic: How COVID-19 Is Driving Digital Transformation

Datamation: What do you think we'll see more of in the AI space in the next 5-10 years? What areas will grow the most over the next decade?

Tareen: A complex area where I hope to see growth over the next 5-10 years has large implications for the world: AI algorithms becoming more imaginative. Imagination is something that comes very easily to us humans. For example, a child can see a table as both a table and a hiding place to use when playing a game of hide-and-go-seek. For an AI algorithm, the analogous process of learning from one data domain and applying that learning to a different data domain is very complex. Transfer learning is a start, however, and as AI gets better at imagination, it will have the potential to better diagnose disease or spot root causes of climate change. I hope this is an area that will grow in the next decade.

Datamation: What does AI equity mean to you? How can more businesses get started in AI development or product use?

Tareen: From its inception until now, AI has been used exclusively by subject matter experts like data scientists. Today's trend is to lessen that need for subject matter experts and instead cascade the benefits of AI to the masses, recognizing the global value from wide-reaching benefits rather than isolated benefits realized by a select few. The targets for democratized AI include customers, business partners, the sales force, factory workers, application developers, and IT operations professionals, among others.

There are a couple of ways enterprises can push AI to a broader audience: simplify the tools and make them more intuitive. First, conversational AI helps because it makes interacting with AI simpler. You don't have to build complex models, but you can gain insights from your data by talking with your analytics. The second initiative is to make AI easier for everyone to consume. This means taking your data and algorithms to the cloud to improve accessibility and reduce costs.

Some leaders are surprised to learn that democratizing AI involves more than the process itself. Often culture tweaks or an entire cultural change must accompany the process. Leaders can practice transparency and good communication in their democratization initiatives to address concerns, adjust the pace of change, and successfully embed AI and analytics for everybody's use.

More on AI equity: AI Equity in Business Technology: An Interview With Marshall Choy of SambaNova Systems

Datamation: What are some ethical considerations for the market that should be part of AI development?

Tareen: There are numerous ethical considerations that should be part of AI development. These considerations range from data to algorithms to decisions.

For data, it is important to ensure that the data accurately represents the populations for which you are making decisions. For example, a data set should not under-represent genders or exclude low-income populations. Other ethical considerations include preserving privacy and Personally Identifiable Information (PII).

For algorithms, it is important to be able to explain decisions using plain language. A complex neural network may make accurate predictions, but the outcomes must be easily explainable to both data scientists as well as non-technologists. Another consideration is ensuring models are not biased when making predictions.

For decisions, it is important to ensure that controls are in place not only when models are implemented, but that decisions are monitored for transparency and fairness throughout their life cycle.

More on AI and ethics: The Ethics of Artificial Intelligence (AI)

Datamation: How have you seen AI innovations change since you first started? How have the technologies, services, conversations, and people changed over time?

Tareen: There have been many changes, but one shift has been fundamental. AI used to be overly focused on model building and model performance. Now, there is a realization that to deliver results, the focus must be on other areas as well, such as managing data, making decisions, and governing those decisions. Topics such as bias in data or models are starting to become common in conversations. These are signs of a market that is starting to understand the potential, and challenges, of this technology.

More on data and bias: Addressing Bias in Artificial Intelligence (AI)

Datamation: How do you stay knowledgeable about trends in the market? What resources do you like?

Tareen: My top two places to better understand trends are:

Datamation: How do you like to help or otherwise engage less experienced AI professionals?

Tareen: The key is to describe advanced AI capabilities in ways that are easily relatable and to find examples of customers we have helped in their specific industry.

Datamation: What do you like to do in your free time outside of work?

Tareen: One of the benefits of #saslife is work-life balance. I am a private pilot and fly a small aircraft out of Raleigh-Durham International Airport. North Carolina is a pretty state to fly over, so I take as many opportunities as possible to see this beautiful state from the air.

Datamation: If you had to work in any other industry or role, what would it be and why?

Tareen: My ideal role would be one where I can tell real stories about how technologies such as AI and analytics can improve the world around us. Currently, a lot of the work that SAS does, particularly around our Data4Good initiative, fulfills that goal well.

Datamation: What do you consider the best part of your workday or workweek?

Tareen: The interaction with SAS customers is almost always the best part of the workday or workweek. At SAS, we start off every customer meeting with a listening session where we get to hear about their world, their challenges, and what they hope to accomplish. It is an exciting learning process and often the best part of my week.

Datamation: What are you most proud of in your professional/personal life?

Tareen: I am most proud of the work that SAS does around social innovation. Our Data4Good initiative projects are a great way to apply data science, AI, and analytics to big challenges, both at the personal level as well as the global level, to improve the human experience.

Read next: Top Performing Artificial Intelligence Companies


China beats the USA in Artificial Intelligence and international awards – Modern Diplomacy

There is no doubt that the return of Huawei's CFO Meng Wanzhou to Beijing marks a historic event for the entire country that made every Chinese person incredibly proud, especially bearing in mind its timing, as the National Day celebrations took place on October 1.

"Where there is a five-star red flag, there is a beacon of faith. If faith has a color, it must be China red," Ms. Meng said to the cheering crowd at Shenzhen airport after returning home from Canada. She also added that "all the frustration and difficulties, gratitude and emotion, steadfastness and responsibility will transform into momentum for moving us forward, into courage for our all-out fight."

Regardless of how encouraging the Chinese tech giant heiress's words may sound, the fact remains that the company is still a target of U.S. prosecution and sanctions, something that is not about to change anytime soon.

When the Sanctions Bite

It was former U.S. President Donald Trump who in May 2019 signed an order that allowed the then-Commerce Secretary Wilbur Ross to halt any transactions concerning information or communications technology posing an unacceptable risk to the country's national security. As a result, the same month, Huawei and its non-U.S. affiliates were added to the Bureau of Industry and Security Entity List, which meant that any American companies wishing to sell or transfer technology to the company would have to obtain a licence issued by the BIS.

In May 2020, the U.S. Department of Commerce decided to expand the Foreign-Produced Direct Product (FPDP) Rule by restricting the Chinese tech giant from acquiring foreign-made semiconductors produced or developed from certain U.S. technology or software, and went even further in August the same year by issuing the Final Rule that prohibits the re-export, export from abroad or transfer (in-country) of (i) certain foreign-produced items controlled under the amended footnote 1 to the Entity List (New Footnote 1) when there is (ii) knowledge of certain circumstances, the scope of which was also expanded.

Moreover, the decision also removed the Temporary General License (TGL) previously authorizing certain transactions with Huawei and added thirty-eight additional affiliates of the Chinese company to the Entity List.

In these particular circumstances, despite the initial predictions made by Bloomberg early in 2020 that Trump's decision to blacklist Huawei had failed to stop its growth, the current reality seems to be changing for what was once, and briefly, the world's largest smartphone vendor.

The impact of the U.S. sanctions has already resulted in a drop in smartphone sales of more than 47% in the first half of 2021, and total revenue fell by almost 30% compared with the same period in 2020. Rotating chairman Eric Xu estimates that the company's smartphone revenue will drop by at least $30-40 billion this year.

For the record, Huawei's smartphone sales accounted for $50 billion in revenue last year. The company has generated $49.57 billion in revenue in total so far this year, which is said to be the most significant drop in its history.

In Search of Alternative Income Streams

Despite finding itself in dire straits, the company is in constant search of new sources of income, with a recent decision to charge patent royalties from other smartphone makers for the use of its 5G technologies, with a per-unit royalty capped at $2.50 for every multimode mobile device capable of connecting to 5G and previous generations of mobile networks. Huawei's price is lower than the one charged by Nokia ($3.58 per device) and Ericsson ($2.50-$5 per device).

Notably, according to data from the intellectual property research organization GreyB, Huawei has 3,007 declared 5G patent families and over 130,000 5G active patents worldwide, making the Chinese company the largest patent holder globally.

Jason Ding, head of Huawei's intellectual property rights department, said early this year that the company would collect about $1.2-$1.3 billion in revenue from patent licensing between 2019 and 2021. But royalties will not be the only revenue source for the company.

Investing in the Future: Cloud Services and Smart Cars

Apart from digitizing domestic companies in sectors like coal mining and port operations, which increased its revenue by 23% last year and 18% in the first part of 2021, Huawei is looking far into the future, slowly steering away from its dependency on foreign chip supplies by setting its sights on cloud services and software for smart cars.

Seizing an opportunity to improve the currently not-so-perfect cloud service environment, the Chinese tech giant is swiftly moving to claim its share of the sector by creating new cloud services targeting companies and government departments. For this purpose, it plans to inject $100 million over a three-year period into SMEs to expand on Huawei Cloud.

As of today, Huawei's cloud business is said to have grown by 116% in the first quarter of 2021, with a 20% share of a $6 billion market in China, as Canalys reports.

"Huawei Cloud's results have been boosted by Internet customers and government projects, as well as key wins in the automotive sector. It is a growing part of Huawei's overall business," said Canalys chief analyst Matthew Ball. He also added that although 90% of this business is based in China, Huawei Cloud has a more substantial footprint in Latin America, Europe, the Middle East, and Africa compared with Alibaba Cloud and Tencent Cloud.

Another area where Huawei is trying its luck is electric and autonomous vehicles, in which the company is planning to invest $1 billion this year alone. Although the company has repeatedly made it clear that it is unwilling to build cars, Huawei wants to "help the car connect and make it more intelligent," as its official noted.

While Huawei and Arcfox Polar Fox released the brand-new Polar Fox Alpha S Huawei Hi at the 2021 Shanghai Auto Show, and China's GAC revealed a plan to roll out a car with the Chinese tech company after 2024, Huawei is already selling the Cyrus SF5, a smart Chinese car from Chongqing Xiaokang equipped with the Huawei DriveONE electric drive system, from its experience store for the first time in the company's history. What's more, the car is also on sale online.

R&D and International Talent as Crucial Ingredients in Becoming a Tech Pioneer

There is a visible emphasis on investing in high-quality research and development to drive innovation, both at Huawei and in China as a whole.

According to the company's data, the Chinese technology giant invested $19.3 billion in R&D in 2019, which accounted for 13.9% of its total business revenue, and $22 billion last year, around 16% of its revenue. Interestingly, if Huawei were treated as a provincial administrative region, its R&D expenditure would rank seventh nationwide.

As reported by China's National Bureau of Statistics, the total R&D spending in China last year was 2.44 trillion yuan, up 10.6% year-on-year, and 2.21 trillion yuan in 2019, up 12.3% year-on-year.

As far as activities are concerned, the most was spent on experimental development in 2020 (2.02 trillion yuan, or 82.7% of total spending), followed by applied research (275.72 billion yuan, or 11.3%) and basic research (146.7 billion yuan, accounting for 6%). The most money was spent by enterprises (1.87 trillion yuan, up 10.4% year-on-year), while governmental research institutions spent 340.88 billion yuan (up 10.6% year-on-year) and universities and colleges spent 188.25 billion yuan (up 4.8% year-on-year).

As far as industries go, it is also worth mentioning that high-tech manufacturing spending accounted for 464.91 billion yuan, with equipment manufacturing standing at 913.03 billion yuan. State science and tech spending accounted for 1.01 trillion yuan, 60 billion yuan less than in 2019.

As Huawei raises the budget for overseas R&D, the company also plans to invest in human resources by attracting the brightest foreign minds into its business, which is in some way a by-product of the Trump-era visa limitations imposed on Chinese students.

Having so far concentrated on bringing home Chinese talent educated abroad, Huawei is determined to broaden its talent pool with "tall noses," as mainland Chinese sometimes refer to people of non-Chinese origin.

"Now we need to focus on bringing in talent with tall noses and allocate a bigger budget for our overseas research centres," said the company's founder Ren Zhengfei in a speech made in August. "We need to turn Huawei's research center in North America into a talent recruitment hub," Ren added.

While Huawei wants to scout for those who have experience working in the U.S. and Europe, it plans to offer salaries comparable to U.S. market standards to make its offers attractive enough.

What seems extraordinary and crucial when looking at China through the Huawei lens is that, contrary to what its critics claim, the company is indeed opening up to the outside world, aiming to replenish all facets of its business.

"We need to further liberate our thoughts and open our arms to welcome the best talent in the world," to quote Ren, in an attempt to help the company become more assimilated in overseas markets as a global enterprise in three to five years.

The Chinese tech giant aims to attract international talent to its new 1.6 million square meter research campus in Qingpu, Shanghai, which will house 30,000 to 40,000 research staff primarily concerned with developing handset and IoT chips. The Google-like campus is said to be completed in 2023.

The best sign of Huawei's slow embrace of the start-up mentality, as the company's head of research and development in the UK, Henk Koopmans, put it, is its 2012 acquisition of the Center for Integrated Photonics, based in Ipswich (UK), which has recently developed a laser on a chip that can direct light into a fibre-optic cable.

This breakthrough, an alternative to mainstream silicon-based semiconductors, gives Huawei a product based on indium phosphide technology and moves the company toward no longer needing to rely on U.S. know-how.

As for high-profile foreign recruitment, Huawei has recently managed to hire renowned French mathematician Laurent Lafforgue, winner of the 2002 Fields Medal (often dubbed the Nobel Prize of mathematics), who will work at the company's research center in Paris, and has appointed the former head of BBC news programmes, Gavin Allen, as its executive editor-in-chief to improve its messaging strategy in the West.

According to Huawei's annual report published in 2020, the Shenzhen-based company had 197,000 employees worldwide, drawn from 162 different countries and regions. Moreover, it increased its headcount by 3,000 people between the end of 2019 and 2020, with 53.4% of its employees working in R&D.

The main objective of the developments mentioned above is to lead the world in both 5G and 6G and so dominate the global standards of the future.

"We will not only lead the world in 5G; more importantly, we will aim to lead the world in wider domains," said Huawei's Ren Zhengfei in August. "We research 6G as a precaution, to seize the patent front, to make sure that when 6G one day really comes into use, we will not depend on others," Ren added.

Discussing the potential uses of 6G technology, Huawei's CEO told his employees that it might offer detection and sensing capabilities beyond the higher data-transmission speeds of current technologies, with potential applications in healthcare and surveillance.

Does the U.S. Strategy Towards Huawei Work?

As we can see, the Chinese tech giant has not only proved resilient through years of damaging U.S. sanctions, but has also taken significant steps to become independent and, therefore, entirely out of Washington's punitive reach.

Although, under intense pressure from Republicans, U.S. Commerce Secretary Gina Raimondo promised that the Biden administration would take further steps against Huawei if need be, it seems there is little the U.S. can do to stop the Chinese company from moving ahead, without any U.S. permission, to develop in the sectors of the future while still making a crucial contribution to existing ones.

At the same time, continuing with the Trump-era policies aimed at Huawei is not only hurting American companies but, according to a report from the National Foundation for American Policy published in August 2021, it also might deal a significant blow to innovation and scientific research in the country.

"Restricting Huawei from doing business in the U.S. will not make the U.S. more secure or stronger; instead, this will only serve to limit the U.S. to inferior yet more expensive alternatives, leaving the U.S. lagging behind in 5G deployment, and eventually harming the interests of U.S. companies and consumers," Huawei said in what now appears to be a prophetic statement to CNBC in 2019.

On that note, perhaps instead of making empty promises to Republicans that the Biden administration won't be soft on the Chinese tech giant, Raimondo would make the U.S. better off by engaging with Huawei, or at least by rethinking the current policies, which are visibly not bringing the desired results and are effectively undermining the U.S. national interest in the long run.

From our partner RIAC



Artificial Intelligence in the Legal Field – Lexology

Artificial Intelligence is a mechanism through which computers are programmed to undertake tasks that are otherwise done by the human brain. Like everything else, it has its pros and cons. While artificial intelligence can help complete a task in a few minutes, if it worked as well as it is claimed to, it could also potentially take away the employment of thousands of people across the country. The growing influence of Artificial Intelligence (AI) can be seen across various industries, from IT to farming, from manufacturing to customer service. The Indian legal industry, meanwhile, has always been a little slower to adapt to technology and has seen minimal movement toward superior technology. This is perpetuated by many lawyers still feeling comfortable with the same old system of functioning that was designed decades ago. AI has managed to disrupt other industries, and with ever-growing pendency and increasing demand for self-service systems even in the legal fraternity, this once assumed-to-be utopian idea can become a reality for all lawyers. Some of the pressing questions that will be addressed in this article are as follows:

What are the changes that the Indian legal system has already witnessed?

The introduction of AI into the legal system has made a drastic impact on legal fraternities across the globe. The first global attempt at using AI for legal purposes came through ROSS, an IBM Watson-powered robot that used a unique method of mining data and interpreting trends and patterns in the law to answer research questions. Interestingly, the area that will be most affected is not the litigation process or arbitration matters, but in fact the back-end work supporting litigation and arbitration, such as research, data storage and usage, etc.

Due to the sheer volume of cases and the diversity of case matters, Indian laws and their interpretations keep changing and developing. If lawyers had access to AI-based technology that could help with research, the labour cost of research work could be significantly reduced, improving profitability and significantly increasing the speed of getting work done. While this could lead to a reduction in staff, i.e., paralegals and some associates, it would also increase overall productivity for all lawyers and fast-track legal research and drafting.

One of the best examples is the use of Kira, an AI-based software, by Cyril Amarchand Mangaldas; it examines, identifies and provides a refined search on the specific data needed, with a reportedly high degree of precision. This has reportedly allowed the firm to focus on more important aspects of the litigation process and has reduced the repetitive, monotonous work usually done by paralegals, interns and other entry-level employees.

In fact, several noted jurists and judges have spoken favourably about the need for AI-based software that could be useful for the docketing system and simple decision-making processes. Some of their statements are as follows:

Justice SA Bobde has said: "We must increasingly focus on harnessing IT and IT-enabled services (ITES) for providing more efficient and cost-effective access to and delivery of justice. This must also include undertaking serious study concerning the future of Artificial Intelligence in law, especially how Artificial Intelligence can assist in judicial decision making. I believe exploring this interface would be immensely beneficial for many reasons. For instance, it would allow us to streamline courts' caseloads through enabling better court management. This would be a low-hanging fruit. On the other end of the spectrum, it will allow us to shift judicial time from routine-simple-straightforward matters (e.g. cases which are non-rivalrous) and apply it to more complex-intricate matters that require more human attention and involvement. Therefore, in India, identification of such matters and developing relevant technology ought to be our next focus."

Justice DY Chandrachud said: "The idea of Artificial Intelligence is not to supplant the human brain or the human mind or the presence of judges but to provide a facilitative tool to judges to reassess the processes which they follow, to reassess the work which they do and to ensure that their outcomes are more predictable and consistent and ultimately provide wider access to justice to the common citizens."

What legal problems can AI solve in India?

While the country admittedly has a massive problem with its judicial system owing to the pendency and huge volume of unresolved cases, AI can help resolve a majority of these problems. Technological advancement will help lawyers conduct legal research efficiently and on time, enabling lawyers equipped with AI software to focus more on advising their clients and taking up complex issues and cases. AI can also help assess the potential outcome of pending cases, which could be of great assistance to courts and private parties in deciding which cases to pursue, which to resolve amicably where possible, and which to let go.

Some of the benefits of implementing the nation-wide use of AI systems are as follows:

What are the changes needed for the AI systems in India and the road ahead?

While there are several benefits to lawyers, firms and the judiciary in bringing AI into the legal fraternity, there are a few caveats as well. As with any technology, the risk of data breaches, cyber-attacks and hacking attempts is a constant threat. Faulty software is another recurring concern, especially for technologies that are relatively new and untested in the market.

There are also questions about the ethics of an AI. An important point to keep in mind is that artificial intelligence software does not have a mind of its own. Although such systems appear to deliberate before acting, their actions are entirely programmed, and trustworthiness remains an issue: an AI needs a defined ethical purpose and technically robust, reliable systems behind it. These issues were seen to persist even in the highly acclaimed ROSS, which suffered several glitches.

Another issue with implementing artificial intelligence is affordability, which needs deliberation. The maintenance of AI facilities is an added concern, with firms investing in privatised AI research facilities as mentioned earlier. The investment required to establish and operate such systems would be expensive, creating a divide in technological capabilities from the outset. This is before factoring in the unknown learning curve for the lawyers, firms and members of the judiciary who would use such technology.

With these challenges in mind, regulations on AI use must be considered, particularly with respect to how the judiciary uses it. There has always been, and always will be, a degree of mistrust in technologies such as these, and progress must be made slowly rather than drastically, without first understanding the legal, financial and security implications. The following actions must be taken when the usage of AI is eventually implemented:


Filings buzz in the railway industry: Increase in artificial intelligence mentions – Railway Technology

Mentions of artificial intelligence within the filings of companies in the railway industry rose 64% between the first and second quarters of 2021.

In total, the frequency of sentences related to artificial intelligence between July 2020 and June 2021 was 137% higher than in 2016, when GlobalData, from whom our data for this article is taken, first began to track the key issues referred to in company filings.

When companies in the railway industry publish annual and quarterly reports, ESG reports and other filings, GlobalData analyses the text and identifies individual sentences that relate to disruptive forces facing companies in the coming years. Artificial intelligence is one of these topics - companies that excel and invest in these areas are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

To assess whether artificial intelligence is featuring more in the summaries and strategies of companies in the railway industry, two measures were calculated. Firstly, we looked at the percentage of companies which have mentioned artificial intelligence at least once in filings during the past twelve months - this was 78% compared to 52% in 2016. Secondly, we calculated the percentage of total analysed sentences that referred to artificial intelligence.
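The two measures described above are simple ratios over company and sentence counts. As a hypothetical sketch (the function, toy data, and substring matching are illustrative, not GlobalData's actual pipeline), they could be computed like this:

```python
def mention_measures(filings, keyword="artificial intelligence"):
    """Given {company: [sentences from its filings]}, return
    (a) % of companies mentioning the keyword at least once, and
    (b) % of all analysed sentences that mention it."""
    companies_mentioning = 0
    total_sentences = 0
    matching_sentences = 0
    for sentences in filings.values():
        hits = sum(1 for s in sentences if keyword in s.lower())
        if hits:
            companies_mentioning += 1
        total_sentences += len(sentences)
        matching_sentences += hits
    return (100.0 * companies_mentioning / len(filings),
            100.0 * matching_sentences / total_sentences)

# Toy example: one of two companies mentions AI, in 1 of its 4 sentences.
filings = {
    "CompanyA": ["We invest in artificial intelligence.", "Revenue rose.",
                 "Costs fell.", "Outlook is stable."],
    "CompanyB": ["Revenue rose.", "We expanded our fleet."],
}
company_pct, sentence_pct = mention_measures(filings)
print(company_pct, round(sentence_pct, 1))  # 50.0 16.7
```

The same two numbers, computed over real filings, are what the 78% company figure and the per-sentence share above refer to.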

Of the 50 biggest employers in the railway industry, Hitachi Transport System, Ltd. was the company which referred to artificial intelligence the most between July 2020 and June 2021. GlobalData identified 83 artificial intelligence-related sentences in the Japan-based company's filings - 2.4% of all sentences. XPO Logistics Inc mentioned artificial intelligence the second most - the issue was referred to in 1.3% of sentences in the company's filings. Other top employers with high artificial intelligence mentions included East Japan Railway Co, Yamato Holdings Co Ltd and ID Logistics Group.

Across all companies in the railway industry the filing published in the second quarter of 2021 which exhibited the greatest focus on artificial intelligence came from XPO Logistics Inc. Of the document's 1,093 sentences, 11 (1%) referred to artificial intelligence.

This analysis provides an approximate indication of which companies are focusing on artificial intelligence and how important the issue is considered within the railway industry, but it also has limitations and should be interpreted carefully. For example, a company mentioning artificial intelligence more regularly is not necessarily proof that they are utilising new techniques or prioritising the issue, nor does it indicate whether the company's ventures into artificial intelligence have been successes or failures.

GlobalData also categorises artificial intelligence mentions by a series of subthemes. Of these subthemes, the most commonly referred to topic in the second quarter of 2021 was 'smart robots', which made up 82% of all artificial intelligence subtheme mentions by companies in the railway industry.

By Andrew Hillman.

Methodology:

GlobalData's unique job analytics enables understanding of hiring trends, strategies, and predictive signals across sectors, themes, companies, and geographies. Intelligent web crawlers capture data from publicly available sources. Key parameters include active, posted and closed jobs, posting duration, experience, seniority level, educational qualifications and skills.



Artificial Intelligence Is Smart, but It Doesn't Play Well With Others – SciTechDaily

Humans find AI to be a frustrating teammate when playing a cooperative game together, posing challenges for teaming intelligence, study shows.

When it comes to games such as chess or Go, artificial intelligence (AI) programs have far surpassed the best players in the world. These superhuman AIs are unmatched competitors, but perhaps harder than competing against humans is collaborating with them. Can the same technology get along with people?

In a new study, MIT Lincoln Laboratory researchers sought to find out how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it has never met before. In single-blind experiments, participants played two series of the game: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS).

When playing the cooperative card game Hanabi, humans felt frustrated and confused by the moves of their AI teammate. Credit: Bryan Mastergeorge

"It really highlights the nuanced distinction between creating AI that performs objectively well and creating AI that is subjectively trusted or preferred," says Ross Allen, co-author of the paper and a researcher in the Artificial Intelligence Technology Group. "It may seem those things are so close that there's not really daylight between them, but this study showed that those are actually two separate problems. We need to work on disentangling those."

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges like defending from missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning.

A reinforcement learning AI is not told which actions to take, but instead discovers which actions yield the most numerical reward by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren't programmed to follow if/then statements, because the possible outcomes of the human tasks they're slated to tackle, like driving a car, are far too many to code.
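That trial-and-error loop can be shown with a toy multi-armed bandit (an illustrative sketch, not the Hanabi agent from the study): the learner is never told which action is best, only a noisy reward after each try, and it gradually builds its own value estimates.

```python
import random

def bandit_q_learning(true_rewards, episodes=5000, epsilon=0.1,
                      alpha=0.1, seed=0):
    """Epsilon-greedy value estimation: try actions, nudge a running value
    estimate toward each observed reward; no if/then rules are programmed."""
    rng = random.Random(seed)
    q = [0.0] * len(true_rewards)  # value estimate per action
    for _ in range(episodes):
        # Explore occasionally, otherwise exploit the best current estimate.
        if rng.random() < epsilon:
            a = rng.randrange(len(q))
        else:
            a = max(range(len(q)), key=lambda i: q[i])
        reward = true_rewards[a] + rng.gauss(0, 0.1)  # noisy feedback
        q[a] += alpha * (reward - q[a])               # incremental update
    return q

q = bandit_q_learning([0.2, 0.8, 0.5])
best = max(range(len(q)), key=lambda i: q[i])
print(best)  # the learner settles on action 1, the highest-paying arm
```

Real reinforcement learners for chess, Go, or Hanabi replace the value table with a neural network, but the discover-by-reward principle is the same.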

"Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data," Allen says. "The sky's the limit in what it could, in theory, do."

Today, researchers are using Hanabi to test the performance of reinforcement learning models developed for collaboration, in much the same way that chess has served as a benchmark for testing competitive AI for decades.

The game of Hanabi is akin to a multiplayer form of Solitaire. Players work together to stack cards of the same suit in order. However, players may not view their own cards, only the cards that their teammates hold. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card from their own hand to stack next.
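The hidden-information setup described above is easy to encode; this minimal sketch (the card encoding and names are illustrative, not from the study's codebase) captures the core constraint that a player sees every hand except their own:

```python
from dataclasses import dataclass

@dataclass
class HanabiView:
    """What one player observes: teammates' hands, never their own."""
    player: int
    teammate_hands: dict  # player index -> list of (color, rank) cards

def observe(hands, player):
    # Filter out the observer's own hand; everything else is visible.
    return HanabiView(player, {p: c for p, c in hands.items() if p != player})

hands = {0: [("red", 1), ("blue", 3)], 1: [("green", 2), ("red", 4)]}
view = observe(hands, player=0)
print(view.teammate_hands)  # {1: [('green', 2), ('red', 4)]}
```

It is this asymmetry, acting on information you cannot see while hinting within strict limits, that makes Hanabi a useful benchmark for cooperative AI.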

The Lincoln Laboratory researchers did not develop either the AI or rule-based agents used in this experiment. Both agents represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the team achieved the highest-ever score for Hanabi play between two unknown AI agents.

"That was an important result," Allen says. "We thought, if these AI that have never met before can come together and play really well, then we should be able to bring humans that also know how to play very well together with the AI, and they'll also do very well. That's why we thought the AI team would objectively play better, and also why we thought that humans would prefer it, because generally we'll like something better if we do well."

Neither of those expectations came true. Objectively, there was no statistical difference in the scores between the AI and the rule-based agent. Subjectively, all 29 participants reported in surveys a clear preference toward the rule-based teammate. The participants were not informed which agent they were playing with for which games.

"One participant said that they were so stressed out at the bad play from the AI agent that they actually got a headache," says Jaime Pena, a researcher in the AI Technology and Systems Group and an author on the paper. "Another said that they thought the rule-based agent was dumb but workable, whereas the AI agent showed that it understood the rules, but that its moves were not cohesive with what a team looks like. To them, it was giving bad hints, making bad plays."

This perception of AI making bad plays links to surprising behavior researchers have observed previously in reinforcement learning work. For example, in 2016, when DeepMind's AlphaGo first defeated one of the world's best Go players, one of the most widely praised moves made by AlphaGo was move 37 in game 2, a move so unusual that human commentators thought it was a mistake. Later analysis revealed that the move was actually extremely well-calculated, and was described as "genius."

Such moves might be praised when an AI opponent performs them, but they're less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that strange or seemingly illogical moves were the worst offenders in breaking humans' trust in their AI teammate in these closely coupled teams. Such moves not only diminished players' perception of how well they and their AI teammate worked together, but also how much they wanted to work with the AI at all, especially when any potential payoff wasn't immediately obvious.

"There was a lot of commentary about giving up, comments like, 'I hate working with this thing,'" adds Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group.

Participants who rated themselves as Hanabi experts, which the majority of players in this study did, more often gave up on the AI player. Siu finds this concerning for AI developers, because key users of this technology will likely be domain experts.

"Let's say you train up a super-smart AI guidance assistant for a missile defense scenario. You aren't handing it off to a trainee; you're handing it off to your experts on your ships who have been doing this for 25 years. So, if there is a strong expert bias against it in gaming scenarios, it's likely going to show up in real-world ops," he adds.

The researchers note that the AI used in this study wasn't developed for human preference. But that's part of the problem: not many are. Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

"If researchers don't focus on the question of subjective human preference, then we won't create AI that humans actually want to use," Allen says. "It's easier to work on AI that improves a very clean number. It's much harder to work on AI that works in this mushier world of human preferences."

Solving this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, under which this experiment was funded by Lincoln Laboratory's Technology Office, in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project is studying what has prevented collaborative AI technology from leaping out of the game space and into messier reality.

The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year.

"You can imagine we rerun the experiment, but after the fact, and this is much easier said than done, the human could ask, 'Why did you do that move? I didn't understand it.' If the AI could provide some insight into what they thought was going to happen based on their actions, then our hypothesis is that humans would say, 'Oh, weird way of thinking about it, but I get it now,' and they'd trust it. Our results would totally change, even though we didn't change the underlying decision-making of the AI," Allen says.

Like a huddle after a game, this kind of exchange is often what helps humans build camaraderie and cooperation as a team.

"Maybe it's also a staffing bias. Most AI teams don't have people who want to work on these squishy humans and their soft problems," Siu adds, laughing. "It's people who want to do math and optimization. And that's the basis, but that's not enough."

Mastering a game such as Hanabi between AI and humans could open up a universe of possibilities for teaming intelligence in the future. But until researchers can close the gap between how well an AI performs and how much a human likes it, the technology may well remain at machine versus human.

Reference: "Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi" by Ho Chit Siu, Jaime D. Pena, Kimberlee C. Chang, Edenna Chen, Yutai Zhou, Victor J. Lopez, Kyle Palko and Ross E. Allen, accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS). arXiv:2107.07630


AI That Can Learn Cause-and-Effect: These Neural Networks Know What They’re Doing – SciTechDaily

A certain type of artificial intelligence agent can learn the cause-and-effect basis of a navigation task during training.

Neural networks can learn to solve all sorts of problems, from identifying cats in photographs to steering a self-driving car. But whether these powerful, pattern-recognizing algorithms actually understand the tasks they are performing remains an open question.

For example, a neural network tasked with keeping a self-driving car in its lane might learn to do so by watching the bushes at the side of the road, rather than learning to detect the lanes and focus on the road's horizon.

Researchers at MIT have now shown that a certain type of neural network is able to learn the true cause-and-effect structure of the navigation task it is being trained to perform. Because these networks can understand the task directly from visual data, they should be more effective than other neural networks when navigating in a complex environment, like a location with dense trees or rapidly changing weather conditions.

In the future, this work could improve the reliability and trustworthiness of machine learning agents that are performing high-stakes tasks, like driving an autonomous vehicle on a busy highway.

MIT researchers have demonstrated that a special class of deep learning neural networks is able to learn the true cause-and-effect structure of a navigation task during training. Credit: Stock Image

"Because these machine-learning systems are able to perform reasoning in a causal way, we can know and point out how they function and make decisions. This is essential for safety-critical applications," says co-lead author Ramin Hasani, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Co-authors include electrical engineering and computer science graduate student and co-lead author Charles Vorbach; CSAIL PhD student Alexander Amini; Institute of Science and Technology Austria graduate student Mathias Lechner; and senior author Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of CSAIL. The research will be presented at the 2021 Conference on Neural Information Processing Systems (NeurIPS) in December.

Neural networks are a method for machine learning in which the computer learns to complete a task through trial and error by analyzing many training examples. Liquid neural networks go a step further: they change their underlying equations to continuously adapt to new inputs.

The new research draws on previous work in which Hasani and others showed how a brain-inspired type of deep learning system called a Neural Circuit Policy (NCP), built by liquid neural network cells, is able to autonomously control a self-driving vehicle, with a network of only 19 control neurons.

The researchers observed that the NCPs performing a lane-keeping task kept their attention on the road's horizon and borders when making a driving decision, the same way a human would (or should) while driving a car. Other neural networks they studied didn't always focus on the road.

"That was a cool observation, but we didn't quantify it. So, we wanted to find the mathematical principles of why and how these networks are able to capture the true causation of the data," he says.

They found that, when an NCP is being trained to complete a task, the network learns to interact with the environment and account for interventions. In essence, the network recognizes if its output is being changed by a certain intervention, and then relates the cause and effect together.

During training, the network is run forward to generate an output, and then backward to correct for errors. The researchers observed that NCPs relate cause-and-effect during forward-mode and backward-mode, which enables the network to place very focused attention on the true causal structure of a task.
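The forward and backward passes mentioned above make up the standard training loop for any neural network. A minimal sketch (plain linear regression trained by gradient descent, not an NCP) shows both passes on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # inputs
true_w = np.array([1.0, -2.0, 0.5])     # ground-truth weights
y = X @ true_w                          # targets

w = np.zeros(3)                         # model weights, learned from scratch
lr = 0.1
for _ in range(200):
    pred = X @ w                        # forward pass: generate an output
    err = pred - y                      # how wrong was it?
    grad = X.T @ err / len(X)           # backward pass: error gradient
    w -= lr * grad                      # correct the weights
print(np.round(w, 2))                   # converges toward true_w
```

An NCP does the same loop with a far richer model; the paper's claim is about what the network attends to while running these two passes, not about the loop itself.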

Hasani and his colleagues didn't need to impose any additional constraints on the system or perform any special setup for the NCP to learn this causality.

"Causality is especially important to characterize for safety-critical applications such as flight," says Rus. "Our work demonstrates the causality properties of Neural Circuit Policies for decision-making in flight, including flying in environments with dense obstacles such as forests and flying in formation."

They tested NCPs through a series of simulations in which autonomous drones performed navigation tasks. Each drone used inputs from a single camera to navigate.

The drones were tasked with traveling to a target object, chasing a moving target, or following a series of markers in varied environments, including a redwood forest and a neighborhood. They also traveled under different weather conditions, like clear skies, heavy rain, and fog.

The researchers found that the NCPs performed as well as the other networks on simpler tasks in good weather, but outperformed them all on the more challenging tasks, such as chasing a moving object through a rainstorm.

"We observed that NCPs are the only networks that pay attention to the object of interest in different environments while completing the navigation task, wherever you test them, and in different lighting or environmental conditions. This is the only system that can do this causally and actually learn the behavior we intend the system to learn," he says.

Their results show that the use of NCPs could also enable autonomous drones to navigate successfully in environments with changing conditions, like a sunny landscape that suddenly becomes foggy.

"Once the system learns what it is actually supposed to do, it can perform well in novel scenarios and environmental conditions it has never experienced. This is a big challenge of current machine learning systems that are not causal. We believe these results are very exciting, as they show how causality can emerge from the choice of a neural network," he says.

In the future, the researchers want to explore the use of NCPs to build larger systems. Putting thousands or millions of networks together could enable them to tackle even more complicated tasks.

Reference: "Causal Navigation by Continuous-time Neural Networks" by Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner and Daniela Rus, 15 June 2021, Computer Science > Machine Learning. arXiv:2106.08314

This research was supported by the United States Air Force Research Laboratory, the United States Air Force Artificial Intelligence Accelerator, and the Boeing Company.


Is it True that the USA Has Already Lost the Artificial Intelligence Battle with China? – BBN Times

China is overtaking the U.S. in artificial intelligence (AI), setting off alarm bells on the other side of the Pacific as the world's two largest economies are battling for world supremacy.

Artificial intelligence is widely used in a range of industries and greatly affects a nation's competitiveness and security.

The United States of America is losing artificial intelligence supremacy to China.

The increasing importance of information in military affairs and warfare is making digital technology and its applications, such as analytics, AI, and augmented reality, indispensable to future conflicts.

It is fascinating and at the same time scary to see what the future of war may look like, and how devastating the aftermath can be.

Artificial intelligence weapons can attack with increased speed and precision compared to existing military weapons.

China's global share of research papers in the field of AI vaulted from 4.26% (1,086 papers) in 1997 to 27.68% (37,343) in 2017, surpassing every other country in the world, including the U.S., a position it continues to hold.

Beijing also consistently files more AI patents than any other country. As of March 2019, the number of Chinese AI firms had reached 1,189, second only to the U.S., which has more than 2,000 active AI firms. These firms focus more on speech (e.g., speech recognition, speech synthesis) and vision (e.g., image recognition, video recognition) than their overseas counterparts.

China is also very active in weaponizing artificial intelligence, machine learning and deep learning technology.

China's military application of AI includes unmanned intelligent combat systems, enhancing battlefield situational awareness and decision-making, conducting multi- domain offense and defense, and facilitating advanced training, simulation, and wargaming practices.

As an example, the launch in August of a nuclear-capable rocket that circled the globe took US intelligence by surprise.


China recently tested a nuclear-capable manoeuvrable missile and Russia and the US have their own programmes. China's large population gives it advantages in generating and utilizing big data, and its decades-long effort in promoting technology and engineering gives it a rich supply of high-quality computer scientists and engineers.

US National Intelligence has recently reported that a superpower needs to lead in five technologies:

Source: Forbes

Beijing has won the artificial intelligence battle with Washington and is heading towards global dominance because of its technological advances.

China is likely to dominate many of the key emerging technologies, particularly artificial intelligence, synthetic biology and genetics within a decade.

The country has a vibrant market that is receptive to these new AI-based products, and Chinese firms are relatively fast in bringing AI products and services to the market.

Chinese consumers are also fast in adopting such products and services. As such, the environment supports rapid refinement of AI technologies and AI-powered products.

Beijing's market is conducive to the adoption and improvement of artificial intelligence.


How Will Health Care Regulators Address Artificial Intelligence? – The Regulatory Review

Policymakers around the world are developing guidelines for use of artificial intelligence in health care.

Baymax, the robotic health aide and unlikely hero from the movie Big Hero 6, is an adorable cartoon character, an outlandish vision of a high-tech future. But underlying Baymax's character is the very realistic concept of an artificial intelligence (AI) system that can be applied to health care.

As AI technology advances, how will regulators encourage innovation while protecting patient safety?

AI does not have a precise definition, but the term generally describes machines that have the capacity to process and respond to stimulation in a manner similar to human thought processes. Many industries, such as the military, academia, and health care, rely on AI today.

For decades, health care professionals have used AI to increase efficiency and enhance the quality of patient care. For example, radiologists employ AI to identify signs of certain diseases in medical imaging. Tech companies are also partnering with health care providers to develop AI-based predictive models to increase the accuracy of diagnoses. A recent study applied AI to predict COVID-19 based on self-reported symptoms.

In the wake of the COVID-19 pandemic and the rise of telemedicine, experts predict that AI technology will continue to be used to prevent and treat illness and will become more prevalent in the health care industry.

The use of AI in health care may improve patient care, but it also raises issues of data privacy and health equity. Although the health care sector is heavily regulated, no regulations target the use of AI in health care settings. Several countries and organizations, including the United States, have proposed regulations addressing the use of AI in health care, but no regulations have been adopted.

Even beyond the context of health care, policymakers have only begun to develop rules for the use of AI. Some existing data privacy laws and industry-specific regulations do apply to the use of AI, but no country has enacted AI-specific regulations. In January 2021, the European Union released its proposal for the first regulatory framework for the use of AI. The proposal establishes a procedure for new AI products entering the market and imposes heightened standards for applications of AI that are considered high risk.

The EU's suggested framework provides some examples of high-risk applications of AI that are related to health care, such as the use of AI to triage emergency aid. Although the EU's proposal does not focus on the health care industry in particular, experts predict that the EU regulations will serve as a framework for future, more specific guidelines.

The EU's proposal strikes a balance between ensuring the safety and security of the AI market and continuing to promote innovation and investment in AI. These conflicting values also appear in U.S. proposals to address AI in health care. Both the U.S. Food and Drug Administration (FDA) and the U.S. Department of Health and Human Services (HHS) more broadly have begun to develop guidelines on the use of AI in the health industry.

In 2019, FDA published a discussion paper outlining a proposed regulatory framework for modifications to AI-based software as a medical device (SaMD). FDA defines AI-based SaMD as software intended to treat, diagnose, cure, mitigate, or prevent disease. In the agency's discussion paper, FDA asserts its commitment to ensuring that AI-based SaMD will deliver safe and effective software functionality that improves the quality of care that patients receive. FDA outlines the regulatory approval cycle for AI-based SaMD, which requires a holistic evaluation of both the product and its maker.

Earlier this year, FDA released an action plan for the regulation of AI-based SaMD that reaffirmed its commitment to encouraging the development of AI best practices. HHS has also announced its strategy for regulating AI applied in health care settings. Like FDA and the EU, HHS balances the health and well-being of patients with the continued innovation of AI technology.

The United States is not alone in its attempt to monitor and govern the use of AI in health care. Countries such as China, Japan, and South Korea have also released guidelines and proposals seeking to ensure patient safety. In June 2021, the World Health Organization (WHO) issued a report on the use of AI in health care and offered six guiding principles for AI regulation: protecting autonomy; promoting safety; ensuring transparency; fostering responsibility; ensuring equity; and promoting sustainable AI.

Scholars are also discussing the use of AI in health care. Some experts have urged policymakers to develop AI systems designed to advance health equity. Others warn that algorithmic bias and unequal data collection in AI can exacerbate existing health inequalities. Experts argue that, to mitigate the risk of discriminatory AI practices, policymakers should consider the unintended consequences of the use of AI.

For example, AI systems must be trained to recognize patterns in data, and the training data may reflect historical discrimination. One study showed that women are less likely than men to receive certain treatments even though they are more likely to need them. Training data biased in this way would teach an AI system to perpetuate the same pattern of discrimination. Health care regulators must address the need to protect patients from such inequalities without discouraging the development of life-saving innovation in AI.

As the use of AI becomes more prominent in health care, regulators in the United States and elsewhere find themselves considering more robust regulations to ensure quality of care.

Read more here:
How Will Health Care Regulators Address Artificial Intelligence? - The Regulatory Review

UC adopts recommendations for the responsible use of Artificial Intelligence – Preuss School Ucsd

The University of California Presidential Working Group on Artificial Intelligence was launched in 2020 by University of California President Michael V. Drake and former UC President Janet Napolitano to assist UC in determining a set of responsible principles to guide procurement, development, implementation, and monitoring of artificial intelligence (AI) in UC operations.

To support these goals, the working group developed a set of UC Responsible AI Principles and explored four high-risk application areas: health, human resources, policing, and student experience. The working group has published a final report that explores current and future applications of AI in these areas and provides recommendations for how to operationalize the UC Responsible AI Principles. The report concludes with overarching recommendations to help guide UC's strategy for determining whether and how to responsibly implement AI in its operations.

Camille Nebeker, Ed.D., associate professor with appointments in the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science and the Design Lab, and two researchers in the Department of Computer Science and Engineering, Nadia Henninger, Ph.D., associate professor whose work focuses on cryptography and security, and Lawrence Saul, Ph.D., professor whose research interests are machine learning and data analysis, were members of the working group.

"The use of artificial intelligence within the UC campuses cuts across human resources, procurement, policing, student experience and healthcare. We, as an organization, did not have guiding principles to support responsible decision-making around AI," said Nebeker, who co-founded and directs the Research Center for Optimal Digital Ethics Health at UC San Diego, a multidisciplinary group that conducts research and provides education to support ethical digital health study practices.

The UC Presidential Working Group on AI has met over the past year to develop principles to advance responsible practices specific to the selection, implementation and management of AI systems.

With universities increasingly turning to AI-enabled tools to support greater efficiency and effectiveness, UC is setting an important precedent as one of the first universities, and the largest public university system, to develop governance processes for the responsible use of AI. More info is available on the UC Newsroom.

View post:
UC adopts recommendations for the responsible use of Artificial Intelligence - Preuss School Ucsd

Artificial Intelligence project aims to improve standards and development of AI systems – University of Birmingham

A new project has been launched in partnership with the University of Birmingham aiming to address racial and ethical health inequalities using artificial intelligence (AI).

STANDING Together, led by University Hospitals Birmingham NHS Foundation Trust (UHB), aims to develop standards for the datasets that AI systems use, to ensure they are diverse, inclusive, and work across all demographic groups. The resulting standards will help regulators, commissioners, policymakers and health data institutions assess whether AI systems are underpinned by datasets that represent everyone, and don't leave underrepresented or minority groups behind.

Xiao Liu, Clinical Researcher in Artificial Intelligence and Digital Healthcare at the University of Birmingham and UHB, and STANDING Together project co-leader, said: "We're looking forward to starting work on our project, and developing standards that we hope will improve the use of AI both in the UK and around the world. We believe AI has enormous potential to improve patient care, but through our earlier work on producing AI guidelines, we also know that there is still lots of work to do to make sure AI is a success story for all patients. Through the STANDING Together project, we will work to ensure AI benefits all patients and not just the majority."

NHSX's NHS AI Lab, the NIHR, and the Health Foundation have awarded a total of £1.4m to four projects, including STANDING Together. The other organisations working with UHB and the University of Birmingham on STANDING Together are the Massachusetts Institute of Technology, Health Data Research UK, Oxford University Hospitals NHS Foundation Trust, and The Hospital for Sick Children (SickKids, Toronto).

The NHS AI Lab introduced the AI Ethics Initiative to support research and practical interventions that complement existing efforts to validate, evaluate and regulate AI-driven technologies in health and care, with a focus on countering health inequalities. Today's announcement is the result of the Initiative's partnership with The Health Foundation on a research competition, enabled by NIHR, to understand and enable opportunities to use AI to address inequalities, and to optimise datasets and improve AI development, testing and deployment.

Brhmie Balaram, Head of AI Research and Ethics at NHSX, said: "We're excited to support innovative projects that demonstrate the power of applying AI to address some of our most pressing challenges; in this case, we're keen to prove that AI can potentially be used to close gaps in minority ethnic health outcomes. Artificial intelligence has the potential to revolutionise care for patients, and we are committed to ensuring that this potential is realised for all patients by accounting for the health needs of diverse communities."

Dr Indra Joshi, Director of the NHS AI Lab at NHSX, added: "As we strive to ensure NHS patients are amongst the first in the world to benefit from leading AI, we also have a responsibility to ensure those technologies don't exacerbate existing health inequalities. These projects will ensure the NHS can deploy safe and ethical Artificial Intelligence tools that meet the needs of minority communities and help our workforce deliver patient-centred and inclusive care to all."

Excerpt from:
Artificial Intelligence project aims to improve standards and development of AI systems - University of Birmingham