Call for Contributions to the National Artificial Intelligence Policy – TechEconomy.ng

The growth of Artificial Intelligence (AI) can be attributed to digital innovation and the evolution of technology.

Globally, countries are grappling with ways to manage the exponential growth of new and emerging technologies to advance their economies.

In cognisance of the exponential growth and potential value of digital technologies, and in line with the vision of President Muhammadu Buhari, GCFR, to diversify the Nigerian economy through digital technologies, the President launched the National Digital Economy Policy and Strategy (NDEPS), developed by the Federal Ministry of Communications and Digital Economy (FMoCDE), to reposition Nigeria's economy and leverage the many opportunities provided by these digital technologies.

It is against this backdrop that the Honourable Minister for Communications and Digital Economy, Prof Isa Ali Ibrahim, directed the National Information Technology Development Agency (NITDA) to develop a National Artificial Intelligence Policy (NAIP).

The development of the NAIP is envisaged to maximise the benefits, mitigate possible risks, and address some of the complexities attributed to using AI in our daily activities.

Furthermore, it will provide directions on how Nigeria can take advantage of AI, including the development, use, and adoption of AI to proactively facilitate the development of Nigeria into a sustainable digital economy.

NITDA is responsible for developing standards, guidelines, and frameworks for the IT sector in Nigeria, as enshrined in Section 6 of the NITDA Act 2007. The Agency invites the public to contribute and participate in developing the National Artificial Intelligence Policy (NAIP).

Kindly use the link below to participate by providing input or contributing to the Policy's development as an expert/specialist volunteer.

Send your input/contributions through:

Read the original here:
Call for Contributions to the National Artificial Intelligence Policy - TechEconomy.ng

Artificial intelligence innovation among automotive industry companies has dropped off in the last three months – just-auto.com

Research and innovation in artificial intelligence in the automotive manufacturing and supply sector have declined in the last year.

The most recent figures show that the number of AI-related patent applications in the industry stood at 517 in the three months ending June, down from 579 over the same period in 2021.

Figures for patent grants related to AI followed a different pattern to filings, growing from 180 in the three months ending June 2021 to 221 in the same period in 2022.

The figures are compiled by GlobalData, which tracks patent filings and grants from official offices around the world. Using textual analysis, as well as official patent classifications, these patents are grouped into key thematic areas and linked to key companies across various industries.
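To make the grouping step concrete, here is a minimal sketch of keyword-based thematic tagging over patent abstracts. GlobalData's actual methodology is not described beyond "textual analysis", so the keyword list, data structure, and abstracts below are illustrative assumptions only.

```python
# Hypothetical sketch: tag patent filings as AI-related by keyword matching
# over their abstracts, then count AI-related filings per assignee.
from collections import Counter

AI_KEYWORDS = {"artificial intelligence", "machine learning", "neural network", "deep learning"}

patents = [
    {"assignee": "Ford Motor Co", "abstract": "A machine learning model predicts driver intent..."},
    {"assignee": "Toyota Motor Corp", "abstract": "A hydraulic braking assembly comprising..."},
    {"assignee": "Ford Motor Co", "abstract": "A neural network fuses camera and radar data..."},
]

def is_ai_related(patent: dict) -> bool:
    text = patent["abstract"].lower()
    return any(keyword in text for keyword in AI_KEYWORDS)

ai_counts = Counter(p["assignee"] for p in patents if is_ai_related(p))
print(ai_counts.most_common())  # e.g. [('Ford Motor Co', 2)]
```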

AI is one of the key areas tracked by GlobalData. It has been identified as a key disruptive force facing companies in the coming years, and it is an area where companies investing resources now are expected to reap rewards.

The figures also provide an insight into the largest innovators in the sector.

Ford Motor Co was the top AI innovator in the automotive manufacturing and supply sector in the latest quarter. The company, which has its headquarters in the United States, filed 100 AI-related patents in the three months ending June. That was up from 61 over the same period in 2021.

It was followed by the Japan-based Toyota Motor Corp with 93 AI patent applications, the United States-based General Motors Co (61 applications), and Ireland-based Aptiv Plc (43 applications).

Hyundai Mobis Co Ltd has recently ramped up R&D in AI. It saw growth of 80% in related patent applications in the three months ending June compared to the same period in 2021 - the highest percentage growth out of all companies tracked with more than 10 quarterly patents in the automotive manufacturing and supply sector.

The rest is here:
Artificial intelligence innovation among automotive industry companies has dropped off in the last three months - just-auto.com

Women negative to artificial intelligence – Moonshot News

Women are more sceptical than men about some uses of artificial intelligence (AI). This is the case concerning use of driverless cars and using AI to find false information on social media, according to a new analysis of US data made by Pew Research Center.

34% of women are unsure about whether social media algorithms to find false information are a good or bad idea, compared with 26% of men. When it comes to the use of face recognition by police, 31% of women are not certain whether it is a good or bad idea, compared with 22% of men.

Women are more likely to support the inclusion of a wider variety of groups in AI design. 67% of women say it's extremely or very important for social media companies to include people of different genders when designing social media algorithms to find false information, compared with 58% of men. Women are also more likely to say it is important that different racial and ethnic groups are included in the same AI design process (71% vs. 63%).

Additionally, women are more doubtful than men that it is possible to design AI computer programs that can consistently make fair decisions in complex situations. Only around two-in-ten women (22%) think it is possible to design AI programs that can consistently make fair decisions, while a larger share of men (38%) say the same. A plurality of women (46%) say they are not sure whether this is possible, compared with 35% of men.

Overall, women in the U.S. are less likely than men to say that technology has had a mostly positive effect on society (42% vs. 54%) and more likely to say technology has had equally positive and negative impacts (45% vs. 37%). In addition, women are less likely than men to say they feel more excited than concerned about the increased use of AI computer programs in daily life (13% vs. 22%).

Gender remains a factor in views about AI and technology's impact when accounting for other variables, such as respondents' political partisanship, education, and race and ethnicity.

The analysis says women are consistently more likely than men to express concern about computer programs executing tasks. 43% of women say they would be very or somewhat concerned if AI programs could diagnose medical problems, while 27% of men say the same.

In addition to gender differences about AI in general, women and men express different attitudes about autonomous cars specifically, the analysis says:

37% of men say driverless cars are a good idea for society, while 17% of women say the same. Women are somewhat more likely than men to say they are not sure if the widespread use of driverless vehicles is a good or bad idea (32% vs. 25%).

46% of men say they would definitely or probably personally want to ride in a driverless passenger vehicle if given the opportunity, compared with 27% of women. 54% of women say they would not feel comfortable sharing the road with a driverless passenger vehicle if their use becomes widespread. Only 35% of men say the same.

Moonshot News is an independent European news website for all IT, Media and Advertising professionals, powered by women and with a focus on driving the narrative for diversity, inclusion and gender equality in the industry.

Our mission is to provide top and unbiased information for all professionals and to make sure that women get their fair share of voice in the news and in the spotlight!

We produce original content, news articles, a curated calendar of industry events and a database of women IT, Media and Advertising associations.

Read more from the original source:
Women negative to artificial intelligence - Moonshot News

Artificial intelligence isn’t that intelligent | The Strategist – The Strategist

Late last month, Australia's leading scientists, researchers and businesspeople came together for the inaugural Australian Defence Science, Technology and Research Summit (ADSTAR), hosted by the Defence Department's Science and Technology Group. In a demonstration of Australia's commitment to partnerships that would make our non-allied adversaries flinch, Chief Defence Scientist Tanya Monro was joined by representatives from each of the Five Eyes partners, as well as Japan, Singapore and South Korea. Two streams focusing on artificial intelligence were dedicated to research and applications in the defence context.

At the end of the day, isn't hacking an AI a bit like social engineering?

A friend who works in cybersecurity asked me this. In the world of information security, social engineering is the game of manipulating people into divulging information that can be used in a cyberattack or scam. Cyber experts may therefore be excused for assuming that AI might display some human-like level of intelligence that makes it difficult to hack.

Unfortunately, it's not. It's actually very easy.

The man who coined the term artificial intelligence in the 1950s, computer scientist John McCarthy, also said that once we know how it works, it isn't called AI anymore. This explains why AI means different things to different people. It also explains why trust in and assurance of AI is so challenging.

AI is not some all-powerful capability that, despite how much it can mimic humans, also thinks like humans. Most implementations, specifically machine-learning models, are just very complicated implementations of the statistical methods we're familiar with from high school. That doesn't make them smart, merely complex and opaque. This leads to problems in AI safety and security.
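To illustrate the point, a minimal sketch using only synthetic data and NumPy: "training" a one-feature linear model is exactly the least-squares line fitting taught in high-school statistics, and larger models mostly stack many more such weighted sums.

```python
# Minimal sketch: the core of a simple machine-learning model is ordinary
# least-squares fitting, i.e. high-school statistics at scale. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=1.5, size=100)  # noisy linear relationship

# "Training" here is just fitting the line of best fit.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned model: y ~ {slope:.2f} * x + {intercept:.2f}")

# Deep networks chain millions of such weighted sums (plus nonlinearities),
# which makes them complex and opaque, but not a different kind of mathematics.
```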

Bias in AI has long been known to cause problems. For example, AI-driven recruitment systems in tech companies have been shown to filter out applications from women, and re-offence prediction systems in US prisons exhibit consistent biases against black inmates. Fortunately, bias and fairness concerns in AI are now well known and actively investigated by researchers, practitioners and policymakers.

AI security is different, however. While AI safety deals with the impact of the decisions an AI might make, AI security looks at the inherent characteristics of a model and whether it could be exploited. AI systems are vulnerable to attackers and adversaries just as cyber systems are.

A known challenge is adversarial machine learning, where adversarial perturbations added to an image cause a model to predictably misclassify it.

When researchers added adversarial noise imperceptible to humans to an image of a panda, the model predicted it was a gibbon.

In another study, a 3D-printed turtle had adversarial perturbations embedded in its surface so that an object-detection model believed it to be a rifle. This was true even when the object was rotated.
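One common way such perturbations are generated is the fast gradient sign method (FGSM). The sketch below is illustrative only and assumes a PyTorch image classifier; the model, image tensor, label, and epsilon value are placeholders, not details from the panda or turtle studies cited above.

```python
# Minimal FGSM sketch: nudge every pixel slightly in the direction that
# increases the classifier's loss, producing an adversarial image.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.007):
    """Return an adversarially perturbed copy of `image` (values kept in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # tiny, human-imperceptible step
    return adversarial.clamp(0, 1).detach()

# Usage sketch (hypothetical names): the perturbed image may now be misclassified.
# adv = fgsm_perturb(classifier, panda_batch, torch.tensor([panda_class_id]))
# print(classifier(adv).argmax(dim=1))
```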

I can't help but notice disturbing similarities between the rapid adoption of and misplaced trust in the internet in the latter half of the last century and the unfettered adoption of AI now.

It was a sobering moment when, in 2018, the then US director of national intelligence, Daniel Coats, called out cyber as the greatest strategic threat to the US.

Many nations are publishing AI strategies (including Australia, the US and the UK) that address these concerns, and there's still time to apply the lessons learned from cyber to AI. These include investment in AI safety and security at the same pace as investment in AI adoption; commercial solutions for AI security, assurance and audit; legislation for AI safety and security requirements, as is done for cyber; and greater understanding of AI and its limitations, as well as the technologies, like machine learning, that underpin it.

Cybersecurity incidents have also driven home the necessity for the public and private sectors to work together not just to define standards, but to reach them together. This is essential both domestically and internationally.

Autonomous drone swarms, undetectable insect-sized robots and targeted surveillance based on facial recognition are all technologies that exist. While Australia and our allies adhere to ethical standards for AI use, our adversaries may not.

Speaking on resilience at ADSTAR, Chief Scientist Cathy Foley discussed how pre-empting and planning for setbacks is far more strategic than simply ensuring you can get back up after one. That couldn't be more true when it comes to AI, especially given Defence's unique risk profile and the current geostrategic environment.

I read recently that Ukraine is using AI-enabled drones to target and strike Russians. Notwithstanding the ethical issues this poses, the article I read was written in Polish and translated to English for me by Google's language translation AI. Artificial intelligence is already pervasive in our lives. Now we need to be able to trust it.

More here:
Artificial intelligence isn't that intelligent | The Strategist - The Strategist

Artificial intelligence: Top trending companies on Twitter in Q2 2022 – Verdict

Verdict has listed five of the companies that trended the most in Twitter discussions related to artificial intelligence (AI), using research from GlobalData's Technology Influencer platform.

The top companies are the most mentioned companies among Twitter discussions of more than 629 AI experts tracked by GlobalData's Technology Influencer platform during the second quarter (Q2) of 2022.

Alphabet's Google claiming its new AI models allow for nearly instant weather forecasts, the company's new AI Test Kitchen app helping users explore the potential of conversational AI, and Google Research's collaboration with New York Stem Cell Foundation (NYSCF) Research Institute scientists to detect cellular signatures of Parkinson's disease were some of the popular discussions in Q2 2022.

Ronald van Loon, CEO of the Intelligent World, an influencer network that connects businesses and experts to audiences, shared an article on multinational technology conglomerate Alphabet's Google stating that its new AI models allow for nearly instantaneous weather forecasts. The increasingly important tool to address climate change is in its initial stages of development and is yet to be used in commercial systems, the article detailed. However, a non-peer-reviewed paper published by Google's researchers described how they were able to accurately predict rainfall up to six hours in advance at a one-kilometre resolution from just minutes of calculation. Researchers noted that short-term weather forecasts will be critical in crisis management and to minimise damage to life and property.

Alphabet Inc is the holding company of Google, a global technology company headquartered in Mountain View, California, the US. The company offers a wide range of products and platforms, including search, maps, calendar, ads, Gmail, Google Play, Android, Google Cloud, Chrome, and YouTube. It also offers online advertising services through its AdSense, internet, TV services, licensing and research and development services, and is involved in investments related to infrastructure, data management, analytics, and AI.

Growing enterprise gaps in AI adoption leading to the popularity of Amazon's Amazon Web Services (AWS) SageMaker, and AWS introducing a solution of AI services to manage contact centre workflows, were some of the popular discussions in Q2.

Spiros Margaris, a venture capitalist and board member at the venture capital firm Margaris Ventures, shared an article on AI's growing enterprise gaps leading to the growth of ecommerce company Amazon's Amazon Web Services (AWS) SageMaker. A study revealed that AI's inability to mature today stems from many enterprises lacking a strategy that prioritises security, compliance, fairness, bias, and ethics, the article noted. According to the assessment of enterprise AI adoption, only 26% of organisations have AI projects in production, the same percentage as the previous year. Additionally, 31% of enterprises reported not leveraging AI in their businesses today, up from 13% last year. Only 53% of AI projects make it out of pilot into production, taking on average eight months or longer to develop scalable models, the article further highlighted.

SageMaker's architecture is built to adapt to changing model building, validating, training, and deployment situations. It integrates across AI services, machine learning (ML) frameworks, and infrastructure in the middle of the AWS ML Stack, the article detailed. As a result, SageMaker offers greater flexibility in handling training, notebooks, tuning, debugging, and deploying models. In other words, it enables the model interpretability and transparency enterprises require to make AI less risky.

Amazon Web Services Inc is a subsidiary of the online retailer and web service provider Amazon, headquartered in Seattle, Washington, the US. The company offers a range of cloud infrastructure services including compute, storage, databases, analytics, networking, mobile, developer tools, augmented reality (AR) and virtual reality (VR), robotics, game tech, ML, management tools, content delivery, media services, customer engagement, app streaming and security, identity and compliance.

NVIDIA's launch of an AI computing platform for medical devices and computational sensing systems, the company's invention of a new video-to-video synthesis AI model, and its Morpheus AI framework allowing developers to create and scale cybersecurity solutions were popularly discussed in the second quarter.

Elitsa Krumova, a technology influencer, shared an article on the technology company NVIDIA introducing the Clara Holoscan MGX, a platform for the development and deployment of real-time AI applications at the edge for the medical device industry. The platform is specifically created with regulatory standards in mind, and is an expansion of the Clara Holoscan platform with the aim of offering a comprehensive, medical-grade reference architecture and long-term software support to speed up innovation in the medical device industry, the article detailed.

Kimberly Powell, NVIDIA's vice president of healthcare, stated that deploying real-time AI in healthcare was critical for areas such as drug discovery, diagnostics, and surgery. The platform's combination of AI, accelerated computing, and advanced visualisation sped up the productisation of AI and also delivered software-as-a-service business models for the industry, the article noted.

Nvidia Corp is a technology company headquartered in Santa Clara, California, the US. The company designs and develops graphics processing units, central processing units, and system-on-a-chip units for gaming, professional visualisation, data centre, and automotive markets. It also offers solutions for AI and data science, data centre and cloud computing, design and visualisation, edge computing, high-performance computing, and self-driving vehicles.

Meta (formerly Facebook) describing how AI will unlock the metaverse, the company's new AI that can discover and refine formulas for increasingly strong, low-carbon concrete, and the company releasing the Mephisto platform for collecting data to train AI models were some of the popular discussions in the second quarter.

Mario Pawlowski, CEO of iTrucker, a trucking, logistics, and supply chain related company, shared an article on Meta describing how technologies such as AI, AR, VR, 5G, and blockchain will merge to power the metaverse. Jérôme Pesenti, leader of Facebook AI, stated that AI will be key to the metaverse and that the role of Meta AI was to advance AI through further research in AI breakthroughs and improving the company's products through them, the article highlighted.

Meta AI is particularly making progress in areas such as embodiment and robotics, creativity, and self-supervised learning, where AI can learn from data without human intervention, Pesenti added.

Facebook Inc (now Meta) is a technology company headquartered in Menlo Park, California, the US. The company is a provider of social networking, advertising, and business insight solutions. Through its virtual-reality vision, the metaverse, Meta is focusing on developing a virtual environment that allows people to interact and connect with technology. Some of its major products include Facebook, Instagram, Oculus, Messenger, and WhatsApp.

International Business Machines Corp (IBM) rolling out more AI drive-thru McDonald's chatbots, the company making its cancer-fighting AI projects open source, and the company's non-von Neumann AI hardware breakthrough in neuromorphic computing were some of the popular discussions in Q2.

Evan Kirstel, chief digital evangelist and co-founder of the marketing firm eViRa Health, shared an article on the technology company IBM adding its natural language processing (NLP) software to several McDonald's drive-thrus. This came right after the company bought the automated order technology unit from the fast-food chain, as well as the team that built it, the article noted. Automated ordering had been piloted across ten McDonald's restaurants in Chicago in June 2021, with humans not required to intervene in roughly four out of every five orders made with the AI drive-thru bots.

Rob Thomas, senior vice president of global markets at IBM, stated that the fast-food company had been struggling with ordering, and that IBM's NLP technology could augment McDonald's technology and service in a time of wage inflation and the need for quick-service restaurants, the article highlighted.

IBM is a technology company headquartered in Armonk, New York, the US. The company creates and sells system hardware and software, and offers infrastructure, hosting, and consulting services. Its technology-based product line includes analytics, AI, automation, blockchain, cloud computing, IT infrastructure, IT management, cybersecurity, and software development products. The company also offers a range of services including cloud, networking, security, technology consulting, application services, business resiliency services, and technology support services.

Here is the original post:
Artificial intelligence: Top trending companies on Twitter in Q2 2022 - Verdict

How Artificial Intelligence Is Changing The Odds In Online Casino – Intelligent Living

Online casinos are increasing the use of artificial intelligence to enhance a player's gambling experience. With internet use rising over the past decade, the online casino market has more than doubled over that period. It is estimated that by 2024 the figure will have passed 65 million in the UK alone.

It may seem obvious why the online market has such high growth rates, especially considering the events of the last couple of years and how they pushed people to use the internet for many services, including retail and banking. In this editorial, we examine the effects of artificial intelligence's continued advancement in the casino industry, with a focus on the real money slot game on amazon slots.

Several factors contribute to the online casino market turning into a billion-pound industry. For instance, digital wallets such as PayPal have assisted casinos in appearing more accessible to people who want instant transactions. Similarly, daily and weekly bonuses and promotions have increased registration numbers on online casino sites.

Another example is the features found in slot games that offer free spins daily and help increase players' odds of striking the big jackpots, winning cash prizes, live casino bonuses, and free sportsbook wagers. And, if punters don't get it right the first few times, they have an incentive to keep trying.

So, attributes such as electronic payment options, daily bonuses, and promotions all encourage new and existing players to keep coming back for more. Thus winning odds for returning players increase, and the industry's market value goes up all the same.

The introduction of AI technology has remarkably affected the online casino industry by transforming it into what it is these days. How is that, you ask? Over the past few years, casino operators and gaming developers have had more access to user information, which has aided them in creating and improving games best suited for their customers.

The application of AI in gathering user data on new and returning players has assisted operators and developers in keeping content fresh and relevant while creating targeted marketing campaigns. AI helps determine which games players engage with the most, how much traffic a casino site receives, and how much wagering takes place on games and sporting events. Advanced bots also provide higher-quality customer service in online gambling.
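As a simplified stand-in for the kind of engagement profiling described above (real systems would use far richer behavioural models), here is a minimal sketch: aggregating play sessions per game to decide which titles to feature in a targeted promotion. All names and numbers are invented for illustration.

```python
# Hypothetical sketch: rank games by total wagering and session count
# to pick candidates for a targeted promotion.
from collections import defaultdict

sessions = [
    {"player": "p1", "game": "Starburst", "wagered": 40.0},
    {"player": "p2", "game": "Starburst", "wagered": 15.5},
    {"player": "p1", "game": "Book of Dead", "wagered": 22.0},
]

totals = defaultdict(lambda: {"sessions": 0, "wagered": 0.0})
for s in sessions:
    totals[s["game"]]["sessions"] += 1
    totals[s["game"]]["wagered"] += s["wagered"]

ranked = sorted(totals.items(), key=lambda kv: kv[1]["wagered"], reverse=True)
print("promotion candidates:", [game for game, _ in ranked[:2]])
```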

The good news for casino providers is that AI could save a lot on personnel costs. The bad news for those employees is that they would end up losing work to robots, which means an entire industry would take a hit. The overall positive thing about it all, though, is that people are innovative by nature. Perhaps the closing of one old industry might drive an opening for another.

Read more here:
How Artificial Intelligence Is Changing The Odds In Online Casino - Intelligent Living

FedEx to expand robotics technology and AI – CBS19.tv KYTX

MEMPHIS, Tenn. – Memphis-based FedEx will be using more robots and artificial intelligence.

This comes after the company announced an expanded relationship with Berkshire Grey, a Massachusetts-based company that develops robotics technology and AI software for logistics businesses.

Berkshire Grey's CEO said the new agreement will help with supply chain issues and ease the physical burden on employees.

"Berkshire Grey and FedEx are strategically aligned. These new agreements reflect our mutual commitment to innovations in robotic automation that can remove barriers within the supply chain, ease the physical burden on employees and streamline operations," said Tom Wagner, CEO of Berkshire Grey. "We look forward to working together on this new program and to advancing other automation programs with FedEx moving forward."

Berkshire Grey has also worked with Walmart and Target in the past on technology to compete with Amazon.

Read the original post:
FedEx to expand robotics technology and AI - CBS19.tv KYTX

Global Artificial Intelligence in Healthcare Diagnosis Market Research Report 2022: Rising Adoption of Healthcare Artificial Intelligence in Research…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Healthcare Diagnosis Market Research Report by Technology, Component, Application, End User, Region - Global Forecast to 2026 - Cumulative Impact of COVID-19" report has been added to ResearchAndMarkets.com's offering.

The Global Artificial Intelligence in Healthcare Diagnosis Market size was estimated at USD 2,318.98 million in 2020 and USD 2,725.72 million in 2021, and is projected to grow at a Compound Annual Growth Rate (CAGR) of 17.81% to reach USD 6,202.67 million by 2026.
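As a rough sanity check on those figures, the implied growth rate can be recomputed from the 2021 and 2026 endpoints; the sketch below assumes a five-year compounding horizon, which is how CAGR figures in such reports are typically derived.

```python
# Recompute the CAGR implied by the report's 2021 and 2026 figures (USD million).
start_value, end_value, years = 2725.72, 6202.67, 5

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.2%}")  # about 17.9%, close to the stated 17.81%
# (the small gap likely reflects rounding or a slightly different base period)

value = start_value
for year in range(2022, 2027):
    value *= 1 + cagr
    print(year, round(value, 2))
```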

Market Segmentation:

This research report categorizes the Artificial Intelligence in Healthcare Diagnosis Market to forecast the revenues and analyze the trends in each of the following sub-markets:

Competitive Strategic Window:

The Competitive Strategic Window analyses the competitive landscape in terms of markets, applications, and geographies to help the vendor define an alignment or fit between their capabilities and opportunities for future growth prospects. It describes the optimal or favorable fit for the vendors to adopt successive merger and acquisition strategies, geography expansion, research & development, and new product introduction strategies to execute further business expansion and growth during a forecast period.

FPNV Positioning Matrix:

The FPNV Positioning Matrix evaluates and categorizes the vendors in the Artificial Intelligence in Healthcare Diagnosis Market based on Business Strategy (Business Growth, Industry Coverage, Financial Viability, and Channel Support) and Product Satisfaction (Value for Money, Ease of Use, Product Features, and Customer Support) that aids businesses in better decision making and understanding the competitive landscape.

Market Share Analysis:

The Market Share Analysis offers an analysis of vendors considering their contribution to the overall market. It gives an idea of each vendor's revenue generation in the overall market compared to other vendors in the space. It provides insights into how vendors are performing in terms of revenue generation and customer base compared to others. Knowing market share offers an idea of the size and competitiveness of the vendors for the base year. It reveals the market characteristics in terms of accumulation, fragmentation, dominance, and amalgamation traits.

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Key Topics Covered:

1. Preface

2. Research Methodology

3. Executive Summary

4. Market Overview

5. Market Insights

6. Artificial Intelligence in Healthcare Diagnosis Market, by Technology

7. Artificial Intelligence in Healthcare Diagnosis Market, by Component

8. Artificial Intelligence in Healthcare Diagnosis Market, by Application

9. Artificial Intelligence in Healthcare Diagnosis Market, by End User

10. Americas Artificial Intelligence in Healthcare Diagnosis Market

11. Asia-Pacific Artificial Intelligence in Healthcare Diagnosis Market

12. Europe, Middle East & Africa Artificial Intelligence in Healthcare Diagnosis Market

13. Competitive Landscape

14. Company Usability Profiles

15. Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/vgkht7

View post:
Global Artificial Intelligence in Healthcare Diagnosis Market Research Report 2022: Rising Adoption of Healthcare Artificial Intelligence in Research...

Artificial Intelligence Regulation Updates: China, EU, and U.S – The National Law Review

Wednesday, August 3, 2022

Artificial Intelligence (AI) systems are poised to drastically alter the way businesses and governments operate on a global scale, with significant changes already under way. This technology has manifested itself in multiple forms including natural language processing, machine learning, and autonomous systems, but with the proper inputs can be leveraged to make predictions, recommendations, and even decisions.

Accordingly, enterprises are increasingly embracing this dynamic technology. A 2022 global study by IBM found that 77% of companies are either currently using AI or exploring AI for future use, creating value by increasing productivity through automation, improved decision-making, and enhanced customer experience. Further, according to a 2021 PwC study, the COVID-19 pandemic increased the pace of AI adoption for 52% of companies as they sought to mitigate the crisis's impact on workforce planning, supply chain resilience, and demand projection.

For these many businesses investing significant resources into AI, it is critical to understand the current and proposed legal frameworks regulating this novel technology. Specifically for businesses operating globally, the task of ensuring that their AI technology complies with applicable regulations will be complicated by the differing standards that are emerging from China, the European Union (EU), and the U.S.

China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China passed a regulation governing companies' use of algorithms in online recommendation systems, requiring that such services be moral, ethical, accountable, and transparent, and that they disseminate positive energy. The regulation mandates that companies notify users when an AI algorithm is playing a role in determining which information to display to them and give users the option to opt out of being targeted. Additionally, the regulation prohibits algorithms that use personal data to offer different prices to consumers. We expect these themes to recur in AI regulations throughout the world as they develop.

Meanwhile in the EU, the European Commission has published an overarching regulatory framework proposal titled the Artificial Intelligence Act, which would have a much broader scope than China's enacted regulation. The proposal focuses on the risks created by AI, with applications sorted into categories of minimal risk, limited risk, high risk, or unacceptable risk. Depending on an application's designated risk level, there will be corresponding government action or obligations. So far, the proposed obligations focus on enhancing the security, transparency, and accountability of AI applications through human oversight and ongoing monitoring. Specifically, companies will be required to register stand-alone high-risk AI systems, such as remote biometric identification systems, in an EU database. If the proposed regulation is passed, the earliest date for compliance would be the second half of 2024, with potential fines for noncompliance ranging from 2-6% of a company's annual revenue.

Additionally, the previously enacted EU General Data Protection Regulation (GDPR) already carries implications for AI technology. Article 22 prohibits decisions based on solely automated processes that produce legal consequences or similar effects for individuals unless the program gains the user's explicit consent or meets other requirements.

In the United States there has been a fragmented approach to AI regulation thus far, with states enacting their own patchwork of AI laws. Many of the enacted regulations focus on establishing commissions to determine how state agencies can utilize AI technology and to study AI's potential impacts on the workforce and consumers. Common pending state initiatives go a step further and would regulate AI systems' accountability and transparency when they process and make decisions based on consumer data.

On a national level, the U.S. Congress enacted the National AI Initiative Act in January 2021, creating the National AI Initiative that provides an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies . . . . The Act created new offices and task forces aimed at implementing a national AI strategy, implicating a multitude of U.S. administrative agencies including the Federal Trade Commission (FTC), Department of Defense, Department of Agriculture, Department of Education, and the Department of Health and Human Services.

Pending national legislation includes the Algorithmic Accountability Act of 2022, which was introduced in both houses of Congress in February 2022. In response to reports that AI systems can lead to biased and discriminatory outcomes, the proposed Act would direct the FTC to create regulations that mandate covered entities, including businesses meeting certain criteria, to perform impact assessments when using automated decision-making processes. This would specifically include those derived from AI or machine learning.

While the FTC has not promulgated AI-specific regulations, this technology is on the agency's radar. In April 2021 the FTC issued a memo which apprised companies that using AI that produces discriminatory outcomes equates to a violation of Section 5 of the FTC Act, which prohibits unfair or deceptive practices. And the FTC may soon take this warning a step farther: in June 2022 the agency indicated that it will submit an Advanced Notice of Preliminary Rulemaking to ensure that algorithmic decision-making does not result in harmful discrimination, with the public comment period ending in August 2022. The FTC also recently issued a report to Congress discussing how AI may be used to combat online harms, ranging from scams, deep fakes, and opioid sales, but advised against over-reliance on these tools, citing the technology's susceptibility to producing inaccurate, biased, and discriminatory outcomes.

Companies should carefully discern whether other non-AI-specific regulations could subject them to potential liability for their use of AI technology. For example, the U.S. Equal Employment Opportunity Commission (EEOC) put forth guidance in May 2022 warning companies that their use of algorithmic decision-making tools to assess job applicants and employees could violate the Americans with Disabilities Act by, in part, intentionally or unintentionally screening out individuals with disabilities. Further analysis of the EEOC's guidance can be found here.

Many other U.S. agencies and offices are beginning to delve into the fray of AI. In November 2021, the White House Office of Science and Technology Policy solicited engagement from stakeholders across industries in an effort to develop a Bill of Rights for an Automated Society. Such a Bill of Rights could cover topics like AI's role in the criminal justice system, equal opportunities, consumer rights, and the healthcare system. Additionally, the National Institute of Standards and Technology (NIST), which falls under the U.S. Department of Commerce, is engaging with stakeholders to develop a voluntary risk management framework for trustworthy AI systems. The output of this project may be analogous to the EU's proposed regulatory framework, but in a voluntary format.

The overall theme of enacted and pending AI regulations globally is maintaining the accountability, transparency, and fairness of AI. For companies leveraging AI technology, ensuring that their systems remain compliant with the various regulations intended to achieve these goals could be difficult and costly. Two aspects of AI's decision-making process make oversight particularly demanding:

Opaqueness: where users can control data inputs and view outputs, but are often unable to explain how and with which data points the system made a decision.

Frequent adaptation: where processes evolve over time as the system learns.

Therefore, it is important for regulators to avoid overburdening businesses to ensure that stakeholders may still leverage AI technology's great benefits in a cost-effective manner. The U.S. has the opportunity to observe the outcomes of the current regulatory action from China and the EU to determine whether their approaches strike a favorable balance. However, the U.S. should potentially accelerate its promulgation of similar laws so that it can play a role in setting the global tone for AI regulatory standards.

Thank you to co-author Lara Coole, a summer associate in Foley & Lardner's Jacksonville office, for her contributions to this post.

Here is the original post:
Artificial Intelligence Regulation Updates: China, EU, and U.S - The National Law Review

Business leaders commemorate anniversary of EqualAI and its new leadership role on the National Artificial Intelligence Advisory Committee – PR…

EqualAI's Miriam Vogel leads committee advising the President and National AI Initiative Office on a range of issues related to artificial intelligence

NEW YORK, Aug. 2, 2022 /PRNewswire/ -- LivePerson (Nasdaq: LPSN), a global leader in customer engagement solutions, joined business leaders in congratulating EqualAI on four years of progress fighting unconscious bias in AI.

EqualAI is an independent nonprofit organization and movement founded in 2018 to reduce unconscious bias in the development and use of artificial intelligence. It is supported by corporate members from the tech industry.

In addition to launching impactful initiatives including the EqualAI Pledge and EqualAI Badge Program for Responsible AI Governance, the organization's president, Miriam Vogel, was recently appointed as Chair of the National Artificial Intelligence Advisory Committee (NAIAC), which advises the US President and National AI Initiative Office on a range of issues related to artificial intelligence.

The NAIAC was established by the US Department of Commerce and consists of leaders with a broad and interdisciplinary range of AI-relevant expertise across academia, nonprofits, civil society, and the private sector.

LivePerson founder and CEO Rob LoCascio said, "In just four years, EqualAI has made an incredible impact on the trajectory of artificial intelligence, bending it toward more responsible and ethical outcomes for all. At LivePerson, we're proud to have played a key role investing in and spearheading these efforts as a founding member of EqualAI. As we celebrate Miriam Vogel's appointment to the NAIAC, we encourage organizations of all kinds to take the EqualAI Pledge and undertake the EqualAI Badge Program to make tangible steps toward responsible AI governance."

Arianna Huffington, founder and CEO of Thrive Global and a founding member of EqualAI, added, "From raising awareness to designing frameworks and initiatives that fight bias, EqualAI has pushed policymakers and business leaders to do more and do better when it comes to artificial intelligence. As AI continues to reshape our daily lives, it's more critical than ever that we come together to ensure it helps, not hurts, the well-being of all of our communities."

In addition to LoCascio and Huffington, EqualAI's leadership includes Karyn Temple, Senior EVP and Global General Counsel at Motion Picture Association; Monica Greenberg, EVP, Corporate Development, Strategic Alliances and General Counsel at LivePerson; Susan Gonzales, CEO of AIandYou; and Reggie Townsend, Director of Data Ethics at SAS. LivePerson, Verizon, and SAS support EqualAI through corporate membership.

To learn more about reducing bias in artificial intelligence, visit EqualAI's website and LivePerson's blog.

About LivePerson, Inc.

LivePerson (NASDAQ: LPSN) is a global leader in customer engagement solutions. We create AI-powered digital experiences that feel Curiously Human. Our customers, including leading brands like HSBC, Orange, and GM Financial, have conversations with millions of consumers as personally as they would with one. Our Conversational Cloud platform powers nearly a billion conversational interactions every month, providing a uniquely rich data set to build connections that reduce costs, increase revenue, and are anything but artificial. Fast Company named us the #1 Most Innovative AI Company in the world. To talk with us or our Conversational AI, please visit liveperson.com.

About EqualAI

EqualAI is a nonprofit organization that was created to reduce unconscious bias in the development and use of artificial intelligence (AI). AI is transforming our society, enabling important and exciting developments that were unimaginable just a few years ago. With these immense benefits comes significant responsibility. Together with leaders and experts across industry, academia, technology, and government, EqualAI is developing standards and tools to increase awareness and reduce bias, as well as identifying regulatory and legislative solutions.

Contact: Mike Tague [emailprotected]

SOURCE LivePerson, Inc.

Read the original post:
Business leaders commemorate anniversary of EqualAI and its new leadership role on the National Artificial Intelligence Advisory Committee - PR...