Artificial intelligence re-imagined to tackle society’s challenges with people at its heart – University of Southampton


Published: 27 November 2020

AI systems will be re-designed to value people as more than passive providers of data in a prestigious new Turing Artificial Intelligence Acceleration Fellowship at the University of Southampton.

The novel research, led by Electronics and Computer Science's Dr Sebastian Stein, will create AI systems that are aware of citizens' preferences and act to maximise the benefit to society.

In these systems, citizens are supported by trusted personal software agents that learn an individual's preferences. Importantly, rather than share this data with a centralised system, the AI agents keep it safe on private smart devices and use it only in their owners' interests.
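As a toy sketch of the on-device pattern described here (the `PreferenceAgent` class and everything in it are illustrative assumptions, not the fellowship's actual software), an agent might keep its observations in local state and expose only a derived choice, never the raw data:

```python
class PreferenceAgent:
    """Toy on-device agent: raw preference data never leaves this object."""

    def __init__(self):
        self._observations = []  # stays on the user's device

    def observe(self, choice, context):
        # Record a local observation of the owner's behaviour.
        self._observations.append((choice, context))

    def preferred(self, options):
        # Share only the derived decision, not the underlying observations.
        counts = {}
        for choice, _ in self._observations:
            counts[choice] = counts.get(choice, 0) + 1
        return max(options, key=lambda o: counts.get(o, 0))

agent = PreferenceAgent()
agent.observe("train", {"trip": "commute"})
agent.observe("train", {"trip": "commute"})
agent.observe("car", {"trip": "shopping"})
print(agent.preferred(["train", "car", "bike"]))  # train
```

A real system would use far richer preference models, but the design point is the same: the central service sees only the agent's answer.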

Over the next five years, the £1.4m fellowship will develop and trial citizen-centric AI systems in a range of application areas, such as smart home energy management, on-demand mobility and disaster response, including the provision of advice and medical support during epidemics like COVID-19.

Dr Stein, of the Agents, Interaction and Complexity (AIC) research group, says: "AI systems are increasingly used to support and often automate decision-making on an unprecedented scale. Such AI systems can draw on a vast range of data sources to make fast, efficient, data-driven decisions to address important societal challenges and potentially benefit millions of people.

"However, building AI systems on such a large and pervasive scale raises a range of important challenges. First, these systems may need access to relevant information from people, such as health-related data, which raises privacy issues and may also encourage people to misrepresent their requirements for personal benefit. Furthermore, the systems must be trusted to act in a manner that aligns with society's ethical values. This includes the minimisation of discrimination and the need to make equitable decisions.

"Novel approaches are needed to build AI systems that are trusted by citizens, that are inclusive and that achieve their goals effectively. To enable this, citizens must be viewed as first-class agents at the centre of AI systems, rather than as passive data sources."

The new vision for AI systems will be achieved by developing techniques that learn the preferences, needs and constraints of individuals to provide personalised services, incentivise socially-beneficial behaviour changes, make choices that are fair, inclusive and equitable, and provide explanations for these decisions.

The Southampton team will draw upon a unique combination of research in multi-agent systems, mechanism design, human-agent interaction and responsible AI.

Dr Stein will work with a range of high-profile stakeholders over the duration of the fellowship. This will include citizen end-users, to ensure the research aligns with their needs and values, as well as industrial partners, to put the research into practice.

Specifically, collaboration with EA Technology and Energy Systems Catapult will generate incentive-aware smart charging mechanisms for electric vehicles. Meanwhile, work with partners including Siemens Mobility, Thales and Connected Places Catapult will develop new approaches for trusted on-demand mobility. Within the Southampton region, the fellowship will engage with the Fawley Waterside development to work on citizen-centric solutions to smart energy and transportation.

The team will also work with Dstl to create disaster response applications that use crowdsourced intelligence from citizens to provide situational awareness, track the spread of infectious diseases or issue guidance to citizens. Further studies with Dstl and Thales will explore applications in national security and policing, and joint work with UTU Technologies will investigate how citizens can share their preferences and recommendations with trusted peers while retaining control over what data is shared and with whom.

Finally, with IBM Research, Dr Stein will develop new explainability and fairness tools, and integrate these with their existing open source frameworks.

Turing Artificial Intelligence Acceleration Fellowships, named after AI pioneer Alan Turing, are supported by a £20 million government investment in AI delivered by UK Research and Innovation (UKRI), in partnership with the Department for Business, Energy and Industrial Strategy, the Office for AI and the Alan Turing Institute.

Science Minister, Amanda Solloway, says: "The UK is the birthplace of artificial intelligence and we therefore have a duty to equip the next generation of Alan Turings, like Southampton's Dr Sebastian Stein, with the tools that will keep the UK at the forefront of this remarkable technological innovation.

"The inspiring AI project we are backing today will help inform UK citizens in their decision making - from managing their energy needs to advising which mode of transport to take - transforming the way we live and work, while cementing the UK's status as a world leader in AI and data."

Digital Minister, Caroline Dinenage, says: "The UK is a nation of innovators and this government investment will help our talented academics use cutting-edge technology to improve people's daily lives - from delivering better disease diagnosis to managing our energy needs."

The University of Southampton has placed Machine Intelligence at the centre of its research activities for more than 20 years and has generated over £50m of funding for associated technologies in the last 10 years across 30 medium to large projects. Southampton draws together researchers and practitioners through its Centre for Machine Intelligence, trains the next generation of AI researchers via its UKRI Centre for Doctoral Training in Machine Intelligence for Nano-Electronic Devices and Systems (MINDS), and was recently chosen to host the UKRI Trustworthy Autonomous Systems (TAS) Hub.

Southampton is also a leading member of the UK's national Alan Turing Institute, with activities co-ordinated by the University's Web Science Institute.


The U.S. government needs to get involved in the A.I. race against China, Nasdaq executive says – CNBC

The U.S. needs to take a "strategic approach" as it competes with China on artificial intelligence, according to a Nasdaq executive.

AI is an area that is going to only develop in partnership with government, and U.S. authorities need to get involved, said Edward Knight, vice chairman of Nasdaq.

The Chinese government has already started "investing heavily" and working with their private sector to develop new technologies based on artificial intelligence, he said.

Beijing in 2017 said it wanted to become the world leader in AI by 2030 and aims to make the industry worth 1 trillion yuan ($152 billion). It included a roadmap about how AI could be developed and deployed.

"I think the U.S. already is leading, but it needs more of a strategic approach involving the government," Knight told CNBC's Dan Murphy as part of FinTech Abu Dhabi, which was held online this year. "The private sector alone cannot take on the entire Chinese government and private sector, which is very focused on this."

A U.S. and a Chinese flag wave outside a commercial building in Beijing. (Teh Eng Koon | AFP | Getty Images)

Predicting that society will benefit from any innovation that comes from artificial intelligence, Knight added: "If the U.S. is going to continue to be a growing economy and innovative economy, it has to master that new technology."

Artificial intelligence refers to technology in which computers or machines imitate human intelligence such as in image and pattern recognition. It is increasingly being used in sectors from financial services to health care, but has been criticized as being "more dangerous than nukes" by Tesla CEO Elon Musk.

Musk fears that AI will develop too quickly for humans to safely manage, but researchers have pushed back, calling him a "sensationalist."

Separately, Knight weighed in on what a Biden presidency would mean for the initial public offering market.

He said the pipeline traditionally slows down when a new president comes into office because there's uncertainty about possible policy changes.

However, he sees low interest rates and the likelihood of a divided government as positive for the IPO market. "We expect there will not be radical, if you will, changes in public policy," Knight said. "Change will come incrementally, and I think that makes markets more predictable."

Meanwhile, the Federal Reserve this month said it would keep rates near zero for as long as necessary to help the economy recover from the effects of Covid-19.

"With more predictable markets and low interest rates, I think you'll continue to have a healthy demand and pipeline for IPOs," Knight said.

He also said the president-elect's priority is managing the coronavirus crisis and "hopefully getting to the place where we have a widely available vaccine," which would act as a foundation for a recovery.

"We cannot have a strong economy with unhealthy American people," he said. "Once we can restore their health and deal with the pandemic, I think you'll start to see the economy fully recover."

CNBC's Arjun Kharpal, Sam Shead and Catherine Clifford contributed to this report.


Everything Is Not Terminator: Assessment Of Artificial Intelligence Systems – Privacy – United States – Mondaq News Alerts


Published in The Journal of Robotics, Artificial Intelligence & Law (January-February 2021)

Many information security and privacy laws, such as the California Consumer Privacy Act1 and the New York Stop Hacks and Improve Electronic Data Security Act,2 require periodic assessments of an organization's information management systems. Because many organizations collect, use, and store personal information from individuals, much of which could be used to embarrass or impersonate those individuals if inappropriately accessed, these laws require organizations to regularly test and improve the security they use to protect that information.

As of yet, there is no similar specific law in the United States directed at artificial intelligence systems ("AIS") requiring the organizations that rely on AIS to test their accuracy, fairness, bias, discrimination, privacy, and security.

However, existing law is broad enough to impose on many organizations a general obligation to assess their AIS, and legislation has appeared that would require certain entities to conduct impact assessments on their AIS. Even without a regulatory mandate, many organizations should perform AIS assessments as a best practice.

This column summarizes current and pending legal requirements before providing more details about the assessment process.

The Federal Trade Commission's ("FTC") authority to police "unfair or deceptive acts or practices in or affecting commerce" through rulemaking and administrative adjudication is broad enough to govern AIS, and the agency has a department that focuses on algorithmic transparency, the Office of Technology Research and Investigation.3 However, the FTC has not issued clear guidance regarding AIS uses that qualify as unfair or deceptive acts or practices. There are general practices that organizations can adopt to minimize their potential for engaging in unfair or deceptive practices, which include conducting assessments of their AIS.4 There is, however, no specific FTC rule obligating organizations to assess their AIS.

There have been some legislative efforts to create such an obligation, including the Algorithmic Accountability Act,5 which was proposed in Congress, and a similar bill proposed in New Jersey,6 both in 2019.

The federal bill would require covered entities to conduct "impact assessments" on their "high-risk" AIS in order to evaluate the impacts of the AIS's design process and training data on "accuracy, fairness, bias, discrimination, privacy, and security."7

The New Jersey bill is similar, requiring an evaluation of the AIS's development process, including the design and training data, for impacts on "accuracy, fairness, bias, discrimination, privacy, and security"; the evaluation must include several elements, including a "detailed description of the best practices used to minimize the risks" and a "cost-benefit analysis."8 The bill would also require covered entities to work with external third parties, independent auditors, and independent technology experts to conduct the assessments, if reasonably possible.9

Although neither of these bills has become law, they represent the expected trend of emerging regulation.10

When organizations rely on AIS to make or inform decisions or actions that have legal or similarly significant effects on individuals, it is reasonable for governments to require that those organizations also conduct periodic assessments of the AIS. For example, state criminal justice systems have begun to adopt AIS that use algorithms to report on a defendant's risk of committing another crime, risk of missing his or her next court date, etc.; human decision makers then use those reports to inform their decisions.11

The idea is that the AIS can be a tool to inform decision makers (police, prosecutors, judges) and help them make better, data-based decisions that eliminate biases they may have against defendants based on race, gender, etc.12 This is potentially a wonderful use for AIS, but only if the AIS actually removes inappropriate and unlawful human bias rather than recreating it.

Unfortunately, the results have been mixed at best, as there is evidence suggesting that some of the AIS in the criminal justice system are merely replicating human bias.

In one example, an African-American teenage girl and a white adult male were each convicted of stealing property totaling about $80. An AIS rated the white defendant as a lower recidivism risk than the teenager, even though he had a much more extensive criminal record, with felonies versus juvenile misdemeanors. Two years after their arrests, the AIS recommendations were revealed to be incorrect: the male defendant was serving an eight-year sentence for another robbery, while the teenager had not committed any further crimes.13 Similar issues have been observed in AIS used in hiring,14 lending,15 health care,16 and school admissions.17
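Disparities like the one described above are the kind of thing an assessment can quantify with simple group metrics. A minimal sketch, using invented illustrative records rather than any real dataset, comparing false positive rates (non-reoffenders flagged as high risk) across two groups:

```python
def false_positive_rate(records):
    """FPR: fraction of people who did not reoffend but were flagged high risk."""
    negatives = [r for r in records if not r["reoffended"]]
    flagged = [r for r in negatives if r["high_risk"]]
    return len(flagged) / len(negatives) if negatives else 0.0

# Illustrative records only, not real data.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]
for g in ("A", "B"):
    rate = false_positive_rate([r for r in records if r["group"] == g])
    print(g, rate)  # A 0.5, then B 0.0
```

A large gap between groups on a metric like this is one concrete signal, among several competing fairness measures, that an AIS is recreating rather than removing bias.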

Although some organizations are conducting AIS assessments without a legal requirement, a larger segment is reluctant to adopt the assessments as a best practice, as many for-profit companies care more about fidelity to the original data used to train their AIS than about eliminating the biases in that original data.18 According to Daniel Soukup, a data scientist with Mostly AI, a start-up experimenting with controlling biases in data, "There's always another priority, it seems. . . . You're trading off revenue against making fair predictions, and I think that is a very hard sell for these institutions and these organizations."19

I suspect, though, that the tide will turn in the other direction in the near future, with or without a direct legislative impetus, similar to the trend in privacy rights and operations. Although most companies in the United States are not subject to broad privacy laws like the California Consumer Privacy Act or the European Union's General Data Protection Regulation, I have observed an increasing number of clients that want to provide the privacy rights afforded by those laws, either because their customers expect them to or because they want to position themselves as companies that care about individuals' privacy.

It is not hard to see a similar trend developing among companies that rely on AIS. As consumers become more aware of the problematic issues involved in AIS decision-making (potential bias, use of sensitive personal information, security of that information, the significant effects, lack of oversight, etc.), they will become just as demanding about AIS requirements as about privacy requirements. As with privacy, consumer expectations will likely be pushed in that direction by jurisdictions that adopt AIS assessment legislation, even if the consumers do not live in those jurisdictions.

Organizations that are looking to perform AIS assessments now, in anticipation of regulatory activity and consumer expectations, should conduct an assessment consistent with the following principles and goals:

Consistent with the New Jersey Algorithmic Accountability Act, any AIS assessment should be done by an outside party, preferably by qualified AI counsel, who can retain a technological consultant to assist them. This serves two functions.

First, it avoids the situation in which the developers that created the AIS for the organization are also assessing it, which could result in a conflict of interest, as the developers have an incentive to assess the AIS in a way that is favorable to their work.

Second, by retaining outside AI counsel, in addition to benefiting from the counsel's expertise, organizations are able to claim that the resulting assessment report and any related work product are protected by attorney-client privilege in the event of litigation or a government investigation related to the AIS. Companies that experience or anticipate a data security breach or event retain outside information security counsel for similar reasons, as the resulting breach analysis could be discoverable if outside counsel is not properly retained. The results can be very expensive if the breach report is mishandled.

For example, Capital One recently entered into an $80 million Consent Order with the Department of the Treasury related to a data incident, after a federal court ruled that a breach report prepared for Capital One was not properly coordinated through outside counsel and therefore not protected by attorney-client privilege.20

An AIS assessment should identify, catalogue, and describe the risks of an organization's AIS.

Properly identifying these risks, among others, and describing how the AIS impacts each will allow an organization to understand the issues it must address to improve its AIS.21

Once the risks in the AIS are identified, the assessment should focus on how the organization alerts impacted populations. This can take the form of a public-facing AI policy, posted and maintained in a manner similar to an organization's privacy policy.22 It can also take the form of more pointed pop-up prompts, a written disclosure and consent form, an automated verbal statement in telephone interactions, etc. The appropriate form of the notice will depend on a number of factors, including the organization, the AIS, the at-risk populations, the nature of the risks involved, etc. The notice should include the relevant rights regarding AIS afforded by privacy laws and other regulations.

After implementing appropriate notices, the organization should anticipate receiving comments from members of the impacted populations and the general public. The assessment should help the organization implement a process that allows it to accept, respond to, and act on those comments. This may be similar to how organizations process privacy rights requests from consumers and data subjects, particularly when a notice addresses those rights. The assessment may recommend that certain employees be tasked with accepting and responding to comments, that the organization add operative capabilities to address privacy rights impacting AIS or risks identified in the assessment and objected to by comments, etc. It may be helpful to have a technological consultant provide input on how the organization can leverage its technology to assist in this process.

The assessment should help the organization remediate identified risks. The nature of the remediation will depend on the nature of the risks, the AIS, and the organization. Any outside AIS counsel conducting the assessment needs to be well-versed in the various forms remediation can take. In some instances, properly noticing the risk to the relevant individuals will be sufficient, per both legal requirements and the organization's principles. Other risks cannot or should not be "papered over," but rather obligate the organization to reduce the AIS's potential to injure.23 This may include adding more human oversight, at least temporarily, to check the AIS's output for discriminatory activity or bias. A technology consultant may be able to advise the organization on revising the code or procedures of the AIS to address the identified risks.

Additionally, where the AIS is evidencing bias because of the data used to train it, more appropriate historical data or even synthetic data may be used to retrain the AIS to remove or reduce its discriminatory behavior.24
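One simple version of the retraining idea, sketched here with a hypothetical `rebalance` helper rather than any specific tool: oversample under-represented groups so each appears equally often in the training set before the model is refit.

```python
import random

def rebalance(rows, group_key):
    """Oversample smaller groups so every group is equally represented."""
    groups = {}
    for row in rows:
        groups.setdefault(row[group_key], []).append(row)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Draw extra samples (with replacement) from under-represented groups.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

rows = [{"group": "A"}] * 6 + [{"group": "B"}] * 2
balanced = rebalance(rows, "group")
print(sum(r["group"] == "B" for r in balanced))  # 6
```

Naive oversampling can amplify noise in the minority group, which is one reason the synthetic-data approaches discussed above are attracting interest as an alternative.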

All organizations that rely on AIS to make decisions that have legal or similarly significant effects on individuals should periodically conduct assessments of their AIS. This is true for all organizations: for-profit companies, non-profit corporations, governmental entities, educational institutions, etc. Doing so will help them avoid potential legal trouble in the event their AIS is inadvertently demonstrating illegal behavior, and it will help ensure the AIS acts consistently with the organization's values.

Organizations that adopt assessments earlier rather than later will be in a better position to comply with AIS-specific regulation when it appears and to develop a brand as an organization that cares about fairness.

Footnotes

* John Frank Weaver, a member of McLane Middleton's privacy and data security practice group, is a member of the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law and writes its "Everything Is Not Terminator" column. Mr. Weaver has a diverse technology practice that focuses on information security, data privacy, and emerging technologies, including artificial intelligence, self-driving vehicles, and drones.

1. Cal. Civ. Code § 1798.150 (granting a private right of action when a business fails to "maintain reasonable security procedures and practices appropriate to the nature of the information," with assessments necessary to identify reasonable procedures).

2. New York General Business Law, Chapter 20, Article 39-F, § 899-bb.2(b)(ii)(A)(3) (requiring entities to assess "the sufficiency of safeguards in place to control the identified risks"), § 899-bb.2(b)(ii)(B)(1) (requiring entities to assess "risks in network and software design"), § 899-bb.2(b)(ii)(B)(2) (requiring entities to assess "risks in information processing, transmission and storage"), and § 899-bb.2(b)(ii)(C)(1) (requiring entities to assess "risks of information storage and disposal").

3. 15 U.S.C. § 45(b); 15 U.S.C. § 57a.

4. John Frank Weaver, "Everything Is Not Terminator: Helping AI to Comply with the Federal Trade Commission Act," The Journal of Artificial Intelligence & Law (Vol. 2, No. 4; July-August 2019), 291-299 (other practices include: establishing a governing structure for the AIS; establishing policies to address the use and/or sale of AIS; establishing notice procedures; and ensuring third-party agreements properly allocate liability and responsibility).

5. Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019).

6. New Jersey Algorithmic Accountability Act, A.B. 5430, 218th Leg., 2019 Reg. Sess. (N.J. 2019).

7. Algorithmic Accountability Act of 2019, supra note 5, at §§ 2(2) and 3(b).

8. New Jersey Algorithmic Accountability Act, supra note 6, at § 2.

9. Id., at § 3.

10. For a fuller discussion of these bills and other emerging legislation intended to govern AIS, see Yoon Chae, "U.S. AI Regulation Guide: Legislative Overview and Practical Considerations," The Journal of Artificial Intelligence & Law (Vol. 3, No. 1; January-February 2020), 17-40.

11. See Jason Tashea, "Courts Are Using AI to Sentence Criminals. That Must Stop Now," Wired (April 17, 2017), https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/.

12. Julia Angwin, Jeff Larson, Surya Mattu, & Lauren Kirchner, "Machine Bias," ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing ("The appeal of the [AIS's] risk scores is obvious. . . . If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long.").

13. Id.

14. Jeffrey Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters (October 9, 2018), https://uk.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUKKCN1MK08G (Amazon "realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way").

15. Dan Ennis and Tim Cook, "Banking from AI lending models raises questions of culpability, regulation," Banking Dive (August 16, 2019), https://www.bankingdive.com/news/artificial-intelligence-lending-bias-model-regulation-liability/561085/#:~:text=Bill%20Foster%2C%20D%2DIL%2C,lenders%20for%20mortgage%20refinancing%20loans ("African-Americans may find themselves the subject of higher-interest credit cards simply because a computer has inferred their race").

16. Shraddha Chakradhar, "Widely used algorithm for follow-up care in hospitals is racially biased, study finds," STAT (October 24, 2019), https://www.statnews.com/2019/10/24/widely-used-algorithm-hospitals-racial-bias/ ("An algorithm commonly used by hospitals and other health systems to predict which patients are most likely to need follow-up care classified white patients overall as being more ill than black patients, even when they were just as sick").

17. DJ Pangburn, "Schools are using software to help pick who gets in. What could go wrong?" Fast Company (May 17, 2019), https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong ("If future admissions decisions are based on past decision data, Richardson warns of creating an unintended feedback loop, limiting a school's demographic makeup, harming disadvantaged students, and putting a school out of sync with changing demographics.").

18. Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem - If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html.

19. Id.

20. In the Matter of Capital One, N.A., Capital One Bank (USA), N.A., Consent Order (Document #2020-036), Department of the Treasury, Office of the Comptroller of the Currency, AA-EC-20-51 (August 5, 2020), https://www.occ.gov/static/enforcement-actions/ea2020-036.pdf; In re: Capital One Consumer Data Security Breach Litigation, MDL No. 1:19md2915 (AJT/JFA) (E.D. Va. May 26, 2020).

21. For a great discussion of identifying risks in AIS, see Nicol Turner Lee, Paul Resnick, and Genie Barton, "Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms," Brookings (May 22, 2019), https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

22. For more discussion of public-facing AI policies, see John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies, Part I," The Journal of Artificial Intelligence & Law (Vol. 2, No. 1; January-February 2019), 59-65; John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies, Part II," The Journal of Artificial Intelligence & Law (Vol. 2, No. 2; March-April 2019), 141-146.

23. For a broad overview of remediating AIS, see James Manyika, Jake Silberg, and Brittany Presten, "What Do We Do About Biases in AI?" Harvard Business Review (October 25, 2019), https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

24. There are numerous popular and academic articles exploring this idea, including Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem - If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html, and Lokke Moerel, "Algorithms can reduce discrimination, but only with proper data," IAPP (November 16, 2018), https://iapp.org/news/a/algorithms-can-reduce-discrimination-but-only-with-proper-data/.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


How Artificial Intelligence Will Impact The Future Of Tech Jobs – Utah Public Radio

Artificial intelligence may seem like something out of a science fiction movie, but it's used in everything from ride-sharing apps to personalized online shopping suggestions.

A common concern with artificial intelligence, or AI, is that it will take over jobs as more tasks become automated. Char Sample, a chief research scientist at the Idaho National Laboratory, believes this is likely, but instead of robots serving you lunch, AI may have more of an impact on cybersecurity and other white-collar jobs.

"The people who are blue collar jobs that work in service industry, they're probably not going to be as impacted by AI, but the jobs that are more repetitive in nature, like students who are graduating with cybersecurity degrees, some of their early jobs are running scans and auditing systems, those jobs could be replaced," Sample said.

This may have a disproportional effect on jobs in tech hubs, like Salt Lake City. However, as AI becomes increasingly prevalent, AI-related jobs, and the cities where these jobs are sourced, are expected to grow.

If we want to expand beyond AI's current capabilities, Sample thinks researchers need to be ambitious and think outside the box.

"Yeah, I firmly believe we need an AI moonshot initiative. And right now, I'm seeing a lot of piecemeal, even though some of the pieces of the piecemeal are very big, they lack that comprehensive overview that says, let's look at all aspects of artificial intelligence," Sample said.

Not only could a moonshot push AI forward, but it would bring in people with diverse backgrounds to improve AI.

"I'm hoping that if we were able to do such a thing, as a moonshot, we could look at it across the whole spectrum of disciplines, and gain a new understanding of how this works, and we can use it to our advantage," Sample said.

Sample spoke about artificial intelligence at USU's Science Unwrapped program this fall. For information on how to watch her recorded presentation, visit http://www.usu.edu/unwrapped/presentations/2020/smart-cookies-october-2020.


Artificial Intelligence Usage on the Rise – Rockland County Times

Steven Kemler Says AI is increasingly effective and in demand

Machine learning and artificial intelligence (AI) have captured our imaginations for decades but, until recently, had limited practical application. Steven Kemler, an entrepreneurial business leader and Managing Director of the Stone Arch Group, says that with recent increases in available data and computing power, AI already impacts our lives on many levels, and that going forward, self-teaching algorithms will play an increasingly important role both in society and in business.

In 1997, Deep Blue, developed by IBM, became the first computer / artificial intelligence system to beat a reigning world chess champion (Garry Kasparov), significantly elevating interest in the practical applications of AI. These practical uses still took years to develop, with the worldwide market for AI technology not reaching $10 billion until 2016. Since then, AI market growth has accelerated significantly, reaching $50 billion in 2020 and expected to exceed $100 billion by 2024, according to the Wall Street Journal.

Kemler says AI and machine learning are playing a leading role in technological innovation across a wide spectrum of industries from healthcare and education, to transportation and the military. Many large corporations are using machine learning and AI to more accurately target customers based on their digital footprints, and in finance, AI is being widely used to power high speed trading systems and reduce fraud.

Intelligence agencies and the military are spending heavily on AI to analyze very large data sets and detect potential threats earlier than humans would normally be able to do so, including through the use of facial recognition. AI powered facial recognition is not only helpful for security purposes but can be used to identify lockdown and quarantine-avoiders and track the movements of individuals displaying symptoms. Despite privacy concerns, evidence suggests that the public is becoming more tolerant of these surveillance tactics and other uses of AI that would previously have been considered overly invasive.

Kemler points out that we can expect research and development in AI and the machine learning field to lead to continued breakthroughs in the health sciences, including in the prevention and treatment of viruses. According to an article recently published in the Lancet, a well-respected medical journal, "[there is] a strong rationale for using AI-based assistive tools for drug repurposing medications for human disease, including during the COVID-19 pandemic." For more insights from Steven Kemler, visit his LinkedIn and Twitter platforms.

Read more:
Artificial Intelligence Usage on the Rise - Rockland County Times

Sheremetyevo Shows How It Uses Artificial Intelligence to Effectively Plan and Execute Airport Functions and Activities – PRNewswire

MOSCOW, Nov. 25, 2020 /PRNewswire/ -- Sergei Konyakhin, Director of the Production Modeling Department of JSC Sheremetyevo International Airport, gave a presentation at the Artificial Intelligence Systems 2020 conference on November 24, showing how Sheremetyevo International Airport uses artificial intelligence (AI) systems to effectively manage the airport.

The conference was part of the online forum TAdviser Summit 2020: Results of the Year and Plans for 2021. The discussion among top managers of large companies and leading experts in the IT industry centered on issues related to the implementation of artificial intelligence technologies in the activities of Russian enterprises.

Sheremetyevo Airport has developed and implemented systems for automatic long-term and short-term planning of personnel and resources. As a result, the planning system was calibrated based on real processes and its previous weaknesses were eliminated; recommendation systems were implemented allowing dispatchers to manage resources taking into account future events; and the company was able to significantly optimize expenses.

The company is looking at developing AI systems in the near future for automatic dispatching, automation of administrative personnel functions, and providing top management with transparent reporting and detailed factor analysis.

In the long term, the use of artificial intelligence systems will help maintain high-quality services for passengers and airlines and ensure the punctuality of flights, while taking into account the long-term growth of passenger and cargo traffic.

Sheremetyevo is the largest airport in Russia and has the largest terminal and airfield infrastructure in the country, including six passenger terminals with a total area of more than 570,000 square meters, three runways, a cargo terminal with a capacity of 380,000 tonnes of cargo annually, and other facilities. The uninterrupted operation of all Sheremetyevo systems requires precise planning, scheduling of all processes, and efficient allocation of resources. At the same time, forecasting the airport's production activities needs to take into account a number of specific factors.

Sheremetyevo International Airport is among the TOP-10 airport hubs in Europe and the largest Russian airport in terms of passenger and cargo traffic. The route network comprises more than 230 destinations. In 2019, the airport served 49,933,000 passengers, 8.9% more than in 2018. Sheremetyevo is the best airport in Europe in terms of quality of service, the absolute world leader in punctuality of flights, and the holder of the highest 5-star Skytrax rating.

You can find additional information at http://www.svo.aero

TAdviser.ru is the largest business portal in Russia on corporate informatization, a leading organizer of events in this area, and a resource on which a unique knowledge base is formed.

TAdviser.ru provides convenient mechanisms for finding the right IT solution and IT supplier based on information about implementations and the experience of companies. The site's audience exceeds 1 million people. The portal's target audience comprises representatives of customer companies interested in obtaining complete and objective information from an independent source, companies that provide IT solutions, and observers of the development of the IT market in Russia (investors, officials, the media, the expert community, etc.).

SOURCE Sheremetyevo International Airport

Continue reading here:
Sheremetyevo Shows How It Uses Artificial Intelligence to Effectively Plan and Execute Airport Functions and Activities - PRNewswire

Siemens : Mindsphere application "Predictive Service Assistance" uses artificial intelligence to optimize the maintenance efficiency of…

Press

Nuremberg, November 25, 2020

Digital Enterprise SPS Dialog

Mindsphere app Predictive Service Assistance uses artificial intelligence to optimize maintenance efficiency of drive systems

Siemens has supplemented the Mindsphere application Predictive Service Assistance with an AI-based module. The new module identifies concrete fault patterns in motors at an early stage, such as misalignment or a defective bearing, and thus helps users reduce downtimes and further improve spare parts and maintenance processes. The module uses a neural network to solve what was previously handled by a defined KPI limit value: it can detect anomalies even before the defined limit is reached and provides clear indications of the type and severity of faults and their development. As soon as the application detects signs of a fault, it warns the user and generates a due date indicating when the fault should ideally be corrected, along with the corrective action recommended to prevent an unplanned downtime. Predictive Service Assistance with the new AI-based module is offered as part of a Predictive Service Assessment. The package includes a customized configuration service that ensures the Mindsphere application with the new module runs optimally according to the customer's requirements.
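The advantage of a learned model over a hard KPI threshold can be sketched in a few lines. This is an illustrative toy example only: Siemens has not published the module's internals, and all names, signals, and thresholds below are invented. A baseline learned from healthy operation flags a slowly developing fault well before the signal crosses a fixed limit.

```python
import numpy as np

def fixed_kpi_alarm(signal, limit):
    """Classic approach: alarm only once the KPI crosses a hard limit."""
    return np.flatnonzero(signal > limit)

def learned_baseline_alarm(signal, train_len=100, z_thresh=4.0):
    """Model-based approach (toy stand-in for a neural network):
    learn the healthy baseline from an initial window, then flag
    statistically abnormal drift before any fixed limit is reached."""
    baseline = signal[:train_len]
    mu, sigma = baseline.mean(), baseline.std()
    return np.flatnonzero((signal - mu) / sigma > z_thresh)

# Synthetic vibration level: healthy around 1.0, with a slow
# bearing-wear drift starting at sample 300.
rng = np.random.default_rng(0)
signal = 1.0 + 0.02 * rng.standard_normal(500)
signal[300:] += np.linspace(0.0, 0.5, 200)  # slow fault development

first_fixed = fixed_kpi_alarm(signal, limit=1.4)[0]
first_learned = learned_baseline_alarm(signal)[0]
print(first_learned, first_fixed)  # learned baseline fires earlier
```

The gap between the two alarm times is the early-warning window in which maintenance can be scheduled before an unplanned downtime, which is the benefit the press release attributes to the module.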

The Mindsphere application Predictive Service Assistance is a central element of Predictive Services for Drive Systems, a standardized extension to the local service contract. It is used for more efficient maintenance of Sinamics and Simotics drive systems used on pumps, fans, and compressors, among other things. With Predictive Services for Drive Systems, customers benefit from higher productivity and reduced unplanned downtime of their machines and systems. With the support of the respective Mindsphere application, users also enjoy full transparency on spare parts and maintenance activities to minimize risks through simple weak-point analysis. The application also contributes to more efficient maintenance and reduced planned downtimes.

Siemens AG
Communications, Head: Clarissa Haller
Werner-von-Siemens-Straße 1, 80333 Munich, Germany

Reference number: HQDIPR202011246072EN

With Predictive Services, Siemens offers a comprehensive range of services for industry. Each industry requires specific predictive services, which the company has developed based on its extensive industry know-how. The modular services for the collection, analysis and evaluation of machine data are adapted to the requirements of different industries.

This press release and a press picture are available at https://sie.ag/360GfSb

For further information regarding the Digital Enterprise SPS Dialog 2020, please see http://www.siemens.com/sps-dialog

For further information regarding Predictive Services for Drive Systems, please see http://www.siemens.com/drivesystemservices


Contact for journalists

Katharina Lamsa

Phone: +49 172 8413539

E-mail: katharina.lamsa@siemens.com

Follow us on Social Media

Twitter: http://www.twitter.com/siemens_press and http://www.twitter.com/SiemensIndustry

Blog: https://ingenuity.siemens.com/

Siemens Digital Industries (DI) is an innovation leader in automation and digitalization. Closely collaborating with partners and customers, DI drives the digital transformation in the process and discrete industries. With its Digital Enterprise portfolio, DI provides companies of all sizes with an end-to-end set of products, solutions and services to integrate and digitalize the entire value chain. Optimized for the specific needs of each industry, DI's unique portfolio supports customers to achieve greater productivity and flexibility. DI is constantly adding innovations to its portfolio to integrate cutting-edge future technologies. Siemens Digital Industries has its global headquarters in Nuremberg, Germany, and has around 76,000 employees internationally.

Siemens AG (Berlin and Munich) is a global technology powerhouse that has stood for engineering excellence, innovation, quality, reliability and internationality for more than 170 years. Active around the world, the company focuses on intelligent infrastructure for buildings and distributed energy systems and on automation and digitalization in the process and manufacturing industries. Siemens brings together the digital and physical worlds to benefit customers and society. Through Mobility, a leading supplier of intelligent mobility solutions for rail and road transport, Siemens is helping to shape the world market for passenger and freight services. Via its majority stake in the publicly listed company Siemens Healthineers, Siemens is also a world-leading supplier of medical technology and digital health services. In addition, Siemens holds a minority stake in Siemens Energy, a global leader in the transmission and generation of electrical power that has been listed on the stock exchange since September 28, 2020. In fiscal 2020, which ended on September 30, 2020, the Siemens Group generated revenue of €57.1 billion and net income of €4.2 billion. As of September 30, 2020, the company had around 293,000 employees worldwide. Further information is available on the Internet at http://www.siemens.com.


This is an excerpt of the original content. To continue reading it, access the original document here.

Disclaimer

Siemens AG published this content on 25 November 2020 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 27 November 2020 07:50:03 UTC

Publicnow 2020

Follow this link:
Siemens : Mindsphere application "Predictive Service Assistance" uses artificial intelligence to optimize the maintenance efficiency of...

Dyson plans massive investment in robotics and artificial intelligence – Business Live

Dyson is planning a £2.75bn investment to double its product range - and has vowed to grow research into robotics and artificial intelligence at its Hullavington Airfield Campus in Wiltshire.

The company - known for its vacuum cleaners and fans - says it wants to "go beyond the home for the first time" and enter entirely new fields by 2025.

The move comes after it last year scrapped its plans to develop electric cars, saying the project was not "commercially viable".

In the South West, the space where the cars were being developed at Hullavington has become home to Dyson's large and growing robotics and machine learning hub. With its existing campus in nearby Malmesbury, Dyson employs some 4,000 people in Wiltshire working on new technologies and new products.

The new investment will focus on the UK, Singapore and the Philippines.

In Wiltshire, Dyson said new investment "will drive new research in fields of study including products for sustainable healthy indoor environments and wellbeing".

In Singapore Dyson is continuing its plans for its new global head office at St James Power Station. It will also expand its R&D facilities and is planning an advanced manufacturing hub.

In the Philippines, it will create a dedicated software hub in Alabang. It already has a factory in Calamba which makes 13 million Dyson digital motors each year and employs 600 people.

Dyson chief executive Roland Krueger said: "We continue the expansion of our operations in Singapore, UK and South East Asia, as a vital step of our future development.

"Now is the time to invest in new technologies such as energy storage, robotics and software which will drive performance and sustainability in our products for the benefit of Dyson's customers.

"We will expand our existing product categories, as well as enter entirely new fields for Dyson over the next five years. This will start a new chapter in Dyson's development."

See the original post here:
Dyson plans massive investment in robotics and artificial intelligence - Business Live

Luko and Shift Technology Apply Artificial Intelligence to the Fight Against Fraud – PRNewswire

BOSTON and PARIS, Nov. 24, 2020 /PRNewswire/ -- Shift Technology, a provider of AI-native fraud detection and claims automation solutions for the global insurance industry, today announced that its fraud detection technology has been selected by digital-native neo-insurance company Luko.

Since its launch in May 2018, Luko has been forging a new path in the world of homeowners insurance. This pioneering new insurance company employs patented technology which predicts which claims may be filed (water damage, fire, etc.) and convinces policyholders to adopt best practices in terms of prevention. In cases where claims cannot be avoided, Luko relies on technology to shorten the claims process and provide its customers with an exemplary customer experience.

Ensuring that claims are legitimate is a critical component of ensuring a fast, efficient, and accurate claims process. However, Luko's market success and rapid growth exposed that the existing procedures used to detect potential fraudulent claims simply could not keep up. As a result, Luko turned to Shift Technology and its award-winning AI-based insurance fraud detection solution.

"The insurance sector is the target of numerous attempts at fraud, whether opportunistic or resulting from organized crime networks," explained Raphaël Vullierme, co-founder of Luko. "It was therefore essential that we continue to reinforce our processes and technologies in terms of fraud detection, so as to quickly identify potentially illegitimate claims."

In addition to the fraud detection technology offered by Shift, Luko is supported by the deep insurance industry experience of its data science teams. This strong combination of people and technology helps ensure Luko always stays abreast of the latest fraud trends and schemes.

"We have always considered the fight against fraud to be a critical topic for insurers," stated Jeremy Jawish, CEO and co-founder, Shift Technology. "Not only does effective fraud fighting reduce undeserved indemnity pay-outs and dismantle fraud networks, but also supports the digital transformation of the customer journey."

About Shift Technology
Shift Technology delivers the only AI-native fraud detection and claims automation solutions built specifically for the global insurance industry. Our SaaS solutions identify individual and network fraud with double the accuracy of competing offerings, and provide contextual guidance to help insurers achieve faster, more accurate claim resolutions. Shift has analyzed hundreds of millions of claims to date and was presented Frost & Sullivan's 2020 Global Claims Solutions for Insurance Market Leadership Award. For more information please visit http://www.shift-technology.com.

About Luko
Luko is reinventing home insurance, placing social responsibility and technology at the heart of its priorities. The company is now the first neo-insurance firm in France, with more than 100,000 policyholders, and the fastest-growing Insurtech in Europe. More than a simple insurance contract, Luko's ambition is for insurance to change from a model activated as a reaction to a model based on prevention, using internally developed technology. The co-founders, Raphaël Vullierme, a serial entrepreneur, and Benoit Bourdel, have pooled their expertise to create a company with a specific, positive impact, recognized by B Corp certification in July 2019.

Contacts:
Rob Morton
Corporate Communications
Shift Technology
+1.617.416.9216
[emailprotected]

SOURCE Shift Technology

https://www.shift-technology.com

More here:
Luko and Shift Technology Apply Artificial Intelligence to the Fight Against Fraud - PRNewswire

When Melissa McCarthy had lunch with Elon Musk to talk Artificial Intelligence and left terrified – WION

Melissa McCarthy's new comedy 'Superintelligence' centres on artificial intelligence deciding it's time to destroy the world. While bringing it to life, McCarthy and her director/husband Ben Falcone completed major items on their bucket lists, including having lunch with Elon Musk.

"And we sat down and had lunch with Elon Musk," says McCarthy, 50. "Which is not a sentence I thought I was ever going to be able to say."

Melissa McCarthy spoke with USA TODAY about the future horrors she picked up from her chat with Tesla CEO Musk, who has warned that AI is a "fundamental risk to the existence of human civilization".

"We wanted to talk to him about, 'Is this possible?' To our surprise, and slight terror, he was like, 'It's not what if, but when.' It was eye-opening to realize we are doing this dance with technology, friend and foe," she said. "It makes our lives better and worse. It's all up to how we use it, how we program it, even if it is to our own demise, which could possibly be our own fault. I walked out of there just like, 'Oh, boy!'" she further added.

On being asked who paid for lunch, McCarthy quipped, "I have a change belt and sat there counting out pennies and nickels, sliding pennies across the table, saying, 'This should cover it, Mr. Musk.' He finally said, 'I can't watch this anymore.'"

'Superintelligence' is a romantic sci-fi comedy film directed by Ben Falcone and written by Steve Mallory. The film stars Melissa McCarthy in her fourth collaboration with her husband, Falcone. It was digitally released by Warner Bros. Pictures via HBO Max on November 26, 2020.

Read more:
When Melissa McCarthy had lunch with Elon Musk to talk Artificial Intelligence and left terrified - WION