
Category Archives: Artificial Intelligence

Artificial Intelligence Usage on the Rise – Rockland County Times

Posted: November 29, 2020 at 6:24 am

Steven Kemler Says AI is increasingly effective and in demand

Machine learning and artificial intelligence (AI) have captured our imaginations for decades but, until recently, had limited practical application. Steven Kemler, an entrepreneurial business leader and Managing Director of the Stone Arch Group, says that with recent increases in available data and computing power, AI already impacts our lives on many levels, and that going forward, self-teaching algorithms will play an increasingly important role both in society and in business.

In 1997, Deep Blue, developed by IBM, became the first computer/artificial-intelligence system to beat a reigning world chess champion (Garry Kasparov), significantly elevating interest in the practical applications of AI. These practical uses still took years to develop: the worldwide market for AI technology did not reach $10 billion until 2016. Since then, AI market growth has accelerated significantly, reaching $50 billion in 2020 and expected to exceed $100 billion by 2024, according to the Wall Street Journal.

Kemler says AI and machine learning are playing a leading role in technological innovation across a wide spectrum of industries, from healthcare and education to transportation and the military. Many large corporations are using machine learning and AI to more accurately target customers based on their digital footprints, and in finance, AI is being widely used to power high-speed trading systems and reduce fraud.

Intelligence agencies and the military are spending heavily on AI to analyze very large data sets and detect potential threats earlier than humans normally could, including through the use of facial recognition. AI-powered facial recognition is not only helpful for security purposes but can be used to identify people avoiding lockdowns and quarantines and to track the movements of individuals displaying symptoms. Despite privacy concerns, evidence suggests that the public is becoming more tolerant of these surveillance tactics and other uses of AI that would previously have been considered overly invasive.

Kemler points out that we can expect research and development in AI and machine learning to lead to continued breakthroughs in the health sciences, including in the prevention and treatment of viruses. According to an article recently published in The Lancet, a well-respected medical journal, "[there is] a strong rationale for using AI-based assistive tools for drug repurposing medications for human disease, including during the COVID-19 pandemic." For more insights from Steven Kemler, visit his LinkedIn and Twitter platforms.


How Artificial Intelligence Will Impact The Future Of Tech Jobs – Utah Public Radio


Artificial intelligence may seem like something out of a science fiction movie, but it's used in everything from ride-sharing apps to personalized online shopping suggestions.

A common concern with artificial intelligence, or AI, is that it will take over jobs as more tasks become automated. Char Sample, a chief research scientist at the Idaho National Laboratory, believes this is likely, but instead of robots serving you lunch, AI may have more of an impact on cybersecurity and other white-collar jobs.

"The people who are blue collar jobs that work in service industry, they're probably not going to be as impacted by AI. But the jobs that are more repetitive in nature, like students who are graduating with cybersecurity degrees, some of their early jobs are running scans and auditing systems, those jobs could be replaced," Sample said.

This may have a disproportionate effect on jobs in tech hubs, like Salt Lake City. However, as AI becomes increasingly prevalent, AI-related jobs, and the cities where these jobs are sourced, are expected to grow.

If we want to expand beyond AIs current capabilities, Sample thinks researchers need to be ambitious and think outside the box.

"Yeah, I firmly believe we need an AI moonshot initiative. And right now, I'm seeing a lot of piecemeal, even though some of the pieces of the piecemeal are very big, they lack that comprehensive overview that says, let's look at all aspects of artificial intelligence," Sample said.

Not only could a moonshot push AI forward, but it would bring in people with diverse backgrounds to improve AI.

"I'm hoping that if we were able to do such a thing, as a moonshot, we could look at it across the whole spectrum of disciplines, and gain a new understanding of how this works, and we can use it to our advantage," Sample said.

Sample spoke about artificial intelligence at USU's Science Unwrapped program this fall. For information on how to watch her recorded presentation, visit http://www.usu.edu/unwrapped/presentations/2020/smart-cookies-october-2020.


Meet GPT-3. It Has Learned to Code (and Blog and Argue). – The New York Times


Before asking GPT-3 to generate new text, you can focus it on particular patterns it may have learned during its training, priming the system for certain tasks. You can feed it descriptions of smartphone apps and the matching Figma code. Or you can show it reams of human dialogue. Then, when you start typing, it will complete the sequence in a more specific way. If you prime it with dialogue, for instance, it will start chatting with you.

"It has this emergent quality," said Dario Amodei, vice president for research at OpenAI. "It has some ability to recognize the pattern that you gave it and complete the story, give another example."

Previous language models worked in similar ways. But GPT-3 can do things that previous models could not, like write its own computer code. And, perhaps more important, you can prime it for specific tasks using just a few examples, as opposed to the thousands of examples and several hours of additional training required by its predecessors. Researchers call this "few-shot learning," and they believe GPT-3 is the first real example of what could be a powerful phenomenon.
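As a rough illustration of what this priming looks like in practice, the sketch below builds a few-shot prompt by concatenating worked examples ahead of a new query. The helper name, "Q:/A:" formatting, and translation pairs are all invented for illustration; they are not drawn from OpenAI's documentation.

```python
def build_few_shot_prompt(examples, query):
    """Concatenate (input, output) pairs, then pose the new query.
    A model primed with this text tends to continue the pattern."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    blocks.append(f"Q: {query}\nA:")  # left open for the model to complete
    return "\n\n".join(blocks)

# Hypothetical English-to-French pairs used purely as a pattern.
examples = [("cheese", "fromage"), ("apple", "pomme")]
prompt = build_few_shot_prompt(examples, "house")
print(prompt)
```

The resulting text would then be sent to the model; because the prompt ends mid-pattern, the most likely continuation is a French translation of the final word, with no retraining involved.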

"It exhibits a capability that no one thought possible," said Ilya Sutskever, OpenAI's chief scientist and a key figure in the rise of artificial intelligence technologies over the past decade. "Any layperson can take this model and provide these examples in about five minutes and get useful behavior out of it."

This is both a blessing and a curse.

OpenAI plans to sell access to GPT-3 via the internet, turning it into a widely used commercial product, and this year it made the system available to a limited number of beta testers through their web browsers. Not long after, Jerome Pesenti, who leads the Facebook A.I. lab, called GPT-3 "unsafe," pointing to sexist, racist and otherwise toxic language the system generated when asked to discuss women, Black people, Jews and the Holocaust.

With systems like GPT-3, the problem is endemic. Everyday language is inherently biased and often hateful, particularly on the internet. Because GPT-3 learns from such language, it, too, can show bias and hate. And because it learns from internet text that associates atheism with the words "cool" and "correct" and that pairs Islam with "terrorism," GPT-3 does the same thing.

This may be one reason that OpenAI has shared GPT-3 with only a small number of testers. The lab has built filters that warn that toxic language might be coming, but they are merely Band-Aids placed over a problem that no one quite knows how to solve.
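To see why such filters are only Band-Aids, consider a toy blocklist filter; the terms below are harmless stand-ins, not real filter entries, and this is a deliberately naive sketch rather than how OpenAI's filters actually work. It catches exact matches but misses trivial variations, which is roughly the gap the paragraph describes.

```python
BLOCKLIST = {"badword1", "badword2"}  # stand-ins for actual blocked terms

def naive_filter(text):
    """Flag text containing an exact blocklisted word (case-insensitive)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

assert naive_filter("this contains badword1") is True   # exact match caught
assert naive_filter("this contains b4dword1") is False  # obfuscation slips through
```

Real systems use learned classifiers rather than blocklists, but the underlying difficulty is the same: the filter sits on top of a model whose biased associations remain unchanged.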


Everything Is Not Terminator: Assessment Of Artificial Intelligence Systems – Privacy – United States – Mondaq News Alerts



Published in The Journal of Robotics, Artificial Intelligence & Law (January-February 2021)

Many information security and privacy laws, such as the California Consumer Privacy Act[1] and the New York Stop Hacks and Improve Electronic Data Security Act,[2] require periodic assessments of an organization's information management systems. Because many organizations collect, use, and store personal information from individuals, much of which could be used to embarrass or impersonate those individuals if inappropriately accessed, these laws require organizations to regularly test and improve the security they use to protect that information.

As of yet, there is no similar specific law in the United States directed at artificial intelligence systems ("AIS") requiring the organizations that rely on AIS to test its accuracy, fairness, bias, discrimination, privacy, and security.

However, existing law is broad enough to impose on many organizations a general obligation to assess their AIS, and legislation has appeared requiring certain entities to conduct impact assessments on their AIS. Even without a regulatory mandate, many organizations should perform AIS assessments as a best practice.

This column summarizes current and pending legal requirements before providing more details about the assessment process.

The Federal Trade Commission's ("FTC") authority to police "unfair or deceptive acts or practices in or affecting commerce" through rule making and administrative adjudication is broad enough to govern AIS, and it has a department that focuses on algorithmic transparency, the Office of Technology Research and Investigation.[3] However, the FTC has not issued clear guidance regarding AIS uses that qualify as unfair or deceptive acts or practices. There are general practices that organizations can adopt that will minimize their potential for engaging in unfair or deceptive practices, which include conducting assessments of their AIS.[4] However, there is no specific FTC rule obligating organizations to assess their AIS.

There have been some legislative efforts to create such an obligation, including the Algorithmic Accountability Act,[5] which was proposed in Congress, and a similar bill proposed in New Jersey,[6] both in 2019.

The federal bill would require covered entities to conduct "impact assessments" on their "high-risk" AIS in order to evaluate the impacts of the AIS's design process and training data on "accuracy, fairness, bias, discrimination, privacy, and security."[7]

The New Jersey bill is similar, requiring an evaluation of the AIS's development process, including the design and training data, for impacts on "accuracy, fairness, bias, discrimination, privacy, and security"; the evaluation must include several elements, including a "detailed description of the best practices used to minimize the risks" and a "cost-benefit analysis."[8] It would also require covered entities to work with external third parties, independent auditors, and independent technology experts to conduct the assessments, if reasonably possible.[9]

Although neither of these has become law, they represent the expected trend of emerging regulation.[10]

When organizations rely on AIS to make or inform decisions or actions that have legal or similarly significant effects on individuals, it is reasonable for governments to require that those organizations also conduct periodic assessments of the AIS. For example, state criminal justice systems have begun to adopt AIS that use algorithms to report on a defendant's risk to commit another crime, risk to miss his or her next court date, etc.; human decision makers then use those reports to inform their decisions.[11]

The idea is that the AIS can be a tool to inform decision makers (police, prosecutors, judges) and help them make better, data-based decisions that eliminate biases they may have against defendants based on race, gender, etc.[12] This is potentially a wonderful use for AIS, but only if the AIS actually removes inappropriate and unlawful human bias rather than recreating it.

Unfortunately, the results have been mixed at best, as there is evidence suggesting that some of the AIS in the criminal justice system is merely replicating human bias.

In one example, an African-American teenage girl and a white adult male were each convicted of stealing property totaling about $80. An AIS rated the white defendant as a lower recidivism risk than the teenager, even though he had a much more extensive criminal record, with felonies versus juvenile misdemeanors. Two years after their arrests, the AIS recommendations were revealed to be incorrect: the male defendant was serving an eight-year sentence for another robbery, while the teenager had not committed any further crimes.[13] Similar issues have been observed in AIS used in hiring,[14] lending,[15] health care,[16] and school admissions.[17]

Although some organizations are conducting AIS assessments without a legal requirement, a larger segment is reluctant to adopt the assessments as a best practice, as many for-profit companies care more about fidelity to the original data used to train their AIS than they do about eliminating the biases in that original data.[18] According to Daniel Soukup, a data scientist with Mostly AI, a start-up experimenting with controlling biases in data, "There's always another priority, it seems. . . . You're trading off revenue against making fair predictions, and I think that is a very hard sell for these institutions and these organizations."[19]

I suspect, though, that the tide will turn in the other direction in the near future, with or without a direct legislative impetus, similar to the trend in privacy rights and operations. Although most companies in the United States are not subject to broad privacy laws like the California Consumer Privacy Act or the European Union's General Data Protection Regulation, I have observed an increasing number of clients that want to provide the privacy rights afforded by those laws, either because their customers expect them to or because they want to position themselves as companies that care about individuals' privacy.

It is not hard to see a similar trend developing among companies that rely on AIS. As consumers become more aware of the problematic issues involved in AIS decision-making (potential bias, use of sensitive personal information, security of that information, the significant effects, lack of oversight, etc.), they will become just as demanding about AIS requirements as they are about privacy requirements. As with privacy, consumer expectations will likely be pushed in that direction by jurisdictions that adopt AIS assessment legislation, even for consumers who do not live in those jurisdictions.

Organizations that are looking to perform AIS assessments now in anticipation of regulatory activity and consumer expectations should conduct an assessment consistent with the following principles and goals:

Consistent with the New Jersey Algorithmic Accountability Act, any AIS assessment should be done by an outside party, preferably by qualified AI counsel, who can retain a technological consultant to assist them. This serves two functions.

First, it will avoid the situation in which the developers that created the AIS for the organization are also assessing it, which could result in a conflict of interest, as the developers have an incentive to assess the AIS in a way that is favorable to their work.

Second, by retaining outside AI counsel, in addition to benefiting from the counsel's expertise, organizations are able to claim that the resulting assessment report and any related work product is protected by attorney-client privilege in the event that there is litigation or a government investigation related to the AIS. Companies that experience or anticipate a data security breach or event retain outside information security counsel for similar reasons, as the resulting breach analysis could be discoverable if outside counsel is not properly retained. The results can be very expensive if the breach report is mishandled.

For example, Capital One recently entered into an $80 million Consent Order with the Department of the Treasury related to a data incident, after a federal court ruled that a breach report prepared for Capital One was not properly coordinated through outside counsel and therefore not protected by attorney-client privilege.[20]

An AIS assessment should identify, catalogue, and describe the risks of an organization's AIS.

Properly identifying these risks, among others, and describing how the AIS impacts each will allow an organization to understand the issues it must address to improve its AIS.[21]

Once the risks in the AIS are identified, the assessment should focus on how the organization alerts impacted populations. This can take the form of a public-facing AI policy, posted and maintained in a manner similar to an organization's privacy policy.[22] It can also take the form of more pointed pop-up prompts, a written disclosure and consent form, an automated verbal statement in telephone interactions, etc. The appropriate form of notice will depend on a number of factors, including the organization, the AIS, the at-risk populations, the nature of the risks involved, etc. The notice should include the relevant rights regarding AIS afforded by privacy laws and other regulations.

After implementing appropriate notices, the organization should anticipate receiving comments from members of the impacted populations and the general public. The assessment should help the organization implement a process that allows it to accept, respond to, and act on those comments. This may be similar to how organizations process privacy rights requests from consumers and data subjects, particularly when a notice addresses those rights. The assessment may recommend that certain employees be tasked with accepting and responding to comments, that the organization add operative capabilities to address privacy rights impacting AIS or risks identified in the assessment and objected to by comments, etc. It may be helpful to have a technological consultant provide input on how the organization can leverage its technology to assist in this process.

The assessment should help the organization remediate identified risks. The nature of the remediation will depend on the nature of the risks, the AIS, and the organization. Any outside AIS counsel conducting the assessment needs to be well-versed in the various forms remediation can take. In some instances, properly noticing the risk to the relevant individuals will be sufficient, per both legal requirements and the organization's principles. Other risks cannot or should not be "papered over," but rather obligate the organization to reduce the AIS's potential to injure.[23] This may include adding more human oversight, at least temporarily, to check the AIS's output for discriminatory activity or bias. A technology consultant may be able to advise the organization regarding revising the code or procedures of the AIS to address the identified risks.

Additionally, where the AIS is evidencing bias because of the data used to train it, more appropriate historical data or even synthetic data may be used to retrain the AIS to remove or reduce its discriminatory behavior.[24]
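One simple version of the retraining idea is to reweight (or resample) the existing training data so that each group/outcome combination carries equal influence. The sketch below uses invented records and is only meant to show the arithmetic, not a production remediation pipeline.

```python
from collections import Counter

# Invented (group, outcome) training records; group "B" dominates outcome 0.
samples = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 0), ("B", 1)]

counts = Counter(samples)   # size of each (group, outcome) cell
n_cells = len(counts)
total = len(samples)

# Weight each record inversely to its cell size, so every cell contributes
# the same total weight while the overall weight mass stays len(samples).
weights = [total / (n_cells * counts[s]) for s in samples]

per_cell = {cell: sum(w for s, w in zip(samples, weights) if s == cell)
            for cell in counts}
print(per_cell)  # every cell sums to the same weight, 7/4 = 1.75
```

Most training libraries accept such per-sample weights directly, so a skewed historical dataset can be rebalanced without discarding records; synthetic data generation pursues the same goal by adding records to the underrepresented cells instead.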

All organizations that rely on AIS to make decisions that have legal or similarly significant effects on individuals should periodically conduct assessments of their AIS. This is true for all organizations: for-profit companies, non-profit corporations, governmental entities, educational institutions, etc. Doing so will help them avoid potential legal trouble in the event their AIS is inadvertently demonstrating illegal behavior, and ensure the AIS acts consistently with the organization's values.

Organizations that adopt assessments earlier rather than later will be in a better position to comply with AIS-specific regulation when it appears and to develop a brand as an organization that cares about fairness.

Footnotes

* John Frank Weaver, a member of McLane Middleton's privacy and data security practice group, is a member of the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law and writes its "Everything Is Not Terminator" column. Mr. Weaver has a diverse technology practice that focuses on information security, data privacy, and emerging technologies, including artificial intelligence, self-driving vehicles, and drones.

1. Cal. Civ. Code 1798.150 (granting a private right of action when a business fails to "maintain reasonable security procedures and practices appropriate to the nature of the information," with assessments necessary to identify reasonable procedures).

2. New York General Business Law, Chapter 20, Article 39-F, 899-bb.2(b)(ii)(A)(3) (requiring entities to assess "the sufficiency of safeguards in place to control the identified risks"), 899-bb.2(b)(ii)(B)(1) (requiring entities to assess "risks in network and software design"), 899-bb.2(b)(ii)(B)(2) (requiring entities to assess "risks in information processing, transmission and storage"), and 899-bb.2(b)(ii)(C)(1) (requiring entities to assess "risks of information storage and disposal").

3. 15 U.S.C. 45(b); 15 U.S.C. 57a.

4. John Frank Weaver, "Everything Is Not Terminator: Helping AI to Comply with the Federal Trade Commission Act," The Journal of Robotics, Artificial Intelligence & Law (Vol. 2, No. 4; July-August 2019), 291-299 (other practices include: establishing a governing structure for the AIS; establishing policies to address the use and/or sale of AIS; establishing notice procedures; and ensuring third-party agreements properly allocate liability and responsibility).

5. Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019).

6. New Jersey Algorithmic Accountability Act, A.B. 5430, 218th Leg., 2019 Reg. Sess. (N.J. 2019).

7. Algorithmic Accountability Act of 2019, supra note 5, at 2(2) and 3(b).

8. New Jersey Algorithmic Accountability Act, supra note 6, at 2.

9. Id., at 3.

10. For a fuller discussion of these bills and other emerging legislation intended to govern AIS, see Yoon Chae, "U.S. AI Regulation Guide: Legislative Overview and Practical Considerations," The Journal of Robotics, Artificial Intelligence & Law (Vol. 3, No. 1; January-February 2020), 17-40.

11. See Jason Tashea, "Courts Are Using AI to Sentence Criminals. That Must Stop Now," Wired (April 17, 2017), https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/.

12. Julia Angwin, Jeff Larson, Surya Mattu, & Lauren Kirchner, "Machine Bias," ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing ("The appeal of the [AIS's] risk scores is obvious. . . . If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long.").

13. Id.

14. Jeffrey Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters (October 9, 2018), https://uk.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUKKCN1MK08G (Amazon "realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way").

15. Dan Ennis and Tim Cook, "Bias from AI lending models raises questions of culpability, regulation," Banking Dive (August 16, 2019), https://www.bankingdive.com/news/artificial-intelligence-lending-bias-model-regulation-liability/561085/#:~:text=Bill%20Foster%2C%20D%2DIL%2C,lenders%20for%20mortgage%20refinancing%20loans ("African-Americans may find themselves the subject of higher-interest credit cards simply because a computer has inferred their race").

16. Shraddha Chakradhar, "Widely used algorithm for follow-up care in hospitals is racially biased, study finds," STAT (October 24, 2019), https://www.statnews.com/2019/10/24/widely-used-algorithm-hospitals-racial-bias/ ("An algorithm commonly used by hospitals and other health systems to predict which patients are most likely to need follow-up care classified white patients overall as being more ill than black patients, even when they were just as sick").

17. DJ Pangburn, "Schools are using software to help pick who gets in. What could go wrong?" Fast Company (May 17, 2019), https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong ("If future admissions decisions are based on past decision data, Richardson warns of creating an unintended feedback loop, limiting a school's demographic makeup, harming disadvantaged students, and putting a school out of sync with changing demographics.").

18. Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem – If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html.

19. Id.

20. In the Matter of Capital One, N.A., Capital One Bank (USA), N.A., Consent Order (Document #2020-036), Department of the Treasury, Office of the Comptroller of the Currency, AA-EC-20-51 (August 5, 2020), https://www.occ.gov/static/enforcement-actions/ea2020-036.pdf; In re: Capital One Consumer Data Security Breach Litigation, MDL No. 1:19md2915 (AJT/JFA) (E.D. Va. May 26, 2020).

21. For a great discussion of identifying risks in AIS, see Nicol Turner Lee, Paul Resnick, and Genie Barton, "Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms," Brookings (May 22, 2019), https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

22. For more discussion of public-facing AI policies, see John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies – Part I," The Journal of Robotics, Artificial Intelligence & Law (Vol. 2, No. 1; January-February 2019), 59-65; John Frank Weaver, "Everything Is Not Terminator: Public-Facing Artificial Intelligence Policies – Part II," The Journal of Robotics, Artificial Intelligence & Law (Vol. 2, No. 2; March-April 2019), 141-146.

23. For a broad overview of remediating AIS, see James Manyika, Jake Silberg, and Brittany Presten, "What Do We Do About Biases in AI?" Harvard Business Review (October 25, 2019), https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

24. There are numerous popular and academic articles exploring this idea, including Todd Feathers, "Fake Data Could Help Solve Machine Learning's Bias Problem – If We Let It," Slate (September 17, 2020), https://slate.com/technology/2020/09/synthetic-data-artificial-intelligence-bias.html, and Lokke Moerel, "Algorithms can reduce discrimination, but only with proper data," IAPP (November 16, 2018), https://iapp.org/news/a/algorithms-can-reduce-discrimination-but-only-with-proper-data/.



How Artificial Intelligence overcomes major obstacles standing in the way of automating complex visual inspection tasks – Quality Magazine


How Artificial Intelligence overcomes major obstacles standing in the way of automating complex visual inspection tasks | 2020-11-27 | Quality Magazine


Sheremetyevo Shows How It Uses Artificial Intelligence to Effectively Plan and Execute Airport Functions and Activities – PRNewswire


MOSCOW, Nov. 25, 2020 /PRNewswire/ -- Sergei Konyakhin, Director of the Production Modeling Department of JSC Sheremetyevo International Airport, gave a presentation at the Artificial Intelligence Systems 2020 conference on November 24 showing how Sheremetyevo International Airport uses artificial intelligence (AI) systems to effectively manage the airport.

The conference was part of the online forum TAdviser Summit 2020: Results of the Year and Plans for 2021. The discussion among top managers of large companies and leading experts in the IT industry centered on issues related to the implementation of artificial intelligence technologies in the activities of Russian enterprises.

Sheremetyevo Airport has developed and implemented systems for automatic long-term and short-term planning of personnel and resources. As a result, the planning system was calibrated based on real processes and its previous weaknesses were eliminated; recommendation systems were implemented allowing dispatchers to manage resources taking into account future events; and the company was able to significantly optimize expenses.

The company is looking at developing AI systems in the near future for automatic dispatching, automation of administrative personnel functions, and providing top management with transparent reporting and detailed factor analysis.

In the long term, the use of artificial intelligence systems will help the airport maintain high-quality services for passengers and airlines and keep flights punctual, while taking into account the long-term growth of passenger and cargo traffic.

Sheremetyevo is the largest airport in Russia and has the largest terminal and airfield infrastructure in the country, including six passenger terminals with a total area of more than 570,000 square meters, three runways, a cargo terminal with a capacity of 380,000 tonnes of cargo annually, and other facilities. The uninterrupted operation of all Sheremetyevo systems requires precise planning, scheduling of all processes, and efficient allocation of resources. At the same time, forecasting the production activities of the airport needs to take into account a number of specific factors, including:

Sheremetyevo International Airport is among the top 10 airport hubs in Europe and the largest Russian airport in terms of passenger and cargo traffic. The route network comprises more than 230 destinations. In 2019, the airport served 49,933,000 passengers, 8.9% more than in 2018. Sheremetyevo is the best airport in Europe in terms of quality of services, the absolute world leader in punctuality of flights, and the holder of the highest 5-star Skytrax rating.

You can find additional information at http://www.svo.aero

TAdviser.ru is the largest business portal in Russia on corporate informatization, a leading organizer of events in this area, a resource on which a unique knowledge base is formed in three areas:

TAdviser.ru provides convenient mechanisms for finding the right IT solution and IT supplier based on information about implementations and the experience of companies. The site's audience exceeds 1 million people. The target audience of the portal comprises representatives of customer companies interested in obtaining complete and objective information from an independent source, companies that provide IT solutions, and observers of the development of the IT market in Russia (investors, officials, the media, the expert community, etc.).

SOURCE Sheremetyevo International Airport

Read the rest here:

Sheremetyevo Shows How It Uses Artificial Intelligence to Effectively Plan and Execute Airport Functions and Activities - PRNewswire


Can a Computer Devise a Theory of Everything? – The New York Times

Posted: at 6:24 am

"By the time that A.I. comes back and tells you that, then we have reached artificial general intelligence, and you should be very scared or very excited, depending on your point of view," Dr. Tegmark said. "The reason I'm working on this, honestly, is because what I find most menacing is, if we build super-powerful A.I. and have no clue how it works, right?"

Dr. Thaler, who directs the new institute at M.I.T., said he was once a skeptic about artificial intelligence but now was an evangelist. He realized that as a physicist he could encode some of his knowledge into the machine, which would then give answers that he could interpret more easily.

"That becomes a dialogue between human and machine in a way that becomes more exciting," he said, "rather than just having a black box you don't understand making decisions for you."

He added, "I don't particularly like calling these techniques artificial intelligence, since that language masks the fact that many A.I. techniques have rigorous underpinnings in mathematics, statistics and computer science."

Yes, he noted, the machine can find much better solutions than he can despite all of his training: "But ultimately I still get to decide what concrete goals are worth accomplishing, and I can aim at ever more ambitious targets knowing that, if I can rigorously define my goals in a language the computer understands, then A.I. can deliver powerful solutions."

Recently, Dr. Thaler and his colleagues fed their neural network a trove of data from the Large Hadron Collider, which smashes together protons in search of new particles and forces. Protons, the building blocks of atomic matter, are themselves bags of smaller entities called quarks and gluons. When protons collide, these smaller particles squirt out in jets, along with whatever other exotic particles have coalesced out of the energy of the collision. To better understand this process, he and his team asked the system to distinguish between the quarks and the gluons in the collider data.

"We said, 'I'm not going to tell you anything about quantum field theory; I'm not going to tell you what a quark or gluon is at a fundamental level,'" he said. "I'm just going to say, 'Here's a mess of data, please separate it into basically two categories.' And it can do it."
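Dr. Thaler's team used a neural network on collider data, which the article does not detail. Purely to illustrate the flavor of "separate unlabeled data into two categories," here is a toy 1-D k-means (k=2) on a single hypothetical jet feature, with no physics labels given to the algorithm:

```python
def two_means(values, iters=20):
    """Minimal 1-D k-means with k=2: split unlabeled values into two groups
    without being told what either group 'is'."""
    c = [min(values), max(values)]  # initial centroids at the extremes
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # boolean index: True (1) when v is closer to centroid c[1]
            groups[abs(v - c[1]) < abs(v - c[0])].append(v)
        # move each centroid to the mean of its group (keep it if the group is empty)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c, groups
```

A real jet-tagging network learns a far richer boundary from many features, but the spirit is the same: the algorithm is handed "a mess of data" and asked to find two populations on its own.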

View original post here:

Can a Computer Devise a Theory of Everything? - The New York Times


Reaping the benefits of artificial intelligence – FoodManufacture.co.uk

Posted: at 6:24 am

Security and food safety

Many factories are introducing smart machinery, taking advantage of the benefits that robots on the production line, connected devices and predictive maintenance offer. However, smart devices require an internet connection, and anything with an internet connection can be hacked, potentially leading to data loss or compromising the safety of the final product.

Who bears contractual responsibility in this event? You? The supplier? The AI itself? There is no clear answer to this, but in our view the responsibility for the actions of a smart device will likely lie with the operator.

We expect that the law will eventually place liability on the supplier automatically in certain circumstances, such as where it fails to ensure that its algorithms are free from bias or discrimination.

Intellectual property and data

The optimisations AI can deliver are the product of the data the machine has learnt from. For the AI tool to tell your business how to optimise its production process or reduce its wastage, you must first hand over valuable information of your own.

The AI's learnings can now be applied to your business, but the AI tool now has another data point: yours. And there is nothing to stop its owner from going to a competitor of yours and teaching it efficiencies based on its newly expanded (thanks to your business) pool of data.

How can you stop data spreading to competitors? How do you share the gains fairly between the two organisations? These are issues that must be documented carefully in your contracts, as the current law does not yet provide a clear answer.

As AI gets more advanced, it may begin to create new ideas, recipes or methods of production, requiring less and less human input to do so. Who would own the intellectual property rights in new inventions created solely by AI?

The legal community is still carefully considering the ownership of IP developed by non-humans, but early-adopters of these technologies should be contemplating the ownership question and documenting it in their contracts now.

In the UK, the regulatory issues surrounding AI are still being debated and different bodies have different views. Some believe regulation is urgently needed, whilst others consider that the technology needs to be more widely deployed before rules dictating its use can be drafted. What is clear, though, is the need to take a holistic approach.

The legal implications of AI cannot be looked at in silos (for instance, only from a data protection perspective or only from an antitrust perspective); any regulation of AI must be reviewed as a whole and the risks and benefits carefully weighed.

This is particularly true for the use of AI technologies in the food manufacturing industry where consumer safety is at stake. It may be too early to build laws controlling AI tools used to manufacture consumer goods, but the consequences of AI getting it wrong could be highly damaging and result in the industry rejecting AI completely, despite its many benefits.

Until the law does catch up, make sure to read the small print on security policies, adhere to best practice information management processes and document agreed terms clearly with suppliers.

Excerpt from:

Reaping the benefits of artificial intelligence - FoodManufacture.co.uk


Luko and Shift Technology Apply Artificial Intelligence to the Fight Against Fraud – PRNewswire

Posted: at 6:24 am

BOSTON and PARIS, Nov. 24, 2020 /PRNewswire/ -- Shift Technology, a provider of AI-native fraud detection and claims automation solutions for the global insurance industry, today announced that its fraud detection technology has been selected by the digital-native neo-insurance company Luko.

Since its launch in May 2018, Luko has been forging a new path in the world of homeowners insurance. This pioneering new insurance company employs patented technology which predicts which claims may be filed (water damage, fire, etc.) and convinces policyholders to adopt best practices in terms of prevention. In cases where claims cannot be avoided, Luko relies on technology to shorten the claims process and provide its customers with an exemplary customer experience.

Ensuring that claims are legitimate is a critical component of ensuring a fast, efficient, and accurate claims process. However, Luko's market success and rapid growth exposed that the existing procedures used to detect potential fraudulent claims simply could not keep up. As a result, Luko turned to Shift Technology and its award-winning AI-based insurance fraud detection solution.
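Shift's actual detection models are proprietary and not described here. As a hedged illustration of the general idea (screening claims against peer behavior before a human review), here is a toy statistical flag: a claim is marked suspicious when its amount is a strong outlier versus claims of the same type. All data and thresholds are hypothetical.

```python
from statistics import mean, stdev

def flag_suspicious(claims, z_threshold=2.5):
    """Flag claims whose amount is far above peer claims of the same type.
    A toy z-score screen, not Shift Technology's actual detection logic."""
    by_type = {}
    for c in claims:
        by_type.setdefault(c["type"], []).append(c)
    flagged = []
    for group in by_type.values():
        amounts = [c["amount"] for c in group]
        if len(amounts) < 3:
            continue  # too few peers to establish a baseline
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma == 0:
            continue  # identical amounts: no spread to measure against
        for c in group:
            if (c["amount"] - mu) / sigma > z_threshold:
                flagged.append(c["id"])
    return flagged
```

In practice such a screen would only route claims to an investigator; combining many weak signals (networks of parties, document anomalies, timing) is what dedicated fraud platforms add on top.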

"The insurance sector is the target of numerous attempts at fraud, whether opportunistic or resulting from organized crime networks," explained Raphal Vullierme, co-founder of Luko. "It was therefore essential that we continue to reinforce our processes and technologies in terms of fraud detection, so as to quickly identify potentially illegitimate claims."

In addition to the fraud detection technology offered by Shift, Luko is supported by the deep insurance industry experience and expertise of its data science teams. This strong combination of people and technology helps ensure Luko stays abreast of the latest fraud trends and schemes.

"We have always considered the fight against fraud to be a critical topic for insurers," stated Jeremy Jawish, CEO and co-founder, Shift Technology. "Not only does effective fraud fighting reduce undeserved indemnity pay-outs and dismantle fraud networks, but also supports the digital transformation of the customer journey."

About Shift Technology
Shift Technology delivers the only AI-native fraud detection and claims automation solutions built specifically for the global insurance industry. Our SaaS solutions identify individual and network fraud with double the accuracy of competing offerings, and provide contextual guidance to help insurers achieve faster, more accurate claim resolutions. Shift has analyzed hundreds of millions of claims to date and was presented Frost & Sullivan's 2020 Global Claims Solutions for Insurance Market Leadership Award. For more information please visit http://www.shift-technology.com.

About Luko
Luko is reinventing home insurance, placing social responsibility and technology at the heart of its priorities. The company is now the first neo-insurance firm in France, with more than 100,000 policyholders, and the Insurtech with the strongest growth in Europe. More than a simple insurance contract, Luko's ambition is for insurance to change from a model activated as a reaction to a model based on prevention, using internally-developed technology. The co-founders, Raphaël Vullierme, a serial entrepreneur, and Benoît Bourdel, have pooled their expertise to create a company with a specific, positive impact, recognized by B Corp certification in July 2019.

Contacts:
Rob Morton
Corporate Communications
Shift Technology
+1.617.416.9216
[emailprotected]

SOURCE Shift Technology

https://www.shift-technology.com

Continue reading here:

Luko and Shift Technology Apply Artificial Intelligence to the Fight Against Fraud - PRNewswire


Explainable-AI (Artificial Intelligence – XAI) Image Recognition Startup Included in Responsible AI (RAI) Solutions Report, by a Leading Global…

Posted: at 6:24 am

POTOMAC, Md., Nov. 23, 2020 /PRNewswire/ --Z Advanced Computing, Inc. (ZAC), the pioneer Explainable-AI (Artificial Intelligence) software startup, was included by Forrester Research, Inc. in their prestigious report: "New Tech: Responsible AI Solutions, Q4 2020" (November 23, 2020, at https://go.Forrester.com/). Forrester Research is a leading global research and advisory firm, performing syndicated research on technology and business, advising major corporations, governments, investors, and financial sectors. ZAC is the first to demonstrate Cognition-based Explainable-AI (XAI), where various attributes and details of 3D (three dimensional) objects can be recognized from any view or angle. "With our superior algorithms, complex 3D objects can be recognized from any direction, using only a small number of training samples," said Dr. Saied Tadayon, CTO of ZAC. "You cannot do this with the other techniques, such as Deep Convolutional Neural Networks (CNN), even with an extremely large number of training samples. That's basically hitting the limitations of CNNs, which others are using now," continued Dr. Bijan Tadayon, CEO of ZAC. "For complex tasks, such as detailed 3D image recognition, you need ZAC Cognitive algorithms. ZAC also requires less CPU/ GPU and electrical power to run, which is great for mobile or edge computing," emphasized Dr. Saied Tadayon.

Read the original:

Explainable-AI (Artificial Intelligence - XAI) Image Recognition Startup Included in Responsible AI (RAI) Solutions Report, by a Leading Global...

