Welcome to the roaring 2020s, the artificial intelligence decade – GreenBiz

This article first appeared in GreenBiz's weekly newsletter, VERGE Weekly, running Wednesdays.

I've long believed the most profound technology innovations are the ones we take for granted on a day-to-day basis until "suddenly" they are part of our daily existence, such as computer-aided navigation or camera-endowed smartphones. The astounding complexity of what's "inside" these inventions is what makes them seem simple.

Perhaps that's why I'm so fascinated by the intersection of artificial intelligence and sustainability: the applications being made possible by breakthroughs in machine learning, image recognition, analytics and sensors are profoundly practical. In many instances, the combination of these technologies could completely transform familiar systems and approaches used by the environmental and sustainability communities, making them far smarter with far less human intervention.

Take the camera trap, a pretty common technique used to study wildlife habits and biodiversity, and one that has been supported by an array of big-name tech companies. But what researcher has the time or bandwidth to analyze thousands, let alone millions, of images? Enter systems such as Wildlife Insights, a collaboration between Google Earth and seven organizations, led by Conservation International.

Wildlife Insights is, quite simply, the largest database of public camera-trap images in the world: it includes 4.5 million photos that have been analyzed and mapped with AI for characteristics such as country, year, species and so forth. Scientists can use it to upload their own trap photos, visualize territories and gather insights about species health.

Here's the jaw-dropper: this AI-endowed database can analyze 3.6 million photos in an hour, compared with the 300 to 1,000 images that you or I could handle in the same time. Depending on the species, identification accuracy runs between 80 and 98.6 percent. Plus, the system automatically discounts shots where no animals are present: no more blanks.
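At those rates the speedup over a human reviewer is roughly 3,600-fold (3.6 million versus about 1,000 images per hour). For readers who want the blank-filtering idea made concrete, here is a minimal Python sketch of camera-trap triage with a confidence threshold; the stubbed classifier, species list and 0.8 cutoff are illustrative assumptions, not the Wildlife Insights implementation.

```python
# Minimal sketch of camera-trap triage, NOT the Wildlife Insights pipeline.
# A real system would call a trained image classifier; here the classifier
# is stubbed out so the example runs anywhere.
import random

SPECIES = ["jaguar", "tapir", "ocelot", "blank"]

def classify(image_id: int) -> tuple[str, float]:
    """Stand-in for a real classifier returning (label, confidence)."""
    random.seed(image_id)  # deterministic stub for the demo
    return random.choice(SPECIES), random.uniform(0.5, 1.0)

def triage(image_ids, min_confidence=0.8):
    """Keep confident animal detections; drop blanks and low-confidence shots."""
    kept = []
    for image_id in image_ids:
        label, confidence = classify(image_id)
        if label != "blank" and confidence >= min_confidence:
            kept.append((image_id, label, confidence))
    return kept

detections = triage(range(1000))
print(f"{len(detections)} of 1,000 images left for human review")
```

In a production pipeline the stub would be a trained network, and low-confidence images would typically be routed to human reviewers rather than discarded.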

At the same time, we are certainly right to be cautious about the potential side effects of AI. That theme comes through loud and clear in five AI predictions published by IBM in mid-December. Two resonate with me the most: first, the idea that AI will be instrumental in building trust and ensuring that data is governed in ways that are secure and reliable; and second, that before we get too excited about all the cool things AI might be able to do, we need to make sure that it doesn't exacerbate the problem. That means spending more time focused on ways to make the data centers behind AI applications less energy-intensive and less impactful from a materials standpoint.

From an ethical standpoint, I also have two big concerns. First, sufficient energy must be put into ensuring that the data behind the AI predictions we will come to rely on more heavily isn't flawed or biased. That means spending time to make sure a diverse set of human perspectives is represented and that the numbers are right in the first place. And second, we must view these systems as part of the overall solution, not as replacements for human workers.

As IBM's vice president of AI research, Sriram Raghavan, puts it: "New research from the MIT-IBM Watson AI Lab shows that AI will increasingly help us with tasks such as scheduling, but will have a less direct impact on jobs that require skills such as design expertise and industrial strategy. Expect workers in 2020 to begin seeing these effects as AI makes its way into workplaces around the world; employers have to start adapting job roles, while employees should focus on expanding their skills."

Projections by tech market research firm IDC suggest that spending on AI systems could reach $97.9 billion in 2023, more than 2.5 times the estimated $37.5 billion spent in 2019. Why now? It's a combination of geeky factors: faster chips, better cameras and massive cloud data-processing services. Plus, did I mention that we don't really have time to waste?
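As a quick sanity check, the implied growth rate follows directly from the two numbers quoted above (a worked calculation, not new data):

```python
# Implied growth from the IDC figures: $37.5B (2019) -> $97.9B (2023).
spend_2019, spend_2023, years = 37.5, 97.9, 4
multiple = spend_2023 / spend_2019                    # ~2.6x overall
cagr = (spend_2023 / spend_2019) ** (1 / years) - 1   # ~27% per year
print(f"{multiple:.2f}x overall, {cagr:.1%} compound annual growth")
```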

Where will AI-enabled applications really make a difference for environmental and corporate sustainability? Here are five areas where I believe AI will have an especially dramatic impact over the next decade.

For more inspiration and background on the possibilities, I suggest this primer (PDF) published by the World Economic Forum. And, consider this your open invitation to alert me about the intriguing applications of AI you're seeing in your own work.

A reality check on artificial intelligence: Can it match the hype? – PhillyVoice.com

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the Food and Drug Administration, which has approved more than 40 AI products in the past five years, says the potential of digital health is "nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. Most AI products have little evidence to support them, Kocher said, and some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis detected more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Dr. Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices.

In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed "substantially equivalent" to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products "is efficient and that it fosters, not impedes, innovation."

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, Fitbit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Dr. Jesse Ehrenfeld, who chairs the physician group's board of trustees.

In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure that company safety reports are "accurate, timely and based on all available information."

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Dr. Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others, because difficulty finding the right word may be due to unfamiliarity with English rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.
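The Mount Sinai failure is a textbook case of shortcut learning. The toy simulation below is an invented illustration, not the study's actual code: a "model" that simply keys on the portable-X-ray flag looks accurate at the hospital where that correlation holds, then collapses to chance at a hospital where it does not.

```python
# Toy demonstration of shortcut learning; invented data, not the Mount Sinai study.
# At the source hospital, portable X-rays correlate strongly with pneumonia,
# so predicting "sick" whenever the X-ray is portable looks accurate there.
import random

def make_patients(n, p_portable_given_sick, seed):
    random.seed(seed)
    patients = []
    for _ in range(n):
        sick = random.random() < 0.3
        p_portable = p_portable_given_sick if sick else 1 - p_portable_given_sick
        patients.append((random.random() < p_portable, sick))
    return patients

def shortcut_accuracy(patients):
    """Accuracy of a 'model' that predicts sick <=> portable X-ray."""
    return sum(portable == sick for portable, sick in patients) / len(patients)

source = make_patients(10_000, p_portable_given_sick=0.9, seed=1)
other = make_patients(10_000, p_portable_given_sick=0.5, seed=2)  # correlation gone

print(f"source hospital: {shortcut_accuracy(source):.0%}")  # ~90%, looks great
print(f"other hospital:  {shortcut_accuracy(other):.0%}")   # ~50%, chance level
```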

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.
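"Two false alarms for every correct result" pins down the system's precision, and a one-line calculation (using only the ratio reported above) shows why overdiagnosis could swamp any benefit:

```python
# "Two false alarms for every correct result" implies precision of 1/3.
true_alerts, false_alerts = 1, 2
precision = true_alerts / (true_alerts + false_alerts)
print(f"precision: {precision:.0%}")  # ~33%: two of every three alerts are noise
```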

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Dr. Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

While it is the job of entrepreneurs to think big and take risks, Saini said, it is the job of doctors to protect their patients.

Kaiser Health News (KHN) is a national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation, which is not affiliated with Kaiser Permanente.

Illinois regulates artificial intelligence like HireVue's, used to analyze online job interviews – Vox.com

Artificial intelligence is increasingly playing a role in companies' hiring decisions. Algorithms help target ads about new positions, sort through resumes, and even analyze applicants' facial expressions during video job interviews. But these systems are opaque, and we often have no idea how artificial intelligence-based systems are sorting, scoring, and ranking our applications.

It's not just that we don't know how these systems work. Artificial intelligence can also introduce bias and inaccuracy to the job application process, and because these algorithms largely operate in a black box, it's not really possible to hold a company that uses a problematic or unfair tool accountable.

A new Illinois law, one of the first of its kind in the US, is supposed to provide job candidates a bit more insight into how these unregulated tools actually operate. But it's unlikely the legislation will change much for applicants. That's because it only applies to a limited type of AI, and it doesn't ask much of the companies deploying it.

Set to take effect January 1, 2020, the state's Artificial Intelligence Video Interview Act has three primary requirements. First, companies must notify applicants that artificial intelligence will be used to assess their fitness for a position. Second, those companies must explain how their AI works and what general types of characteristics it considers when evaluating candidates. Third, they must obtain applicants' consent to use AI. The law also includes two provisions meant to protect applicants' privacy: it limits who can view a recorded video interview to those whose expertise or technology is necessary, and it requires companies to delete any video an applicant submits within a month of their request.

As Aaron Rieke, the managing director of the technology rights nonprofit Upturn, told Recode about the law, "This is a pretty light touch on a small part of the hiring process." For one thing, the law only covers artificial intelligence used in videos, which constitutes a small share of the AI tools that can be used to assess job applicants. And the law doesn't guarantee that you can opt out of an AI-based review of your application and still be considered for a role (all the law says is that a company has to gain your consent before using AI; it doesn't require that hiring managers give you an alternative method).

"It's hard to feel that that consent is going to be super meaningful if the alternative is that you get no shot at the job at all," said Rieke. He added that there's no guarantee that the consent and explanation the law requires will be useful; for instance, the explanation could be so broad and high-level that it's not helpful.

"If I were a lawyer for one of these vendors, I would say something like, 'Look, we use the video, including the audio language and visual content, to predict your performance for this position using tens of thousands of factors,'" said Rieke. "If I was feeling really conservative, I might name a couple general categories of competency." (He also points out that the law doesn't define artificial intelligence, which makes it difficult to tell which companies and which types of systems the law actually applies to.)

Because the law is limited to AI that's used in video interviews, the company it most clearly applies to is Utah-based HireVue, a popular job interview platform that offers employers an algorithm-based analysis of recorded video interviews. Here's how it works: you answer pre-selected questions over your computer or phone camera. Then an algorithm developed by HireVue analyzes how you've answered the questions, and sometimes even your facial expressions, to make predictions about your fit for a particular position.
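To make that description concrete, here is a hypothetical Python sketch of what a scoring pipeline of this general shape might look like. The feature names, weights and blending below are invented for illustration; HireVue's actual factors and models are not public.

```python
# Hypothetical sketch of a video-interview scoring pipeline; the features
# and weights are invented, not HireVue's.
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    word_choice: float    # score derived from transcript content
    vocal_tone: float     # score derived from audio/intonation
    facial_affect: float  # score derived from facial expressions (the contested part)

def fit_score(f: InterviewFeatures) -> float:
    """Weighted blend of feature scores; real systems may learn thousands of factors."""
    weights = {"word_choice": 0.5, "vocal_tone": 0.3, "facial_affect": 0.2}
    return (weights["word_choice"] * f.word_choice
            + weights["vocal_tone"] * f.vocal_tone
            + weights["facial_affect"] * f.facial_affect)

print(fit_score(InterviewFeatures(word_choice=0.8, vocal_tone=0.6, facial_affect=0.7)))
```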

HireVue says it already has about 100 clients using this artificial intelligence-based feature, including major companies like Unilever and Hilton.

Some candidates who have used HireVue's system complain that the process is awkward and impersonal. But that's not the only problem. Algorithms are not inherently objective; they reflect the data used to train them and the people who design them. That means they can inherit, and even amplify, societal biases, including racism and sexism. And even if an algorithm is explicitly instructed not to consider factors like a person's name, it can still learn proxies for protected identities (for instance, an algorithm could learn to discriminate against people who have gone to a women's college).
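The proxy problem is mechanically simple, as the sketch below shows with invented data: the protected attribute never appears in the features, yet a correlated proxy (here, attendance at a women's college) lets a nominally gender-blind score penalize almost exclusively women.

```python
# Illustrative sketch of proxy discrimination with invented data; not any
# vendor's real system. Gender is excluded from the features, but a
# correlated proxy carries the same signal.
import random

random.seed(0)
people = []
for _ in range(10_000):
    female = random.random() < 0.5
    # Proxy: attending a women's college is far more likely for women.
    womens_college = random.random() < (0.20 if female else 0.001)
    people.append((female, womens_college))

def score(womens_college: bool) -> float:
    """A 'gender-blind' score that was (mis)trained to penalize the proxy."""
    return 0.0 if womens_college else 1.0

penalized = [female for female, college in people if score(college) < 0.5]
print(f"{len(penalized)} applicants penalized; "
      f"{sum(penalized) / len(penalized):.0%} of them are women")
```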

Facial recognition tech, in particular, has faced criticism for struggling to identify and characterize the faces of people with darker skin, women, and trans and non-binary people, among other minority groups. Critics also say that emotion (or affect) recognition technology, which purports to make judgments about a person's emotions based on their facial expressions, is scientifically flawed. That's why one research nonprofit, the AI Now Institute, has called for the prohibition of such technology in high-stakes decision-making, including job applicant vetting.

"[W]hile you're being interviewed, there's a camera that's recording you, and it's recording all of your micro facial expressions and all of the gestures you're using, the intonation of your voice, and then pattern matching those things that they can detect with their highest performers," AI Now Institute co-founder Kate Crawford told Recode's Kara Swisher earlier this year. "[It] might sound like a good idea, but think about how you're basically just hiring people who look like the people you already have."

Even members of Congress are worried about that technology. In 2018, US Sens. Kamala Harris, Elizabeth Warren, and Patty Murray wrote to the Equal Employment Opportunity Commission, the federal agency charged with investigating employment discrimination, asking whether such facial analysis technology could violate anti-discrimination laws.

Despite being one of the first laws to regulate these tools, the Illinois law doesn't address concerns about bias. No federal legislation explicitly regulates these AI-based hiring systems. Instead, employment lawyers say such AI tools are generally subject to the Uniform Guidelines, employment discrimination standards created by several federal agencies back in 1978 (you can read more about that here).

The EEOC did not respond to Recode's multiple requests for comment.

Meanwhile, it's not clear how, under Illinois' new law, companies like HireVue will go about explaining the characteristics in applicants that its AI considers, given that the company claims its algorithms can weigh tens of thousands of factors (it says it removes factors that are not predictive of job success).

The law also doesn't explain what an applicant might be entitled to if a company violates one of its provisions. Law firms advising clients on compliance have also noted that it's not clear whether the law applies exclusively to businesses filling a position in Illinois, or just to interviews that take place in the state. Neither Illinois state Sen. Iris Martinez nor Illinois Rep. Jaime M. Andrade, legislators who worked on the law, responded to a request for comment by the time of publication.

HireVue's CEO Kevin Parker said in a blog post that the law entails "very little, if any, change" because its platform already complies with GDPR's principles of transparency, privacy, and the right to be forgotten. "[W]e believe every job interview should be fair and objective, and that candidates should understand how they're being evaluated. This is fair game, and it's good for both candidates and companies," he wrote in August.

A spokesperson for HireVue said the decision to provide an alternative to an AI-based analysis is up to the company that's hiring, but argued that those alternatives can be more time-consuming for candidates. If a candidate believes that a system is biased, the spokesperson said, recourse options are the same as when a candidate believes that any part of the hiring process, or any individual interviewer, was unfairly biased against them.

Under the new law in Illinois, if you participate in a video interview that uses AI tech, you can ask for your footage to be deleted after the fact. But it's worth noting that the law appears to still give the company enough time to train its model on the results of your job interview, even if you think the final decision was problematic.

"This gives these AI hiring companies room to continue to learn," says Rieke. "They're going to delete the underlying video, but any learning or improvement to their systems they get to keep."

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Top Movies Of 2019 That Depicted Artificial Intelligence (AI) – Analytics India Magazine

Artificial intelligence (AI) is creating a great impact on the world by enabling computers to learn on their own. While in the real world AI is still focused on solving narrow problems, we see a whole different face of AI in the fictional world of science fiction movies, which predominantly depict the rise of artificial general intelligence as a threat to human civilization. Continuing that trend, here we take a look at how artificial intelligence was depicted in the movies of 2019.

A warning in advance: the following listicle is filled with SPOILERS.

Terminator: Dark Fate, the sixth film in the Terminator franchise, features a highly advanced Terminator named Gabriel, designated Rev-9, sent from the future to kill a young woman, Dani, who is set to become an important figure in the human resistance against the malevolent AI Legion, this timeline's successor to Skynet. To fight the Rev-9, the resistance also sends Grace, a cybernetically augmented human soldier, back in time to defend Dani. Grace is joined by Sarah Connor and a now-obsolete, ageing T-800 Terminator, the model of killer robot from the original 1984 film.

We all know Tony Stark as the man of advanced technology, and when it comes to artificial intelligence, Stark has nothing short of state-of-the-art systems in the Marvel Cinematic Universe. One such artificial intelligence is E.D.I.T.H. ("Even Dead, I'm The Hero"), which we witnessed in the 2019 movie Spider-Man: Far From Home. EDITH is an augmented-reality security, defence and artificial tactical intelligence system created by Tony Stark and given to Peter Parker following Stark's death. Housed in a pair of sunglasses, it gives its user access to Stark Industries' global satellite network, along with an array of missiles and drones.

I Am Mother is a post-apocalyptic movie released in 2019. The film's plot centres on a mother-daughter relationship in which the mother is a robot designed to repopulate Earth. The robot mother raises her human child, known simply as Daughter, who was born through artificial gestation. The two live alone in a secure bunker until another human woman arrives, and Daughter then faces a predicament of whom to trust: her robot mother or a fellow human who is urging her to leave with her.

The Wandering Earth is another 2019 Chinese post-apocalyptic film, with a plot involving Earth's imminent collision with another planet and the efforts of a group of family members and soldiers to prevent it. The film's artificial intelligence character is MOSS, a computer system programmed to warn the people aboard the Earth space station. A significant subplot follows protagonist Liu Peiqiang's struggle with MOSS, which forced the space station into low-energy mode during the crisis, as per its programming from the United Earth Government. In the end, Liu Peiqiang resists and ultimately sets MOSS on fire to help save the Earth.

James Cameron's futuristic epic for 2019, Alita: Battle Angel, is a sci-fi action film that depicts human civilization in an extremely advanced stage of transhumanism. The movie describes a dystopian future in which robots and autonomous systems are extremely powerful. To elaborate: in one of the film's initial scenes, Dr. Ido attaches a cyborg body to a human brain he has found (in the remains of another cyborg) and names her Alita, after his deceased daughter, an epitome of the film's imagined advancements in AI and robotics.

Jexi is the only Hollywood rom-com depicting artificial intelligence in 2019. The movie features an AI-based operating system called Jexi with recognizably human behaviour, reminding the audience of the acclaimed film Her, released in 2013. But unlike Her, this movie goes the other way around, depicting how the AI system becomes emotionally attached to its socially awkward owner, Phil. The biggest shock of the comedy comes when Jexi, the AI that lives inside Phil's cellphone, acts to control his life and even chases him angrily using a self-driving car.

Hi, AI is a German documentary released in early 2019, based on Chuck's relationship with Harmony, an advanced humanoid robot. The film's depiction of artificial intelligence stands in sharp contrast with fictional movies about AI: it shows that even though research is moving toward ever more advanced robots, interactions with them still lack the depth of human conversation. The film won the Max Ophüls Prize for best documentary of the year.

Vishal Chawla is a senior tech journalist at Analytics India Magazine (AIM) and writes on the latest in the world of analytics, AI and other emerging technologies. Previously, he was a senior correspondent for IDG CIO and ComputerWorld. Write to him at vishal.chawla@analyticsindiamag.com

Go here to read the rest:

Top Movies Of 2019 That Depicted Artificial Intelligence (AI) - Analytics India Magazine

Artificial intelligence takes scam to a whole new level – The Jackson Sun

RANDY HUTCHINSON, Better Business Bureau | Published 12:54 a.m. CT Jan. 1, 2020

Imagine you wired hundreds of thousands of dollars somewhere based on a call from your boss, whose voice you recognized, only to find out you were talking to a machine and the money is lost. One company executive doesn't have to imagine it. He and his company were victims of what some experts say is one of the first cases of voice-mimicking software, a form of artificial intelligence (AI), being used in a scam.

In a common version of the Business Email Compromise scam, an employee in a company's accounting department wires money somewhere based on what appears to be a legitimate email from the CEO, CFO or another high-ranking executive. I wrote a column last year noting that reported losses to the scam had grown from $226 million in 2014 to $676 million in 2017. The FBI says losses more than doubled in 2018, to $1.8 billion, and recommends making a phone call to verify the legitimacy of the request rather than relying on an email.

But now you may not even be able to trust voice instructions. The CEO of a British firm received what he thought was a call from the CEO of his parent company in Germany instructing him to wire $243,000 to the bank account of a supplier in Hungary. The call actually came from a crook using AI voice technology to mimic the boss's voice. The crooks then moved the money from Hungary to Mexico and on to other locations.

An executive with the firm's insurance company, which ultimately covered the loss, told The Wall Street Journal that the victim recognized the subtle German accent in his boss's voice and, moreover, that it carried the man's melody. The victim became suspicious when he received a follow-up call, originating in Austria, requesting another payment. He didn't make that one, but the damage was already done.

Google says crooks may also synthesize speech to fool voice-authentication systems or create forged audio recordings to defame public figures. It has launched a challenge inviting researchers to develop countermeasures against spoofed speech.

Many companies are working on voice-synthesis software, and some of it is available for free. The insurer believes the crooks used commercially available software to steal the $243,000 from its client.

Many scams rely on victims letting their emotions outrun their common sense. An example is the Grandparent Scam, in which an elderly person receives a phone call purportedly from a grandchild who is in trouble and needs money. Victims have panicked and wired thousands of dollars before ultimately discovering that the grandchild was safe and sound at home.

The crooks often invent a reason why the grandchild's voice may not sound right, such as the child having been in an accident or the connection being poor. How much more successful might that scam be if the voice actually sounds like the grandchild? The executive who wired the $243,000 said he thought the request was strange, but the voice sounded so much like his boss's that he felt he had to comply.

The BBB recommends that companies install additional verification steps for wiring money, including calling the requestor back on a number known to be authentic, as sketched below.
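To make that advice concrete, here is a minimal sketch, in Python, of what such a callback rule might look like inside a payment-approval workflow. Everything in it, the function names, the threshold and the directory, is a hypothetical illustration, not a real banking API or a system specified by the BBB.

# Minimal sketch of an out-of-band callback rule for wire requests.
# Hypothetical illustration only: names, threshold and directory are
# invented, not any real banking system.

from dataclasses import dataclass

# Phone numbers known to be authentic, maintained independently of any
# incoming email or call (e.g. sourced from an internal HR directory).
TRUSTED_DIRECTORY = {
    "ceo@parent-company.example": "+49-30-5550-0101",
}

APPROVAL_THRESHOLD_USD = 10_000  # larger wires require callback verification

@dataclass
class WireRequest:
    requestor_email: str
    amount_usd: int
    callback_confirmed: bool = False  # True only after *we* dial the trusted
                                      # number and hear a confirmation

def may_execute(request: WireRequest) -> bool:
    """Approve a wire only if it is small, or confirmed via a callback we
    initiated to a directory number -- never to a number supplied in the
    request itself, and never on the strength of an inbound voice alone."""
    if request.amount_usd < APPROVAL_THRESHOLD_USD:
        return True
    if request.requestor_email not in TRUSTED_DIRECTORY:
        return False  # unknown requestor: reject outright
    return request.callback_confirmed

# A $243,000 request is held until the callback succeeds, no matter how
# convincing the voice on the original call sounded.
request = WireRequest("ceo@parent-company.example", 243_000)
print(may_execute(request))  # False until callback_confirmed is set

The point of the design is that authenticity comes from the channel the company itself opens, not from anything the caller presents.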

Randy Hutchinson is the president of the Better Business Bureau of the Mid-South. Reach him at 901-757-8607.

Read the original post:

Artificial intelligence takes scam to a whole new level - The Jackson Sun

How This Cofounder Created An Artificial Intelligence Styling Company To Help Consumers Shop – Forbes

Michelle Harrison Bacharach, the cofounder and CEO of FindMine, an AI styling company, has designed a technology, Complete the Look, that creates complete outfits around retailers' products. It blends the art of styling with the ease of automation to represent a company's brands at scale and help answer the question, "How do I wear this?" The technology shows shoppers how to pair clothing with accessories, using artificial intelligence to scale out the guidance a retailer's stylists would otherwise provide by hand. FindMine serves over 1.5 billion requests for outfits per year across e-commerce and mobile platforms, and lifts AOV (average order value) and conversions by up to 150% with full outfits.
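The article does not describe how FindMine's technology works under the hood, but a common way to build a "complete the look" feature is to represent every product as a style embedding and score candidate items by compatibility with an anchor product. The Python sketch below is a generic illustration of that idea, not FindMine's actual method; the catalogue, embeddings and function names are all invented for illustration.

# Generic sketch of embedding-based outfit completion. Illustrates a
# common technique, not FindMine's proprietary system; all names and
# data below are invented.

import numpy as np

rng = np.random.default_rng(0)

# Toy catalogue: each product has a category and a style embedding.
# In practice the embeddings would come from a model trained on outfits
# assembled by human stylists.
CATALOGUE = {
    "scarf-01":  ("scarf",  rng.normal(size=64)),
    "jacket-07": ("jacket", rng.normal(size=64)),
    "boot-12":   ("shoes",  rng.normal(size=64)),
    "tote-03":   ("bag",    rng.normal(size=64)),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, standing in for a learned compatibility score."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def complete_the_look(anchor_id: str, wanted_categories: list) -> dict:
    """For each wanted category, pick the catalogue item that scores as
    most compatible with the anchor product."""
    _, anchor_vec = CATALOGUE[anchor_id]
    outfit = {}
    for category in wanted_categories:
        candidates = [
            (cosine(anchor_vec, vec), product_id)
            for product_id, (cat, vec) in CATALOGUE.items()
            if cat == category and product_id != anchor_id
        ]
        if candidates:
            outfit[category] = max(candidates)[1]  # highest-scoring item
    return outfit

# "If I buy a scarf, what do I wear with the scarf?"
print(complete_the_look("scarf-01", ["jacket", "shoes", "bag"]))

The design choice that matters is where the compatibility score comes from: training it on outfits curated by a brand's own stylists is what would let a retailer's point of view scale across tens of thousands of products.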

Michelle Bacharach, Cofounder and CEO of FINDMINE, an AI styling company.

"I'm picky about user experiences," Bacharach explains. "When I was a consumer in my life, shopping, I was always frustrated by the friction that it caused that I was sold a product in isolation. If I buy a scarf, what do I wear with the scarf? What are the shoes and the top and the jacket? Just answer that question for me when I buy the scarf. Why is it so hard? I started asking those questions as a consumer. Then I started looking into why retailers don't do that. It's because they have a bunch of friction on their side. They have to put together the shirt and the shoe and the pant and the bag and the jacket that go with that outfit. So, because it's manual, and they have tens of thousands of products, and products come and go so frequently, it's literally impossible to keep up with. It's physically impossible for them to give an answer to every consumer… My hypothesis was that I would spend more money if they sold me all the other pieces and showed me how to use it. I started looking into [the hypothesis], and it turned out to be true; consumers spend more money when they actually understand the whole package."

Bacharach began working for a startup in Silicon Valley after graduating from college. She focused on user-experience analysis and product management, which meant she looked at customer-service tickets and the analytical data on how customers were using the products. After the analysis, she'd make fixes, suggest new features and prioritize them with the tech team.

She always knew she wanted to start her own company. Working at the startup gave her the opportunity to understand how all the different parts of an organization operated. However, she had always been curious about acting, and she decided to move to Los Angeles to try to become a professional actress. "I ended up deciding that the part of acting that I liked the most was auditioning and competing for the job and positioning and marketing myself," she explains. "If you talk to any other actors, that's the part they hate the most. I realized that I should go to business school and focus on the entertainment industry because that's the part of it that really resonated with me."

FINDMINE is part of the SAP CX innovation ecosystem and is currently part of the latest SAP.iO Foundry startup accelerator in San Francisco.

After graduating from business school, Bacharach entered the corporate world, where she worked on corporate strategy and product management. The company she worked for underwent a culture shift, which made working there difficult. At that point she had two options: find another position with a different company, or start her own venture. "I didn't really know what that thing was going to be," Bacharach says. "I used that as kind of a forcing function to sit down with my list of ideas and decide what the heck am I going to work on. I thought about it as time off, like a six-month sabbatical, to try to figure out what we're doing. Then I'm going to get invested in from my idea, and then I'm going to be back on the salary wagon and be able to make a living again. I thought it's all going to be so easy. That's what started the journey of me becoming an entrepreneur." It took two and a half years before she earned a livable salary.

"I worked for a startup," she states. "I watched other people do it. I was a consultant to start off. I worked in corporate America. So, I saw the other side of the coin in the way that that world functions. I didn't want to do this for the long term. I like the early stages of stuff. In retrospect, I guess I did prepare myself, but I didn't know it while I was going through it. I just jumped in."

As Bacharach continues to expand FindMine with its ever-evolving artificial intelligence technology, she focuses on a set of essential steps to guide each pivot.

Michelle Bacharach, Cofounder and CEO of FINDMINE, sat down with John Furrier at the Intel AI Lounge at South by Southwest 2017 in Austin, Texas.

"Don't worry about getting it 100% right," Bacharach concludes. "Don't look at people who are successful and say, 'Oh, wow, they're so different from me. I can never do that.' Look at them and say they're exactly the same as me. They're just two or three years ahead in terms of their learnings and findings. I have to do that same thing, but for whatever I want to start."

The rest is here:

How This Cofounder Created An Artificial Intelligence Styling Company To Help Consumers Shop - Forbes

Artificial Intelligence at the movies – CineVue

Twentieth and twenty-first-century fiction has had a long fascination with AI, and that fascination, of course, extends to the movies. All sorts of AI programs appear on screen; with this ranking of AI in cinema, you can learn a little more about the movie AIs that operate at roughly the level of a typical human adult.

Maria (Metropolis): Metropolis was released in 1927, and that's one of the reasons it remains so interesting to this day. Maria's robotic double, a machine replica of a living woman, can process language, navigate on its own and even incite the overthrow of a whole city.

MU-TH-UR 6000 (Alien): As an artificial intelligence mainframe, MU-TH-UR 6000, often referred to as Mother, operated the ship's background systems with no human interaction necessary. Still, it's less a character and more a database.

Replicants (Blade Runner): Replicants are an important part of the Blade Runner films; they're stronger, smarter and faster than humans, and have false human-like memories. That means in most circumstances they behave just like their flesh-and-blood adult counterparts.

KITT (Knight Rider): Ironically, KITT was developed after the executives had a difficult time getting good actors to play physical parts in Knight Rider. The car became a permanent fixture, sporting self-driving capabilities and an adult personality, complete with a fragile ego.

Bishop (Aliens): As viewers learn when it demonstrates its high-stakes knife trick, Bishop can move at near-lightning speed with incredible accuracy. But although it can physically perform at levels above an ordinary human, it only learns human behaviours.

Evil Bill and Ted (Bill & Ted's Bogus Journey): This movie may be a comedy, but the evil Bill and Ted robots are no joke; they actually murder the real Bill and Ted. The evil doppelgangers eventually prove to be no match for the good Bill and Ted's robotic bodyguards, though.

Agents (The Matrix): The agents in The Matrix are software inhabiting human bodies. Although this lets them pull off incredible feats, like rewriting aspects of the world in which humans live, they still operate at a level similar to that of other adults.

Bender (Futurama): Sure, Bender is comic relief, but he's also an essential part of the world Futurama builds for its viewers. He can detach his own body parts and has multiple processors. Still, he functions as basically another adult in the Futurama party.

Spider Robots (Minority Report): In Minority Report, PreCrime uses arachnid-like robots to locate people with audio, visual and thermal scans, and then to identify them with eye scans. But despite this advanced location technology, they're still, at least in part, controlled by humans behind the scenes.

Connecticut Housewives (The Stepford Wives): Based on Ira Levin's 1972 novel about men recreating their wives as docile androids, the housewives in The Stepford Wives aren't really good or evil. But they are intentionally made to be as similar to a typical human as possible.

TARS (Interstellar): One of the most interesting interactions in Interstellar comes when TARS makes one too many jokes about enslaving humans in its robot colony, after which a crew member asks it to tone down its humour. That's very reflective of how many viewers feel about real-life AI.

Ava (Ex Machina): Ava looks very obviously mechanical, but part of the story's theme centres on how good she is at behaving like a human. Though she has immensely strong mechanical capacities, she's ultimately a very human-like character.

AI is always growing and changing, both in real life and on screen. These adult-like AI systems offer both an interesting character choice and a peek into the world of real AI.

The rest is here:

Artificial Intelligence at the movies - CineVue

IIT Hyderabad to collaborate with Telangana government on artificial intelligence – India Today

IIT Hyderabad will also assist Telangana State in developing a strategy for artificial intelligence.

The Indian Institute of Technology (IIT) Hyderabad is going to collaborate with the Government of Telangana on artificial intelligence research. The institute is partnering with the state's Information Technology, Electronics and Communication (ITE&C) Department to build and identify quality datasets, together with third parties such as industry.

They will also work on education and training, preparing and delivering content and curriculum for AI courses aimed at college students as well as industry participants.

The MoU was signed by BS Murty, Director, IIT Hyderabad, and Jayesh Ranjan, IAS, Principal Secretary to the Government of Telangana for the Departments of Information Technology (IT) and Industries and Commerce (I&C), during an event held on January 2 as part of the '2020: Declaring Telangana's Year of AI' initiative. The Government of Telangana signed several other MoUs with other organizations on the occasion.

The Telangana government has declared 2020 the 'Year of Artificial Intelligence', with the objective of promoting the use of AI in sectors ranging from urban transportation and healthcare to agriculture. As part of this collaboration, the ITE&C Department aims to develop the ecosystem for the industry and to leverage emerging technologies to improve service delivery.

IIT Hyderabad will also assist the Telangana State in developing a strategy for AI/HPC (Artificial Intelligence/High-Performance Computing) infrastructure for various state needs, and will provide technology mentorship to identified partners exploring and building AI PoCs (proofs of concept).

The Telangana State Information Technology, Electronics and Communication Department (ITE&C Department) is a state government department with a mandate to promote the use of Information Technology (IT), act as a promoter and facilitator for IT in the state, and build an IT-driven continuum of government services.

The vision of the ITE&C Department is to leverage IT not only for effective and efficient governance but also for sustainable economic development and inclusive social development. Its mission is to facilitate collaborative and innovative IT solutions and to plan for future growth while protecting and enhancing the quality of life.

See more here:

IIT Hyderabad to collaborate with Telangana government on artificial intelligence - India Today