New AI diagnostic tool knows when to defer to a human, MIT researchers say – Healthcare IT News

Machine learning researchers at MIT's Computer Science and Artificial Intelligence Lab, or CSAIL, have developed a new AI diagnostic system they say can do one of two things: make a decision or diagnosis based on its digital findings or, crucially, recognize its own limitations and defer to a carbon-based lifeform who might make a more informed decision.

WHY IT MATTERS
The technology, as it learns, can also adapt how often it might defer to human clinicians, according to CSAIL, based on their availability and levels of experience.

"Machine learning systems are now being deployed in settings to [complement] human decision makers," write CSAIL researchers Hussein Mozannar and David Sontagin a new paperrecently presented at the International Conference of Machine Learningthat touches, not just on clinical applications of AI, but also on areas such as content moderation with social media sites such as Facebook or YouTube.

"These models are either used as a tool to help the downstream human decision maker with judges relying on algorithmic risk assessment tools and risk scores being used in the ICU, or instead these learning models are solely used to make the final prediction on a selected subset of examples."

In healthcare, they point out, "deep neural networks can outperform radiologists in detecting pneumonia from chest X-rays; however, many obstacles are limiting complete automation. An intermediate step to automating this task will be the use of models as triage tools to complement radiologist expertise.

"Our focus in this work is to give theoretically sound approaches for machine learning models that can either predict or defer the decision to a downstream expert to complement and augment their capabilities."

THE LARGER TREND
Among the tasks the machine learning system was trained on was the ability to assess chest X-rays to potentially diagnose conditions such as lung collapse (atelectasis) and enlarged heart (cardiomegaly).

Importantly, the system was developed with two parts, according to MIT researchers: a so-called "classifier," designed to predict a certain subset of tasks, and a "rejector" that decides whether a specific task should be handled by either its own classifier or a human expert.
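To picture the division of labor, here is a minimal, confidence-based sketch of how a rejector could route a case to either the model or a human. It is an illustration only, not the training objective from the CSAIL paper, and every name in it is invented:

```python
import numpy as np

def defer_or_predict(classifier_probs, expert_accuracy, expert_cost=0.0):
    """Decide whether to output the classifier's prediction or defer.

    classifier_probs: predicted class probabilities for one case.
    expert_accuracy: estimated probability the human expert is correct.
    expert_cost: penalty for consuming the expert's time.
    All names and the decision rule are illustrative assumptions.
    """
    machine_confidence = np.max(classifier_probs)
    # Defer only when the expert's expected accuracy, discounted by the
    # cost of their time, beats the model's own confidence.
    if expert_accuracy - expert_cost > machine_confidence:
        return "defer"
    return int(np.argmax(classifier_probs))

# A hesitant classifier defers to an experienced radiologist...
print(defer_or_predict(np.array([0.55, 0.45]), expert_accuracy=0.9))  # defer
# ...but a confident one answers on its own.
print(defer_or_predict(np.array([0.99, 0.01]), expert_accuracy=0.9))  # 0
```

In the paper's actual approach the rejector is learned jointly with the classifier, which also lets the system adapt to a particular expert's strengths, weaknesses and availability.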

In experiments focused on medical diagnosis and text and image classification, the team showed that their approach not only achieves better accuracy than baselines, but does so with a lower computational cost and with far fewer training data samples.

While researchers say they haven't yet tested the system with human experts, they did develop "synthetic experts" to enable them to tweak parameters such as experience and availability.

They note that for the machine learning program to work with a new human expert, the algorithm would "need some minimal onboarding to get trained on the person's particular strengths and weaknesses."

Interestingly, in the case of cardiomegaly, researchers found that a human-AI hybrid model performed 8 percent better than either could on its own.

Going forward, Mozannar and Sontag plan to study how the tool works with human experts such as radiologists. They also hope to learn more about how it will process biased expert data, and work with several experts at once.

ON THE RECORD"In medical environments where doctors don't have many extra cycles, it's not the best use of their time to have them look at every single data point from a given patient's file," said Mozannar, in a statement. "In that sort of scenario, it's important for the system to be especially sensitive to their time and only ask for their help when absolutely necessary."

"Our algorithms allow you to optimize for whatever choice you want, whether that's the specific prediction accuracy or the cost of the expert's time and effort," added Sontag. "Moreover, by interpreting the learned rejector, the system provides insights into how experts make decisions, and in which settings AI may be more appropriate, or vice-versa."

"There are many obstacles that understandably prohibit full automation in clinical settings, including issues of trust and accountability," says Sontag. "We hope that our method will inspire machine learning practitioners to get more creative in integrating real-time human expertise into their algorithms."

Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com

Healthcare IT News is a publication of HIMSS Media.


Infervision Receives FDA Clearance for the InferRead Lung CT.AI – Imaging Technology News

July 10, 2020 -- Infervision announced U.S. Food and Drug Administration (FDA) 510(k) clearance of the InferRead Lung CT.AI product, which uses state-of-the-art artificial intelligence and deep learning technology to automatically perform lung segmentation, along with accurately identifying and labeling nodules of different types. InferRead Lung CT.AI is designed to support concurrent reading and can aid radiologists in pulmonary nodule detection during the review of chest computed tomography (CT) scans, increasing accuracy and efficiency. With five years of international clinical use, Infervision's InferRead Lung CT.AI application is a robust and powerful tool to assist the radiologist.

InferRead Lung CT.AI is currently in use at over 380 hospitals and imaging centers globally. More than 55,000 cases are processed daily by the system, and over 19 million patients have already benefited from this advanced AI technology. "Fast, workflow friendly, and accurate are the three key areas we have emphasized during product development. We're very excited to be able to make our InferRead Lung CT.AI solution available to the North American market. Our clients tell us it has great potential to help provide improved outcomes for providers and patients alike," said Matt Deng, Ph.D., Director of Infervision North America. The company offers the system under a number of pricing models to make it easy to acquire.

The company predicts the system may also be of great benefit to lung cancer screening (LCS) programs across the nation. Lung cancer is the second most common cancer in both men and women in the U.S. The five-year survival rate is 60% if the disease is discovered at an early stage, but lower than 10% if it progresses to later stages without timely follow-up and treatment. The Lung Cancer Screening program has been designed to encourage the early diagnosis and treatment of the high-risk population meeting certain criteria. The screening process involves low-dose CT (LDCT) scans to determine any presence of lung nodules or early-stage lung disease. However, small nodules can be very difficult to detect, and missed diagnoses are not uncommon.

"The tremendous potential for lung cancer screening to reduce mortality in the U.S. is very much unrealized due to a combination of reasons. Based on our experience reviewing the algorithm for the past several months and my observations of its extensive use and testing internationally, I believe that Infervision's InferRead Lung CT.AI application can serve as a robust lung nodule "spell-checker" with the potential to improve diagnostic accuracy, reduce reading times, and integrate with the image review workflow," saidEliot Siegel, M.D., Professor and Vice Chair of research information systems in radiology at theUniversity of Maryland School of Medicine.

InferRead Lung CT.AI is now FDA cleared and has also received the CE mark in Europe. "This is the first FDA clearance for our deep-learning-based chest CT algorithm, and it will lead the way to better integration of advanced A.I. solutions to help the healthcare clinical workflow in the region," according to Deng. "This marks a great start in the North American market, and we are expecting to provide more high-performance AI tools in the near future."

For more information: global.infervision.com


Lost your job due to coronavirus? Artificial intelligence could be your best friend in finding a new one – The Conversation US

Millions of Americans are unemployed and looking for work. Hiring continues, but there's far more demand for jobs than supply.

As scholars of human resources and management, we believe artificial intelligence could be a boon for job seekers who need an edge in a tight labor market like today's.

What's more, our research suggests it can make the whole process of finding and changing jobs much less painful, more effective and potentially more lucrative.

Over the last three years, we've intensely studied the role of AI in recruiting. This research shows that job candidates are positively inclined to use AI in the recruiting process and find it more convenient than traditional analog approaches.

Although companies have been using AI in hiring for a few years, job applicants have only recently begun to discover the power of artificial intelligence to help them in their search.

In the old days, if you wanted to see what jobs were out there, you had to go on a job board like Monster.com, type in some keywords, and then get back hundreds or even thousands of open positions, depending on the keywords you used. Sorting through them all was a pain.

Today, with AI and companies like Eightfold, Skillroads and Fortay, it is less about job search and more about matchmaking. You answer a few questions about your capabilities and preferences and provide a link to your LinkedIn or other profiles. AI systems that have already logged not just open jobs but also analyzed the companies behind the openings based on things like reputation, culture and performance then produce match reports showing the best fits for you in terms of job and company.

Typically, there is an overall match score expressed as a percentage from 0% to 100% for each job. In many cases the report will even tell you which skills or capabilities you lack or have not included and how much their inclusion would increase your match score. The intent is to help you spend your time on opportunities that are more likely to result in your getting hired and being happy with the job and company after the hire.
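As a toy illustration of how such a percentage score and a "missing skills" report might be computed, consider simple skill overlap. Real matchmaking platforms use far richer signals; the scoring rule, names and numbers below are invented for the example:

```python
# Hypothetical 0-100% match score: the share of a job's required skills
# that the candidate already lists. Not any vendor's actual formula.
def match_score(candidate_skills, job_requirements):
    candidate = set(s.lower() for s in candidate_skills)
    required = set(s.lower() for s in job_requirements)
    if not required:
        return 100.0
    return 100.0 * len(candidate & required) / len(required)

score = match_score(["Python", "SQL", "Excel"], ["python", "sql", "tableau"])
print(f"Match: {score:.0f}%")  # Match: 67%
# A report could also list the missing skill ("tableau" here) and note
# that adding it would raise the score to 100%.
```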

Usually, when you look for a job, you apply to lots of openings and companies at the same time. That means two choices: save time by sending each one a mostly generic resume, with minor tweaks for each, or take the time and effort to adjust and tailor your resume to better fit specific jobs.

Today, AI tools can help customize your resume and cover letter for you. They can tell you what capabilities you might want to add to your resume, show how such additions would influence your chances of being hired and even rewrite your resume to better fit a specific job or company. They can also analyze you, the job and the company and craft a customized cover letter.

While researchers have not yet systematically examined the quality of human- versus AI-crafted cover letters, the AI-generated samples we've reviewed are difficult to distinguish from the ones we've seen MBA graduates write for themselves over the last 30 years as professors. Try it for yourself.

Granted, for lots of lower-level jobs, cover letters are relics of the past. But for higher-level jobs, they are still used as an important screening mechanism.

Negotiations over compensation are another thorny issue in the job search.

Traditionally, applicants have been at a distinct informational disadvantage, making it harder to negotiate for the salary they may deserve based on what others earn for similar work. Now AI-enabled reports from PayScale.com, Salary.com, LinkedIn Salary and others provide salary and total compensation reports tailored to job title, education, experience, location and other factors. The data comes from company-reported numbers, government statistics and self-reported compensation.

For self-reported data, the best sites conduct statistical tests to ensure the validity and accuracy of the data. This is only possible with large databases and serious number-crunching abilities. PayScale.com, for example, has over 54 million respondents in its database and surveys more than 150,000 people per month to keep its reports up to date and its database growing.
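For a flavor of the kind of validity checking involved, here is a hedged sketch that filters implausible self-reports with a standard interquartile-range rule before computing percentiles. The rule and the numbers are invented; real sites' methods are more sophisticated:

```python
import numpy as np

# Toy cleaning-and-summarizing step for self-reported salaries.
reports = np.array([52000, 61000, 58000, 64000, 1200, 950000, 59500, 62500])

q1, q3 = np.percentile(reports, [25, 75])
iqr = q3 - q1
# Drop values far outside the interquartile range (likely typos or trolls).
clean = reports[(reports >= q1 - 1.5 * iqr) & (reports <= q3 + 1.5 * iqr)]

print("median:", np.median(clean))
print("25th-75th percentile:", np.percentile(clean, [25, 75]))
```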

Although no academics have yet tested if these reports result in better compensation packages than in the old days, research has long established that negotiating in general gets candidates better compensation offers, and that more information in that process is better than less.


Use of these tools is growing, especially among young people.

A survey we conducted in 2018 found that half of employed workers aged 18 to 36 said that they were likely or highly likely to use AI tools in the job search and application process. And 64% of these respondents felt that AI-enabled tools were more convenient.

Most of the research on the use of AI in the hiring process, including our own, has focused on recruitment, however, and the use of the technology is expected to double over the next two years. We've found it to be effective for companies, so it seems logical that it can be very useful for job candidates as well. In fact, at least US$2 billion in investments are fueling human resources startups aimed at using AI to help job candidates, according to our analysis of Crunchbase business data.

While more research is needed to determine exactly how effective these AI-enabled tools actually are, Americans who lost their jobs due to the coronavirus could use all the help they can get.


AI-Generated Halloween Music Is a Cool Idea, But it Sounds Awful

Happy Halloween

The soundtrack to the 1960 Alfred Hitchcock thriller Psycho is iconic — remember the shower scene? But when an AI reinterprets its shrieking, staccato theme, it’s stripped of almost all its creepiness.

Or that’s at least what happened when MIT researchers ran a number of famous horror movie melodies through an algorithm. AI-Generated Halloween music just isn’t that scary.

Mix Tape

As part of the “Uncanny Musicbox,” the researchers fed short clips from famous soundtracks — from Psycho to The Exorcist — into an AI that’s trained to generate more spooky music on its own.

“The goal behind this project is to learn what makes a music sound scary, and develop an AI that generates personalized scary music,” wrote Pinar Yanardag, Postdoctoral Associate at MIT Media Lab, in an email to Futurism.

Wrong Way

The only problem: the music lacks emotional impact. It sounds more like a middle school music class project than a fully featured pop album composed and produced by an AI. Will Danny Elfman and Bernard Herrmann be out of a job tomorrow? Unlikely.

But it’s a cool idea that shows off creative uses for AI. It just doesn’t inspire a sense of looming dread.

Read More: Get in the Halloween Spirit with This AI-Generated Spooky Music [Motherboard]

More on AI-generated music: The World’s First Album Composed and Produced by an AI Has Been Unveiled


HPE Powers the University of EIDF With Software, HPC and AI solutions – AiThority

Europe's first new region-wide data innovation facility will offer R&D resources to unlock regional economic growth across science, healthcare and more, using solutions powered by HPE Ezmeral Container Platform, HPE Apollo Systems and HPE Superdome Flex Servers

Hewlett Packard Enterprise (HPE) announced that it has been chosen to power the Edinburgh International Data Facility (EIDF), Europe's first regional data innovation center, at the University of Edinburgh's EPCC in Scotland. HPE will deliver an end-to-end infrastructure featuring its industry-leading high-performance computing (HPC) and artificial intelligence (AI) solutions powered by HPE Apollo Systems and HPE Superdome Flex Servers, as well as HPE Ezmeral Container Platform software capabilities.

The deal, which has an expected value of more than US $125 million over 10 years, will help 1,000 public, private and non-profit organizations to develop products and services using R&D and other data-driven programs, with a long-term vision to establish the Edinburgh region as the Data Capital of Europe.

As a hub for innovation, the EIDF will enable R&D on initiatives focused on addressing global issues such as food production, climate change, space exploration and genetically-tailored healthcare. The EIDF will offer researchers access to HPC and AI technologies to apply analytics to modeling and simulation to increase accuracy of results and speed time-to-discovery.


The EIDF will also improve overall insights by allowing users to securely access shared datasets and analytics from public and private sources.

The EIDF will play a critical role in the region's Data Driven Innovation (DDI) program, which involves greater collaboration between industry, the public sector and academia. The new facility will power five DDI hub sites with vital infrastructure to meet complex long-term project demands. The DDI program was pioneered by the University of Edinburgh, along with Heriot-Watt University, to tackle societal and industrial challenges and deliver benefits from the data economy, while improving the digital and data skills of over 100,000 people from across the region.

"We are pleased to be working with HPE to deliver what we believe is the only facility of its kind in Europe focused specifically on data-driven regional growth," said Mark Parsons, Director of EPCC at the University of Edinburgh. "With the Edinburgh International Data Facility, we are combining computing and data resources to create a facility that will allow organizations to use data to innovate throughout their organizations. HPE is uniquely positioned to provide the spectrum of infrastructure and services, as well as the flexibility that this project demands."

To support its mission, EIDF turned to HPE to uniquely deliver an end-to-end infrastructure that seamlessly combines advanced HPC, AI, container, and software technologies into a single framework to enable collaborative, optimal experiences across broad groups of users.


As AI and ML practices become integral to scientific research and engineering, managing AI workloads and applications at scale is a critical requirement for EIDF. To address these requirements, EIDF is deploying the HPE Ezmeral Container Platform running on HPE Apollo Systems that are purpose-built to support HPC, deep learning and other data-intensive workloads. Additionally, the platform will also run on HPE Superdome Flex Servers to support applications requiring large in-memory processing.

The HPE Ezmeral Container Platform provides native Kubernetes support and enables self-service AI/ML applications for EIDF scientists, with flexible use of accelerators such as GPUs. It will also allow developers to standardize machine learning workflows and accelerate AI deployments from months to days with the HPE Ezmeral ML Ops solution. The solution enables developers to streamline and speed up the entire machine learning model lifecycle from proof-of-concept and pilot stages, all the way through deployment, using a DevOps-like process to standardize models.

It also includes pre-integrated persistent data storage in the form of the HPE Ezmeral Data Fabric file system, for high-performance and high-throughput advanced analytics. This allows EIDF scientists to easily access the data they need in a secure, multitenant and collaborative manner, accelerate the deployment of machine learning workloads and models, and get to insights faster.




Announcing the second annual VentureBeat AI Innovation Awards at Transform 2020 – VentureBeat


The past year has seen remarkable change. As innovation in the field of AI and real-world applications of its constituent technologies, such as machine learning, natural language processing, and computer vision, have continued to grow, so has an understanding of their social impacts.

At our AI-focused Transform 2020 event, taking place July 15-17 entirely online, VentureBeat will recognize and award emergent, compelling, and influential work in AI through our second annual VB AI Innovation Awards.

Drawn both from our daily editorial coverage and the expertise, knowledge, and experience of our nominating committee members, these awards give us a chance to shine a light on the people and companies making an impact in AI.

Our nominating committee includes:

Claire Delaunay, Vice President of Engineering, Nvidia

Claire Delaunay is vice president of engineering at Nvidia, where she is responsible for the Isaac robotics initiative and leads a team to bring Isaac to market for use by roboticists and developers around the world.

Prior to joining Nvidia, Delaunay was the director of engineering at Uber, after it acquired Otto, a startup she cofounded. She was also the robotics program lead at Google and founded two other companies, Botiful and Robotics Valley.

Delaunay has 15 years of experience in robotics and autonomous vehicles, leading teams ranging from startups and research labs to Fortune 500 companies. She holds a Master of Science in computer engineering from École Privée des Sciences Informatiques (EPSI).

Asli Celikyilmaz, Principal Researcher, Microsoft Research

Asli Celikyilmaz is a principal researcher at Microsoft Research (MSR) in Redmond, Washington. She is also an affiliate professor at the University of Washington. She received her Ph.D. in information science from the University of Toronto, Canada, and continued her postdoc study in the Computer Science Department at the University of California, Berkeley.

Her research interests are mainly in deep learning and natural language (specifically language generation with long-term coherence), language understanding, language grounding with vision, and building intelligent agents for human-computer interaction. She serves on the editorial boards of Transactions of the ACL (TACL) as area editor and Open Journal of Signal Processing (OJSP) as associate editor. She has received several best of awards, including at NAFIPS 2007, Semantic Computing 2009, and CVPR 2019.

The award categories are:

Natural Language Processing/Understanding Innovation

Natural language processing and understanding have only continued to grow in importance, and new advancements, new models, and more use cases continue to emerge.

Business Application Innovation

The field of AI is rife with new ideas and compelling research, developed at a blistering pace, but it's the practical applications of AI that matter to people right now, whether that's RPA to reduce human toil, streamlined processes, more intelligent software and services, or other solutions to real-world work and life problems.

Computer Vision Innovation

Computer vision is an exciting subfield of AI that's at the core of applications like facial recognition, object recognition, event detection, image restoration, and scene reconstruction, and that's fast becoming an inescapable part of our everyday lives.

AI for Good

This award is for AI technology, the application of AI, or advocacy or activism in the field of AI that protects or improves human lives or operates to fight injustice, improve equality, and better serve humanity.

Startup Spotlight

This award spotlights a startup that holds great promise for making an impact with its AI innovation. Nominees are selected based on their contributions and criteria befitting their category, including technological relevance, funding size, and impact in their sub-field within AI.

As we count down to the awards, we'll offer editorial profiles of the nominees on VentureBeat's AI channel, The Machine, and share them across our social channels. The award ceremony will be held on the evening of July 15 to conclude the first day of Transform 2020.


Flatfile raises $7.6 million for AI that extracts and transforms spreadsheet data – VentureBeat

Flatfile, a startup developing a platform that analyzes spreadsheets using AI and machine learning, today announced that it has raised $7.6 million in equity financing. The funding coincides with the launch of Concierge, Flatfile's newest product focused on data onboarding for large enterprises, which launched in private preview earlier this year.

Enterprises regularly engage in the process of data onboarding, which entails ingesting, anonymizing, matching, and distributing a customer's data. According to IDC, 80% of this data will be unstructured by 2025, meaning it will have attributes that make it challenging to search for and analyze. This is expected to contribute to the widespread problem of data underutilization. (Forrester reports that between 60% and 73% of data within an enterprise is not currently used for analytics.)

Flatfile provides products that focus on solving data onboarding dilemmas. Its Portal platform, which learns over time how data from comma-separated value files, Microsoft Excel spreadsheets, and manually pasted text should be organized, lets users set a target model for validation and match any incoming values. (Flatfile says it sees over 95% accuracy on initial column matches for new users.)

Portal offers a validation feature that affords control over how data is formatted and supports functions for in-line transformation, with automatic matching of imported columns. And it's architected to deploy either in the cloud or on-premises, in compliance with regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and Europe's GDPR.
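To make the column-matching idea concrete, here is a minimal sketch that maps incoming spreadsheet headers onto a target model with fuzzy string matching. Flatfile's actual models are proprietary and learned, so treat every name and threshold below as an assumption:

```python
import difflib
import re

# Hypothetical target data model for imported contact records.
TARGET_FIELDS = ["first_name", "last_name", "email", "phone"]

def _normalize(name):
    # Lowercase and strip punctuation so "E-mail Address" resembles "email".
    return re.sub(r"[^a-z0-9]", "", name.lower())

def match_columns(incoming_headers, cutoff=0.5):
    """Map each incoming header to the closest target field, or None."""
    normalized = {_normalize(t): t for t in TARGET_FIELDS}
    mapping = {}
    for header in incoming_headers:
        hits = difflib.get_close_matches(_normalize(header), normalized,
                                         n=1, cutoff=cutoff)
        # None means the tool should fall back to asking the user.
        mapping[header] = normalized[hits[0]] if hits else None
    return mapping

print(match_columns(["First Name", "E-mail Address", "Telephone"]))
# {'First Name': 'first_name', 'E-mail Address': 'email', 'Telephone': 'phone'}
```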

As for Flatfile's Concierge service, it's a no-code tool that lets developers set up data models and share and collaborate on imports without passing spreadsheets back and forth. It offers a collated view of all data imports, with details like versions, statuses, owners, and validation errors, along with an authenticated login for each collaborator and an approval flow to ensure users retain control over onboarding.

Flatfile has formidable competition in Textract, Amazon's service that can automatically extract text and data from scanned documents (including those stored in tables), and Cloud Natural Language, Google's tool that applies the AI used by Google Search and Google Assistant to perform syntax, sentiment, and entity analysis on existing files. Microsoft offers its own data onboarding tool in Form Recognizer, an Azure product that grabs text, key/value pairs, tables, and more from documents.

Flatfile says that in the two years since it launched, it has transformed over 300 million rows of data and 4.5 billion data points for more than 187 companies, including ClickUp, Blackbaud, Benevity, Housecall Pro, Hubspot, and Toast. Flatfile added that revenue has increased fivefold since the beginning of the year, as the number of daily end users has grown to 2,000.

"Flatfile is solving for the highest friction point in adopting new software," Flatfile cofounder and CEO David Boskovic told VentureBeat via email. "Moving data to a new product can take months or years for many companies. We're making it possible for software companies to automate the entire process, driving growth and decreasing lead time on every deal."

Two Sigma Ventures led the seed round announced this week, with participation from previous investors, including Afore Capital, Designer Fund, and Gradient Ventures (Google's AI-focused venture fund). This round brings the Denver, Colorado-based company's total raised to $9.7 million, following a $2 million seed round in September 2019. Flatfile says it expects to expand its workforce from 14 to 30 by the end of the year.


4 Steps To Shape Your Business With AI – Forbes

While artificial intelligence (AI) has been around for many years, deployment has been picking up. Between 2017 and 2018, consulting firm McKinsey & Co. found the percentage of companies embedding at least one AI capability in their business processes more than doubled, to 47 percent from 20 percent the year before.

Although companies are adopting it, they often lack a clear plan: A recent IDC survey found that of the companies already using AI, only 25 percent had an enterprise-wide strategy on how to implement it.

To help navigate that challenge, here's how the four pillars of Google Cloud's Deployed AI vision can reshape your business.

For AI to be deployed effectively, it must be focused on a new business problem or unrealized opportunity. To that end, there are several areas of business that the technology is well-suited to address.

One key problem is fixing aging processes, notes Ritu Jyoti, program vice president of Artificial Intelligence Strategies for research firm IDC.

"Companies that have been around for a long time will have a lot of archaic processes, and they need to be upgraded," Jyoti says.

Bank fraud is one prominent example. Machine learning (ML), a subset of AI, provides an opportunity to solve this problem by helping banks sort through large amounts of bank transactions to detect suspicious patterns of financial activity.
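A toy sketch of that pattern-detection idea, using an off-the-shelf anomaly detector on made-up transaction data (not a production fraud model, and not how any particular bank does it):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Learn the shape of normal transactions, then flag outliers as suspicious.
rng = np.random.default_rng(0)
# Invented features per transaction: [amount in dollars, hour of day].
normal = rng.normal(loc=[50, 12], scale=[20, 4], size=(1000, 2))
suspicious = np.array([[9000, 3], [7500, 4]])  # large, late-night transfers

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # [-1 -1]: both flagged as anomalies
```

In practice such a score would only queue transactions for human review, which is why the article frames these tools as aids to the downstream decision maker rather than replacements.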

Customer relationships are another area AI can improve. "Chatbots, for example, are enhancing customer service by providing support 24/7," Jyoti says. Furthermore, companies can also use AI to develop the right incentives for customers without losing money. Firms sometimes lose income due to lapsed contracts or stuck deals, in which a transaction is started but unable to be completed, Jyoti notes.

"This feature helps organizations optimize early payment discount offers by using ML algorithms to find the right balance between incentivizing customers while ensuring profitability for the seller," Jyoti says.

Another business problem AI can help with is document processing, including insurance claims, tax returns and mortgage applications, which can involve hundreds of pages of documents on income and assets, notes Vinod Valloppillil, Google's head of product for Google Cloud Language AI, who spoke at Forbes' CIO Next conference.

"[Document processing] is one of the few domains that actually brings in multiple parts of AI all simultaneously," Valloppillil says. It incorporates computer vision, deep learning and natural language processing.

When deploying AI to solve a business problem, the technology should be central to that solution. Many examples across industriesincluding healthcare and energyexemplify how innovative problem-solving can hinge on AI.

The medical industry, for instance, is turning to AI to build algorithms to detect pneumonia. With genomic data bringing insights on who will be susceptible to various disease conditions, disease prevention is one area that can't be solved without AI. An AI platform can also become part of an end-to-end solution when hospitals need to connect medical data to cloud platforms.

AI can also help physicians determine whether a patient has diabetic retinopathy, Valloppillil says. An Explainable AI model would help determine if screening was necessary based on the appearance of various regions of the image.

"We're getting to the point where AI can do quite a bit of engineering," Valloppillil says. Meanwhile, the energy sector has found AI to be essential to keeping wind facilities safer, faster, and more accurate, according to Andrés Gluski, president and CEO of AES, a global power company. Drones and Cloud AutoML Vision, a platform that provides advanced visual intelligence and custom ML models, make these improvements in wind energy possible.

Once you've identified the business problem and decided to use AI in the solution, the next step is building customer trust and maintaining proper ethics. In a Deloitte survey of 1,100 IT and line-of-business executives, 32 percent placed ethical risks among their top three AI-related concerns.

To build trust, ethics should come ahead of any productivity or financial gains from using AI. A crucial part of that is being transparent about how a company uses AI. Consulting firm Capgemini recommends using opt-in forms to help build transparency with customers regarding AI. Meanwhile, privacy laws like the European Union's General Data Protection Regulation (GDPR) also contribute to the transparency requirements for AI.

Jyoti also recommends including fact sheets, similar to the nutrition information on food packaging, with details like data sources and lineage. A set of AI Principles from Google helps businesses ensure that they're using AI in a responsible manner and that they understand the limits of the technology. Users of AI should maintain accountability to human direction, uphold standards set by scientists and test for safety.

"These are the principles we as a company orient around, things like always optimize around the fairness of AI, and try to avoid any situation where AI can get abused," Valloppillil says.

Finally, to ensure a cycle of improvement, companies should use clear, objective metrics to assess progress towards their business goals.

For example, if AI is used to assist with résumé screening in the hiring process, make sure the screening adheres to company policies on equal opportunity to maintain fairness. To form an objective metric, come up with representative numbers of candidates for various demographics and train ML algorithms accordingly.
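One way to turn that into a number is a demographic-parity check that compares selection rates across groups. The sketch below uses invented data and shows only one of many fairness metrics:

```python
# Compare screening pass rates by group; large gaps warrant investigation.
def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates([("A", True), ("A", False), ("B", False), ("B", False)])
print(rates)  # {'A': 0.5, 'B': 0.0} -> a gap this large needs a closer look
```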

"Avoid the blinders of the homogenous teams," Jyoti says. Tools and frameworks like Explainable AI can help companies build inclusive systems that address bias, which involves the data not being representative of the decisions a business is trying to make. "This makes the problem of garbage in, garbage out multiplied a hundredfold," Valloppillil says.

The concept of explainability helps provide insight into the decisions that AI helps deliver.

"With explainability we're now finally getting to the point where we go peek inside the box," Valloppillil says. "We actually have a shot at understanding exactly why does AI make the call."

As AI continues to evolve, there are increasing opportunities for the technology to meaningfully improve business operations. With ethical implications in mind and a clear focus on measurable metrics, Deployed AI is poised for growth.


The Guardian view on artificial intelligence’s revolution: learning but not as we know it – The Guardian

Bosses don't often play down their products. Sam Altman, the CEO of artificial intelligence company OpenAI, did just that when people went gaga over his company's latest software: the Generative Pretrained Transformer 3 (GPT-3). For some, GPT-3 represented a moment in which one scientific era ends and another is born. Mr Altman rightly lowered expectations. "The GPT-3 hype is way too much," he tweeted last month. "It's impressive but it still has serious weaknesses and sometimes makes very silly mistakes."

OpenAI's software is spookily good at playing human, which explains the hoopla. Whether penning poetry, dabbling in philosophy or knocking out comedy scripts, the general agreement is that GPT-3 is probably the best non-human writer ever. Given a sentence and asked to write another like it, the software can do the task flawlessly. But this is a souped-up version of the auto-complete function that most email users are familiar with.

GPT-3 stands out because it has been trained on more information, about 45TB worth, than anything else. Because the software can remember each and every combination of words it has read, it can work out, through lightning-fast trial-and-error attempts of its 175bn settings, where thoughts are likely to go. Remarkably, it can transfer its skills: trained as a language translator, GPT-3 worked out it could convert English to Javascript as easily as it does English to French. It's learning, but not as we know it.
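The auto-complete analogy can be made concrete with a toy bigram model: count which word follows which, then extend a prompt with the most likely successor. GPT-3's 175bn-parameter transformer is vastly more capable, but the predict-the-next-token framing is the same; the corpus here is invented:

```python
from collections import Counter, defaultdict

# Tiny "autocomplete": learn word-to-next-word counts from a corpus.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def complete(word, steps=3):
    """Greedily extend a prompt with the most frequent next word."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # the cat sat on
```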

But this is not intelligence or creativity. GPT-3 doesn't know what it is doing; it is unable to say how or why it has decided to complete sentences; it has no grasp of human experience; and cannot tell if it is making sense or nonsense. What GPT-3 represents is a triumph of one scientific paradigm over another. Once machines were taught to think like humans. They struggled to beat chess grandmasters. Then they began to be trained with data to, as one observer pointed out, "discover like we can" rather than "contain what we have discovered". Grandmasters started getting beaten. These days they cannot win.

The reason is Moore's law, the exponentially falling cost of number-crunching. AI's bitter lesson is that the more data that can be consumed, and the more models can be scaled up, the more a machine can emulate or surpass humans in quantitative terms. If scale truly is the solution to human-like intelligence, then GPT-3 is still about 1,000 times smaller than the brain's 100 trillion-plus synapses. Human beings can learn a new task by being shown how to do it only a few times. That ability to learn complex tasks from only a few examples, or no examples at all, has so far eluded machines. GPT-3 is no exception.

All this raises big questions that seldom get answered. Training GPT-3's neural nets is costly: a $1bn investment by Microsoft last year was doubtless needed to run and cool GPT-3's massive server farms. The bill for the carbon footprint (training a large neural net is equal to the lifetime emissions of five cars) is due.

Fundamental is the regulation of a for-profit OpenAI. The company initially delayed the launch of its earlier GPT-2, with a mere 1.5bn parameters, because it fretted over the implications. It had every reason to be concerned; such AI will emulate the racist and sexist biases of the data it swallows. In an era of deepfakes and fake news, GPT-style devices could become weapons of mass destruction: engaging and swamping political opponents with divisive disinformation. Worried? If you aren't, then remember that Dominic Cummings wore an OpenAI T-shirt on his first day in Downing Street.


Mapping the Future of AI – Project Syndicate

BRIGHTON -- Artificial intelligence already plays a major role in human economies and societies, and it will play an even bigger role in the coming years. To ponder the future of AI is thus to acknowledge that the future is AI.

This will be partly owing to advances in deep learning, which uses multilayer neural networks that were first theorized in the 1980s. With today's greater computing power and storage, deep learning is now a practical possibility, and a deep-learning application gained worldwide attention in 2016 by beating the world champion in Go. Commercial enterprises and governments alike hope to adapt the technology to find useful patterns in Big Data of all kinds.

In 2011, IBM's Watson marked another AI watershed by beating two previous champions in Jeopardy!, a game that combines general knowledge with lateral thinking. And yet another significant development is the emerging Internet of Things, which will continue to grow as more gadgets, home appliances, wearable devices, and publicly sited sensors become connected and begin to broadcast messages around the clock. Big Brother won't be watching you; but a trillion little brothers might be.

Beyond these innovations, we can expect to see countless more examples of what were once called expert systems: AI applications that aid, or even replace, human professionals in various specialties. Similarly, robots will be able to perform tasks that could not be automated before. Already, robots can carry out virtually every role that humans once filled on a warehouse floor.

Given this trend, it is not surprising that some people foresee a point known as the Singularity, when AI systems will exceed human intelligence by intelligently improving themselves. At that point, whether it is in 2030 or at the end of this century, the robots will truly have taken over, and AI will consign war, poverty, disease, and even death to the past.

To all of this, I say: Dream on. Artificial general intelligence (AGI) is still a pipe dream. It's simply too difficult to master. And while it may be achieved one of these days, it is certainly not in our foreseeable future.

But there are still major developments on the horizon, many of which will give us hope for the future. For example, AI can make reliable legal advice available to more people, and at a very low cost. And it can help us tackle currently incurable diseases and expand access to credible medical advice, without requiring additional medical specialists.

In other areas, we should be prudently pessimistic, not to say dystopian, about the future. AI has worrying implications for the military, individual privacy, and employment. Automated weapons already exist, and they could eventually be capable of autonomous target selection. As Big Data becomes more accessible to governments and multinational corporations, our personal information is being increasingly compromised. And as AI takes over more routine activities, many professionals will be deskilled and displaced. The nature of work itself will change, and we may need to consider providing a universal income, assuming there is still a sufficient tax base through which to fund it.

A different but equally troubling implication of AI is that it could become a substitute for one-on-one human contact. To take a trivial example, think about the annoyance of trying to reach a real person on the phone, only to be passed along from one automated menu to another. Sometimes, this is vexing simply because you cannot get the answer you need without the intervention of human intelligence. Or, it may be emotionally frustrating, because you are barred from expressing your feelings to a fellow human being, who would understand, and might even share your sentiments.

Other examples are less trivial, and I am particularly worried about computers being used as carers or companions for elderly people. To be sure, AI systems that are linked to the Internet and furnished with personalized apps could inform and entertain a lonely person, as well as monitor their vital signs and alert physicians or family members when necessary. Domestic robots could prove to be very useful for fetching food from the fridge and completing other household tasks. But whether an AI system can provide genuine care or companionship is another matter altogether.

Those who believe that this is possible assume that natural-language processing will be up to the task. But the task would include having emotionally laden conversations about people's personal memories. While an AI system might be able to recognize a limited range of emotions in someone's vocabulary, intonation, pauses, or facial expressions, it will never be able to match an appropriate human response. It might say, "I'm sorry you're sad about that," or, "What a lovely thing to have happened!" But either phrase would be literally meaningless. A demented person could be comforted by such words, but at what cost to their human dignity?

The alternative, of course, is to keep humans in these roles. Rather than replacing humans, robots can be human aids. Today, many human-to-human jobs that involve physical and emotional caretaking are undervalued. Ideally, these jobs will gain more respect and remuneration in the future.

But perhaps that is wishful thinking. Ultimately, the future of AI, our AI future, is bright. But the brighter it becomes, the more shadows it will cast.


AI beats professional players at Super Smash Bros. video game – New Scientist

AI's latest challenge: Super Smash Bros.

REUTERS / Alamy Stock Photo

By Timothy Revell

AI has earned another victory against humans, this time in Nintendo fighting game Super Smash Bros. Melee.

A team led by Vlad Firoiu at the Massachusetts Institute of Technology trained an AI to play the game using deep learning algorithms and then pitted it against ten highly ranked players. The AI came out on top against every one of them.

Super Smash Bros. is a cult Nintendo series where players battle classic video game characters like Super Mario and Zelda. The aim is to knock out (KO) the opponent by sending them out of bounds. The Super Smash Bros. Melee game was originally released in 2001 for the Nintendo Gamecube console.

The game may not be as complex as strategy games like Go, which Google DeepMind's AlphaGo mastered in 2016, but Firoiu says it poses a different challenge for AI because you can't work out many moves in advance. "You can't plan far ahead with Smash like you can with, for example, Go," he says. To add to the difficulty, the attacks you perform can be used against you by your opponent.

And unlike many video games AI has already conquered, like Pacman and Space Invaders, Super Smash Bros. is multiplayer, pitting two players against each other.

The new AI builds on previous efforts to make a Super Smash Bros. AI. A couple of years ago, security researcher Dan Petro was challenged by a friend who said it would be impossible to make an AI that could defeat him at the game. "I took that as a challenge," says Petro.

Petro made a system called SmashBot based on his own experience playing the game. "I directly programmed an optimum strategy into SmashBot," he says.

Firoiu and his colleagues saw Petro's bot and asked if they could use his groundwork to take a Super Smash Bros.-playing AI to the next level. Petro had already built the infrastructure needed for an AI to interact with the Super Smash Bros. Melee game and control a character.

The researchers trained their AI using reinforcement learning. They first had it fight the in-game AI, which players can battle in one-player mode, then improved it by making it play against itself.
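Structurally, that two-stage curriculum (a scripted opponent first, then self-play) looks something like the toy sketch below, transplanted onto rock-paper-scissors for brevity. The real system used deep reinforcement learning against the Melee engine, so none of this code reflects the team's implementation:

```python
import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

class Agent:
    """Frequency-counting player standing in for a learned policy."""
    def __init__(self):
        self.opponent_history = Counter()

    def act(self):
        if not self.opponent_history:
            return random.choice(MOVES)
        predicted = self.opponent_history.most_common(1)[0][0]
        return next(m for m in MOVES if BEATS[m] == predicted)  # counter it

    def observe(self, opponent_move):
        self.opponent_history[opponent_move] += 1

def train(agent, opponent_act, rounds=1000):
    """Play rounds of the game, letting the agent adapt as it goes."""
    wins = 0
    for _ in range(rounds):
        mine, theirs = agent.act(), opponent_act()
        agent.observe(theirs)
        wins += BEATS[mine] == theirs
    return wins / rounds

agent = Agent()
print("stage 1, vs scripted bot:", train(agent, lambda: "rock"))
print("stage 2, vs copy of itself:", train(agent, Agent().act))
```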

Finally, they took the AI to two tournaments and asked professional players to try to defeat it. The AI played as the popular character Captain Falcon, partly because this character doesn't have any projectile attacks, which the AI wasn't trained to deal with. The AI won more fights than it lost against each of the 10 high-ranking players it fought, who ranked from 16th to 70th in the world.

The whole process was fast. "After a couple of hours the AI was good enough to beat the in-game AI, and after a couple of weeks it could beat the top-ranking humans," says Firoiu.

Julian Togelius at the NYU Game Innovation Lab says it is not surprising that an AI has conquered Super Smash Bros. Melee, as computers excel at the fast reaction times that give players an advantage in this kind of game. "Compared to other games, fighting games rely very little on long-term planning and very much on quick reactions," he says.

The AI plays with a reaction speed of around 33 milliseconds, compared to over 200 milliseconds for humans. The researchers are considering restricting the AI's reaction speed to see if they can build a system that is strategically superior when playing at human speed.

Meanwhile, the AI still has a fatal flaw that the top-ranking players didn't notice. If an opponent just crouched at the side of the stage, the AI didn't know what to do. "It then killed itself," says Firoiu.

This should act as a warning for researchers trying to apply AI to unfamiliar situations, says Firoiu. "If an AI encounters something that it's not seen before, it can fail remarkably and spectacularly."

Journal reference: arXiv, arxiv.org/abs/1702.06230


The latest challenge to Google’s AI dominance comes from an unlikely place — Firefox – CNBC

Mozilla, the company behind the Firefox internet browser, has begun testing a feature that lets you enter a search query using your voice instead of typing it in. The move could help Mozilla's efforts to make Firefox more competitive with Google Chrome.

If you're using Firefox in English on Mac, Windows or Linux, you can turn on the experimental "Voice Fill" feature and then use it on Google, Yahoo and DuckDuckGo. Support for other websites will come later.

Alphabet's Google offers speech recognition on its search engine when accessed through Chrome on desktop -- it became available in 2013 -- and Yahoo, Microsoft's Bing and Google all let you run search queries with your voice on mobile devices. But searching with your voice on Google while using Firefox on the desktop, for example, has historically been impossible. Now Mozilla wants to make its desktop browser more competitive.

The Voice Fill feature comes a few weeks after Mozilla announced the Common Voice Project that allows people to "donate" recordings of them saying various things in order to build up "an open-source voice recognition engine" that anyone will be able to use. Mozilla will use recordings from Voice Fill and the Common Voice Project in order to make the speech recognition more accurate, speech engineer Andre Natal told CNBC in an interview.

Mozilla's latest efforts follow Facebook's push into speech recognition. And speech technology has become hotter thanks to the rise of "smart" speakers like the Amazon Alexa, the Google Home, and the Apple HomePod. Harman Kardon is now building a speaker that will let people interact with Microsoft's Cortana assistant.

But these big technology companies have collected considerable amounts of proprietary voice data. So while they zig, Mozilla will zag. Mozilla will release to the public its voice snippets from the Common Voice Project later this year. The speech recognition models will be free for others to use as well, and eventually there will be a service for developers to weave into their own apps, Natal said.

"There's no option for both users and developers to use -- something that is both concerned about your privacy and also affordable," Natal said.

That said, Mozilla is following along with the rest of the tech crowd in the sense that the underlying system -- a fork of the Kaldi open-source software -- employs artificial neural networks, a decades-old but currently trendy architecture for training machines to do things like recognize the words that people say.

Mozilla initially explored incorporating speech recognition into the assistant for its Firefox OS for phones, but in 2016 it shifted the OS focus to connected devices, and earlier this year Mozilla closed up the connected devices group altogether.

Today Mozilla has five people working on speech research and a total of about 30 people working on speech technology overall, Natal said. Eventually the team wants to make the technology work in languages other than English.

Mozilla introduced the browser that became Firefox back in 2002. Over the years the nonprofit Mozilla Foundation has received financial support from Google and Yahoo. Mozilla CEO Chris Beard is currently focused on trying to get people to care about the company again, as CNET's Stephen Shankland reported this week. Recent moves include the launch of the Firefox Focus mobile browser and the acquisition of read-it-later app Pocket.

But while Firefox could have roughly 300 million monthly active users, Chrome has more than 1 billion.


Recent AI Developments Offer a Glimpse of the Future of Drug Discovery – Tech Times

The science and practice of medicine has been around for much of recorded human history. Even today, doctors still swear an oath that dates back to ancient Greece, containing many of the ethical obligations we still expect our physicians to adhere to. It is one of the most necessary and universal fields of human study.

Despite the importance of medicine, though, true breakthroughs don't come easily. In fact, most medical professionals will only see a few within their lifetime. Developments such as the first medical x-ray, penicillin, and stem cell therapy - true game changers that advance the cause of medical care - don't happen often.

That's especially true when it comes to the development of medications. It takes a great deal of research and testing to find compounds that have medicinal benefits. Armies of scientists armed with microplate readers to measure absorbance, centrifuges for sample separation, and hematology analyzers to test compound efficacy make up just the beginnings of the long and labor-intensive process. It's why regulators tend to approve around 22 new drugs per year for public use, leaving many afflicted patients waiting for cures that may come too late.

Now, however, some recent advances in AI technology are promising to speed that process up. It could be the beginnings of a new medical technology breakthrough on the same order of magnitude as the ones mentioned earlier. Here's what's going on.

One of the reasons that it takes so long to develop new drug therapies, even for diseases that have been around for decades, is that much of the process relies on humans screening different molecule types to find ones likely to have an effect on the disease in question. Much of that work calls for lengthy chemical property analysis, followed by structured experimentation. On average, all of that work takes between three and six years to complete.

Recently, researchers have begun to adapt next-generation AI implementations for molecule screening that could cut that time down significantly. In one test, a startup called Insilico Medicine matched its AI platform up against the already-completed work of human researchers seeking treatment options for fibrosis. It had taken them 8 years to come up with viable candidate molecules. It took the AI just 21 days. Although further refinements are required to put the AI on par with the human researchers in terms of result quality (the AI candidates performed a bit worse in treating fibrosis), the results were overwhelmingly positive.

Another major time-consuming hurdle that drug developers face is in trying to detect adverse side effects or toxicity in their new compounds. It's difficult because such effects don't always surface in clinical trials. Some take years to show up, long after scores of patients have already suffered from them. To avoid those outcomes, pharmaceutical firms take lots of time to study similar compounds that already have reams of human interaction data, looking for patterns that could indicate a problem.

It's yet another part of the process that AI is proving adept at. AI systems can analyze vast amounts of data about known compounds to generate predictions about how a new molecule may behave. They can also model interactions between a new compound and different physical and chemical environments. That can provide clues to how a new drug might affect different parts of a human body. Best of all, AI can accomplish those tasks with more accuracy and in a fraction of the time it would take a human research team.
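As a schematic of that predictive step, the sketch below trains a classifier on made-up numeric descriptors of known compounds and scores a new one. Real toxicity models use curated chemical features and far more data; every feature, number and threshold here is invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented "descriptors" per compound (e.g. weight, lipophilicity, polar
# area), labeled 1 if the compound showed toxicity in humans, else 0.
rng = np.random.default_rng(1)
safe = rng.normal([300, 2.0, 60], [40, 0.5, 10], size=(200, 3))
toxic = rng.normal([520, 4.5, 20], [40, 0.5, 10], size=(200, 3))
X = np.vstack([safe, toxic])
y = np.array([0] * 200 + [1] * 200)

model = RandomForestClassifier(random_state=0).fit(X, y)
new_compound = np.array([[500, 4.2, 25]])  # resembles the toxic cluster
print("toxicity risk:", model.predict_proba(new_compound)[0, 1])
```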

Even at this early stage of the development of drug discovery AI systems, there's every reason to believe that AI-developed drugs will be on the market in the very near future. In fact, there's already an AI-designed drug intended to treat obsessive-compulsive disorder (OCD) entering human trials in Japan. If successful, it will then proceed to worldwide testing and eventual regulatory approval processes in multiple countries.

It's worth noting that the drug in question took a mere 12 months for the AI to create, which would represent a revolution in the way we develop new disease treatments. With that as a baseline, it's easy to foresee drug development and testing cycles in the future reduced to weeks, not years. It's also easy to predict the advent of personalized drug development, with AI selecting and creating individualized treatments using patient physiological and genetic data. Such outcomes would render the medical field unrecognizable compared to today - and could create a disease-free future and a new human renaissance like nothing that's come before it.


See the rest here:

Recent AI Developments Offer a Glimpse of the Future of Drug Discovery - Tech Times

DeepMind’s neural network teaches AI to reason about the world – New Scientist

Size isn't everything, relationships are


By Matt Reynolds

The world is a confusing place, especially for an AI. But a neural network developed by UK artificial intelligence firm DeepMind that gives computers the ability to understand how different objects are related to each other could help bring it into focus.

Humans use this type of inference, called relational reasoning, all the time, whether we are choosing the best bunch of bananas at the supermarket or piecing together evidence from a crime scene. The ability to transfer abstract relations, such as whether something is to the left of another or bigger than it, from one domain to another gives us a powerful mental toolset with which to understand the world. "It is a fundamental part of our intelligence," says Sam Gershman, a computational neuroscientist at Harvard University.

What's intuitive for humans is very difficult for machines to grasp, however. It is one thing for an AI to learn how to perform a specific task, such as recognising what is in an image. But transferring know-how learned via image recognition to textual analysis or any other reasoning task is a big challenge. Machines capable of such versatility will be one step closer to general intelligence, the kind of smarts that lets humans excel at many different activities.

DeepMind has built a neural network that specialises in this kind of abstract reasoning and can be plugged into other neural nets to give them a relational-reasoning power-up. The researchers trained the AI using images depicting three-dimensional shapes of different sizes and colours. It analysed pairs of objects in the images and tried to work out the relationship between them.
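
To make the pairwise design concrete, here is a heavily simplified sketch in PyTorch, in the spirit of the relation network Santoro and colleagues describe: a small network g scores every ordered pair of objects conditioned on the question, the scores are summed, and a second network f maps the sum to an answer. The object dimensions, layer widths and answer vocabulary below are invented for illustration; the published system pairs this module with a CNN over the image and an LSTM over the question.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim, q_dim, hidden=256, n_answers=10):
        super().__init__()
        # g scores every ordered pair of objects, conditioned on the question
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        # f maps the summed pair representations to answer logits
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_answers))

    def forward(self, objects, question):
        # objects: (batch, n_obj, obj_dim); question: (batch, q_dim)
        b, n, d = objects.shape
        o_i = objects.unsqueeze(2).expand(b, n, n, d)
        o_j = objects.unsqueeze(1).expand(b, n, n, d)
        q = question.unsqueeze(1).unsqueeze(1).expand(b, n, n, question.size(-1))
        pairs = torch.cat([o_i, o_j, q], dim=-1)
        relations = self.g(pairs).sum(dim=(1, 2))  # aggregate over all pairs
        return self.f(relations)

rn = RelationNetwork(obj_dim=32, q_dim=16)
logits = rn(torch.randn(4, 8, 32), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 10])
```

Because the pair scores are simply summed, the module does not care about the order in which objects arrive, which is part of what makes it pluggable into other networks.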

The team then asked it questions such as "What size is the cylinder that is left of the brown metal thing that is left of the big sphere?" The system answered these questions correctly 95.5 per cent of the time, slightly better than humans. To demonstrate its versatility, the relational reasoning part of the AI then had to answer questions about a set of very short stories, answering correctly 95 per cent of the time.

Even so, any practical applications of the system are still a long way off, says Adam Santoro at DeepMind, who led the study. It could initially be useful for computer vision, however. "You can imagine an application that automatically describes what is happening in a particular image, or even video, for a visually impaired person," he says.

Outperforming humans at a niche task is also not that surprising, says Gershman. We are still a very long way from machines that can make sense of the messiness of the real world. Santoro agrees. DeepMind's AI has made a start by understanding differences in size, colour and shape, but there's more to relational reasoning than that. "There is a lot of work needed to solve richer real-world data sets," says Santoro.

Read more: The road to artificial intelligence: A case of data over theory

Read more: I'm in shock! How an AI beat the world's best human at Go


Continue reading here:

DeepMind's neural network teaches AI to reason about the world - New Scientist

Byonic.ai Redefines the Future of Digital Marketing – inForney.com

FRISCO, Texas, Aug. 23, 2021 /PRNewswire-PRWeb/ -- The next generation of AI- and ML-powered marketing is coming soon. Byonic.ai is the first-of-its-kind end-to-end platform for personalized lead insights, creative content, account intelligence, intent-based data, account-based marketing, and marketing automation. It allows data-driven teams to align their marketing, product, and customer success goals with revenue growth and sales.

Byonic.ai uses an extensive database that identifies the purchasing intent and habits of in-market prospects at various points in the sales and marketing cycles. AI capabilities target the right people at the right time, providing users with unparalleled real-time engagement opportunities that help turn prospects into well-qualified customers.

The platform uses predictive and actionable insights to discover the highest-quality leads for more successful marketing and sales outcomes. Users can measure campaign success with extensive reports and analysis. The end-to-end repeatable process embedded within Byonic.ai allows users to: discover, build, target, deliver, analyze, engage, and convert.

Account intelligence finally meets artificial intelligence.

How Byonic.ai Works

Byonic.ai aims to revolutionize digital marketing for B2B marketing and sales professionals, who can use the platform in several ways across the campaign lifecycle.

"Most platforms weren't built as a one-stop-shop for all your marketing campaign needs," says Snehhil Gupta, Chief Technology Officer, at Bython Media, creators of Byonic.ai. "Now, you get a full suite of end-to-end capabilities that include account intelligence, lead insights, marketing automation, and creative content, powered by AI/ML and wrapped in one simple and intuitive platform to run smarter campaigns."

Byonic.AI will launch in Fall 2021. Marketing and demand generation professionals can sign up for an early demo on the company's website, http://www.Byonic.AI.

Media Contact

Bython Media, +1 (214) 295-7729, dw@bython.com


SOURCE Bython Media

Visit link:

Byonic.ai Redefines the Future of Digital Marketing - inForney.com

Google will shut down its AI-guided Photos printing service on June 30th – Engadget

Google's automated Photos printing service wasn't long for this world, at least not in its first incarnation. Droid Life has learned (via The Verge) that Google is shutting down the AI-guided trial service on June 30th. In a notice to members, the company didn't say what prompted the closure, but it said it hoped to evolve this feature and make it more widely available. This isn't the end, then, even if the service is likely to change.

The $8 per month trial had AI pick your 10 best pictures (prioritized by faces, landscapes or a mix) and print them on 4x6 cardstock, with edits if you preferred. They were meant to be gifts, or just fond memories if you wanted more than just digital copies. Google didn't have the best timing, however. The service became public knowledge in February, just a month before much of the world entered pandemic lockdowns, and it's hard to justify spending money on a photo service when you can't socialize or travel. If there is a follow-up service, it might have to wait.

More:

Google will shut down its AI-guided Photos printing service on June 30th - Engadget

3 ways COVID-19 is transforming advanced analytics and AI – World Economic Forum

While the impact of AI on COVID-19 has been widely reported in the press, the impact of COVID-19 on AI has not received much attention. Three key impact areas have helped shape the use of AI in the past five months and will continue to transform advanced analytics and AI in the months and years to come.


The speed of decision-making leads to agile data science

The spread of the pandemic, first in China and South Korea and then in Europe and the United States, was swift and caught most governments, companies and citizens off-guard. This global health crisis developed into an economic crisis and a supply chain crisis within weeks (demand for toilet paper and paper towels alone rose by 600-750% during the week of March 8 in the US). Fewer than 100,000 global confirmed cases in early March ballooned to more than 13 million by July, with more than 580,000 deaths.

With business leaders needing to act quickly, the crisis provided a chance for advanced analytics and AI-based techniques to augment decision-making. While machine learning models were a natural aid, development time for machine learning or advanced analytical models typically clocks in at four to eight weeks, and that's after there is a clear understanding of the scope of the use case, as well as the necessary data to train, validate and test the models. If you add use-case evaluation before model development, and model deployment after the model has been trained, you are looking at three to four months from initial conception to production deployment.

"With business leaders needing to act quickly, the crisis provided a chance for advanced analytics and AI-based techniques to augment decision-making."

To deliver solutions in days, not weeks or months, minimum viable AI models (MVAIMs) had to be developed on much shorter timelines. Using agile data science methodologies, PwC was able to compress these times significantly, building a SEIRD (Susceptible-Exposed-Infected-Recovered-Death) model of COVID-19 progression for all 50 US states in one week. We then tested, validated and deployed it in another week. Once this initial model was deployed, we extended it to all counties in the US and made the model more sophisticated.
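
For readers unfamiliar with the acronym, a SEIRD model is a small system of differential equations tracking the susceptible, exposed, infected, recovered and dead fractions of a population. Below is a deliberately minimal Python sketch of one; the rates and initial conditions are illustrative assumptions, not PwC's calibrated values.

```python
# Minimal SEIRD compartmental model; all parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def seird(t, y, beta, sigma, gamma, mu):
    """Right-hand side of the SEIRD ODEs (population fractions)."""
    S, E, I, R, D = y
    dS = -beta * S * I                  # new exposures
    dE = beta * S * I - sigma * E       # incubation ends at rate sigma
    dI = sigma * E - gamma * I - mu * I # recovery or death
    dR = gamma * I
    dD = mu * I
    return [dS, dE, dI, dR, dD]

params = (0.5, 1/5.2, 1/10, 0.002)      # transmission, incubation, recovery, fatality
y0 = [0.999, 0.001, 0.0, 0.0, 0.0]      # start with 0.1% of the population exposed
sol = solve_ivp(seird, (0, 180), y0, args=params, t_eval=np.arange(0, 181))

print(f"Peak infected fraction: {sol.y[2].max():.3f}")
```

A production model would be far richer (time-varying rates, state- and county-level fits), but this structure is the core of any SEIRD approach.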

Uncertainty about the future leads to multi-agent simulations

Uncertainty touched every aspect of life under COVID-19, from health to behavior to economic impact, and expedited the increased adoption of advanced analytics and AI techniques. Uncertainty feeds emotional reactions such as fear, anger and frustration, and such emotionally-driven behavior took precedence over rational decisions and actions, especially in the early days of the pandemic.

Uncertainty along these different dimensions made scenario planning the dominant framework for evaluating plans and decisions. Scenario analysis became the predominant paradigm for evaluating disease progression, economic downturn and recovery (e.g., V-, U-, L- or W-shaped economic recoveries), as well as for management decision-making on site openings, contingency planning, demand sensing, supply chain disruptions and workforce planning. While qualitative scenario analysis is quite common in the business world, using AI-based simulations to quantitatively understand the causal linkages of different drivers and develop contingent plans of action was brought to the fore by the pandemic.

Modeling human behavior (rational and emotional) became an important aspect of the scenario analysis. For example, compliance with stay-at-home orders was one of the primary behavioral drivers of both the spread of the disease and economic activity. As a result, agent-based modeling and simulation was one of the primary advanced analytics and AI techniques used to perform scenario analysis. Daily mobility data on how many miles were driven within each zip code in the country became a proxy for the effectiveness of the stay-at-home orders. The same data was used to model the mobility behavior of people in different parts of the US as the pandemic progressed. Agent-based models were one of the best techniques for capturing the time- and location-dependent variations in human behavior during the pandemic.
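
A toy example may help show what "agent-based" means in practice: each simulated person gets a stay-at-home compliance probability, and only those who go out can transmit or catch the disease. Every number below is an assumption chosen for illustration, not a deployed model.

```python
# Toy agent-based sketch of stay-at-home compliance; parameters are illustrative.
import random

class Agent:
    def __init__(self, compliance):
        self.compliance = compliance              # probability of staying home each day
        self.infected = random.random() < 0.01    # 1% infected at the start

def step(agents, transmission=0.5):
    """One simulated day: non-compliant agents mix in a shared public pool."""
    out = [a for a in agents if random.random() > a.compliance]
    if not out:
        return
    infected_share = sum(a.infected for a in out) / len(out)
    for a in out:
        if not a.infected and random.random() < transmission * infected_share:
            a.infected = True

random.seed(0)
agents = [Agent(compliance=random.uniform(0.3, 0.95)) for _ in range(10_000)]
for day in range(60):
    step(agents)
print("Infected after 60 days:", sum(a.infected for a in agents))
```

Swapping the uniform compliance distribution for one driven by real mobility data is exactly the step described above.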

System dynamics modeling, another well-known technique, was critical in integrating multiple decision-making domains (e.g., COVID-19 disease progression, government interventions, population behavior, demand sensing and supply disruptions). Agent-based simulation has traditionally been used by the Centers for Disease Control and Prevention (CDC) and other health authorities to model disease progression and health behaviors. Both methods have been used successfully in a number of uncertain scenarios to help make strategic and operational management decisions.

Lack of historical data leads to upsurge of model-based AI

Given the rarity of a pandemic event, there was very little historical data at a global level on the disease. As a result, there was little data to power the data-rich, model-free approaches to AI, like deep learning, that have become popular in recent years.

By necessity, model-based AI, which makes the most of whatever data is available, saw a resurgence. As the pandemic progressed and more data became available, data-rich and model-free approaches could be combined with model-based ones, leading to hybrid solutions.
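
One simple way to picture such a hybrid is to keep the structural (model-based) equations but estimate their parameters from observed data. The sketch below fits the transmission rate of a SEIRD system to a synthetic case curve; both the "observed" data and the fitting setup are invented for illustration.

```python
# Sketch: fit the SEIRD transmission rate to observed case counts,
# combining a structural model with a data-driven parameter estimate.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def seird(t, y, beta, sigma=1/5.2, gamma=1/10, mu=0.002):
    S, E, I, R, D = y
    return [-beta*S*I, beta*S*I - sigma*E, sigma*E - (gamma + mu)*I, gamma*I, mu*I]

days = np.arange(30)
observed = 0.001 * np.exp(0.12 * days)   # stand-in for reported infected fraction

def loss(beta):
    sol = solve_ivp(seird, (0, 29), [0.999, 0.001, 0, 0, 0],
                    args=(beta,), t_eval=days)
    return float(np.sum((sol.y[2] - observed) ** 2))

best = minimize_scalar(loss, bounds=(0.05, 1.5), method="bounded")
print(f"Estimated transmission rate beta = {best.x:.3f}")
```

The model supplies the structure that scarce data cannot, while the data keeps the model honest as conditions change.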

In many ways, the pandemic has highlighted the inadequacies of our systems, processes, governance and behaviors. On the other hand, it has also provided an opportunity for data scientists and AI scientists to put their advanced techniques and tools to use by helping business leaders make decisions in a challenging environment that's dominated by speed, uncertainty and lack of data.

In summary, as organizations manage through this pandemic and transform themselves post-pandemic, three key learnings are worth keeping in mind. First, focus on agile data science methods that address the speed, urgency and uncertainty of decision-making. Second, build and manage your business using dynamic and resilient models (e.g., scenario-based simulations using system dynamics and agent-based models) that capture the inter-relationships of multiple domains (e.g., demand, production, supply, finance) and human behavior. Third, combine model-rich and data-rich approaches to obtain the best of both worlds when building AI systems. These approaches can help you deliver solutions quickly while making the most of the technologies and processes already in place.

Go here to see the original:

3 ways COVID-19 is transforming advanced analytics and AI - World Economic Forum

AI's Data Hunger Will Drive Intelligence Collection – Breaking Defense

A sensor analyst at work at Joint Space Operations Center, Vandenberg Air Force Base, California

WASHINGTON: Artificial intelligence has an insatiable appetite for data, but if you feed it the wrong kind of data, it's going to choke. To get clean-enough data in large enough quantities for machine-learning algorithms to actually learn something, officials say the intelligence community needs to change how drones, satellites, and other sensors perform their missions every day.

"The turning point will be when we start seeing collection requirements for the production of training-quality datasets, versus the support of a tactical operation," said David Spirk, who became the Defense Department's Chief Data Officer in June and is now finalizing the DoD's new data strategy. "I don't know that we've entirely made that turn yet, but I think we're talking about it."

For example, the US has collected vast amounts of data on the Central Command theater, said Spirk, who served in Afghanistan himself as a Marine Corps intel specialist. But, he told the AFCEA AI+ML conference yesterday, that data collection was driven by urgent tactical needs, without a systematic approach to archiving it, curating it, and making it accessible for machine learning.

Predator drone over Afghanistan

That's understandable. Artificial intelligence in the modern sense was in its infancy on Sept. 11th, 2001, and the Pentagon did not systematically embrace AI until 2014, long after the peak of fighting in Afghanistan and Iraq. But it means that the military's vast archives of legacy data, from drone video to maintenance records, are poorly catalogued, inconsistently formatted or otherwise too messy for a machine-learning algorithm to use without a massive and costly clean-up.

"Is that juice worth the squeeze?" asked Capt. Michael Kanaan, an Air Force intelligence officer who heads the USAF-MIT Artificial Intelligence Accelerator. In many cases, you could spend a lot of time and money cleaning up out-of-date, low-quality data that no longer reflects how your agency does analysis today. You get a much better return on investment, he told the conference, by digitizing your [current] workflows and ensuring they produce training-quality data going forward.

It's crucial to set up a strong data management culture to govern your data collection from the beginning, said Terrence Busch, the Defense Intelligence Agency's technical director for the Machine-assisted Analytic Rapid-repository System (MARS).

It took DIA years of effort and a lot of back-end money to set up the processes, training, and technology required for data management, Busch told the conference. "It's not exciting work, [and] a lot of folks didn't want to invest in it," he said, but now that system is in place, the new data that DIA collects is much more accessible for AI.

At the same time, another DIA official warned the conference, you don't want to clean up your data too much, because you might erase a seemingly irrelevant detail that turns out to be useful later on.

"We never want to throw it away, [because] we don't know if it's going to have value later," said Brian Drake, DIA's director of artificial intelligence. "The concept we are socializing inside of our agency is something we've done since World War Two, which is creating a gold copy of that data": a copy of the data as originally collected, with all its flaws, that's archived and kept unchanged in perpetuity for the benefit of future analysts.

"We have to have an honest conversation with our vendors on that point," Drake told the conference, "[because] we do find some data sets that come to us that have been pre-prepped and labeled, especially when it comes to imagery." While that cleaned-up data is often great for the immediate task at hand in the contract, he said, DIA needs the raw material as well.
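
In engineering terms, a "gold copy" is simply an immutable archive of the data as collected, with a fingerprint proving it has not been altered since. Here is a minimal Python sketch of the practice; the function and paths are hypothetical illustrations, and DIA's actual tooling is not public.

```python
# Hypothetical sketch of the "gold copy" practice: archive raw data
# unmodified and record its hash, so cleaning happens only on copies.
import hashlib
import shutil
from pathlib import Path

def archive_gold_copy(raw_path: Path, archive_dir: Path) -> str:
    """Archive raw data exactly as collected and record its SHA-256 hash."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(raw_path.read_bytes()).hexdigest()
    gold = archive_dir / f"{digest[:12]}_{raw_path.name}"
    shutil.copy2(raw_path, gold)   # flaws and all, kept unchanged in perpetuity
    (archive_dir / (gold.name + ".sha256")).write_text(digest)
    return digest

# Cleaning and labeling then happen on separate working copies,
# never on the gold copy itself.
```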

Getting everyone from contracting officers to analysts thinking about AI-quality data is a long-term effort, Busch said. "Down in the workforce level, culture adaptation is slow," he said. "We've spent at least 10 years getting people acculturated to big data, getting used to automation."

That cultural revolution now needs to spread beyond the intelligence community. "Every single soldier, airman, sailor, Coast Guardsman is really a data officer in the future," said Greg Garcia, the Army's Chief Data Officer. "Every single individual, no matter what their specialty is, has to think about data."

Continue reading here:

AI's Data Hunger Will Drive Intelligence Collection - Breaking Defense

For eBay, AI is ride or die – VentureBeat

"If you're not doing AI today, don't expect to be around in a few years," says Japjit Tulsi, VP of engineering at eBay. "It really is that important for companies to invest in, especially commerce companies."

Tulsi will speak next week at MB 2017, July 11 and 12 in SF, MobileBeat's flagship event, where this year we've gathered more than 30 brands to talk about how AI is being applied in businesses today.

eBay is working to stay ahead of the curve now that machine learning and AI are growing in importance; it has focused on the potential of AI for the past ten years. The company's approach to AI has been built on a platform of research and development, Tulsi says, plus decades of insights and data about consumer behavior, making even the simplest applications incredibly valuable.

As an example, Tulsi points to the merchandising strip at the bottom of every item page, which shows similar items that a shopper might be intrigued by, and often leads them down a positive rabbit hole of shopping and buying.

"It's machine learning and AI at the very simplest level, and we've seen a tremendous amount of return on investment on that," Tulsi says.
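
At its simplest, such a strip can be little more than a nearest-neighbor lookup over item vectors. The sketch below ranks a catalog by cosine similarity to the viewed item, using random stand-in embeddings; eBay's production models are, of course, far more sophisticated and not public.

```python
# Bare-bones "similar items" lookup over stand-in item embeddings.
import numpy as np

def top_similar(item_vec, catalog, k=5):
    """Rank catalog items by cosine similarity to the viewed item."""
    catalog_norm = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    scores = catalog_norm @ (item_vec / np.linalg.norm(item_vec))
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
catalog = rng.random((1_000, 128))       # stand-in embeddings for live listings
# The viewed item itself ranks first; a real strip would exclude it.
print(top_similar(catalog[42], catalog))
```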

However, evolving that into more sophisticated personalization has proven difficult, says Tulsi, because of the limitations on computing power over the past 10 years. Then, in 2015 or so, processors hit the event horizon, with game-changing advances in GPUs and the dedicated hardware used for deep learning.

Massive calculations can now be made swiftly and cost-effectively. New algorithms are increasing the speed and depth of learning. And deep learning can now go broad across billions of data points with thousands of aspects and dozens of layers.

eBay has no shortage of data. The company manages about 1 billion live listings and 164 million active buyers daily, and receives 10 million new listings via mobile every week.

So another big bet was born: Investment in AI technologies like natural language understanding, computer vision, and semantic search, to drive growth and, Tulsi says, reinvent the future of commerce.

The future looks a lot like their engineering team building descriptive and predictive models from the enormous volume of behavioral and description data generated by eBay's many buyers, sellers, and products. It requires the complex fusion of massive amounts of behavior-log, text, and image data, with a particular emphasis on developing data-driven models to improve user experience.

"The question now is, can we provide you with even further personalized, relevant information over the course of the next ten years?" he says. "We're very focused on how AI will impact commerce."

Specifically, how it will impact the primary goal of commerce: understanding consumers' buying intent wherever they are, from bricks-and-mortar to online browsing. Of course, cross-platform understanding of what a shopper wants is the key to delivering a truly personal, contextual shopping experience.

"You want an exact item that you're looking for, whether you want it, you need it, or you just like it, at the price point you care about," Tulsi says. "With AI, our aim is to achieve that kind of perfection underneath the hood so you don't have to spend a lot of time finding that ideal match for you."

He points to one of their beta projects, launched last year on Facebook Messenger: the eBay ShopBot. It's essentially a multimodal search engine, or a personalized shopping assistant, powered by contextual understanding, predictive modeling, and machine learning.

Keywords are not enough any more, and don't offer the most optimized shopping experience. With ShopBot, consumers can text, talk, or snap a picture, and the assistant then asks questions to better understand their intent and dig up hyper-personalized recommendations. And it gets smarter about what you want every time you use it.

These consumer interactions also yield a tremendous amount of intent data, which can be poured right back into the algorithm.

"Across the three spectrums of multimodal AI that it represents, we're starting to get much, much better at understanding you and whichever way that you want to interact with us," Tulsi says.

And as they improve their ability to simulate human cognitive capabilities like perception, language processing, and visual processing, the company expects that commerce will become increasingly conversational, even to the point where the search box becomes redundant.

"What I think is really exciting going forward is the machine will actually do the thinking for you," Tulsi laughs. "You will just talk naturally to it as if you're talking to a friend and spitballing, and the machine should be able to understand your intent."

And just as importantly, commerce will become present wherever and whenever the user is engaged on their social messaging platforms.

It's an approach that digital assistant-focused companies should sit up and take notice of, Tulsi adds. They need to start investing in commerce capabilities or partnering with commerce companies to really make their assistants pan out from a financial-model perspective.

"From our perspective, every company should be heavily investing in AI, and it shouldn't just be about using cognitive services but actually developing your own models that keep you on the cutting edge of technology," Tulsi says. "And that will hold you in good stead over the course of the next many years to come."

See the rest here:

For eBay, AI is ride or die - VentureBeat

How AI could make living in cities much less miserable – MarketWatch



Read this article:

How AI could make living in cities much less miserable - MarketWatch