Disney Develops Machine Learning Tool to Create Realistic 3D Faces – Digital Information World

3D animation has become a cornerstone of the film industry because it allows studios to create movies without having to actually film them, developing them instead on computers using a wide range of animation techniques. That said, it is important to note that a lot of the 3D faces you see are probably going to make you feel rather uncomfortable due to something called the uncanny valley.

This is a phenomenon wherein something that looks very similar to us without being entirely accurate tends to evoke feelings of unease or discomfort when we see it. As a result, a lot of animation studios have struggled to create 3D faces that are as realistic as possible. This is why so few 3D movies feature human beings that look realistic rather than cartoonish, although recent developments made by Disney might just make this less of an issue.

Disney's machine learning tool would do this by better analyzing the various intricacies of the human face. There are a lot of subtle variations in our expressions, and it's very unlikely that two different people would produce exactly the same expression. Hence, the machine learning tool will help broaden the range of options that animators can take advantage of.

This shows just how relevant the world of tech is regardless of which industry you may be talking about at any given point in time. Disney's research into machine learning is primarily meant to help the company make movies, but it will also have a widespread impact on other areas.

Read next: AI Could Soon Give Speakers Directional Voice Detection

Read the original here:
Disney Develops Machine Learning Tool to Create Realistic 3D Faces - Digital Information World

The 12 Coolest Machine-Learning Startups Of 2020 – CRN

Learning Curve

Artificial intelligence has been a hot technology area in recent years and machine learning, a subset of AI, is one of the most important segments of the whole AI arena.

Machine learning is the development of intelligent algorithms and statistical models that improve software through experience without the need to explicitly code those improvements. A predictive analysis application, for example, can become more accurate over time through the use of machine learning.
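To make the idea of "improving through experience" concrete, here is a minimal, hypothetical sketch: an incrementally trained classifier whose held-out accuracy tends to rise as more batches of synthetic "historical" data arrive, with no hand-coded rule changes. The data and model choice are purely illustrative and not tied to any vendor mentioned below.

```python
# Minimal sketch: a predictive model that improves with experience,
# illustrated with scikit-learn's incremental SGDClassifier.
# The data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_batch(n=200):
    # Two noisy clusters standing in for "historical business data".
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

model = SGDClassifier(random_state=0)
X_test, y_test = make_batch(2000)

# As more batches of "experience" arrive, accuracy tends to improve
# without anyone hand-coding new rules.
for batch in range(1, 6):
    X, y = make_batch()
    model.partial_fit(X, y, classes=[0, 1])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"after batch {batch}: test accuracy = {acc:.3f}")
```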

But machine learning has its challenges. Developing machine-learning models and systems requires a confluence of data science, data engineering and development skills. Obtaining and managing the data needed to develop and train machine-learning models is a significant task. And implementing machine-learning technology within real-world production systems can be a major hurdle.

Here's a look at a dozen startup companies, some that have been around for a few years and some just getting off the ground, that are addressing the challenges associated with machine learning.

AI.Reverie

Top Executive: Daeil Kim, Co-Founder, CEO

Headquarters: New York

AI.Reverie develops AI and machine-learning technology for data generation, data labeling and data enhancement tasks for the advancement of computer vision. The company's simulation platform is used to help acquire, curate and annotate the large amounts of data needed to train computer vision algorithms and improve AI applications.

In October AI.Reverie was named a Gartner Cool Vendor in AI core technologies.

Anodot

Top Executive: David Drai, Co-Founder, CEO

Headquarters: Redwood City, Calif.

Anodot's Deep 360 autonomous business monitoring platform uses machine learning to continuously monitor business metrics, detect significant anomalies and help forecast business performance.

Anodot's algorithms have a contextual understanding of business metrics, providing real-time alerts that help users cut incident costs by as much as 80 percent.
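Anodot's own algorithms are not published here, so the following is only a generic illustration of the underlying idea: score each new value of a business metric against a rolling baseline and alert when the deviation is large. The window size, threshold and synthetic revenue series are assumptions.

```python
# Illustrative sketch (not Anodot's actual method): flag anomalies in a
# business metric with a rolling mean/std baseline and a z-score threshold.
import numpy as np

def detect_anomalies(series, window=24, threshold=3.0):
    """Return indices where the metric deviates sharply from its recent baseline."""
    anomalies = []
    for t in range(window, len(series)):
        baseline = series[t - window:t]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        z = abs(series[t] - mu) / sigma
        if z > threshold:
            anomalies.append(t)
    return anomalies

# Synthetic hourly revenue metric with an injected dip at hour 150.
rng = np.random.default_rng(1)
revenue = 1000 + 50 * np.sin(np.arange(200) / 12) + rng.normal(0, 10, 200)
revenue[150] -= 400
print(detect_anomalies(revenue))  # should report hour 150
```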

Anodot has been granted patents for technology and algorithms in such areas as anomaly score, seasonality and correlation. Earlier this year the company raised $35 million in Series C funding, bringing its total funding to $62.5 million.

BigML

Top Executive: Francisco Martin, Co-Founder, CEO

Headquarters: Corvallis, Ore.

BigML offers a comprehensive, managed machine-learning platform for easily building and sharing datasets and data models, and making highly automated, data-driven decisions. The company's programmable, scalable machine-learning platform automates classification, regression, time series forecasting, cluster analysis, anomaly detection, association discovery and topic modeling tasks.

The BigML Preferred Partner Program supports referral partners and partners that sell BigML and oversee implementation projects. Partner A1 Digital, for example, has developed a retail application on the BigML platform that helps retailers predict sales cannibalization: when promotions or other marketing activity for one product can lead to reduced demand for other products.

StormForge

Top Executive: Matt Provo, Founder, CEO

Headquarters: Cambridge, Mass.

StormForge provides machine learning-based, cloud-native application testing and performance optimization software that helps organizations optimize application performance in Kubernetes.

StormForge was founded under the name Carbon Relay and developed its Red Sky Ops tools that DevOps teams use to manage a large variety of application configurations in Kubernetes, automatically tuning them for optimized performance no matter what IT environment they're operating in.

This week the company acquired German company Stormforger and its performance testing-as-a-platform technology. The company has rebranded as StormForge and renamed its integrated product the StormForge Platform, a comprehensive system for DevOps and IT professionals that can proactively and automatically test, analyze, configure, optimize and release containerized applications.

In February the company said that it had raised $63 million in a funding round from Insight Partners.

Comet.ML

Top Executive: Gideon Mendels, Co-Founder, CEO

Headquarters: New York

Comet.ML provides a cloud-hosted machine-learning platform for building reliable machine-learning models that help data scientists and AI teams track datasets, code changes, experimentation history and production models.

Launched in 2017, Comet.ML has raised $6.8 million in venture financing, including $4.5 million in April 2020.

Dataiku

Top Executive: Florian Douetteau, Co-Founder, CEO

Headquarters: New York

Dataiku's goal with its Dataiku DSS (Data Science Studio) platform is to move AI and machine-learning use beyond lab experiments into widespread use within data-driven businesses. Dataiku DSS is used by data analysts and data scientists for a range of machine-learning, data science and data analysis tasks.

In August Dataiku raised an impressive $100 million in a Series D round of funding, bringing its total financing to $247 million.

Dataiku's partner ecosystem includes analytics consultants, service partners, technology partners and VARs.

DotData

Top Executive: Ryohei Fujimaki, Founder, CEO

Headquarters: San Mateo, Calif.

DotData says its DotData Enterprise machine-learning and data science platform is capable of reducing AI and business intelligence development projects from months to days. The companys goal is to make data science processes simple enough that almost anyone, not just data scientists, can benefit from them.

The DotData platform is based on the company's AutoML 2.0 engine that performs full-cycle automation of machine-learning and data science tasks. In July the company debuted DotData Stream, a containerized AI/ML model that enables real-time predictive capabilities.

Eightfold.AI

Top Executive: Ashutosh Garg, Co-Founder, CEO

Headquarters: Mountain View, Calif.

Eightfold.AI develops the Talent Intelligence Platform, a human resource management system that utilizes AI, deep learning and machine-learning technology for talent acquisition, management, development, experience and diversity. The Eightfold system, for example, uses AI and ML to better match candidate skills with job requirements and improves employee diversity by reducing unconscious bias.

In late October Eightfold.AI announced $125 million in new financing, putting the startup's value at more than $1 billion.

H2O.ai

Top Executive: Sri Ambati, Co-Founder, CEO

Headquarters: Mountain View, Calif.

H2O.ai wants to democratize the use of artificial intelligence for a wide range of users.

The company's H2O open-source AI and machine-learning platform, H2O Driverless AI automatic machine-learning software, H2O MLOps and other tools are used to deploy AI-based applications in financial services, insurance, health care, telecommunications, retail, pharmaceutical and digital marketing.

H2O.ai recently teamed up with data science platform developer KNIME to integrate Driverless AI for AutoML with KNIME Server for workflow management across the entire data science life cycle, from data access to optimization and deployment.

Iguazio

Top Executive: Asaf Somekh, Co-Founder, CEO

Headquarters: New York

The Iguazio Data Science Platform for real-time machine learning applications automates and accelerates machine-learning workflow pipelines, helping businesses develop, deploy and manage AI applications at scale that improve business outcomes (what the company calls MLOps).

In early 2020 Iguazio raised $24 million in new financing, bringing its total funding to $72 million.

OctoML

Top Executive: Luis Ceze, Co-Founder, CEO

Headquarters: Seattle

OctoML's Software-as-a-Service Octomizer makes it easier for businesses and organizations to put deep learning models into production more quickly on different CPU and GPU hardware, including at the edge and in the cloud.

OctoML was founded by the team that developed the Apache TVM machine-learning compiler stack project at the University of Washington's Paul G. Allen School of Computer Science & Engineering. OctoML's Octomizer is based on the TVM stack.

Tecton

Top Executive: Mike Del Balso, Co-Founder, CEO

Headquarters: San Francisco

Tecton just emerged from stealth in April 2020 with its data platform for machine learning that enables data scientists to turn raw data into production-ready machine-learning features. The startups technology is designed to help businesses and organizations harness and refine vast amounts of data into the predictive signals that feed machine-learning models.

The company's three founders, CEO Mike Del Balso, CTO Kevin Stumpf and Engineering Vice President Jeremy Hermann, previously worked together at Uber, where they developed the ride-sharing company's Michelangelo machine-learning platform, which Uber used to scale its operations to thousands of production models serving millions of transactions per second, according to Tecton.

The company started with $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia.

Read more from the original source:
The 12 Coolest Machine-Learning Startups Of 2020 - CRN

Commentary: Pathmind applies AI, machine learning to industrial operations – FreightWaves

The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.

In this installment of the AI in Supply Chain series (#AIinSupplyChain), we explore how Pathmind, an early-stage startup based in San Francisco, is helping companies apply simulation and reinforcement learning to industrial operations.

I asked Chris Nicholson, CEO and founder of Pathmind, "What is the problem that Pathmind solves for its customers? Who is the typical customer?"

Nicholson said: "The typical Pathmind customer is an industrial engineer working at a simulation consulting firm or on the simulation team of a large corporation with industrial operations to optimize. This ranges from manufacturing companies to the natural resources sector, such as mining and oil and gas. Our clients build simulations of physical systems for routing, job scheduling or price forecasting, and then search for strategies to get more efficient."

Pathmind's software is suited for manufacturing resource management, energy usage management optimization and logistics optimization.

As with every other startup that I have highlighted as a case in this #AIinSupplyChain series, I asked, "What is the secret sauce that makes Pathmind successful? What is unique about your approach? Deep learning seems to be all the rage these days. Does Pathmind use a form of deep learning? Reinforcement learning?"

Nicholson responded: "We automate tasks that our users find tedious or frustrating so that they can focus on what's interesting. For example, we set up and maintain a distributed computing cluster for training algorithms. We automatically select and tune the right reinforcement learning algorithms, so that our users can focus on building the right simulations and coaching their AI agents."

Echoing topics that we have discussed in earlier articles in this series, he continued: "Pathmind uses some of the latest deep reinforcement learning algorithms from OpenAI and DeepMind to find new optimization strategies for our users. Deep reinforcement learning has achieved breakthroughs in gaming, and it is beginning to show the same performance for industrial operations and supply chain."
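Pathmind's actual platform is not shown here, but a toy example can illustrate what "finding optimization strategies" with reinforcement learning means: an agent repeatedly interacts with a simulated environment and learns a policy that minimizes cost. The sketch below uses plain tabular Q-learning on a made-up routing graph; real deployments would use deep RL on far richer simulations.

```python
# Toy sketch only: tabular Q-learning finding the cheapest route on a tiny graph.
# This merely illustrates learning a policy from simulated experience.
import random

# Edge costs of a small routing graph; state 3 is the destination.
costs = {
    (0, 1): 4.0, (0, 2): 1.0,
    (1, 3): 1.0, (2, 1): 1.0, (2, 3): 6.0,
}
actions = {s: [d for (src, d) in costs if src == s] for s in range(3)}

Q = {(s, a): 0.0 for (s, a) in costs}   # expected cost-to-go for each edge
alpha, gamma, epsilon = 0.1, 1.0, 0.2

for episode in range(2000):
    s = 0
    while s != 3:
        if random.random() < epsilon:
            a = random.choice(actions[s])
        else:
            a = min(actions[s], key=lambda x: Q[(s, x)])   # lowest expected cost
        reward = costs[(s, a)]                              # cost to traverse edge
        future = 0.0 if a == 3 else min(Q[(a, x)] for x in actions[a])
        Q[(s, a)] += alpha * (reward + gamma * future - Q[(s, a)])
        s = a

# The learned values should favor 0 -> 2 -> 1 -> 3 (total cost 3) over 0 -> 1 -> 3 (cost 5).
print({k: round(v, 2) for k, v in Q.items()})
```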

On its website, Pathmind describes saving a large metals processor 10% of its expenditures on power. It also describes the use of its software to increase ore preparation by 19% at an open-pit mining site.

Given how difficult it is to obtain good quality data for AI and machine learning systems for industrial settings, I asked how Pathmind handles that problem.

"Simulations generate synthetic data, and lots of it," said Slin Lee, Pathmind's head of engineering. "The challenge is to build a simulation that reflects your underlying operations, but there are many tools to validate results."

"Once you pass the simulation stage, you can integrate your reinforcement learning policy into an ERP. Most companies have a lot of the data they need in those systems. And yes, there's always data cleansing to do," he added.

As the customer success examples Pathmind provides on its website suggest, mining companies are increasingly looking to adopt and implement new software to increase efficiencies in their internal operations. This is happening because the industry as a whole runs on very old technology, and deposits of ore are becoming increasingly difficult to access as existing mines reach maturity. Moreover, the growing trend toward the decarbonization of supply chains, and the regulations that will eventually follow to make decarbonization a requirement, provide an incentive for mining companies to seize the initiative in figuring out how to achieve that goal by implementing new technology.

The areas in which AI and machine learning are making the greatest inroads are mineral exploration using geological data to make the process of seeking new mineral deposits less prone to error and waste; predictive maintenance and safety using data to preemptively repair expensive machinery before breakdowns occur; cyberphysical systems creating digital models of the mining operation in order to quickly simulate various scenarios; and autonomous vehicles using autonomous trucks and other autonomous vehicles and machinery to move resources within the area in which mining operations are taking place.

According to Statista, "The revenue of the top 40 global mining companies, which represent a vast majority of the whole industry, amounted to some 692 billion U.S. dollars in 2019. The net profit margin of the mining industry decreased from 25 percent in 2010 to nine percent in 2019."

The trend toward mining companies and other natural-resource-intensive industries adopting new technology is going to continue. So this is a topic we will continue to pay attention to in this column.

Conclusion

If you are a team working on innovations that you believe have the potential to significantly refashion global supply chains, we'd love to tell your story at FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at media@freightwaves.com.

Dig deeper into the #AIinSupplyChain Series with FreightWaves:

Commentary: Optimal Dynamics the decision layer of logistics? (July 7)

Commentary: Combine optimization, machine learning and simulation to move freight (July 17)

Commentary: SmartHop brings AI to owner-operators and brokers (July 22)

Commentary: Optimizing a truck fleet using artificial intelligence (July 28)

Commentary: FleetOps tries to solve data fragmentation issues in trucking (Aug. 5)

Commentary: Bulgaria's Transmetrics uses augmented intelligence to help customers (Aug. 11)

Commentary: Applying AI to decision-making in shipping and commodities markets (Aug. 27)

Commentary: The enabling technologies for the factories of the future (Sept. 3)

Commentary: The enabling technologies for the networks of the future (Sept. 10)

Commentary: Understanding the data issues that slow adoption of industrial AI (Sept. 16)

Commentary: How AI and machine learning improve supply chain visibility, shipping insurance (Sept. 24)

Commentary: How AI, machine learning are streamlining workflows in freight forwarding, customs brokerage (Oct. 1)

Commentary: Can AI and machine learning improve the economy? (Oct. 8)

Commentary: Savitude and StyleSage leverage AI, machine learning in fashion retail (Oct. 15)

Commentary: How Japan's ABEJA helps large companies operationalize AI, machine learning (Oct. 26)

Author's disclosure: I am not an investor in any early-stage startups mentioned in this article, either personally or through REFASHIOND Ventures. I have no other financial relationship with any entities mentioned in this article.

View post:
Commentary: Pathmind applies AI, machine learning to industrial operations - FreightWaves

Before machine learning can become ubiquitous, here are four things we need to do now – SiliconANGLE News

It wasn't too long ago that concepts such as communicating with your friends in real time through text or accessing your bank account information all from a mobile device seemed outside the realm of possibility. Today, thanks in large part to the cloud, these actions are so commonplace, we hardly even think about these incredible processes.

Now, as we enter the golden age of machine learning, we can expect a similar boom of benefits that previously seemed impossible.

Machine learning is already helping companies make better and faster decisions. In healthcare, the use of predictive models created with machine learning is accelerating research and discovery of new drugs and treatment regimens. In other industries, it's helping remote villages of Southeast Africa gain access to financial services and matching individuals experiencing homelessness with housing.

In the short term, we're encouraged by the applications of machine learning already benefiting our world. But it has the potential to have an even greater impact on our society. In the future, machine learning will be intertwined and under the hood of almost every application, business process and end-user experience.

However, before this technology becomes so ubiquitous that it's almost boring, there are four key barriers to adoption we need to clear first:

The only way that machine learning will truly scale is if we as an industry make it easier for everyone regardless of skill level or resources to be able to incorporate this sophisticated technology into applications and business processes.

To achieve this, companies should take advantage of tools that have intelligence directly built into applications from which their entire organization can benefit. For example, Kabbage Inc., a data and technology company providing small business cash flow solutions, used artificial intelligence to adapt and quickly help process an unprecedented number of small business loans and unemployment claims caused by COVID-19, while preserving more than 945,000 jobs in America. By folding artificial intelligence into personalization, document processing, enterprise search, contact center intelligence, supply chain or fraud detection, all workers can benefit from machine learning in a frictionless way.

As processes go from manual to automatic, workers are free to innovate and invent, and companies are empowered to be proactive instead of reactive. And as this technology becomes more intuitive and accessible, it can be applied to nearly every problem imaginable from the toughest challenges in the information technology department to the biggest environmental issues in the world.

According to the World Economic Forum, the growth of AI could create 58 million net new jobs in the next few years. However, research suggests that there are currently only 300,000 AI engineers worldwide, and AI-related job postings are three times that of job searches with a widening divergence.

Given this significant gap, organizations need to recognize that they simply aren't going to be able to hire all the data scientists they need as they continue to implement machine learning into their work. Moreover, this pace of innovation will open doors and ultimately create jobs we can't even begin to imagine today.

That's why companies around the world such as Morningstar, Liberty Mutual and DBS Bank are finding innovative ways to encourage their employees to gain new machine learning skills with a fun, interactive, hands-on approach. It's critical that organizations not only direct their efforts toward training their existing workforce in machine learning skills, but also invest in training programs that develop these important skills in the workforce of tomorrow.

With anything new, often people are of two minds: Either an emerging technology is a panacea and global savior, or it is a destructive force with cataclysmic tendencies. The reality is, more often than not, a nuance somewhere in the middle. These disparate perspectives can be reconciled with information, transparency and trust.

As a first step, leaders in the industry need to help companies and communities learn about machine learning, how it works, where it can be applied and ways to use it responsibly, and understand what it is not.

Second, in order to gain faith in machine learning products, they need to be built by diverse groups of people across gender, race, age, national origin, sexual orientation, disability, culture and education. We will all benefit from individuals who bring varying backgrounds, ideas and points of view to inventing new machine learning products.

Third, machine learning services should be rigorously tested, measuring accuracy against third party benchmarks. Benchmarks should be established by academia, as well as governments, and be applied to any machine learning-based service, creating a rubric for reliable results, as well as contextualizing results for use cases.

Finally, as a society, we need to agree on what parameters should be put in place governing how and when machine learning can be used. With any new technology, there has to be a balance in protecting civil rights while also allowing for continued innovation and practical application of the technology.

Any organization working with machine learning technology should be engaging customers, researchers, academics and others to determine the benefits of its machine learning technology along with the potential risks. And they should be in active conversation with policymakers, supporting legislation, and creating their own guidelines for the responsible use of machine learning technology. Transparency, open dialogue and constant evaluation must always be prioritized to ensure that machine learning is applied appropriately and is continuously enhanced.

Through machine learning we've already accomplished so much, and yet it's still day one (and we haven't even had a cup of coffee yet!). If we're using machine learning to help endangered orangutans, just imagine how it could be used to help save and preserve our oceans and marine life. If we're using this technology to create digital snapshots of the planet's forests in real time, imagine how it could be used to predict and prevent forest fires. If machine learning can be used to help connect small-holding farmers to the people and resources they need to achieve their economic potential, imagine how it could help end world hunger.

To achieve this reality, we as an industry have a lot of work ahead of us. I'm incredibly optimistic that machine learning will help us solve some of the world's toughest challenges and create amazing end-user experiences we've never even dreamed of. Before we know it, machine learning will be as familiar as reaching for our phones.

Swami Sivasubramanian is vice president of Amazon AI, running AI and machine learning services for Amazon Web Services Inc. He wrote this article for SiliconANGLE.

Here is the original post:
Before machine learning can become ubiquitous, here are four things we need to do now - SiliconANGLE News

Artificial Intelligence and Machine Learning, 5G and IoT will be the Most Important Technologies in 2021, According to new IEEE Study – PRNewswire

PISCATAWAY, N.J., Nov. 19, 2020 /PRNewswire/ --IEEE, the world's largest technical professional organization dedicated to advancing technology for humanity, today released the results of a survey of Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) in the U.S., U.K., China, India and Brazil regarding the most important technologies for 2021 overall, the impact of the COVID-19 pandemic on the speed of their technology adoption and the industries expected to be most impacted by technology in the year ahead.

2021 Most Important Technologies and Challenges

Which will be the most important technologies in 2021? Among total respondents, nearly one-third (32%) say AI and machine learning, followed by 5G (20%) and IoT (14%).

Manufacturing (19%), healthcare (18%), financial services (15%) and education (13%) are the industries that CIOs and CTOs surveyed believe will be most impacted by technology in 2021. At the same time, more than half (52%) of CIOs and CTOs see their biggest challenge in 2021 as dealing with aspects of COVID-19 recovery in relation to business operations. These challenges include a permanent hybrid remote and office work structure (22%), office and facilities reopenings and return (17%), and managing permanent remote working (13%). However, 11% said the agility to stop and start IT initiatives as this unpredictable environment continues will be their biggest challenge. Another 11% cited online security threats, including those related to remote workers, as the biggest challenge they see in 2021.

Technology Adoption, Acceleration and Disaster Preparedness due to COVID-19

CIOs and CTOs surveyed have sped up adopting some technologies due to the pandemic:

The adoption of IoT (42%), augmented and virtual reality (35%) and video conferencing (35%) technologies has also been accelerated due to the global pandemic.

Compared to a year ago, CIOs and CTOs overwhelmingly (92%) believe their company is better prepared to respond to a potentially catastrophic interruption such as a data breach or natural disaster. What's more, of those who say they are better prepared, 58% strongly agree that COVID-19 accelerated their preparedness.

When asked which technologies will have the greatest impact on global COVID-19 recovery, one in four (25%) of those surveyed said AI and machine learning.

Cybersecurity

The top two concerns for CIOs and CTOs when it comes to the cybersecurity of their organization are security issues related to the mobile workforce, including employees bringing their own devices to work (37%), and ensuring the Internet of Things (IoT) is secure (35%). This is not surprising, since the number of connected devices such as smartphones, tablets, sensors, robots and drones is increasing dramatically.

Slightly more than one-third (34%) of CIO and CTO respondents said they can track and manage 26-50% of devices connected to their business, while 20% of those surveyed said they could track and manage 51-75% of connected devices.

About the Survey

"The IEEE 2020 Global Survey of CIOs and CTOs" surveyed 350 CIOs or CTOs in the U.S., China, U.K., India and Brazil from September 21 - October 9, 2020.

About IEEE

IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics.

SOURCE IEEE

https://www.ieee.org

Original post:
Artificial Intelligence and Machine Learning, 5G and IoT will be the Most Important Technologies in 2021, According to new IEEE Study - PRNewswire

DIY Camera Uses Machine Learning to Audibly Tell You What it Sees – PetaPixel

Adafruit Industries has created a machine learning camera built with the Raspberry Pi that can identify objects extremely quickly and audibly tell you what it sees. The group has listed all the necessary parts you need to build the device at home.

The camera is based on Adafruit's BrainCraft HAT add-on for the Raspberry Pi 4, and uses TensorFlow Lite object recognition software to be able to recognize what it is seeing. According to Adafruit's website, it's compatible with both the 8-megapixel Pi camera and the 12.3-megapixel interchangeable-lens version of the module.

While interesting on its own, DIY Photography makes a solid point by explaining a more practical use case for photographers:

"You could connect a DSLR or mirrorless camera from its trigger port into the Pi's GPIO pins, or even use a USB connection with something like gPhoto, to have it shoot a photo or start recording video when it detects a specific thing enter the frame."

A camera that is capable of recognizing what it is looking at could be used to only take a photo when a specific object, animal, or even a person comes into the frame. That would mean it could have security system or wildlife monitoring applications. Whenever you might wish your camera knew what it was looking at, this kind of technology would make that a reality.
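As a rough illustration of that idea, the hypothetical sketch below runs a TensorFlow Lite image classifier on camera frames and pulses a GPIO pin when a target label appears. The model file, label file, pin number, 224x224 uint8 input shape and OpenCV capture source are all assumptions rather than Adafruit's published recipe.

```python
# Hedged sketch, not Adafruit's published code: classify camera frames with a
# TensorFlow Lite model and pulse a GPIO pin (e.g., wired to a camera trigger)
# when a target label is seen. Model, labels, pin and input size are assumptions.
import time
import cv2
import numpy as np
import RPi.GPIO as GPIO
from tflite_runtime.interpreter import Interpreter

TRIGGER_PIN = 17                      # assumption: pin wired to the shutter trigger
TARGET_LABEL = "cat"                  # assumption: label we want to react to

interpreter = Interpreter(model_path="mobilenet_v2.tflite")   # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
labels = [line.strip() for line in open("labels.txt")]        # hypothetical file

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

cap = cv2.VideoCapture(0)             # Pi camera or USB camera
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)          # models expect RGB
        resized = cv2.resize(rgb, (224, 224))
        interpreter.set_tensor(inp["index"], np.expand_dims(resized, 0).astype(np.uint8))
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])[0]
        if labels[int(np.argmax(scores))] == TARGET_LABEL:
            GPIO.output(TRIGGER_PIN, GPIO.HIGH)   # fire the external camera
            time.sleep(0.1)
            GPIO.output(TRIGGER_PIN, GPIO.LOW)
finally:
    cap.release()
    GPIO.cleanup()
```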

You can find all the parts you will need to build your own version of this device on Adafruit's website here. They also have published an easy machine learning guide for the Raspberry Pi as well as a guide on running TensorFlow Lite.

(via DPReview and DIY Photography)

Continue reading here:
DIY Camera Uses Machine Learning to Audibly Tell You What it Sees - PetaPixel

The way we train AI is fundamentally flawed – MIT Technology Review

For example, the researchers trained 50 versions of an image recognition model on ImageNet, a dataset of images of everyday objects. The only difference between training runs was the random values assigned to the neural network at the start. Yet despite all 50 models scoring more or less the same in the training test, suggesting that they were equally accurate, their performance varied wildly in the stress test.

The stress test used ImageNet-C, a dataset of images from ImageNet that have been pixelated or had their brightness and contrast altered, and ObjectNet, a dataset of images of everyday objects in unusual poses, such as chairs on their backs, upside-down teapots, and T-shirts hanging from hooks. Some of the 50 models did well with pixelated images, some did well with the unusual poses; some did much better overall than others. But as far as the standard training process was concerned, they were all the same.

The researchers carried out similar experiments with two different NLP systems, and three medical AIs for predicting eye disease from retinal scans, cancer from skin lesions, and kidney failure from patient records. Every system had the same problem: models that should have been equally accurate performed differently when tested with real-world data, such as different retinal scans or skin types.
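The sketch below illustrates the underspecification effect in miniature, not the paper's actual experiments: several identical models are trained that differ only in random seed, then scored on an in-distribution test set and on a crudely noise-shifted "stress" set. The dataset, architecture and shift are synthetic assumptions.

```python
# Illustrative sketch of the effect described above (not the paper's setup):
# train several models differing only in random seed, then compare them on an
# in-distribution test set and a shifted "stress" set. Data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A crude "stress test": the same test points with added noise (distribution shift).
rng = np.random.default_rng(0)
X_stress = X_test + rng.normal(scale=1.5, size=X_test.shape)

for seed in range(5):
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=seed)
    model.fit(X_train, y_train)
    print(f"seed {seed}: test={model.score(X_test, y_test):.3f} "
          f"stress={model.score(X_stress, y_test):.3f}")
# Typically the in-distribution scores cluster tightly while the stress scores
# spread out more, which is the gap the researchers highlight.
```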

"We might need to rethink how we evaluate neural networks," says Rohrer. "It pokes some significant holes in the fundamental assumptions we've been making."

D'Amour agrees. "The biggest, immediate takeaway is that we need to be doing a lot more testing," he says. That won't be easy, however. The stress tests were tailored specifically to each task, using data taken from the real world or data that mimicked the real world. This is not always available.

Some stress tests are also at odds with each other: models that were good at recognizing pixelated images were often bad at recognizing images with high contrast, for example. It might not always be possible to train a single model that passes all stress tests.

One option is to design an additional stage to the training and testing process, in which many models are produced at once instead of just one. These competing models can then be tested again on specific real-world tasks to select the best one for the job.

That's a lot of work. But for a company like Google, which builds and deploys big models, it could be worth it, says Yannic Kilcher, a machine-learning researcher at ETH Zurich. "Google could offer 50 different versions of an NLP model and application developers could pick the one that worked best for them," he says.

D'Amour and his colleagues don't yet have a fix but are exploring ways to improve the training process. "We need to get better at specifying exactly what our requirements are for our models," he says. "Because often what ends up happening is that we discover these requirements only after the model has failed out in the world."

Getting a fix is vital if AI is to have as much impact outside the lab as it is having inside. "When AI underperforms in the real world it makes people less willing to want to use it," says co-author Katherine Heller, who works at Google on AI for healthcare. "We've lost a lot of trust when it comes to the killer applications; that's important trust that we want to regain."

See the original post here:
The way we train AI is fundamentally flawed - MIT Technology Review

Utilizing machine learning to uncover the right content at KMWorld Connect 2020 – KMWorld Magazine

At KMWorld Connect 2020, David Seuss, CEO, Northern Light; Sid Probstein, CTO, Keeeb; and Tom Barfield, chief solution architect, Keeeb, discussed Machine Learning & KM.

KMWorld Connect, November 16-19, and its co-located events, covers future-focused strategies, technologies, and tools to help organizations transform for positive outcomes.

Machine learning can assist KM activities in many ways. Seuss discussed using a semantic analysis of keywords in social posts about a topic of interest to yield clear guidance as to which terms have actual business relevance and are therefore worth investing in.

"What are we hearing from our users?" Seuss asked. "The users hate the business research process."

Using AstraZeneca as an example, Seuss began with an analysis of the company's conference presentations. Looking at the topics, diabetes sank lower as a focus for AstraZeneca.

When looking at the company's Twitter account, themes included oncology, COVID-19 and environmental issues. Not one reference was made to diabetes, according to Seuss.

"Social media is where the energy of the company is first expressed," Seuss said.

An instant news analysis using text analytics tells us the same story: no mention of diabetes products, clinical trials, marketing, etc.

AI-based automated insight extraction from 250 AstraZeneca oncology conference presentations gives insight into R&D focus.

"Let the machine read the content and tell you what it thinks is important," Seuss said.

You can do that with a semantic graph of all the ideas in the conference presentations. Semantic graphs look for relationships between ideas and measure the number and strength of the relationships. Google search results are a real-world example of this in action.
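As a toy illustration of that idea (not Northern Light's actual pipeline), the sketch below links terms that co-occur in the same sentence, weights edges by co-occurrence counts, and ranks terms by their weighted degree; the sentences and term list are made up.

```python
# Minimal sketch (not Northern Light's pipeline): build a small semantic graph
# by linking terms that co-occur in the same sentence, weighting edges by
# co-occurrence counts, then ranking terms by weighted degree.
from itertools import combinations
import networkx as nx

sentences = [
    "immunotherapy response in lung cancer trials",
    "biomarkers predict immunotherapy response",
    "lung cancer biomarkers and clinical trials",
]
terms = ["immunotherapy", "biomarkers", "lung cancer", "trials", "response"]

G = nx.Graph()
for sentence in sentences:
    present = [t for t in terms if t in sentence]
    for a, b in combinations(present, 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Terms with the strongest, most numerous relationships rise to the top.
ranking = sorted(G.degree(weight="weight"), key=lambda kv: kv[1], reverse=True)
print(ranking)
```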

"We are approaching the era when users will no longer search for information; they will expect the machine to analyze and then summarize for them what they need to know," Seuss said. "Machine-based techniques will change everything."

Probstein and Barfield addressed new approaches to integrating knowledge sharing into work. They looked at collaborative information curation, in which end users help identify the best content, allowing KM teams to focus on the most strategic knowledge challenges, as well as the pragmatic application of AI through text analytics to improve curation, findability and performance.

"The super silo is on the rise," Probstein said. It stores files, logs and customer/sales data, and can be highly variable. He looked at search results for how COVID-19 is having an impact on businesses.

"Not only are there many search engines, each one is different," Probstein said.

Probstein said Keeeb can help with this problem. The solution can search through a variety of data sources to find the right information.

"One search, a few seconds, one pane of glass," Probstein said. "Once you solve the search problem, now you can look through the documents."

Knowledge isn't always a whole document; it can be a few paragraphs or an image, which can then be captured and shared through Keeeb.

AI and machine learning can enable search to be integrated with existing tools or any system. "Companies should give end users simple approaches to organize content, augmented with AI, benefiting themselves and others," Barfield said.

Read more here:
Utilizing machine learning to uncover the right content at KMWorld Connect 2020 - KMWorld Magazine

Machine Learning Predicts How Cancer Patients Will Respond to Therapy – HealthITAnalytics.com

November 18, 2020 - A machine learning algorithm accurately determined how well skin cancer patients would respond to tumor-suppressing drugs in four out of five cases, according to research conducted by a team from NYU Grossman School of Medicine and Perlmutter Cancer Center.

The study focused on metastatic melanoma, a disease that kills nearly 6,800 Americans each year. Immune checkpoint inhibitors, which keep tumors from shutting down the immune system's attack on them, have been shown to be more effective than traditional chemotherapies for many patients with melanoma.

However, half of patients don't respond to these immunotherapies, and these drugs are expensive and often cause side effects in patients.

"While immune checkpoint inhibitors have profoundly changed the treatment landscape in melanoma, many tumors do not respond to treatment, and many patients experience treatment-related toxicity," said corresponding study author Iman Osman, medical oncologist in the Departments of Dermatology and Medicine (Oncology) at New York University (NYU) Grossman School of Medicine and director of the Interdisciplinary Melanoma Program at NYU Langone's Perlmutter Cancer Center.

"An unmet need is the ability to accurately predict which tumors will respond to which therapy. This would enable personalized treatment strategies that maximize the potential for clinical benefit and minimize exposure to unnecessary toxicity."

READ MORE: How Social Determinants Data Can Enhance Machine Learning Tools

Researchers set out to develop a machine learning model that could help predict a melanoma patients response to immune checkpoint inhibitors. The team collected 302 images of tumor tissue samples from 121 men and women treated for metastatic melanoma with immune checkpoint inhibitors at NYU Langone hospitals.

They then divided these slides into 1.2 million portions of pixels, the small bits of data that make up images. These were fed into the machine learning algorithm along with other factors, such as the severity of the disease, which kind of immunotherapy regimen was used, and whether a patient responded to the treatment.

The results showed that the machine learning model achieved an AUC of 0.8 in both the training and validation cohorts, and was able to predict which patients with a specific type of skin cancer would respond well to immunotherapies in four out of five cases.
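The study's pipeline is not reproduced here, but the hypothetical sketch below shows the evaluation idea: per-patch probabilities are aggregated into a single score per patient and discrimination is measured with ROC AUC. The patch scores are synthetic stand-ins for a trained model's output, and the distributions were chosen only so the numbers land in a plausible range.

```python
# Hedged sketch of the evaluation idea (not the study's actual pipeline):
# aggregate per-patch probabilities into one score per patient, then measure
# discrimination with ROC AUC. Patch scores are synthetic stand-ins for the
# output of a trained image model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 120

# One true response label per patient; each patient contributes ~300 patch scores.
y_true = rng.integers(0, 2, size=n_patients)
patient_scores = []
for label in y_true:
    # A patient-level effect plus per-patch noise; responders skew higher.
    base = rng.normal(0.45 + 0.15 * label, 0.15)
    patch_probs = np.clip(rng.normal(base, 0.10, size=300), 0.0, 1.0)
    patient_scores.append(patch_probs.mean())          # simple mean aggregation

auc = roc_auc_score(y_true, patient_scores)
print(f"patient-level AUC: {auc:.2f}")   # roughly 0.75 with these synthetic settings
```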

"Our findings reveal that artificial intelligence is a quick and easy method of predicting how well a melanoma patient will respond to immunotherapy," said study first author Paul Johannet, MD, a postdoctoral fellow at NYU Langone Health and its Perlmutter Cancer Center.

Researchers repeated this process with 40 slides from 30 similar patients at Vanderbilt University to determine whether the results would be similar at a different hospital system that used different equipment and sampling techniques.

READ MORE: Simple Machine Learning Method Predicts Cirrhosis Mortality Risk

"A key advantage of our artificial intelligence program over other approaches such as genetic or blood analysis is that it does not require any special equipment," said study co-author Aristotelis Tsirigos, PhD, director of applied bioinformatics laboratories and clinical informatics at the Molecular Pathology Lab at NYU Langone.

The team noted that aside from the computer needed to run the program, all materials and information used in the Perlmutter technique are a standard part of cancer management that most, if not all, clinics use.

"Even the smallest cancer center could potentially send the data off to a lab with this program for swift analysis," said Osman.

The machine learning method used in the study is also more streamlined than current predictive tools, such as analyzing stool samples or genetic information, which promises to reduce treatment costs and speed up patient wait times.

"Several recent attempts to predict immunotherapy responses do so with robust accuracy but use technologies, such as RNA sequencing, that are not readily generalizable to the clinical setting," said corresponding study author Aristotelis Tsirigos, PhD, professor in the Institute for Computational Medicine at NYU Grossman School of Medicine and member of NYU Langone's Perlmutter Cancer Center.

READ MORE: Machine Learning Forecasts Prognosis of COVID-19 Patients

"Our approach shows that responses can be predicted using standard-of-care clinical information such as pre-treatment histology images and other clinical variables."

However, researchers also noted that the algorithm is not yet ready for clinical use until they can boost the accuracy from 80 percent to 90 percent and test the algorithm at more institutions. The research team plans to collect more data to improve the performance of the model.

Even at its current level of accuracy, the model could be used as a screening method to determine which patients across populations would benefit from more in-depth tests before treatment.

"There is potential for using computer algorithms to analyze histology images and predict treatment response, but more work needs to be done using larger training and testing datasets, along with additional validation parameters, in order to determine whether an algorithm can be developed that achieves clinical-grade performance and is broadly generalizable," said Tsirigos.

"There is data to suggest that thousands of images might be needed to train models that achieve clinical-grade performance."

Read more from the original source:
Machine Learning Predicts How Cancer Patients Will Respond to Therapy - HealthITAnalytics.com

This New Machine Learning Tool Might Stop Misinformation – Digital Information World

Misinformation has always been a problem, but the combination of widespread social media and a loose definition of what counts as factual truth has led to a veritable explosion of misinformation over the past few years. The problem is so dire that in a lot of cases websites are created specifically to help misinformation spread more easily, and this is a problem that might just have been addressed by a new machine learning tool.

This machine learning tool, developed by researchers at UCL, Berkeley and Cornell, is able to analyze domain registration data and use it to ascertain whether a URL is legitimate or whether it has been made specifically to legitimize a certain piece of information that people might be trying to spread around. A couple of other factors also come into play here. For example, if the identity of the person that registered the domain is private, this might be a sign that the site is not legitimate. The timing of the domain registration matters too: if it was done around the time a major news event broke out, such as the recent US presidential election, this is also a negative sign.
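The researchers' actual features and model are not described in detail here, so the following is only a hypothetical sketch of the general approach: train a simple classifier on registration-derived signals such as whether the registrant is private and how close the registration date falls to a major news event. The feature set, data and model are assumptions.

```python
# Hedged sketch, not the researchers' actual model: classify domains as
# legitimate vs. suspicious from simple registration-derived features.
# Features and data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Feature 1: registrant identity hidden (1) or public (0).
# Feature 2: days between domain registration and the nearest major news event.
is_fake = rng.integers(0, 2, size=n)
private_registration = (rng.random(n) < np.where(is_fake, 0.8, 0.3)).astype(int)
days_from_event = np.where(is_fake,
                           rng.exponential(5, n),     # fakes cluster near events
                           rng.uniform(0, 365, n))    # legitimate sites do not
X = np.column_stack([private_registration, days_from_event])

X_tr, X_te, y_tr, y_te = train_test_split(X, is_fake, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```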

That said, it is important to note that this new machine learning tool has a pretty impressive success rate of about 92%, which is the proportion of fake domains it was able to discover. Being able to tell whether a news source is legitimate or whether it is direct propaganda is useful because it can help reduce the likelihood that people end up taking the misinformation seriously.

Read the original here:
This New Machine Learning Tool Might Stop Misinformation - Digital Information World