Daily Archives: May 14, 2020

Our Behaviour in This Pandemic Has Seriously Confused AI Machine Learning Systems – ScienceAlert

Posted: May 14, 2020 at 4:53 pm

The chaos and uncertainty surrounding the coronavirus pandemic have claimed an unlikely victim: the machine learning systems that are programmed to make sense of our online behavior.

The algorithms that recommend products on Amazon, for instance, are struggling to interpret our new lifestyles, MIT Technology Review reports.

And while machine learning tools are built to take in new data, they're typically not so robust that they can adapt as dramatically as needed.

For instance, MIT Tech reports that a company that detects credit card fraud needed to step in and tweak its algorithm to account for a surge of interest in gardening equipment and power tools.

An online retailer found that its AI was ordering stock that no longer matched what was selling. And a firm that uses AI to recommend investments based on sentiment analysis of news stories was confused by the generally negative tone throughout the media.

"The situation is so volatile," Rael Cline, CEO of the algorithmic marketing consulting firm Nozzle, told MIT Tech.

"You're trying to optimize for toilet paper last week, and this week everyone wants to buy puzzles or gym equipment."

While some companies are dedicating more time and resources to manually steering their algorithms, others see this as an opportunity to improve.

"A pandemic like this is a perfect trigger to build better machine-learning models," Sharma said.

READ MORE: Our weird behavior during the pandemic is messing with AI models

This article was originally published by Futurism. Read the original article.

See the article here:

Our Behaviour in This Pandemic Has Seriously Confused AI Machine Learning Systems - ScienceAlert

Comments Off on Our Behaviour in This Pandemic Has Seriously Confused AI Machine Learning Systems – ScienceAlert

Onix To Help Organizations Uncover the Power of Machine Learning-Driven Search With Amazon Kendra – News-Herald.com

Posted: at 4:53 pm

LAKEWOOD, Ohio, May 14, 2020 /PRNewswire/ -- Onix is proud to participate in the launch of Amazon Kendra, a highly accurate and easy-to-use enterprise search service powered by machine learning from Amazon Web Services (AWS).

Amazon Kendra delivers powerful natural language search capabilities to customer websites and applications so their end users can more easily find the information they need. When users ask a question, Amazon Kendra uses finely tuned machine learning algorithms to understand the context and return the most relevant results, whether that be a precise answer or an entire document.

"Search capabilities have evolved over the years. Users now expect the same experience they get from the semantic and natural language search engines and conversational interfaces they use in their personal lives," notes Onix President and CEO Tim Needles. "Powered by machine learning and natural language understanding, Amazon Kendra improves employee productivity by up to 25%. With more accurate enterprise search, Amazon Kendra opens new opportunities for keyword-based on-premises and SaaS search users to migrate to the cloud and avoid contract lock-ins."

Onix has been a leader in the enterprise search space since 2002. The company provides 1:1 consulting, planning, and deployment of search solutions for hundreds of clients with a team that includes 10 certified deployment engineers. Onix has won six prestigious awards for enterprise search and boasts a 98% Customer Satisfaction Rating.

About Onix

As a leading cloud solutions provider, Onix elevates customers with consulting services for cloud infrastructure, collaboration, devices, enterprise search and geospatial technology. Onix uses its ever-evolving expertise to achieve clients' strategic cloud computing goals.

Onix backs its strategic planning and deployment with incomparable ongoing service, training and support. It also offers its own suite of standalone products to solve specific business challenges, including OnSpend, a cloud billing and budget management software solution.

Headquartered in Lakewood, Ohio, Onix serves its customers with virtual teams in major metro areas, including Atlanta, Austin, San Francisco, Boston, Chicago and New York. Onix also has Canadian offices in Toronto, Montreal and Ottawa. Learn more at http://www.onixnet.com.

Contact: Robin Suttell, Onix, 216-801-4984, robin@onixnet.com

Original post:

Onix To Help Organizations Uncover the Power of Machine Learning-Driven Search With Amazon Kendra - News-Herald.com

Comments Off on Onix To Help Organizations Uncover the Power of Machine Learning-Driven Search With Amazon Kendra – News-Herald.com

A Lightning-Fast Introduction to Deep Learning and TensorFlow 2.0 – Built In

Posted: at 4:52 pm

From navigating to a new place to picking out new music, algorithms have laid the foundation for large parts of modern life. Similarly, artificial intelligence is booming because it automates and backs so many products and applications. Recently, I addressed some analytical applications for TensorFlow. In this article, I'm going to lay out a higher-level view of Google's TensorFlow deep learning framework, with the ultimate goal of helping you understand and build deep learning algorithms from scratch.

Over the past couple of decades, deep learning has evolved rapidly, leading to massive disruption in a range of industries and organizations. The field traces back to 1943, when Warren McCulloch and Walter Pitts created a computer model based on the neural networks of the human brain, producing the first artificial neural networks (ANNs). Deep learning now denotes a branch of machine learning that deploys data-centric algorithms in real time.

Backpropagation is a popular algorithm that has had a huge impact in the field of deep learning. It allows ANNs to learn by themselves based on the errors they generate while learning. To further enhance the scope of an ANN, architectures like Convolutional Neural Networks, Recurrent Neural Networks, and Generative Networks have come into the picture. Before we delve into them, let's first understand the basic components of a neural network.

Neurons and Artificial Neural Networks

An artificial neural network is a representational framework that extracts features from the data it's given. The basic computational unit of an ANN is the neuron. Neurons are connected in artificial layers through which the information passes. As the information flows through these layers, the neural network identifies patterns in the data. This type of processing makes ANNs useful for several applications, such as prediction and classification.

Now let's take a look at the basic structure of an ANN. It consists of three kinds of layers: the input layer, the hidden layers, and the output layer. Inputs initially pass through the input layer, which always accepts a fixed, constant set of dimensions. For instance, if we wanted to train a classifier that differentiates between dogs and cats, the inputs (in this case, images) should all be of the same size. The input then passes through the hidden layers, where the network updates the weights and recognizes patterns. In the final step, we classify the data at the output layer.

Weights and Biases

Every neuron inside a neural network is associated with two parameters: a weight and a bias. The weight is a numeric value that controls the signal between any two neurons. If the output is desirable, meaning that it is close to the one we expected the network to produce, then the weights are ideal. If the same network generates an erroneous output that is far from the actual one, then the network alters the weights to improve the subsequent results.

Bias, the other parameter, is the algorithm's tendency to consistently learn the wrong thing by not taking into account all the information in the data. For the model to be accurate, bias needs to be low. If there are inconsistencies in the dataset, like missing values, too few data tuples, or erroneous input data, the bias will be high and the predicted values could be wrong.

Working of a Neural Network

Before we get started with TensorFlow, let's examine how a neural network produces an output with weights, biases, and input by taking a look at the first neural network, the Perceptron, which dates back to 1958. The Perceptron network is a simple binary classifier. Understanding how it works will allow us to comprehend the workings of a modern neuron.

The Perceptron network is a supervised machine learning technique that uses a binary classifier function by mapping a vector of binary variables to a single binary output. It works as follows:

Multiply the inputs (x1, x2, x3) of the network by their corresponding weights (w1, w2, w3).

Add the products of the inputs and weights together. This is called the weighted sum, denoted by x1*w1 + x2*w2 + x3*w3.

Apply the activation function. Determine whether the weighted sum is greater than a threshold (say, 0.5); if yes, assign 1 as the output, otherwise assign 0. This is a simple step function.
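
To make these three steps concrete, here is a minimal sketch in Python (the inputs, weights, and threshold are illustrative values, not taken from the original article):

    def perceptron(inputs, weights, threshold=0.5):
        # weighted sum: x1*w1 + x2*w2 + x3*w3
        weighted_sum = sum(x * w for x, w in zip(inputs, weights))
        # step activation: 1 if the weighted sum clears the threshold, else 0
        return 1 if weighted_sum > threshold else 0

    print(perceptron([1, 0, 1], [0.4, 0.8, 0.3]))  # prints 1, since 0.4 + 0.3 = 0.7 > 0.5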

Of course, the Perceptron is a simple neural network that doesn't wholly cover all the concepts necessary for an end-to-end network. Therefore, let's go over all the phases that a neural network has to go through to build a sophisticated ANN.

Input

A neural network has to be defined with the number of input dimensions, output features, and hidden units. All these metrics fall into a common basket called hyperparameters. Hyperparameters are numeric values that determine and define the neural network structure.

Weights and biases are set randomly for all neurons in the hidden layers.

Feed Forward

The data is sent into the input and hidden layers, where the weights are updated on every iteration. This creates a function that maps the input to the output data. Mathematically, it is defined as y = f(x), where y is the output, x is the input, and f is the activation function.

For every forward pass (when the data travels from the input to the output layer), the loss is calculated (the actual value minus the predicted value). The loss is then sent back through the network (backpropagation) and the network is retrained using a loss function.

Output error

The loss is gradually reduced using gradient descent and the loss function.

The gradient can be calculated with respect to any weight and bias.

Backpropagation

We backpropagate the error through each and every layer using the backpropagation algorithm.

Output

By minimizing the loss, the network updates the weights on every iteration (one forward pass plus one backward pass) and improves its accuracy.

As we haven't yet talked about what an activation function is, I'll expand on that a bit in the next section.

Activation Functions

An activation function is a core component of any neural network. It learns a non-linear, complex functional mapping between the input and the response variables or output. Its main purpose is to convert an input signal of a node in an ANN to an output signal. That output signal is the input to the subsequent layer in the stack. There are several types of activation functions available that can be used for different use cases. You can find a list comprising the most popular activation functions along with their respective mathematical formulae here.
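
As an illustration, here are the step function described above and two other widely used activation functions, sketched in plain Python with NumPy (the 0.5 threshold simply mirrors the Perceptron example):

    import numpy as np

    def step(x):
        # binary threshold, as in the Perceptron
        return np.where(x > 0.5, 1, 0)

    def sigmoid(x):
        # squashes any input into the range (0, 1)
        return 1 / (1 + np.exp(-x))

    def relu(x):
        # passes positive values through, zeroes out the rest
        return np.maximum(0, x)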

Now that we understand what a feedforward pass looks like, let's also explore the backward propagation of errors.

Loss Function and Backpropagation

During training of a neural network, there are too many unknowns to be deciphered. As a result, calculating the ideal weights for all the nodes in a neural network is difficult. Therefore, we use an optimization function through which we can navigate the space of possible ideal weights to make good predictions with a trained neural network.

We use a gradient descent optimization algorithm wherein the weights are updated using the backpropagation of error. The term gradient in gradient descent refers to an error gradient, where the model with a given set of weights is used to make predictions and the error for those predictions is calculated. The gradient descent optimization algorithm is used to calculate the partial derivatives of the loss function (errors) with respect to any weight w and bias b. In practice, this means that the error vectors would be calculated commencing from the final layer, and then moving towards the input layer by updating the weights and biases, i.e., backpropagation. This is based on differentiations of the respective error terms along each layer. To make our lives easier, however, these loss functions and backpropagation algorithms are readily available in neural network frameworks such as TensorFlow and PyTorch.

Moreover, a hyperparameter called the learning rate controls the rate at which the network's weights are adjusted along the gradient. The lower the learning rate, the slower we travel down the slope (toward the optimum, the so-called ideal case) while reducing the loss.

TensorFlow is a powerful neural network framework that can be used to deploy high-level machine learning models into production. It was open-sourced by Google in 2015. Since then, its popularity has increased, making it a common choice for building deep learning models. On October 1, 2019, a new stable version was released, TensorFlow 2.0, with a few major changes:

Eager Execution by Default - Instead of creating tf.Session(), we can directly execute code as ordinary Python code. In TensorFlow 1.x, we had to create a TensorFlow graph before computing any operation. In TensorFlow 2.0, however, we can build neural networks on the fly.
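
A quick sketch of the difference (the values are illustrative):

    import tensorflow as tf

    # TensorFlow 1.x: build a graph first, then evaluate it inside tf.Session().
    # TensorFlow 2.0: operations execute immediately, like ordinary Python.
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    print(a + b)  # tf.Tensor(5.0, shape=(), dtype=float32)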

Keras Included - Keras is a high-level neural network API built on top of TensorFlow. It is now integrated into TensorFlow 2.0, and we can directly import it as tf.keras to define our neural networks.
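
For example, a tiny network can be defined in a few lines (the layer sizes here are arbitrary):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")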

TF Datasets - A lot of new datasets have been added to work and play with in a new module called tf.data.

1.x Support - All existing TensorFlow 1.x code can still be executed with TensorFlow 2.0 (via the tf.compat.v1 module); we need not rewrite our previous code from scratch.

Major Documentation and API cleanup changes have also been introduced.

The TensorFlow library is built on computational graphs and a runtime for executing such graphs. Now, let's perform a simple operation in TensorFlow.
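
A snippet along these lines matches the description that follows (the numeric values are illustrative):

    import tensorflow as tf

    a = tf.constant(5.0)
    b = tf.constant(3.0)
    prod = a * b         # multiplication operation
    sum = a + b          # addition operation
    result = prod / sum  # divide the product by the sum
    print(result)        # tf.Tensor(1.875, shape=(), dtype=float32)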

Here, we declared two variables, a and b. We calculated the product of those two variables using the multiplication operation in Python (*) and stored the result in a variable called prod. Next, we calculated the sum of a and b and stored it in a variable named sum. Lastly, we declared the result variable, which divides the product by the sum, and printed it.

This explanation is just a Pythonic way of understanding the operation. In TensorFlow, each operation is considered a computational graph. This is a more abstract way of describing a computer program and its computations. It helps in understanding the primitive operations and the order in which they are executed. In this case, we first multiply a and b, and only when this expression is evaluated do we take their sum. Later, we take prod and sum, and divide them to output the result.

TensorFlow Basics

To get started with TensorFlow, we should be aware of a few essentials related to computational graphs. Let's discuss them in brief:

Variables and Placeholders: TensorFlow uses the usual variables, which can be updated at any point in time, except that these need to be initialized before the graph is executed. Placeholders, on the other hand, are used to feed data into the graph from outside. Unlike variables, they don't need to be initialized. Consider a regression equation, y = mx + c, where x and y are placeholders and m and c are variables.

Constants and Operations: Constants are the numbers that cannot be updated. Operations represent nodes in the graph that perform computations on data.

The graph is the backbone that connects all the variables, placeholders, constants, and operators.
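
Putting these pieces together, the regression example y = mx + c looks like this in graph style (placeholders belong to the TensorFlow 1.x API, which TensorFlow 2.0 exposes under tf.compat.v1; the values are illustrative):

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()        # placeholders require graph mode

    x = tf.compat.v1.placeholder(tf.float32)      # placeholder: fed from outside the graph
    m = tf.Variable(2.0)                          # variables: must be initialized first
    c = tf.Variable(0.5)
    y = m * x + c                                 # operations become nodes in the graph

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        print(sess.run(y, feed_dict={x: 3.0}))    # prints 6.5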

Prior to installing TensorFlow 2.0, it's essential that you have Python on your machine. Let's look at the installation procedure.

Python for Windows

You can download it here.

Click on the Latest Python 3 release - Python x.x.x. Select the option that suits your system (32-bit: Windows x86 executable installer; 64-bit: Windows x86-64 executable installer). After downloading the installer, follow the instructions displayed in the setup wizard. Make sure to add Python to your PATH using environment variables.

Python for OSX

You can download it here.

Click on the Latest Python 3 release - Python x.x.x. Select macOS 64-bit installer, and run the file.

Python on OSX can also be installed using Homebrew (package manager).

To do so, type the following commands:
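
With Homebrew already installed, the usual command is:

    brew install python    # installs Python 3 along with pip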

Python for Debian/Ubuntu

Invoke the following commands:
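
    sudo apt update
    sudo apt install python3 python3-pip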

This installs the latest version of Python and pip on your system.

Python for Fedora

Invoke the following commands:
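
    sudo dnf install python3 python3-pip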

This installs the latest version of Python and pip on your system.

After youve got Python, its time to install TensorFlow in your workspace.

To fetch the latest version, pip3 needs to be updated. To do so, type the following command:
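
    pip3 install --upgrade pip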

Now, install TensorFlow 2.0.
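
    pip3 install tensorflow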

This automatically installs the latest version of TensorFlow onto your system. The same command (with the --upgrade flag) also updates an older version of TensorFlow.

The argument tensorflow in the above command could be any of these:

tensorflow - Latest stable release (2.x) for CPU-only.

tensorflow-gpu - Latest stable release with GPU support (Ubuntu and Windows).

tf-nightly - Preview build (unstable). Ubuntu and Windows include GPU support.

tensorflow==1.15 - The final version of TensorFlow 1.x.

To verify your install, execute the code:
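
    import tensorflow as tf
    print(tf.__version__)    # should print a 2.x version string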

Now that you have TensorFlow on your local machine, Jupyter notebooks are a handy tool for setting up the coding space. Execute the following command to install Jupyter on your system:
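
    pip3 install jupyter    # then launch it with: jupyter notebook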

Now that everything is set up, let's explore the basic fundamentals of TensorFlow.

Tensors have previously been used largely in math and physics. In math, a tensor is an algebraic object that obeys certain transformation rules. It defines a mapping between objects and is similar to a matrix, although a tensor has no specific limit to its possible number of indices. In physics, a tensor has the same definition as in math, and is used to formulate and solve problems in areas like fluid mechanics and elasticity.

Although tensors were not widely used in computer science, since the machine learning and deep learning boom they have become heavily involved in solving data-crunching problems.

Scalars

The simplest tensor is a scalar, which is a single number and is denoted as a rank-0 tensor or a 0th order tensor. A scalar has magnitude but no direction.

Vectors

A vector is an array of numbers and is denoted as a rank-1 tensor or a 1st order tensor. Vectors can be represented as either column vectors or row vectors.

A vector has both magnitude and direction. Each value in the vector gives the coordinate along a different axis, thus establishing direction. It can be depicted as an arrow; the length of the arrow represents the magnitude, and the orientation represents the direction.

Matrices

A matrix is a 2D array of numbers where each element is identified by a set of two numbers, row and column. A matrix is denoted as a rank-2 tensor or a 2nd order tensor. In simple terms, a matrix is a table of numbers.

Tensors

A tensor is a multi-dimensional array with any number of indices. Imagine a 3D array of numbers, where the data is arranged as a cube: that's a tensor. An nD array of numbers is a tensor as well. Tensors are usually used to represent complex data. When the data has many dimensions (>=3), a tensor is helpful in organizing it neatly. After initialization, a tensor of any number of dimensions can be processed to generate the desired outcomes.
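
In TensorFlow, a tensor of each rank can be created with the same function (a quick sketch; the values are illustrative):

    import tensorflow as tf

    scalar = tf.constant(7)                         # rank 0: a single number
    vector = tf.constant([1.0, 2.0, 3.0])           # rank 1: an array
    matrix = tf.constant([[1, 2], [3, 4]])          # rank 2: a table of numbers
    cube = tf.constant([[[1], [2]], [[3], [4]]])    # rank 3: a 3D array
    print(tf.rank(cube))                            # tf.Tensor(3, shape=(), dtype=int32)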

TensorFlow represents tensors with ease using simple functionalities defined by the framework. Further, the mathematical operations that are usually carried out with numbers are implemented using the functions defined by TensorFlow.

Firstly, let's import TensorFlow into our workspace. To do so, invoke the following command:
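
    import tensorflow as tf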

This enables us to use the variable tf thereafter.

Now, let's take a quick overview of the basic operations and math; you can simultaneously execute the code in the Jupyter playground for a better understanding of the concepts.

tf.Tensor

The primary object you play with in TensorFlow is tf.Tensor, a tensor object that is associated with a value. It has two properties bound to it: a data type and a shape. The data type defines the type and size of the data consumed by a tensor; possible types include float32, int32, string, et cetera. The shape defines the number of dimensions.

tf.Variable()

The variable constructor requires an argument, which can be a tensor of any shape and type. After creating the instance, the variable is added to the TensorFlow graph and can be modified using any of the assign methods. It is declared as follows:
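
    v = tf.Variable(5)    # any initial value works; 5 is illustrative
    v.assign(8)           # modified in place with an assign method
    print(v)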

Output:
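
    <tf.Variable 'Variable:0' shape=() dtype=int32, numpy=8>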

tf.constant()

The tensor is populated with a value, a dtype, and, optionally, a shape. The value remains constant and cannot be modified further.
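
For instance (the values are illustrative):

    t = tf.constant([1, 2, 3])            # value given; dtype inferred as int32, shape as (3,)
    u = tf.constant(0.0, shape=(2, 2))    # the optional shape broadcasts the scalar to 2x2
    print(t)                              # tf.Tensor([1 2 3], shape=(3,), dtype=int32)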

Follow this link:

A Lightning-Fast Introduction to Deep Learning and TensorFlow 2.0 - Built In

Comments Off on A Lightning-Fast Introduction to Deep Learning and TensorFlow 2.0 – Built In

SparkCognition and Milize to Offer Automated Machine Learning Solutions for Financial Institutions to the APAC Region – PRNewswire

Posted: at 4:52 pm

AUSTIN, Texas, May 14, 2020 /PRNewswire/ -- SparkCognition, a leading industrial artificial intelligence (AI) company, is pleased to announce that the Japanese AI and fintech company MILIZE Co., Ltd. will offer Japanese financial institutions fraud detection and anti-money laundering solutions. These solutions will be built using SparkCognition's automated machine learning software.

With the enormous increase in online payments, internet banking, and QR code payments, illegal use of credit cards is on the rise. However, few Japanese companies have introduced the advanced fraud detection solutions that currently exist internationally. In addition, financial authorities and institutions around the world are expected to report strengthened measures against money laundering in August 2020. As a result, taking these steps against money laundering has become an urgent management issue for Japanese financial institutions.

At one credit card company in South America, the ratio of fraudulent use to total transactions reached about 20%, which reduced the profitability of the business. The company therefore introduced a fraudulent-transaction detection system that utilizes AI technology from SparkCognition, which has extensive experience working with financial services clients. Though the credit card company did not have a team of data scientists, analysts on staff were able to apply SparkCognition technology with such ease that accurate machine learning models were developed, tested, and operationalized within a few short weeks. As a result, it is now possible to detect fraudulent transactions with about 90% accuracy, which has led to a significant improvement in the credit card company's profitability.

Based on SparkCognition's international success in fielding machine learning systems in financial services, MILIZE will offer a fraud detection and anti-money laundering solution, built with SparkCognition AI technology, along with consulting services, development and operational assistance to local credit card companies, banks and other financial institutions. By submitting transaction data to a MILIZE-operated cloud service, financial institutions will be able to detect suspicious transactions without making large-scale investments in self-hosted infrastructure.

MILIZE makes full use of quantitative techniques, fintech, AI, and big data, and provides a large number of operational support solutions such as risk management, performance forecast, stock price forecast, and more, to a wide range of financial institutions. SparkCognition is a leading company in the field of artificial intelligence and provides AI solutions to companies and government agencies around the world.

To learn more about SparkCognition, visit http://www.sparkcognition.com.

About SparkCognition:

With award-winning machine learning technology, a multinational footprint, and expert teams focused on defense, IIoT, and finance, SparkCognition builds artificial intelligence systems to advance the most important interests of society. Our customers are trusted with protecting and advancing lives, infrastructure, and financial systems across the globe. They turn to SparkCognition to help them analyze complex data, empower decision-making, and transform human and industrial productivity. SparkCognition offers four main products: Darwin, DeepArmor, SparkPredict, and DeepNLP. With our leading-edge artificial intelligence platforms, our clients can adapt to a rapidly changing digital landscape and accelerate their business strategies. Learn more about SparkCognition's AI applications and why we've been featured in CNBC's 2017 Disruptor 50 and recognized three years in a row on CB Insights' AI 100 by visiting http://www.sparkcognition.com.

For Media Inquiries:

Michelle Saab, SparkCognition, VP, Marketing Communications, [emailprotected], 512-956-5491

SOURCE SparkCognition

http://sparkcognition.com

Read the original here:

SparkCognition and Milize to Offer Automated Machine Learning Solutions for Financial Institutions to the APAC Region - PRNewswire

Comments Off on SparkCognition and Milize to Offer Automated Machine Learning Solutions for Financial Institutions to the APAC Region – PRNewswire

How is Walmart Express Delivery Nailing that 2-Hour Window? Machine Learning – Retail Info Systems News

Posted: at 4:52 pm

Walmart provided more details on its new Express two-hour delivery service, piloted last month and on its way to nearly 2,000 stores.

As agility has become the key to success within a retail landscape extraordinarily disrupted by the spread of COVID-19, the company said it tested, released and scaled the initiative in just over two weeks.

"As we continue to add new machine learning-driven capabilities like this in the future, as well as the corresponding customer experiences, we'll be able to iterate and scale quickly by leveraging the flexible technology platforms we've developed," Janey Whiteside, Walmart's chief customer officer, and Suresh Kumar, global chief technology officer and chief development officer, wrote in a company blog post.

The contactless delivery service employs machine learning to fulfill orders from nearly 2,000 stores, served by 74,000 personal shoppers. Developed by the company's in-house global technology team, the system accounts for such variables as order quantity, staffing levels, the types of delivery vehicles available, and the estimated route length between a store and a home.

See also: How the Coronavirus Will Shape Retail Over the Next 3-5 Years

It also pulls in weather data to account for delivery speeds, and Whiteside and Kumar said it's consistently refining its estimates for future orders.

Consumers must pay an additional $10, on top of any other delivery charges, to take advantage of the service.

Separately, Walmart announced it's paying out another $390 million in cash bonuses to its U.S. hourly associates as a way to recognize their efforts during the spread of COVID-19.

Full-time associates employed as of June 5 will receive $300 while part-time and temporary associates will receive $150, paid out on June 25. Associates in stores, clubs, supply chain and offices, drivers, and assistant managers in stores and clubs are all included.

"Walmart and Sam's Club associates continue to do remarkable work, and it's important we reward and appreciate them," said John Furner, president and CEO of Walmart U.S., in a statement. "All across the country, they're providing Americans with the food, medicine and supplies they need, while going above and beyond the normal scope of their jobs: diligently sanitizing their facilities, making customers and members feel safe and welcome, and handling difficult situations with professionalism and grace."

The retailer has committed more than $935 million in bonuses for associates so far this year.

See also: Walmart Expands No-Contact Transactions During COVID-19

Read the original post:

How is Walmart Express Delivery Nailing that 2-Hour Window? Machine Learning - Retail Info Systems News

Comments Off on How is Walmart Express Delivery Nailing that 2-Hour Window? Machine Learning – Retail Info Systems News

DIU seeks one form of automation (ML) that can help another (RPA) - FedScoop

Posted: at 4:52 pm

Written by Jackson Barnett May 14, 2020 | FEDSCOOP

Think of it as machines helping machines: The Defense Innovation Unit wants a machine learning platform that can boost the Pentagon's existing uses of robotic process automation (RPA) for business tasks.

The goal of the Silicon Valley-based agency's solicitation is to help nudge Department of Defense RPAs into more complex problem-solving territory by providing pattern recognition and instructions on how to adjust automation to fit changing scenarios.

"The ML platform will identify and suggest corrections to business processes that are not limited to previously well-defined business logic methods," the solicitation states.

The DOD has sought to expand its use of RPAs to reduce some of the tedious work many employees are still required to conduct manually. Current use cases are limited to narrow, well-defined tasks, the department says, but it wants machine learning to help automate less-defined problems, like finding abuse or fraud in financial systems, according to the solicitation. The platform will integrate with current RPA technology and be used for data management and algorithmic training.

Machine learning, a type of artificial intelligence system that trains computers to make inferences from large data sets, can help by identifying corrections and fixes for automation that gets stumped on less-defined tasks. The advantage of machine learning is that computers can detect subtle changes that would be lost on the human eye.

The appeal of RPA is simple, DOD officials say.

"We all generally have more work than we have time to do," Rachael Martin, the Joint Artificial Intelligence Center's mission chief for intelligent business automation, augmentation and analytics, said during an April webinar.

Martin said the JAIC is working to help DOD components adopt RPAs. The center is helping coordinate policy and technical solutions for parts of the military to use in their own problem sets, she said in April. Many of the current use cases are being tested in DOD support agencies, she said.

DIU is not looking for a cloud service provider or new RPAs, just a platform that will simplify data flows and use open architecture to leverage machine learning, according to the solicitation.

More:

DIU seeks one form of automation (ML) that can help another (RPA) - FedScoop

Comments Off on DIU seeks one form of automation (ML) that can help another (RPA) – FedScoop

Federated Learning Fuses AI and Privacy and It Could Transform Healthcare – Built In

Posted: at 4:52 pm

It's an understatement to say that doctors are swamped right now. At the beginning of April, coronavirus patients had filled New York emergency rooms so thoroughly that doctors across specialties, including dermatologists and orthopedists, had to help out.

Short-term, doctors need reliable, proven technology, like N95 masks. Longer-term, though, machine learning algorithms could help doctors treat patients. These algorithms can function as hyper-specialized doctors' assistants, performing key technical tasks like scanning an MRI for signs of brain cancer, or flagging pathology slides that show breast cancer has metastasized to the lymph nodes.

One day, an algorithm could check CT scans for the lung lesions and abnormalities that indicate coronavirus.

"That's a model that could be trained," Mona G. Flores, MD, global head of medical AI at NVIDIA, told Built In.

At least, it could be trained in theory. Training an algorithm fit for a clinical setting, though, requires a large, diverse dataset. That's hard to achieve in practice, especially when it comes to medical imaging. In the U.S., HIPAA regulations make it very difficult for hospitals to share patient scans, even anonymized ones; privacy is a top priority at medical institutions.

More on AI and Privacy: Differential Privacy Injects Noise Into Data Sets. Here's How It Works.

That's not to say trained algorithms haven't made it into clinical settings. A handful have passed muster with the U.S. Food and Drug Administration, according to Dr. Spyridon Bakas, a professor at the University of Pennsylvania's Center for Biomedical Image Computing and Analytics.

In radiology, for instance, algorithms help some doctors track tumor size and progression, along with "things that cannot be seen with the naked eye," Dr. Bakas told Built In, like where the tumor will recur, and when.

If algorithms could train on data without puncturing its HIPAA-mandated privacy, though, machine learning could have a much bigger impact on healthcare.

And that's actually possible, thanks to a new algorithm training technique: federated learning.

Federated learning is a way of training machine learning algorithms on private, fragmented data, stored on a variety of servers and devices. Instead of pooling their data, participating institutions all train the same algorithm on their in-house, proprietary data. Then they pool their trained algorithm parameters, not their data, on a central server, which aggregates all their contributions into a new, composite algorithm. This composite gets shipped back to each participating institution for more training, and then shipped back to the central server for more aggregation.

Eventually, all the individual institutions' algorithms converge on an optimal, trained algorithm that is more generally applicable than any one institution's would have been, and nearly identical to the model that would have arisen from training on pooled data.
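
As a minimal sketch of the aggregation step, assuming the server weights each institution's contribution by its dataset size (one common choice; as noted later in this piece, the right weighting is still an open research question):

    import numpy as np

    def federated_average(client_params, client_sizes):
        # combine per-institution parameter vectors into one composite model,
        # weighting each contribution by the size of the dataset behind it
        coeffs = np.array(client_sizes) / sum(client_sizes)
        return (coeffs[:, None] * np.stack(client_params)).sum(axis=0)

    # three hospitals contribute trained parameters, never their patient data
    composite = federated_average(
        [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])],
        client_sizes=[100, 300, 200],
    )
    print(composite)  # composite parameters, shipped back to each hospital for more training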

In December 2019, at a radiology conference in Chicago, NVIDIA unveiled a new feature for its Clara SDK. This software development kit, created expressly for the healthcare field, helps medical institutions make and deploy machine learning models "with a set of tools and libraries and examples," Dr. Flores said.

The new tool was Clara Federated Learning: infrastructure that allows medical institutions to collaborate on machine learning projects without sharing patient data.

NVIDIA's not the only tech company embracing federated learning. Another medical AI company, Owkin, has rolled out a software stack for federated learning called Owkin Connect, which integrates with NVIDIA's Clara. Meanwhile, at least two general-purpose federated learning frameworks have rolled out recently, too: Google's TensorFlow Federated and the open-source PySyft.

The concept of federated learning, though, dates back years earlier. Like many innovations, it was born at Google.

In 2017, Google researchers published a paper on a new technique they hoped could improve search suggestions on Gboard, the digital keyboard on Android phones. It was the first paper on federated learning.

In a blog post, Google AI research scientists Brendan McMahan and Daniel Ramage explained the very first federated learning use case like this:

"When Gboard shows a suggested query, your phone locally stores information about the current context and whether you clicked the suggestion. Federated Learning processes that history on-device to suggest improvements to the next iteration of Gboard's query suggestion model."

By blending edge computing and machine learning, federated learning offered a way to constantly improve the global query suggestion model without tracking users' every move in a central database. In other words, it allowed Google to streamline its data collection process, essential given the Android OS's more than 2 billion active users.

That's just one of many potential applications, though. Dr. Bakas saw potential applications in medical imaging. This should come as no surprise: Dr. Bakas was the lead organizer of the BraTS challenge.

Since 2012, the BraTS challenge, an annual data science competition, has asked competitors to train algorithms to spot signs of brain tumors, specifically gliomas, on MRIs. All the competing teams use the same benchmark dataset to train, validate and test their algorithms.

In 2018, that dataset consisted of about 2,000 MRIs from roughly 500 patients, pulled from ten different medical institutions, Dr. Bakas said.

Now, this is a tiny fraction of the MRIs in the world relevant to the BraTS contest; about 20,000 people per year are diagnosed with gliomas in the U.S. alone. But obtaining medical images for a competition dataset is tricky. For one, it requires the patients' consent. For another, it requires approval from the contributing hospital's internal review board, which involves proving that the competition serves the greater good.

The BraTS challenge is just one of many data science challenges that navigate labyrinthine bureaucracy to compile datasets of medical images.

Major companies rely on these datasets, too; they're more robust than what even Google could easily amass on its own. Google's LYNA, a machine learning algorithm that can pinpoint signs of metastatic breast cancer in the lymph nodes, first made headlines by parsing the images from the 2016 ISBI Camelyon challenge's dataset more than 10 percent more accurately than the contest's original winner. NVIDIA, meanwhile, sent a team to the 2018 BraTS challenge and won.

Even challenge-winning algorithms, though, or the algorithms that beat the winning algorithms, aren't ready for clinical use. Google's LYNA remains in development. Despite 2018 headlines touting it as "better than humans" in detecting advanced breast cancer, it still needs more testing.

"[A]n accurate algorithm alone is insufficient to improve pathologists' workflow or improve outcomes for breast cancer patients," Google researchers Martin Stumpe and Craig Mermel wrote on the Google AI blog.

For one thing, it was trained to read one slide per patient, but in a real clinical setting, doctors look at multiple slides per patient.

For another, accuracy in a challenge context doesn't always mean real-world accuracy. Challenge datasets are small, and biased by the fact that every patient consented to share their data. Before clinical use, even a stellar algorithm may need to train on more data.

Like, much more data.

More on Data Science: Coronavirus Charts Are Everywhere. But Are They Good?

Federated learning, Dr. Bakas saw, could allow powerful algorithms access to massive stores of data. But how well did it work? In other words, could federated learning train an algorithm as accurate as one trained on pooled data? In 2018, he and a team of researchers from Intel published a paper on exactly that.

"No one before has attempted to apply federated learning in medicine," he said.

He and his co-authors trained an off-the-shelf, basic algorithm on BraTS 2018 MRI images using four different techniques. One was traditional machine learning, using pooled data; another was federated learning; the other two techniques were alternate collaborative learning techniques that, like federated learning, involved training an algorithm on a fragmented dataset.

"We were not married to federated learning," Dr. Bakas said.

It emerged as a clear success story in their research, though: the best technique for melding AI with HIPAA-mandated data privacy. In terms of accuracy, the algorithm trained via federated learning was second only to the algorithm trained on conventional, pooled data. (The difference was subtle, too; the federated learning algorithm was 99 percent as accurate as the traditional one.) Federated learning also made the different institutions' algorithms converge more neatly on an optimal model than the other collaborative learning techniques did.

Once Dr. Bakas and his coauthors validated the concept of federated learning, a team of NVIDIA researchers elaborated on it further, Dr. Bakas explained. Their focus was fusing it with even more ironclad privacy technology. Though federated learning never involves pooling patient data, it does involve pooling algorithms trained on patient data, and hackers could, hypothetically, reconstruct the original data from the trained algorithms.

NVIDIA found a way to prevent this with a blend of encryption and differential privacy. The reinvented model aggregation process involves "transferring only partial weights... so that people cannot reconstruct the data," Dr. Flores said.

It's worth noting that NVIDIA's paper, like the one Dr. Bakas co-authored, relied on the BraTS 2018 dataset. This was largely a matter of practicality, but the link between data science competitions and federated learning could grow more substantive.

In the long term, Dr. Bakas sees data science competitions facilitating algorithmic development; thanks to common datasets and performance metrics, these contests help identify top-tier machine learning algorithms. The winners can then progress to federated learning projects and train on much bigger datasets.

In other words, federated learning projects won't replace data science competitions. Instead, they will function as a kind of major league for competition-winning algorithms to play in, and they'll improve the odds of useful algorithms making it into clinical settings.

"The end goal is really to reach to the clinic," Dr. Bakas said, "to help the radiologist [and] to help the clinician do their work more efficiently."

Short answer: a lot. Federated learning is still a new approach to machine learning; Clara FL, let's remember, debuted less than six months ago, and researchers continue to work out the kinks.

So far, NVIDIA's team has learned that clear, shared data protocols play a key role in federated learning projects.

"You have to make sure that the data to each of the sites is labeled in the same fashion," Dr. Flores said, "so that you're comparing apples to apples."

Open questions remain, though. For instance: when a central server aggregates a group of trained algorithms, how should it do that? It's not as straightforward as taking a mathematical average, because each institution's dataset is different in terms of size, underlying population demographics and other factors.

"Which ones do you give more weight to than others?" Dr. Flores said. "There are many different ways of aggregating the data... That's something that we are still researching."

Federated learning has major potential, though, especially in Europe, where privacy regulations have already tightened due to the General Data Protection Regulation. The law, which went into effect back in 2018, is the self-proclaimed "toughest privacy and security law in the world," so stringent, Dr. Bakas noted, that it would prevent hospitals from contributing patient data to the BraTS challenge even if the individual patients consented.

So far, the U.S. hasn't cracked down quite as heavily on privacy as the EU has, but federated learning could still transform industries where privacy is paramount. Already, banks can train machine learning models to recognize signs of fraud using in-house data; however, if each bank has its own model, the big banks will benefit and the small banks will be left vulnerable.

"While individual banks may like this outcome, it is less than ideal for solving the social issue of money laundering," writes B Capital venture capitalist Mike Fernandez.

Federated learning could even the playing field, allowing banks of all sizes to contribute to a global fraud detection model trained on more data than any one bank could amass, all while maintaining their clients' privacy.

Federated learning could apply to other industries, too. As browsers like Mozilla Firefox and Google Chrome phase out third-party cookies, federated learning of cohorts could become a way of targeting digital ads to groups of like-minded users while still keeping individual browser histories private. Federated learning could also allow self-driving cars to share the locations of potholes and other road hazards without sharing, say, their exact current location.

One thing Dr. Bakas doesn't see federated learning doing, even in the distant future: automating away doctors. Instead, he sees it freeing up doctors to do what they do best, whether that's connecting with patients or treating novel and complex ailments with innovative treatments. Doctors have already dreamed up creative approaches to the coronavirus, like using massage mattresses for pregnant women to boost patients' oxygen levels.

Doctors just don't really excel at scanning medical imaging and diagnosing common, well-documented ailments, like gliomas or metastatic breast cancer.

"They can identify something that is already flaring up on a scan," Dr. Bakas said, "but there are some ambiguous areas that radiologists are uncertain about."

Machine learning algorithms, too, often make mistakes about these areas. At first. But over time, they can learn to make fewer, spotting patterns in positive cases invisible to the human eye.

This is why they complement doctors so powerfully: they can see routine medical protocols in a fresh, robotic way. That may sound like an oxymoron, but it's not necessarily one anymore.

More on Data Science: The Dos and Don'ts of Database Design, According to Experts

Excerpt from:

Federated Learning Fuses AI and Privacy and It Could Transform Healthcare - Built In

Comments Off on Federated Learning Fuses AI and Privacy and It Could Transform Healthcare – Built In

What Is Differential Deep Learning? Through The Lens Of Trading – Analytics India Magazine

Posted: at 4:52 pm

The explosion of the internet, in conjunction with the success of neural networks, brought the world of finance closer to more exotic approaches. Deep learning today is one such technique that is being widely adopted to cut down losses and generate profits.

When gut instincts do not do the job, mathematical methods come into play. Differential equations, for instance, can be used to represent a dynamic model. The approximation of pricing functions is a persistent challenge in quantitative finance. By the early 1980s, researchers were already experimenting with Taylor Expansions for stochastic volatility models.

For example, suppose company A wants to buy a commodity, say oil, from company B at a future date but is unsure of the future price. Company A therefore makes a deal with B that, no matter what the price of oil is in the future, B will sell it to A at a price set by their contract.

In the world of finance, this is a watered-down version of derivatives trading. Derivatives are securities built on underlying assets. In the case above, company A predicts a rise in price, and company B predicts a fall. Both companies are making a bet on future prices and agree upon a price that cuts down their losses or can even bring profits (if A sells after the price rises). So how do these companies arrive at a certain price? How do they predict the future price?

Taking the same example of derivatives trading, researchers at Danske Bank of Denmark have explored the implications of differential deep learning.

Deep learning offers the much-needed analytic speed necessary for approximating volatile markets. Machine learning tools can take on the high dimensionality (many parameters) of a market and help resolve the computational bottlenecks.

Differential machine learning is an extension of supervised learning, where ML models are trained on differentials of labels with respect to inputs.

In the context of financial derivatives and risk management, pathwise differentials are popularly computed with automatic adjoint differentiation (AAD). AAD is an algorithm to calculate derivative sensitivities, very quickly. Nothing more, nothing less. AAD is also known in the field of machine learning under the name back-propagation, or simply backprop.

Differential machine learning, combined with AAD, wrote the authors, provides extremely effective pricing and risk approximations. They say that fast pricing analytics can be produced and can effectively compute risk management metrics and even simulate hedge strategies.

This work compares differential machine learning to data augmentation in computer vision, where multiple labelled images are produced from a single one, by cropping, zooming, rotating or recolouring.

Data augmentation not only extends the training set but also encourages the machine learning model to learn important invariances (features that stay the same). Similarly, derivative labels not only increase the amount of information in the training set but also encourage the model to learn the shape of the pricing function. The derivatives of a feedforward network form another neural network, which efficiently computes risk sensitivities in the context of pricing approximation. Since the adjoints form a second network, one can use them for training as well and expect a significant performance gain.
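
A minimal sketch of this idea in TensorFlow, assuming a toy pricing function y = x^2 whose known differential dy/dx = 2x stands in for the AAD-computed labels (the tiny architecture and the equal weighting of the two loss terms are simplifications, not the paper's exact setup):

    import tensorflow as tf

    # toy dataset: values y and their pathwise differentials dy/dx
    x = tf.random.uniform((256, 1), -1.0, 1.0)
    y = x ** 2
    dydx = 2 * x

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="softplus", input_shape=(1,)),
        tf.keras.layers.Dense(1),
    ])
    opt = tf.keras.optimizers.Adam(0.01)

    for _ in range(500):
        with tf.GradientTape() as outer:
            with tf.GradientTape() as inner:
                inner.watch(x)
                pred = model(x)
            pred_dydx = inner.gradient(pred, x)  # the adjoints: a "second network"
            # train on values and on differentials simultaneously
            loss = tf.reduce_mean((pred - y) ** 2) + tf.reduce_mean((pred_dydx - dydx) ** 2)
        grads = outer.gradient(loss, model.trainable_variables)
        opt.apply_gradients(zip(grads, model.trainable_variables))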

Risk sensitivities converge considerably slower than values and often remain blatantly wrong, even with hundreds of thousands of examples. The authors resolve these problems by training ML models on datasets augmented with differentials of the labels with respect to the training inputs.

This simple idea, assert the authors, along with an adequate training algorithm, will allow ML models to learn accurate approximations even from small datasets, making machine learning viable in the context of trading.

Differential machine learning learns better from data alone; the vast amount of information contained in the differentials plays a similar role to manual adjustments based on contextual information, and is often more effective.

The researchers posit that the "unreasonable effectiveness" of differential ML applies in situations where high-quality first-order derivatives with respect to the training inputs are available, and in complex computational tasks such as the pricing and risk approximation of complex derivatives trading.

Differentials inject meaningful additional information, eventually producing better results with smaller datasets. Learning effectively from small datasets is critical in the context of regulations, where the pricing approximation must be learned quickly and the expense of a large training set cannot be afforded.

The results from the experiments by Danske Bank's researchers show that learning the correct shape from differentials is crucial to the performance of regression models, including neural networks.

Know more about differential deep learning here.

See more here:

What Is Differential Deep Learning? Through The Lens Of Trading - Analytics India Magazine

Comments Off on What Is Differential Deep Learning? Through The Lens Of Trading – Analytics India Magazine

Machine Learning Market Growth Trends, Key Players, Analysis, Competitive Strategies and Forecasts to 2026 – News Distinct

Posted: at 4:52 pm

H2O.ai and SAS Institute

Machine Learning Market Competitive Analysis:

Consistent technological developments, surging industrialization, raw material affluence, increasing demand for machine learning, rising disposable incomes, and soaring product awareness are adding considerable revenue to the market. According to the report, the machine learning market is expected to report a healthy CAGR from 2020 to 2026. Factors such as product innovations, industrialization, and increasing urbanization in developing and developed countries are likely to boost market demand in the near future.

The report further sheds light on current and forthcoming opportunities and challenges in the machine learning market and provides succinct analysis that assists clients in improving their business gains. Potential market threats, risks, uncertainties, and obstacles are also highlighted in this report, helping market players to lower possible losses to their machine learning business. The report also employs various analytical models, such as Porter's Five Forces and SWOT analysis, to evaluate bargaining powers, threats, and opportunities in the market.

Machine Learning Market Segments:

Moreover, the leading machine learning manufacturers and companies are illuminated in the report with extensive market intelligence. The report offers detailed and precise assessments of companies based on their financial operations, revenue, market size, share, annual growth rates, production cost, sales volume, gross margins, and CAGR. Their manufacturing details are also covered, comprising analysis of their production processes, volume, product specifications, raw material sourcing, key vendors, clients, distribution networks, organizational structure, and global presence.

The report also underscores their strategic planning, including mergers, acquisitions, ventures, partnerships, product launches, and brand developments. Additionally, the report renders an exhaustive analysis of crucial market segments, which includes machine learning types, applications, and regions. The segmentation sections cover analytical and forecast details of each segment based on profitability, global demand, current revenue, and development prospects. The report further scrutinizes diverse regions, including North America, Asia Pacific, Europe, the Middle East and Africa, and South America. The report eventually helps clients in driving their machine learning business wisely and building superior strategies for their machine learning businesses.

To get Incredible Discounts on this Premium Report, Click Here @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=6487&utm_source=NDN&utm_medium=003

Table of Contents

1 Introduction of Machine Learning Market

1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology

3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Machine Learning Market Outlook

4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Force Model
4.4 Value Chain Analysis

5 Machine Learning Market, By Deployment Model

5.1 Overview

6 Machine Learning Market, By Solution

6.1 Overview

7 Machine Learning Market, By Vertical

7.1 Overview

8 Machine Learning Market, By Geography

8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 Machine Learning Market Competitive Landscape

9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles

10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix

11.1 Related Research

Get Complete Report @ https://www.verifiedmarketresearch.com/product/global-machine-learning-market-size-and-forecast-to-2026/?utm_source=NDN&utm_medium=003

About us:

Verified Market Research is a leading global research and consulting firm serving over 5,000 customers. Verified Market Research provides advanced analytical research solutions while offering information-enriched research studies. We offer insight into strategic and growth analyses and the data necessary to achieve corporate goals and critical revenue decisions.

Our 250 analysts and SMEs offer a high level of expertise in data collection and governance, and use industrial techniques to collect and analyse data on more than 15,000 high-impact and niche markets. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise and years of collective experience to produce informative and accurate research.

We study 14+ categories from Semiconductor & Electronics, Chemicals, Advanced Materials, Aerospace & Defence, Energy & Power, Healthcare, Pharmaceuticals, Automotive & Transportation, Information & Communication Technology, Software & Services, Information Security, Mining, Minerals & Metals, Building & construction, Agriculture industry and Medical Devices from over 100 countries.

Contact us:

Mr. Edwyne Fernandes

US: +1 (650)-781-4080
UK: +44 (203)-411-9686
APAC: +91 (902)-863-5784
US Toll Free: +1 (800)-7821768

Email: [emailprotected]

Tags: Machine Learning Market Size, Machine Learning Market Trends, Machine Learning Market Growth, Machine Learning Market Forecast, Machine Learning Market Analysis

Our Trending Reports

Fresh Food Packaging Market Size, Growth Analysis, Opportunities, Business Outlook and Forecast to 2026

Machine Learning Market Size, Growth Analysis, Opportunities, Business Outlook and Forecast to 2026

Read more:

Machine Learning Market Growth Trends, Key Players, Analysis, Competitive Strategies and Forecasts to 2026 - News Distinct

Comments Off on Machine Learning Market Growth Trends, Key Players, Analysis, Competitive Strategies and Forecasts to 2026 – News Distinct

A New Way To Think About Artificial Intelligence With This ETF – MarketWatch

Posted: at 4:52 pm

Among the myriad thematic exchange traded funds on offer, artificial intelligence products are numerous, and some are catching on with investors.

Count the ROBO Global Artificial Intelligence ETF (THNQ) as the latest member of the artificial intelligence ETF fray. THNQ, which debuted earlier this week, comes from a good gene pool, as its stablemate, the Robo Global Robotics and Automation Index ETF (ROBO), was the original and remains one of the largest robotics ETFs.

That's relevant because artificial intelligence and robotics are themes that frequently intersect with each other. Home to 72 stocks, the new THNQ follows the ROBO Global Artificial Intelligence Index.

Adding to the case for A.I., even with a new product such as THNQ, is that the technology has hundreds, if not thousands, of applications supporting its growth.

Companies developing autonomous vehicle (AV) technology are mainly relying on machine learning or deep learning, or both, according to IHS Markit. A major difference between machine learning and deep learning is that, while deep learning can automatically discover the features to be used for classification in unsupervised exercises, machine learning requires these features to be labeled manually with more rigid rulesets. In contrast to machine learning, deep learning requires significant computing power and training data to deliver more accurate results.

Like its stablemate ROBO, THNQ offers wide reach, with exposure to 11 sub-groups. Those include big data, cloud computing, cognitive computing, e-commerce and other consumer angles, and factory automation, among others. Of course, semiconductors are part of the THNQ fold, too.

"The exploding use of AI is ushering in a new era of semiconductor architectures and computing platforms that can handle the accelerated processing requirements of an AI-driven world," according to ROBO Global. "To tackle the challenge, semiconductor companies are creating new, more advanced AI chip engines using a whole new range of materials, equipment, and design methodologies."

While THNQ is a new ETF, investors may do well not to focus on that, but rather on the fact that the AI boom is in its nascent stages.

"Historically, the stock market tends to under-appreciate the scale of opportunity enjoyed by leading providers of new technologies during this phase of development," notes THNQ's issuer. "This fact creates a remarkable opportunity for investors who understand the scope of the AI revolution, and who take action at a time when AI is disrupting industry as we know it and forcing us to rethink the world around us."

The new ETF charges 0.68% per year, or $68 on a $10,000 investment. That's in line with rival funds.

2020 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

Visit link:

A New Way To Think About Artificial Intelligence With This ETF - MarketWatch

Comments Off on A New Way To Think About Artificial Intelligence With This ETF – MarketWatch