Byrider to Partner With PointPredictive as Machine Learning AI Partner to Prevent Fraud – CloudWedge


Byrider selected PointPredictive's machine learning AI scoring after extensive testing of the solution and evaluating retrospective results. "In our retrospective test with PointPredictive, we saw a significant lift in identifying defaults tied to misrepresentation and fraud," said Gary Harmon, Chief Risk Officer of Byrider.

As part of the integration, Byrider will use the company's scoring solution, Auto Fraud Manager with Auto Fraud Alert Reporting, to identify misrepresentation and prevent default on high-risk applications while streamlining the approval process for low-risk applications to improve and expedite both the consumer and dealer loan funding experience, ultimately expanding their loan portfolio profitably.

PointPredictive launched Auto Fraud Manager with Auto Fraud Alert Reporting to help address the $6 billion annual problem of misrepresentation and fraud that plagues the auto lending industry. The solution uses machine learning to mine historical data from applications across the industry to pinpoint where fraud is happening. Over 60 million applications have been evaluated and scored by the machine learning AI system, which continuously learns new patterns as they emerge.

"PointPredictive is excited to partner with Byrider to help them achieve better relationships with their borrowers and their dealer network," advises Tim Grace, CEO of PointPredictive. "Our solutions have proven to help lenders reduce their risk of early defaulted loans and, in the process, help them streamline loans for reduced stipulations and friction in the lending process. By better targeting risk, the end beneficiaries are their dealers and borrowers, who can see a reduction in the time it takes to fund loans."


What is deep learning and why is it in demand? – Express Computer

The human brain is complicated, and for good reason. Pathways of millions of neurons work simultaneously, turning stimuli and observation into an appropriate response. Adding to its greatness, the brain is constantly observing and learning.

A well-known learning method, learning by example, helps us develop an attitude or coping mechanism toward something we haven't fully experienced but have a lot of information about. When we apply the same logic to machines, deep learning comes closest to this.

A branch of the machine learning family of algorithms, deep learning is a computer's or machine's ability to learn by example, that is, from data. It eliminates the need for manual feature extraction because it learns features directly from the data.

The most common example would be automated vehicles. An automated car is able to distinguish between people walking on the road, poles, traffic signals, and signboards by understanding the data it receives. How does it manage to detect things?

Unlike traditional machine learning, deep learning uses an artificial neural network with many layers to form features from the data it acquires. That data can include images, text, and sound, which helps the machine form a clearer picture of an object and detect it more easily. Does this remind you of something? Yes, that's essentially how our brain works; we just happen to possess a biological neural network.

As we move into an era that demands a higher level of data processing, deep learning justifies its place in the world.

One major defining feature is its use of artificial neural networks, which deliver the strongest results. Unlike traditional machine learning, there is no need to build new features and algorithms, because deep learning identifies features directly from the data. A deep network can stack many layers, sometimes more than a hundred, to process features directly from the data it receives while monitoring its own performance.
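To make the point about skipping manual feature extraction concrete, here is a minimal sketch (not from the article) that trains a small multi-layer network directly on raw 8x8 digit pixels from scikit-learn's built-in dataset, with no hand-crafted features; the layer sizes and iteration count are arbitrary illustrative choices.

```python
# A minimal sketch of "no manual feature extraction": a small multi-layer network
# is trained directly on raw pixel intensities, and its layers learn their own features.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)               # raw pixel intensities, 64 per image
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)                         # hidden layers learn features on their own
print(net.score(X_test, y_test))                  # typically well above 0.9 on held-out digits
```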

Companies that invest in deep learning are primarily looking to solve complex problems, and this form of learning accomplishes that by drawing on large data sets.

Another reason deep learning thrives today is that it powers functions that need voice and image detection. Companies that rely on face recognition, object identification, speech-to-text applications, and translation can make optimal use of deep learning techniques.

Concluding

While we cannot escape the fact that deep learning works best only with huge amounts of data and takes time to set up, there is hope for ever-better performance.

As architectures move from structured and fixed to ever-evolving, the next few years will see more businesses adopting this form of machine learning. Based on existing data and examples of success, there is an indication that companies using deep learning techniques perform better than those that don't.



Machine Learning Answers: Sprint Stock Is Down 15% Over The Last Quarter, What Are The Chances It’ll Rebound? – Trefis

Sprint (NYSE:S) stock has seen significant volatility over recent months, declining by about 15% over the last quarter and by close to 25% over the last six months, on account of the company's underperforming postpaid wireless business and concerns over whether its proposed merger with larger rival T-Mobile will come to fruition.

We started with a simple question that investors could be asking about the Sprint stock: given a certain drop or rise, say a 5% drop in a week, what should we expect for the next week? Is it very likely that Sprint will recover the next week? What about the next month or a quarter?

In fact, we found that if Sprint stock drops 15% in a quarter (63 trading days), there is a ~23% chance that it will rise by 10% over the subsequent month (21 trading days). Want to try other combinations? You can test a variety of scenarios on the Trefis Machine Learning Engine to calculate: if Sprint stock dropped, what's the chance it'll rise?

For example, after a 5% drop over a week (5 trading days), the Trefis machine learning engine says the chances of an additional 5% drop over, say, the next month are about 34%. Quite significant, and helpful to know for someone trying to recover from a loss.
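For readers who want to see the counting idea behind this kind of statement, here is a hedged sketch that estimates such conditional chances from a price history. The price series below is synthetic and the function is not the Trefis engine, just an illustration of conditioning on a past drop and counting subsequent rebounds.

```python
# Sketch: P(rise of `rise` over `lookahead` days | drop of `drop` over `lookback` days),
# estimated by counting events in a (here, synthetic) daily price series.
import numpy as np

rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.02, size=2500))   # fake daily closes

def conditional_chance(prices, drop, lookback, rise, lookahead):
    hits, events = 0, 0
    for t in range(lookback, len(prices) - lookahead):
        past = prices[t] / prices[t - lookback] - 1
        future = prices[t + lookahead] / prices[t] - 1
        if past <= -drop:                       # conditioning event: the drop happened
            events += 1
            hits += future >= rise              # did the rebound follow?
    return hits / events if events else float("nan")

# e.g. after a 5% drop over a week, how often did a 5% gain follow within a month?
print(conditional_chance(prices, drop=0.05, lookback=5, rise=0.05, lookahead=21))
```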

Knowing what to expect for almost any scenario is powerful. It can help you avoid rash moves. Given the recent volatility in the market, owing to a mix of macroeconomic events like the trade war with China and the US Federal Reserve's moves, we think investors can prepare better.

Below, we discuss a few scenarios and answer common investor questions:

Question 1: Does a rise in Sprint stock become more likely after a drop?

Answer:

Consider two situations,

Case 1: Sprint stock drops by 5% or more in a week

Case 2: Sprint stock rises by 5% or more in a week

Is the chance of say a 5% rise in Sprint stock over the subsequent month after Case 1 or Case 2 occurs much higher for one versus the other?

The answer is not really. The chance of a 5% rise over a month (21 trading days) is roughly the same, at 34%, for both cases.

Question 2: What about the other way around, does a drop in Sprint stock become more likely after a rise?

Answer:

Consider, once again, two cases:

Case 1: Sprint stock drops by 5% in a week

Case 2: Sprint stock rises by 5% in a week

The probability of a 5% drop after Case 1 or Case 2 is actually quite similar, at 34% and 33%, respectively. The probability is also similar for the S&P 500, and for many other stocks.

Question 3: Does patience pay?

Answer:

If you buy and hold Sprint stock, the expectation is that over time the near-term fluctuations will cancel out and the long-term positive trend will favor you, at least if the company is otherwise strong. Overall, according to the data and the Trefis machine learning engine's calculations, patience absolutely pays for most stocks!

After a drop of 5% in Sprint stock over a week (5 trading days), while there is only about a 23% chance the stock will gain 5% over the subsequent week, there is a more than 39% chance this will happen within 6 months, and a 45% chance it'll gain 5% over a year (about 252 trading days).

The table below shows the trend for Sprint Stock:

Question 4: What about the possibility of a drop after a rise if you wait for a while?

Answer:

After seeing a rise of 5% over 5 days, the chances of a 5% drop in Sprint stock are about 42% over the subsequent quarter of waiting (63 trading days). This chance increases slightly to about 45% when the waiting period is a year (252 trading days).

The table below shows the trend for Sprint Stock:



Sports Organizations Using Machine Learning Technology to Drive Sponsorship Revenues – Sports Illustrated

The sports industry has begun to place a greater emphasis on data capture and the use of analytics over the past decade, particularly as it relates to on-field performance. But while sports has become big business, Adam Grossman (founder of Block Six Analytics, aka B6A) suggests that from an economic and financial perspective, in terms of understanding concepts like asset valuation, cash flow, and regression, it remains behind the times. To help bring the industry up to speed, Grossman developed a sponsorship evaluation platform that values sports assets in the same manner that venture capitalists, private equity firms, and investment banks look at investment opportunities. Using machine learning technology (think: natural language processing, computer vision), B6A's proprietary sponsorship model translates traditional fit and engagement benchmarks into probabilistic revenue growth metrics. Over the last 10 months, more than a dozen pro sports organizations have begun using Block Six technology, as opposed to relying on antiquated metrics like CPM, to drive sponsorship revenues.

Howie Long-Short: Sellers of sports sponsorships naturally seek brand partners that are demographically aligned. While most teams and media entities have historically managed to gather insights on their own organization, the challenge has always been capturing that of potential partners: the demographic data needed to ensure audience alignment so that both parties can achieve their goals. Grossman explained that those on the sales side use the insights B6A provides to find new sponsors and to demonstrate their audience is a good fit for [a particular] brand. It should be noted that while we're focused on rights holders, B6A also works with corporate partners investing in sports; typically, Fortune 500 companies that use the software to ensure they're spending their marketing dollars efficiently.

Detailed knowledge about one's own audience can also be beneficial from an engagement perspective. Grossman explained that sports organizations have historically struggled to translate brand metrics into revenue metrics, but if [a seller] can prove that they have the right audience [for a buyer], that the audience is interested in the [prospective partner's] company and in their product(s), and that the seller will publish content that drives engagement and awareness [for the buyer] within the target demo, [they can say with a level of confidence that they are] maximizing the probability of increasing revenues. Statistically speaking (at least according to the way B6A measures lift in brand perception), there is significant correlation between engagement, sentiment, awareness of a brand, and revenue growth.
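As a toy illustration of what "significant correlation" means in practice (the numbers below are invented, and B6A's actual model is proprietary and far richer than this), one can compute Pearson correlations between engagement or sentiment metrics and revenue growth:

```python
# Relate per-post engagement and sentiment scores to sponsor revenue growth with
# a Pearson correlation. All values are hypothetical, for illustration only.
import numpy as np

engagement   = np.array([1.2, 0.8, 2.5, 3.1, 1.9, 2.8])    # e.g. interactions per 1k followers
sentiment    = np.array([0.1, -0.2, 0.4, 0.6, 0.2, 0.5])    # mean post sentiment score
revenue_lift = np.array([2.0, 0.5, 4.1, 5.0, 2.8, 4.6])     # % sponsor revenue growth

print(np.corrcoef(engagement, revenue_lift)[0, 1])   # correlation: engagement vs revenue
print(np.corrcoef(sentiment, revenue_lift)[0, 1])    # correlation: sentiment vs revenue
```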

Block Six was kind enough to run a complimentary analysis on thousands of posts attributed to followers of JohnWallStreet's Twitter account to demonstrate how the platform's findings could be used. The report they turned over indicated that even in comparison to the golf companies and brands like Amazon and Apple, [JWS] disproportionately reaches a more educated and higher-income audience; in fact, from an education perspective, JWS has the most educated following [analyzed to date]. While we know that a significant number of league commissioners, team owners, and C-level team/league, media, and agency executives read the newsletter daily, from an aggregate perspective the data shows that JWS content is reaching a much wider range of senior leaders across the business world. That's particularly valuable information to have as we continue our search for the right title sponsor. To date, JWS sales efforts have been focused on service companies that seek to reach sports' most influential decision makers, but the data born out of the B6A study shows that any business targeting highly educated, high-income earners should be pursued.

Taking it a step further, the psychographic observations gained reflect that technology and gambling are two topics that the JWS audience is particularly interested in. To date, JWS has not targeted brands in either field (technology due to a lack of time/resources, gambling because we incorrectly assumed they would be solely focused on consumer acquisition), but Grossman suggests that we should be as the data indicates businesses within those two sectors are natural advertisers for the brand.



2010–2019: The rise of deep learning – The Next Web

No other technology was more important over the past decade than artificial intelligence. Stanford's Andrew Ng called it "the new electricity," and both Microsoft and Google changed their business strategies to become AI-first companies. In the next decade, all technology will be considered AI technology. And we can thank deep learning for that.

Deep learning is a friendly facet of machine learning that lets AI sort through data and information in a manner that emulates the human brain's neural network. Rather than simply running algorithms to completion, deep learning lets us tweak the parameters of a learning system until it outputs the results we desire.

The 2018 Turing Award, presented in 2019 for foundational work in artificial intelligence research, went to three of deep learning's most influential architects: Facebook's Yann LeCun, Google's Geoffrey Hinton, and the University of Montreal's Yoshua Bengio. This trio, along with many others over the past decade, developed the algorithms, systems, and techniques responsible for the onslaught of AI-powered products and services that are probably dominating your holiday shopping lists.


Deep learning powers your phone's face unlock feature, and it's the reason Alexa and Siri understand your voice. It's what makes Microsoft Translator and Google Maps work. If it weren't for deep learning, Spotify and Netflix would have no clue what you want to hear or watch next.

How does it work? It's actually simpler than you might think. The machine uses algorithms to shake out answers like a series of sifters. You put a bunch of data in one side, it falls through sifters (abstraction layers) that pull specific information from it, and the machine outputs what's basically a curated insight. A lot of this happens in what's called the black box, a place where the algorithm crunches numbers in a way that we can't explain with simple math. But since the results can be tuned to our liking, it usually doesn't matter whether we can show our work or not when it comes to deep learning.
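For the curious, here is a bare-bones sketch of data "falling through" abstraction layers: each layer is just a weight matrix plus a nonlinearity that keeps some information and discards the rest. The layer sizes and random weights are purely illustrative, and no training is shown.

```python
# Forward pass through stacked "sifter" layers: raw input -> hidden layers -> output scores.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 256, 64, 10]          # raw input -> two hidden layers -> class scores
weights = [rng.normal(0, 0.1, size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = np.maximum(0, x @ W)           # ReLU: each layer extracts a more abstract view
    return x @ weights[-1]                 # final layer: the "curated insight"

scores = forward(rng.normal(size=784))     # one fake 28x28 image, flattened
print(scores.round(2))
```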

Deep learning, like all artificial intelligence technology, isn't new. The term was brought to prominence in the 1980s by computer scientists. And by 1986 a team of researchers including Geoffrey Hinton managed to come up with a backpropagation-based training method that hinted at the beginnings of the modern artificial neural network. A scant few years later, a young Yann LeCun would train an AI to recognize handwritten letters using similar techniques.


But, as those of us over 30 can attest, Siri and Alexa weren't around in the late 1980s, and we didn't have Google Photos there to touch up our 35mm Kodak prints. Deep learning, in the useful sense we know it now, was still a long way off. Eventually, though, the next generation of AI superstars came along and put their mark on the field.

In 2009, the beginning of the modern deep learning era, Stanford's Fei-Fei Li created ImageNet. This massive training dataset made it easier than ever for researchers to develop computer vision algorithms and directly led to similar paradigms for natural language processing and other bedrock AI technologies that we take for granted now. This led to an age of friendly competition that saw teams around the globe competing to see who could train the most accurate AI.

The fire was lit. By 2010 there were thousands of AI startups focused on deep learning, and every big tech company from Amazon to Intel was completely dug in on the future. AI had finally arrived. Young academics with notable ideas were propelled from campus libraries to seven- and eight-figure jobs at Google and Apple. Deep learning was well on its way to becoming a backbone technology for all sorts of big data problems.

And then 2014 came, and Apple's Ian Goodfellow (then at Google) invented the generative adversarial network (GAN). This is a type of deep learning artificial neural network that plays cat-and-mouse with itself in order to create an output that appears to be a continuation of its input.


When you hear about an AI painting a picture, the machine in question is probably running a GAN that takes thousands or millions of images of real paintings and then tries to imitate them all at once. A developer tunes the GAN to be more like one style or another so that it doesn't spit out blurry gibberish, and then the AI tries to fool itself. It'll make a painting and then compare the painting to all the real paintings in its dataset; if it can't tell the difference, then the painting passes. But if the AI discriminator can tell its own fake, it scraps that one and starts over. It's a bit more complex than that, but the technology is useful in myriad circumstances.
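A rough sketch of that cat-and-mouse loop in PyTorch is shown below. It is not the architecture behind any particular AI painting project; the network sizes, the flattened 28x28 "paintings," and the hyperparameters are all illustrative assumptions.

```python
# Minimal GAN sketch: a generator tries to produce fakes that a discriminator
# cannot tell apart from real examples in `real_batch` (shape: [batch, 784]).
import torch
import torch.nn as nn

latent_dim = 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),           # a fake flattened "painting"
)
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),          # probability the input is real
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real paintings from generated ones.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator so its output "passes" as real.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```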

Rather than just spitting out paintings, Goodfellow's GANs are also directly behind deepfakes and just about any other AI tech that seeks to blur the line between human-generated and AI-made.

In the five years since the GAN was invented, we've seen the field of AI rise from parlor tricks to producing machines capable of full-fledged superhuman feats. Thanks to deep learning, Boston Dynamics has developed robots capable of traversing rugged terrain autonomously, including an impressive amount of gymnastics. And Skydio developed the world's first consumer drone capable of truly autonomous navigation. We're in the safety-testing phase of truly useful robots, and driverless cars feel like they're just around the corner.

Furthermore, deep learning is at the heart of current efforts to produce general artificial intelligence (GAI), otherwise known as human-level AI. As most of us dream of living in a world where robot butlers, maids, and chefs attend to our every need, AI researchers and developers across the globe are adapting deep learning techniques to develop machines that can think. While it's clear we'll need more than just deep learning to achieve GAI, we wouldn't be on the cusp of the golden age of AI if it weren't for deep learning and the dedicated superheroes of machine learning responsible for its explosion over the past decade.

AI defined the 2010s, and deep learning was at the core of its influence. Sure, big data companies have used algorithms and AI for decades to rule the world, but the hearts and minds of the consumer class (the rest of us) were captivated more by the disembodied voices of our Google Assistant, Siri, and Alexa virtual assistants than by any other AI technology. Deep learning may be a bit of a dinosaur, on its own, at this point. But we'd be lost without it.

The next ten years will likely see the rise of a new class of algorithm, one that's better suited for use at the edge and, perhaps, one that harnesses the power of quantum computing. But you can be sure we'll still be using deep learning in 2029 and for the foreseeable future.



Dr. Max Welling on Federated Learning and Bayesian Thinking – Synced

Introduced by Google in 2017, Federated Learning (FL) enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud. Two years have passed, and several new research papers have proposed novel systems to boost FL performance. This March, for example, a team of researchers from Google proposed a scalable production system for FL designed to handle increasing workload and output through the addition of resources such as compute, storage, and bandwidth.

Earlier this month, NeurIPS 2019 in Vancouver hosted the workshop "Federated Learning for Data Privacy and Confidentiality," where academic researchers and industry practitioners discussed recent and innovative work in FL, open problems, and relevant approaches.

Professor Dr. Max Welling is the research chair in Machine Learning at the University of Amsterdam and VP Technologies at Qualcomm. Welling is known for his research on Bayesian inference, generative modeling, deep learning, variational autoencoders, and graph convolutional networks.

Below are excerpts from the workshop talk Dr. Welling gave on "Ingredients for Bayesian, Privacy Preserving, Distributed Learning," in which the professor shares his views on FL, the importance of distributed learning, and the Bayesian aspects of the domain.

The question can be separated into two parts. Why do we need distributed or federated inferencing? Maybe that is easier to answer. We need it because of reliability. If you're in a self-driving car, you clearly don't want to rely on a bad connection to the cloud in order to figure out whether you should brake. Latency: if you have your virtual reality glasses on and you have just a little bit of latency, you're not going to have a very good user experience. And then there's, of course, privacy: you don't want your data to get off your device. Also compute, maybe, because it's close to where you are, and personalization: you want models to be suited for you.

It took a little bit more thinking about why distributed learning is so important, especially within a company: how are you going to sell something like that? Privacy is the biggest factor here; there are many companies and factories that simply don't want their data to go off site, they don't want to have it go to the cloud. And so you want to do your training in-house. But there's also bandwidth. You know, moving around data is actually very expensive, and there's a lot of it. So it's much better to keep the data where it is and move the computation to the data. And also, personalization plays a role.

There are many challenges when you want to do this. The data could be extremely heterogeneous, so you could have a completely different distribution on one device than you have on another device. Also, the data sizes could be very different. One device could contain 10 times more data than another device. And the compute could be heterogeneous: you could have small devices with a little bit of compute that you can only use now and then, or can't use because the battery is down. There are other, bigger servers that you also want to have in your distribution of compute devices.

The bandwidth is limited, so you don't want to send huge amounts of even parameters. Let's say we don't move data, but we move parameters. Even then, you don't want to move loads and loads of parameters over the channel. So you maybe want to quantize them, and here I believe Bayesian thinking is going to be very helpful. And again, the data needs to be private, so you wouldn't want to send parameters that contain a lot of information about the data.

So first of all, of course, we're going to move model parameters, we're not going to move data. We have data stored at places and we're going to move the algorithm to that data. So basically you get your learning update, maybe privatized, and then you move it back to your central place where you're going to update it. And of course, bandwidth is another challenge that you have to solve.

We have these heterogeneous data sources and we have a lot of variability in the speed at which we can sync these updates. Here I think the Bayesian paradigm is going to come in handy because, for instance, if you have been running an update on a very large dataset, you can shrink your posterior parameters to a very small posterior. Whereas on another device, you might have much less data, and you might have a very wide posterior distribution for those parameters. Now, how to combine that? You shouldn't average them, it's silly. You should do a proper posterior update where the one that has a small, peaked posterior has a lot more weight than the one with a very wide posterior. Also, uncertainty estimates are important in that aspect.
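A minimal sketch of that combination rule, assuming each device summarizes a parameter with a Gaussian posterior (mean and variance): combining Gaussians weights each device by its precision (1/variance), so a sharply peaked posterior counts for much more than a wide one, rather than naively averaging the means.

```python
# Precision-weighted combination of per-device Gaussian posteriors for one parameter.
import numpy as np

def combine_gaussian_posteriors(means, variances):
    means, variances = np.asarray(means, float), np.asarray(variances, float)
    precisions = 1.0 / variances
    combined_var = 1.0 / precisions.sum()
    combined_mean = combined_var * (precisions * means).sum()
    return combined_mean, combined_var

# Device A saw lots of data (narrow posterior); device B saw little (wide posterior).
mean, var = combine_gaussian_posteriors(means=[0.80, 0.20], variances=[0.01, 1.0])
print(mean, var)   # the combined mean stays close to 0.80, the confident estimate
```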

The other thing is that with a Bayesian update, if you have a very wide posterior distribution, then you know that parameter is not going to be very important for making predictions. And so if you're going to send that parameter over a channel, you will have to quantize it, especially to save bandwidth. The ones that are very uncertain anyway you can quantize at a very coarse level, and the ones which have a very peaked posterior need to be encoded very precisely, and so you need much higher resolution for that. So there too, the Bayesian paradigm is going to be helpful.
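The following is a hedged illustration of that variance-aware quantization idea: parameters with wide posteriors get a coarse grid, sharply peaked ones get a fine grid. The step-size rule here is invented for illustration and is not Qualcomm's actual scheme.

```python
# Coarser quantization step for uncertain parameters, finer step for certain ones.
import numpy as np

def quantize(values, step):
    return np.round(values / step) * step

means = np.array([0.8134, -0.0021, 0.2479])
stds  = np.array([0.001,   0.5,    0.05])    # posterior uncertainty per parameter

steps = np.clip(stds, 1e-3, None)            # e.g. step proportional to posterior std
sent = quantize(means, steps)
print(sent)   # the certain parameter keeps fine detail, the uncertain one is rounded hard
```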

In terms of privacy, there is this interesting result that if you have an uncertain parameter and you draw a sample from that posterior parameter, then that single sample is more private than providing the whole distribution. There are results that show that you can get a certain level of differential privacy by just drawing a single sample from that posterior distribution. So effectively you're adding noise to your parameter, making it more private. Again, Bayesian thinking is synergistic with this sort of Bayesian federated learning scenario.
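A tiny sketch of the "one posterior sample" idea: instead of sending the full posterior, send a single draw from it, which acts like adding noise to the parameter. The Gaussian form and the numbers are illustrative assumptions, and this alone does not constitute a formal privacy guarantee.

```python
# Send one sample from the parameter's posterior rather than the posterior itself.
import numpy as np

rng = np.random.default_rng(0)
posterior_mean, posterior_std = 0.80, 0.05
privatized_update = rng.normal(posterior_mean, posterior_std)   # one noisy sample
print(privatized_update)   # what actually leaves the device
```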

We can do MCMC (Markov chain Monte Carlo) and variational-inference-based distributed learning. And there are advantages to doing that, because it makes the updates more principled and you can combine things where one of them might be based on a lot more data than another one.

Then we have "private and Bayesian," privatizing the updates of a variational Bayesian model. Many people have worked on many other of these intersections: we have deep learning models which have been privatized; we have quantization, which is important if you want to send your parameters over a noisy channel. And it's nice because the more you quantize, the more private things become. You can compute the level of quantization from your Bayesian posterior, so all these things are very nicely tied together.

People have looked at the relation between quantized models and Bayesian models: how can you use Bayesian estimates to quantize better? People have looked at quantized versus deep: to make your deep neural network run faster on a mobile phone, you want to quantize it. People have looked at distributed versus deep, that is, distributed deep learning. So many of these intersections have actually been researched, but it hasn't been put together. This is what I want to call for. We can try to put these things together, and at the core of all of this is Bayesian thinking; we can use it to execute better on this program.

Journalist: Fangyu Cai | Editor: Michael Sarazen



Can machine learning take over the role of investors? – TechHQ

As we dive deeper into the Fourth Industrial Revolution, there is no disputing how technology serves as a catalyst for growth and innovation for many businesses across a range of functions and industries.

But one technology that is steadily gaining prominence across organizations is machine learning (ML).

In the simplest terms, ML is the science of getting computers to learn and act like humans do without being explicitly programmed. It is a form of artificial intelligence (AI) and entails feeding a machine data, enabling the program to learn autonomously and enhance its accuracy in analyzing data.

The proliferation of technology means AI is now commonplace in our daily lives, with its presence in a panoply of things, such as driverless vehicles, facial recognition devices, and customer service applications.

Currently, asset managers are exploring the potential that AI/ML systems can bring to the finance industry; close to 60 percent of managers predict that ML will have a medium-to-large impact across businesses.

ML's ability to analyze large data sets and continuously self-develop through trial and error translates to increased speed and better performance in data analysis for financial firms.

For instance, according to the Harvard Business Review, ML can spot potentially outperforming equities by identifying new patterns in existing data sets and examining the collected responses of CEOs in the quarterly earnings calls of S&P 500 companies over the past 20 years.

Following this, ML can then formulate a review of good and bad stocks, thus providing organizations with valuable insights to drive important business decisions. This data also paves the way for the system to assess the trustworthiness of forecasts from specific company leaders and compare the performance of competitors in the industry.
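Below is a hedged, minimal sketch of the kind of text-based signal described above: score earnings-call language and relate it to subsequent stock performance. The tiny transcript snippets and labels are invented for illustration; real systems use far larger corpora and richer models.

```python
# Toy text classifier: TF-IDF features from earnings-call snippets, logistic
# regression predicting whether the stock later outperformed (hypothetical labels).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

transcripts = [
    "we delivered record revenue and strong margin expansion",
    "demand weakened and we are lowering full-year guidance",
    "subscriber growth accelerated across all regions",
    "we face significant headwinds and restructuring charges",
]
outperformed = [1, 0, 1, 0]   # 1 = stock later beat the index (invented labels)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(transcripts)
model = LogisticRegression().fit(X, outperformed)

new_call = ["guidance raised on strong demand"]
print(model.predict_proba(vectorizer.transform(new_call))[0, 1])  # estimated P(outperform)
```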

Besides that, ML also has the capacity to analyze various forms of data, including sound and images. In the past, such formats of information were challenging for computers to analyze, but today's ML algorithms can process images faster and better than humans.

For example, analysts use GPS locations from mobile devices to map foot traffic at retail hubs, or refer to point-of-sale data to trace revenues during major holiday seasons. Hence, data analysts can leverage this technological advancement to identify trends and new areas for investment.

It is evident that ML is full of potential, but it still has some big shoes to fill if it is to replace the role of an investor.

Nishant Kumar aptly explained this in Bloomberg: "Financial data is very noisy, markets are not stationary and powerful tools require deep understanding and talent that's hard to get." One quantitative analyst, or quant, estimates the failure rate in live tests at about 90 percent. Man AHL, a quant unit of Man Group, needed three years of work to gain enough confidence in a machine-learning strategy to devote client money to it. It later extended its use to four of its main money pools.

In other words, human talent and supervision are still essential to developing the right algorithms and to exercising sound investment judgment. After all, the purpose of a machine is to automate repetitive tasks. In this context, ML may seek out correlations in data without understanding their underlying rationale.

One ML expert said his team spends days evaluating whether patterns found by ML are sensible, predictive, consistent, and additive. Even if a pattern meets all four criteria, it may not bear much significance in supporting profitable investment decisions.

The bottom line is that ML can streamline data analysis steps, but it cannot replace human judgment. Thus, active equity managers should invest in ML systems to remain competitive in this innovate-or-die era. Financial firms that successfully recruit professionals with the right data skills and sharp investment judgment stand to be at the forefront of the digital economy.


Are We Overly Infatuated With Deep Learning? – Forbes


One of the factors often credited for this latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks as an evolution of machine learning algorithms, along with the corresponding large volumes of big data and computing power that make deep learning a practical reality. While deep learning has been extremely popular and has shown real ability to solve many machine learning problems, it is just one approach to machine learning (ML); though it has proven capable across a wide range of problem areas, it remains one of many practical approaches. Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some of the downsides of the deep learning approach. So is people's enthusiasm for AI tied to their enthusiasm for deep learning, and is deep learning really able to deliver on many of its promises?

The Origins of Deep Learning

AI researchers have struggled to understand how the brain learns from the very beginnings of the field of artificial intelligence. It comes as no surprise that, since the brain is primarily a collection of interconnected neurons, AI researchers sought to recreate the way the brain is structured through artificial neurons and connections of those neurons in artificial neural networks. All the way back in 1943, Walter Pitts and Warren McCulloch built the first thresholded logic unit, an attempt to mimic the way biological neurons worked. The Pitts and McCulloch model was just a proof of concept, but Frank Rosenblatt picked up on the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats: being able to recognize written numbers and letters, and even distinguish male from female faces. That was over 60 years ago!

Rosenblatt was so enthusiastic in 1959 about the Perceptron's promises that he remarked at the time that "the perceptron is the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in the images, and also how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and AI researcher peer Seymour Papert basically took apart the whole perceptron idea in their Perceptrons book, and made the claim that perceptrons, and neural networks like it, are fundamentally flawed in their inability to handle certain kinds of problems, notably non-linear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female, or types of numbers. For these simple neural networks, you can graph a bunch of data, draw a line, and say things on one side of the line are in one category and things on the other side of the line are in a different category, thereby classifying them. But there's a whole bunch of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These are nonlinear functions, which Minsky and Papert proved perceptrons incapable of solving.

During this period, while neural network approaches to ML settled into being an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period, IBM's purpose-built Deep Blue AI computer defeated Garry Kasparov in a chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than new-fangled deep learning approaches. Yet even this approach to learning didn't go far, as some said that this system wasn't even intelligent at all.

Yet the neural network story doesn't end here. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and crew detailed how you can use many hidden layers of neurons to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be calculated to identify specific features in the data sets they classify on, and as a group could learn nonlinear functions, something known as the universal approximation theorem. The approach works by backpropagating errors from higher layers of the network to lower ones (backprop), expediting training. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten images on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
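A self-contained sketch of the idea (not the 1986 paper's exact setup): a tiny hidden-layer network trained with backpropagation learns XOR, a nonlinear function that a single perceptron cannot represent. The layer size, learning rate, and iteration count are arbitrary choices for illustration.

```python
# Train a one-hidden-layer network on XOR by backpropagating errors (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)               # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)               # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                                 # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                                            # backpropagate errors
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(3).ravel())   # should approach [0, 1, 1, 0]
```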

Yet, just as things go in AI, research halted when these early neural networks couldn't scale. Surprisingly, very little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to have a simple two-layer network whose parameters are trained in an unsupervised way, and then stack new layers on top of it, training just that layer's parameters. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and understand something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.

In 2009, Stanford researcher Fei-Fei Li published the release of ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, such as animal or vehicle, down to very granular levels, such as husky or trimaran. This ImageNet database was paired with an annual competition called the Large Scale Visual Recognition Challenge (LSVRC) to see which computer vision system had the lowest number of classification and recognition errors. In 2012, Geoff Hinton, Alex Krizhevsky, and Ilya Sutskever submitted their AlexNet entry, which had almost half the number of errors of all previous winning entries. What made their approach win was that they moved from using ordinary computers with CPUs to specialized graphical processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout to reduce a problem called overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and something called the rectified linear activation unit (ReLU) to speed training. After the success of their competition entry, it seems everyone took notice, and deep learning was off to the races.
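The snippet below is a brief sketch (not the actual AlexNet) showing those two ingredients in PyTorch: ReLU activations to speed training and dropout to reduce overfitting. The shapes assume 32x32 RGB images and 10 classes, chosen purely for brevity.

```python
# A small CNN illustrating ReLU activations and dropout, two now-standard ingredients.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                            # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                            # 16x16 -> 8x8
    nn.Flatten(),
    nn.Dropout(p=0.5),                          # randomly zero activations during training
    nn.Linear(32 * 8 * 8, 10),                  # class scores
)

logits = model(torch.randn(4, 3, 32, 32))       # a dummy batch of 4 images
print(logits.shape)                             # torch.Size([4, 10])
```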

Deep Learning's Shortcomings

The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data sets are needed to train deep learning networks. The more layers, the better the learning power, but to have layers you need to have data that is already well labeled to train those layers. Since deep neural networks are primarily a bunch of calculations that have to all be done at the same time, you need a lot of raw computing power, and specifically numerical computing power. Imagine you're tuning a million knobs at the same time to find the optimal combination that will make the system learn, based on millions of pieces of data that are being fed into the system. This is why neural networks in the 1950s were not possible, but today they are. Today we finally have lots of data and lots of computing power to handle that data.

Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification, pattern matching, and the use of this automatically tuned deep neural network approach works well. However, these same advantages come with a number of disadvantages.

The most notable of these disadvantages is that since deep learning consists of many layers, each with many interconnected nodes, each configured with different weights and other parameters, there's no way to inspect a deep learning network and understand how any particular decision, clustering, or classification is actually done. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems that are used to make decisions of significance need to have explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to possibly explain deep learning neural networks, the lack of explainability is a significant drawback for many.

The second disadvantage is that deep learning networks are really great at classification and clustering of information, but not really good at other decision-making or learning scenarios. Not every learning situation is one of classifying something in a category or grouping information together into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.

As mentioned earlier, deep learning is also very data and resource hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned. For deep learning neural networks, there can be hundreds of millions of parameters. Training models requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set would hinder the development of a deep neural net for that problem domain. And even if you have the data, you need to crunch on it to generate the model, which takes a significant amount of time and processing power.
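As a quick illustration of how parameter counts add up, even a single fully connected layer of the size found in classic image models already carries millions of trainable values that the training data must pin down (a hypothetical layer, counted with PyTorch):

```python
# Count trainable parameters of one large fully connected layer.
import torch.nn as nn

layer = nn.Linear(4096, 4096)                    # one fully connected layer
n_params = sum(p.numel() for p in layer.parameters())
print(n_params)                                  # 4096*4096 weights + 4096 biases = 16,781,312
```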

Another challenge of deep learning is that the models produced are very specific to a problem domain. If it's trained on a certain dataset of cats, then it will only recognize those cats and can't be used to generalize on animals or be used to identify non-cats. While this is not a problem of only deep learning approaches to machine learning, it can be particularly troublesome when factoring in the overfitting problem mentioned above. Deep learning neural nets can be so tightly constrained (fitted) to the training data that, for example, even small perturbations in the images can lead to wildly inaccurate classifications of images. There are well-known examples of turtles being mis-recognized as guns or polar bears being mis-recognized as other animals due to just small changes in the image data. Clearly, if you're using this network in mission-critical situations, those mistakes would be significant.

Machine Learning is not (just) Deep Learning

Enterprises looking at using cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different approaches of various types that are applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path that's very suitable for particular situations, while others are very complex and use lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone AI. Even Geoff Hinton, the "Einstein" of deep learning, is starting to rethink core elements of deep learning and its limitations.

The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage that machine learning approach in practice. Since AI use in the enterprise is still continuing to gain adoption, especially these more advanced cognitive approaches, the best practices on how to employ cognitive technologies successfully are still maturing.


Machine Learning Market Accounted for US$ 1,289.5 Mn in 2016 and is expected to grow at a CAGR of 49.7% during the forecast period 2017–2025 – The…

The Machine Learning Market report presents a top-level view of the current market scenario and offers a deep analysis of machine learning with a focus on the reader's point of view, delivering detailed market facts and data insights. It covers the crucial factors that significantly affect the growth of the market at the global level. The report is the product of pinpoint market research and an in-depth investigation of market developments across different sectors, requiring accurate analysis, technology-based insight, and validation.

Get Sample PDF @ https://www.theinsightpartners.com/sample/TIPTE100000804/

Market Key Players:

Are you looking for a thorough analysis of the competition in the Machine Learning market? This research report offers the analysis you are looking for. The authors of the report are subject-matter experts who hold strong knowledge of and experience in market research. The report provides enough information and data to help readers gain an understanding of the vendor landscape.

This research gives you an idea of your targeted customers' understanding, needs, and demands. The Machine Learning industry is becoming increasingly dynamic and innovative, with a growing number of private players entering the industry.

Reason to Buy

The report presents a detailed overview of the industry, including both qualitative and quantitative records. It offers an assessment and forecast of the Machine Learning market based primarily on product and application. The report evaluates the market dynamics affecting the market over the forecast period, i.e., drivers, restraints, opportunities, and future trends, and provides an exhaustive PEST analysis for all five regions.

Ask for Discount @ https://www.theinsightpartners.com/discount/TIPTE100000804/

About The Insight Partners:

The Insight Partners is a one stop industry research provider of actionable intelligence. We help our clients in getting solutions to their research requirements through our syndicated and consulting research services. We specialize in industries such as Semiconductor and Electronics, Aerospace and Defense, Automotive and Transportation, Biotechnology, Healthcare IT, Manufacturing and Construction, Medical Device, Technology, Media and Telecommunications, Chemicals and Materials.

Contact Us:

Call: +1-646-491-9876 | Email: [emailprotected]

This post was originally published on The Picayune Current


Ten Predictions for AI and Machine Learning in 2020 – Database Trends and Applications

In 2019, artificial intelligence and machine learning continued their upward trajectory in the market, promising to change the future as we know it. To help support data management processes and decision making, artificial and augmented intelligence is being infused into products and services.

Machine learning sits at the center of all AI conversations, as combining machine learning with AI and cognitive technologies can make it even more effective in processing large volumes of information. Both technologies can lead to automation of tasks inside and outside the enterprise, another subject that promises to make waves in the future. Here, executives of leading companies offer 10 predictions for what's ahead in 2020.

The Rise of the AI-enabled Business Analyst; AI is No Longer for the Precious Few ML Experts and Data Scientists: Businesses have been working to break through the logjam of AI projects that have been back-burnered in the face of machine learning skills shortages. However, we're seeing the real-world reach of AI expand as more companies look at ways to foster collaboration, gain economies of scale, and accelerate their AI paths from concept to production with maturing tools. AI is no longer for the small minority of machine learning experts and data scientists. With data at their core, business analysts are also eager for a slice of the pie. With AI and ML tools at their disposal, the skills of business analysts are expanding toward data science to explore insights from more diverse and richer data sets through the use of machine learning. Technology and automated machine learning techniques will begin shifting the use of data and AI to a greater proportion of a company's business analysts. The demand for these skills is also starting to shape higher-ed curriculums to contend with this new wave of expectations. - Per Nyberg, chief commercial officer, Stradigi AI

ML gets operationalized: Companies adopt best practices to operationalize machine learning and go live in production for mission-critical processes. Silos will be broken and multi-disciplinary teams will emerge with data engineers, application developers, data scientists, and subject-matter experts. Companies will kill the data lake process and start focusing on applications. New tools to track the data science workflow will become the standard (e.g., MLflow) and new comprehensive data platforms will kill the Lambda Architecture. - Monte Zweben, CEO, Splice Machine

AI with Focus: There will be a shift to narrow AI that focuses on a single problem within an industry. Broad AI providers that promise to do everything with AI will diminish as more narrow, expert-level solutions are offered. The new offerings will produce tangible value for companies as others scramble to keep up. - Vance, director of AI, data science and emerging technology, TD Ameritrade

Object Storage will be Key to Processing AI and ML Workloads: As data volumes continue to explode, one of the key challenges is how to get the full strategic value of this data. In 2020, we will see a growing number of organizations capitalizing on object storage to create structured/tagged data from unstructured data, allowing metadata to be used to make sense of the tsunami of data generated by AI and ML workloads. While traditional file storage defines data with limited metadata tags (file name, date created, date last modified, etc.) and organizes it into different folders, object storage defines data with unconstrained types of metadata and locates it all from a single API, searchable and easy to analyze. - Jon Toor, CMO, Cloudian
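As a hedged illustration of what "unconstrained metadata" looks like in practice, here is a sketch using the standard AWS S3 API via boto3; the bucket name, object key, and tag values are made up for the example.

```python
# Store an object with user-defined metadata tags, then read the tags back.
import boto3

s3 = boto3.client("s3")
jpeg_bytes = b"<raw image bytes>"            # placeholder for the actual image content

s3.put_object(
    Bucket="ml-training-data",               # hypothetical bucket
    Key="frames/cam01/000123.jpg",
    Body=jpeg_bytes,
    Metadata={                               # unconstrained, user-defined key/value tags
        "source": "cam01",
        "label": "pedestrian",
        "model-version": "v2.3",
    },
)

# The same metadata comes back with a head_object call, making objects searchable/analyzable.
tags = s3.head_object(Bucket="ml-training-data", Key="frames/cam01/000123.jpg")["Metadata"]
print(tags)
```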

AI becomes a standard technique: Between random forests, linear regression, and other search patterns, AI has become a standard technique. AI, like standard numeric techniques, is best done with compute close to data. This means the techniques of big data (separating compute and data) are a poor choice, just like they were for the majority of analytics. Running AI as code on a compute grid, or within your database, does not allow the kinds of optimizations that an AI framework or an AI-centric query system can provide. In 5 years, we'll wonder why custom code lasted so long in the AI space. - Brian Bulkowski, CTO, Yellowbrick Data

AI/ML will overcome challenges related to stream processing: To achieve low latency stream processing and high throughput at scale and in real-time, AI models and applications must be iterative and responsive to change. 44% of IT decision-makers find it extremely difficult to manage advances in technology speed, a recent Hazelcast survey found. The marriage of stream processing and AI will enable companies dealing with a massive volume of real-time events to generate immediate value from their data, opening up more opportunities for innovation. It will also allow developers to more quickly identify anomalies, respond to events or publish events in a data repository for storage and historical analyses, ultimately impacting business outcomes. - John DesJardins, VP of solution architecture & CTO, Hazelcast

The next wave of digital transformation will be led by AI modeling and natural language processing: As of 2019, AI modeling and language processing technologies are robust but not packaged accessibly enough for everyone who could make use of them. When everyone from business analysts to data scientists has full access, real improvements will rapidly accelerate. It's not about coding; the future is now about how abilities are packaged to transfer skills and enable people to get moving faster. - Alan Jacobson, chief data and analytics officer, Alteryx

AI will go from identifying trends to making intelligent decisions: AI will begin to drive real-world productivity across all aspects of business in 2020. As companies start using AI to gain deeper insights and understand trends, the technology will lead to more prescriptive actions and further automation of tasks. As AI continues to improve, we will see AI taking automatic actions that are intelligent. As humans become more familiar with this newfound intelligence, they will remove themselves from the equation, and businesses will benefit from greater productivity gains. For example, right now AI can predict when a printer needs a new toner cartridge, but taking a step further, AI can order the toner before it runs out, creating a seamless experience. - Dave Wright, chief innovation officer, ServiceNow

AI can tackle climate change: Climate change may be the biggest challenge of our time. And it will continue to be a significant topic of discussion in the year ahead. A challenge this big and literally Earth-changing calls on humanity to use every tool at our disposal to address it. Artificial intelligence can play a critical role on this front. AI can contribute to everything from CO2 removal to creating more energy-efficient buildings and optimizing energy production. It can also enable better climate predictions, better illustrate the effects of extreme weather and track the sources of pollution. - Asheesh Mehra, CEO, AntWorks

AI Knowledge Graphs will Debunk Fake News: Knowledge Graphs in combination with deep learning will be used to identify photos and video that have been altered by superimposing existing images and videos onto source images. Machine learning knowledge graphs will also unveil the origin of digital information that has been published by a foreign source. Media outlets and social networks will use AI Knowledge Graphs as a tool to determine whether to publish information or remove it. - Dr. Jans Aasman, CEO of Franz, Inc.

