Data to the Rescue! Predicting and Preventing Accidents at Sea – JAXenter

Watch Dr. Yonit Hoffman's Machine Learning Conference session

Accidents at sea happen all the time. Their costs in terms of lives, money and environmental destruction are huge. Wouldn't it be great if they could be predicted and perhaps prevented? Dr. Yonit Hoffman's Machine Learning Conference session discusses new ways of preventing sea accidents with the power of data science.

Does machine learning hold the key to preventing accidents at sea?

With more than 350 years of history, the marine insurance industry was the first profession to try to predict accidents and estimate future risk from data. Yet the old ways no longer work: new waves of data and algorithms can offer significant improvements and are going to revolutionise the industry.

In her Machine Learning Conference session, Dr. Yonit Hoffman will show that it is now possible to predict accidents, and how data on a ship's behaviour such as location, speed, maps and weather can help. She will show how fragments of information on ship movements can be gathered and taken all the way to machine learning models. In this session, she discusses the challenges, including introducing machine learning to an industry that still uses paper and quills (yes, really!) and explaining the models using SHAP.

Dr. Yonit Hoffman is a Senior Data Scientist at Windward, a world leader in maritime risk analytics. Before investigating supertanker accidents, she researched human cells and cancer at the Weizmann Institute, where she received her PhD and MSc in Bioinformatics. Yonit also holds a BSc in computer science and biology from Tel Aviv University.

Go here to see the original:
Data to the Rescue! Predicting and Preventing Accidents at Sea - JAXenter

What is machine learning? Everything you need to know | ZDNet

Machine learning is enabling computers to tackle tasks that have, until now, only been carried out by people.

From driving cars to translating speech, machine learning is driving an explosion in the capabilities of artificial intelligence -- helping software make sense of the messy and unpredictable real world.

But what exactly is machine learning and what is making the current boom in machine learning possible?

At a very high level, machine learning is the process of teaching a computer system how to make accurate predictions when fed data.

Those predictions could be answering whether a piece of fruit in a photo is a banana or an apple, spotting people crossing the road in front of a self-driving car, deciding whether the use of the word "book" in a sentence relates to a paperback or a hotel reservation, judging whether an email is spam, or recognizing speech accurately enough to generate captions for a YouTube video.

The key difference from traditional computer software is that a human developer hasn't written code that instructs the system how to tell the difference between the banana and the apple.

Instead, a machine-learning model has been taught how to reliably discriminate between the fruits by being trained on a large amount of data, in this instance likely a huge number of images labelled as containing a banana or an apple.

Data, and lots of it, is the key to making machine learning possible.

Machine learning may have enjoyed enormous success of late, but it is just one method for achieving artificial intelligence.

At the birth of the field of AI in the 1950s, AI was defined as any machine capable of performing a task that would typically require human intelligence.

AI systems will generally demonstrate at least some of the following traits: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

Alongside machine learning, there are various other approaches used to build AI systems, including evolutionary computation, where algorithms undergo random mutations and combinations between generations in an attempt to "evolve" optimal solutions, and expert systems, where computers are programmed with rules that allow them to mimic the behavior of a human expert in a specific domain, for example an autopilot system flying a plane.

Machine learning is generally split into two main categories: supervised and unsupervised learning.

The first of these, supervised learning, basically teaches machines by example.

During training for supervised learning, systems are exposed to large amounts of labelled data, for example images of handwritten figures annotated to indicate which number they correspond to. Given sufficient examples, a supervised-learning system would learn to recognize the clusters of pixels and shapes associated with each number and eventually be able to recognize handwritten numbers, reliably distinguishing between the numbers 9 and 4 or 6 and 8.
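As a rough illustration of this idea (not taken from the article), here is a minimal supervised-learning sketch in Python using scikit-learn's bundled handwritten-digit images; the model only ever sees labelled examples and is then tested on digits it has never seen.

```python
# Minimal supervised learning: labelled images of handwritten digits train a
# classifier that is then asked to label digits it has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()                                   # 8x8 images, labelled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000)                # a simple baseline classifier
model.fit(X_train, y_train)                              # learn from the labelled examples

print("accuracy on unseen digits:", accuracy_score(y_test, model.predict(X_test)))
```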

However, training these systems typically requires huge amounts of labelled data, with some systems needing to be exposed to millions of examples to master a task.

As a result, the datasets used to train these systems can be vast, with Google's Open Images Dataset having about nine million images, its labeled video repository YouTube-8M linking to seven million labeled videos and ImageNet, one of the early databases of this kind, having more than 14 million categorized images. The size of training datasets continues to grow, with Facebook recently announcing it had compiled 3.5 billion images publicly available on Instagram, using hashtags attached to each image as labels. Using one billion of these photos to train an image-recognition system yielded record levels of accuracy -- of 85.4 percent -- on ImageNet's benchmark.

The laborious process of labeling the datasets used in training is often carried out using crowdworking services, such as Amazon Mechanical Turk, which provides access to a large pool of low-cost labor spread across the globe. For instance, ImageNet was put together over two years by nearly 50,000 people, mainly recruited through Amazon Mechanical Turk. However, Facebook's approach of using publicly available data to train systems could provide an alternative way of training systems using billion-strong datasets without the overhead of manual labeling.

In contrast, unsupervised learning tasks algorithms with identifying patterns in data, trying to spot similarities that split that data into categories.

An example might be Airbnb clustering together houses available to rent by neighborhood, or Google News grouping together stories on similar topics each day.

The algorithm isn't designed to single out specific types of data; it simply looks for data that can be grouped by its similarities, or for anomalies that stand out.
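As a hedged sketch of the same idea, the snippet below uses k-means clustering to group unlabelled points purely by similarity; the coordinates are invented stand-ins for, say, rental listings.

```python
# Unsupervised learning: k-means groups unlabelled points by similarity,
# without being told what any cluster "means". The coordinates are made up.
import numpy as np
from sklearn.cluster import KMeans

listings = np.array([
    [40.71, -74.00], [40.72, -74.01], [40.70, -73.99],     # one neighbourhood
    [34.05, -118.24], [34.06, -118.25], [34.04, -118.23],  # another
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(listings)
print(kmeans.labels_)   # e.g. [0 0 0 1 1 1] -- two discovered groups
```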

The importance of huge sets of labelled data for training machine-learning systems may diminish over time, due to the rise of semi-supervised learning.

As the name suggests, the approach mixes supervised and unsupervised learning. The technique relies upon using a small amount of labelled data and a large amount of unlabelled data to train systems. The labelled data is used to partially train a machine-learning model, and then that partially trained model is used to label the unlabelled data, a process called pseudo-labelling. The model is then trained on the resulting mix of the labelled and pseudo-labelled data.
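A minimal pseudo-labelling sketch follows, assuming you already have arrays X_labelled, y_labelled and X_unlabelled; it is schematic, not a production recipe.

```python
# Semi-supervised pseudo-labelling: train on the small labelled set, let the
# partially trained model label the unlabelled pool, then retrain on the mix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pseudo_label_train(X_labelled, y_labelled, X_unlabelled, confidence=0.9):
    model = RandomForestClassifier(random_state=0)
    model.fit(X_labelled, y_labelled)                  # step 1: partial training

    probs = model.predict_proba(X_unlabelled)          # step 2: pseudo-label the rest
    confident = probs.max(axis=1) >= confidence        # keep only confident guesses
    pseudo_y = model.classes_[probs.argmax(axis=1)[confident]]

    X_mix = np.vstack([X_labelled, X_unlabelled[confident]])
    y_mix = np.concatenate([y_labelled, pseudo_y])

    model.fit(X_mix, y_mix)                            # step 3: retrain on the mix
    return model
```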

The viability of semi-supervised learning has been boosted recently by Generative Adversarial Networks (GANs), machine-learning systems that can use labelled data to generate completely new data, for example creating new images of Pokemon from existing images, which in turn can be used to help train a machine-learning model.

Were semi-supervised learning to become as effective as supervised learning, then access to huge amounts of computing power may end up being more important for successfully training machine-learning systems than access to large, labelled datasets.

A third approach, reinforcement learning, can be understood by thinking about how someone might learn to play an old-school computer game for the first time, when they aren't familiar with the rules or how to control the game. While they may be a complete novice, eventually, by looking at the relationship between the buttons they press, what happens on screen and their in-game score, their performance will get better and better.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has beaten humans in a wide range of vintage video games. The system is fed pixels from each game and determines various information about the state of the game, such as the distance between objects on screen. It then considers how the state of the game and the actions it performs in game relate to the score it achieves.

Over many cycles of playing the game, the system eventually builds a model of which actions will maximize the score in which circumstance -- for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
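The sketch below shows the much simpler tabular form of this idea, Q-learning: a table of expected scores for each state and action is nudged toward observed rewards. It is illustrative only and is not DeepMind's Deep Q-network.

```python
# Tabular Q-learning: keep an estimate of the future score for every
# (state, action) pair and move it toward reward + discounted best next score.
import numpy as np

n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))            # expected future score per state/action
alpha, gamma, epsilon = 0.1, 0.9, 0.1          # learning rate, discount, exploration

def choose_action(state):
    if np.random.rand() < epsilon:             # occasionally explore at random
        return np.random.randint(n_actions)
    return int(Q[state].argmax())              # otherwise pick the best known action

def update(state, action, reward, next_state):
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])
```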

Everything begins with training a machine-learning model, a mathematical function capable of repeatedly modifying how it operates until it can make accurate predictions when given fresh data.

Before training begins, you first have to choose which data to gather and decide which features of the data are important.

A hugely simplified example of what data features are is given in this explainer by Google, where a machine learning model is trained to recognize the difference between beer and wine, based on two features: the drinks' color and their alcohol by volume (ABV).

Each drink is labelled as a beer or a wine, and then the relevant data is collected, using a spectrometer to measure their color and a hydrometer to measure their alcohol content.

An important point to note is that the data has to be balanced, in this instance to have a roughly equal number of examples of beer and wine.

The gathered data is then split, into a larger proportion for training, say about 70 percent, and a smaller proportion for evaluation, say the remaining 30 percent. This evaluation data allows the trained model to be tested to see how well it is likely to perform on real-world data.

Before training gets underway there will generally also be a data-preparation step, during which processes such as deduplication, normalization and error correction will be carried out.
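The steps above -- gathering a balanced, labelled dataset, cleaning it and splitting it roughly 70/30 -- might look like the following sketch; the file drinks.csv and its columns are hypothetical stand-ins for the Google example.

```python
# Data preparation and train/evaluation split for a beer-vs-wine style dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("drinks.csv")                 # hypothetical columns: color, abv, label
df = df.drop_duplicates().dropna()             # deduplication and basic error removal

X, y = df[["color", "abv"]], df["label"]
X_train, X_eval, y_train, y_eval = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)    # ~70/30 split

scaler = MinMaxScaler().fit(X_train)           # normalization learned from training data only
X_train, X_eval = scaler.transform(X_train), scaler.transform(X_eval)
```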

The next step will be choosing an appropriate machine-learning model from the wide variety available. Each has strengths and weaknesses depending on the type of data; for example, some are suited to handling images, some to text, and some to purely numerical data.

Basically, the training process involves the machine-learning model automatically tweaking how it functions until it can make accurate predictions from data, in the Google example, correctly labeling a drink as beer or wine when the model is given a drink's color and ABV.

A good way to explain the training process is to consider an example using a simple machine-learning model, known as linear regression with gradient descent. In the following example, the model is used to estimate how many ice creams will be sold based on the outside temperature.

Imagine taking past data showing ice cream sales and outside temperature, and plotting that data against each other on a scatter graph -- basically creating a scattering of discrete points.

To predict how many ice creams will be sold in future based on the outdoor temperature, you can draw a line that passes through the middle of all these points, similar to the illustration below.

Once this is done, ice cream sales can be predicted at any temperature by finding the point at which the line passes through a particular temperature and reading off the corresponding sales at that point.

Bringing it back to training a machine-learning model, in this instance training a linear regression model would involve adjusting the vertical position and slope of the line until it lies in the middle of all of the points on the scatter graph.

At each step of the training process, the vertical distance of each of these points from the line is measured. If a change in slope or position of the line results in the distance to these points increasing, then the slope or position of the line is changed in the opposite direction, and a new measurement is taken.

In this way, via many tiny adjustments to the slope and the position of the line, the line will keep moving until it eventually settles in a position which is a good fit for the distribution of all these points, as seen in the video below. Once this training process is complete, the line can be used to make accurate predictions for how temperature will affect ice cream sales, and the machine-learning model can be said to have been trained.
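A minimal version of that training loop, with made-up temperature and sales figures, might look like this; each pass nudges the slope and intercept against the gradient of the squared error.

```python
# Linear regression fitted by gradient descent, as in the ice-cream example.
import numpy as np

temps = np.array([15.0, 18.0, 21.0, 24.0, 27.0, 30.0])   # outside temperature (made up)
sales = np.array([12.0, 19.0, 28.0, 35.0, 44.0, 52.0])   # ice creams sold (made up)

slope, intercept, lr = 0.0, 0.0, 0.001                    # start with a flat line
for _ in range(200_000):
    error = slope * temps + intercept - sales             # vertical distance to each point
    slope -= lr * 2 * np.mean(error * temps)              # step opposite the gradient
    intercept -= lr * 2 * np.mean(error)

print(f"predicted sales at 25 degrees: {slope * 25 + intercept:.1f}")
```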

While training for more complex machine-learning models such as neural networks differs in several respects, it is similar in that it also uses a "gradient descent" approach, where the value of "weights" that modify input data are repeatedly tweaked until the output values produced by the model are as close as possible to what is desired.

Once training of the model is complete, the model is evaluated using the remaining data that wasn't used during training, helping to gauge its real-world performance.

To further improve performance, training parameters can be tuned. An example might be altering the extent to which the "weights" are altered at each step in the training process.

A very important group of algorithms for both supervised and unsupervised machine learning are neural networks. These underlie much of machine learning, and while simple models like linear regression can be used to make predictions based on a small number of data features, as in the Google example with beer and wine, neural networks are useful when dealing with large sets of data with many features.

Neural networks, whose structure is loosely inspired by that of the brain, are interconnected layers of algorithms, called neurons, which feed data into each other, with the output of the preceding layer being the input of the subsequent layer.

Each layer can be thought of as recognizing different features of the overall data. For instance, consider the example of using machine learning to recognize handwritten numbers between 0 and 9. The first layer in the neural network might measure the color of the individual pixels in the image, the second layer could spot shapes, such as lines and curves, the next layer might look for larger components of the written number -- for example, the rounded loop at the base of the number 6. This carries on all the way through to the final layer, which will output the probability that a given handwritten figure is a number between 0 and 9.

The network learns how to recognize each component of the numbers during the training process, by gradually tweaking the importance of data as it flows between the layers of the network. This is possible due to each link between layers having an attached weight, whose value can be increased or decreased to alter that link's significance. At the end of each training cycle the system will examine whether the neural network's final output is getting closer or further away from what is desired -- for instance, is the network getting better or worse at identifying a handwritten number 6. To close the gap between the actual output and desired output, the system will then work backwards through the neural network, altering the weights attached to all of these links between layers, as well as an associated value called bias. This process is called back-propagation.

Eventually this process will settle on values for these weights and biases that will allow the network to reliably perform a given task, such as recognizing handwritten numbers, and the network can be said to have "learned" how to carry out a specific task.
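The small network below, written with the Keras API, mirrors the layered digit-recognition example: the fit call runs the forward pass, back-propagation and weight updates automatically. It assumes TensorFlow is installed and is only a sketch of the idea described above.

```python
# A small neural network for handwritten-digit recognition.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0         # scale pixel values to 0-1

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),                 # raw 28x28 pixel images in
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),         # intermediate features
    tf.keras.layers.Dense(10, activation="softmax"),       # probability for each digit 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)                      # weights and biases tuned by back-propagation
print(model.evaluate(x_test, y_test))                      # loss and accuracy on unseen digits
```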

An illustration of the structure of a neural network and how training works.

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently devising a more efficient design for an effective type of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The AI technique of evolutionary algorithms is even being used to optimize neural networks, thanks to a process called neuroevolution. The approach was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

While machine learning is not a new technique, interest in the field has exploded in recent years.

This resurgence comes on the back of a series of breakthroughs, with deep learning setting new records for accuracy in areas such as speech and language recognition, and computer vision.

What's made these successes possible are primarily two factors, one being the vast quantities of images, speech, video and text that are accessible to researchers looking to train machine-learning systems.

But even more important is the availability of vast amounts of parallel-processing power, courtesy of modern graphics processing units (GPUs), which can be linked together into clusters to form machine-learning powerhouses.

Today anyone with an internet connection can use these clusters to train machine-learning models, via cloud services provided by firms like Amazon, Google and Microsoft.

As the use of machine learning has taken off, companies are now creating specialized hardware tailored to running and training machine-learning models. An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train models for Google DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end GPUs, and the recently announced third-generation TPUs able to accelerate training and inference even further.

As hardware becomes increasingly specialized and machine-learning software frameworks are refined, it's becoming increasingly common for ML tasks to be carried out on consumer-grade phones and computers, rather than in cloud datacenters. In the summer of 2018, Google took a step towards offering the same quality of automated translation on phones that are offline as is available online, by rolling out local neural machine translation for 59 languages to the Google Translate app for iOS and Android.

Perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, a feat that wasn't expected until 2026. Go is an ancient Chinese game whose complexity bamboozled computers for decades. Go has about 200 moves per turn, compared to about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational standpoint. Instead, AlphaGo was trained to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training the deep-learning networks needed can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

DeepMind continues to break new ground in the field of machine learning. In July 2018, DeepMind reported that its AI agents had taught themselves how to play the 1999 multiplayer 3D first-person shooter Quake III Arena well enough to beat teams of human players. These agents learned how to play the game using no more information than the human players, with their only input being the pixels on the screen as they tried out random actions in game, and feedback on their performance during each game.

More recently DeepMind demonstrated an AI agent capable of superhuman performance across multiple classic Atari games, an improvement over earlier approaches where each AI agent could only perform well at a single game. DeepMind researchers say these general capabilities will be important if AI research is to tackle more complex real-world domains.

Machine learning systems are used all around us, and are a cornerstone of the modern internet.

Machine-learning systems are used to recommend which product you might want to buy next on Amazon, or which video you may want to watch on Netflix.

Every Google search uses multiple machine-learning systems, from understanding the language in your query through to personalizing your results, so fishing enthusiasts searching for "bass" aren't inundated with results about guitars. Similarly, Gmail's spam and phishing-recognition systems use trained machine-learning models to keep your inbox clear of rogue messages.
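A toy version of such a spam filter, with invented example messages, might look like the sketch below; real systems are trained on vastly more data, but the shape of the pipeline is similar.

```python
# Toy spam filter: a model trained on labelled messages learns which word
# patterns signal spam. The example messages are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "claim your free reward",
            "meeting moved to 3pm", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["free prize waiting for you"]))   # -> ['spam']
```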

One of the most obvious demonstrations of the power of machine learning is virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

Each relies heavily on machine learning to support their voice recognition and ability to understand natural language, as well as needing an immense corpus to draw upon to answer queries.

But beyond these very visible manifestations of machine learning, systems are starting to find a use in just about every industry. These applications include: computer vision for driverless cars, drones and delivery robots; speech and language recognition and synthesis for chatbots and service robots; facial recognition for surveillance in countries like China; helping radiologists to pick out tumors in X-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs in healthcare; allowing for predictive maintenance on infrastructure by analyzing IoT sensor data; underpinning the computer vision that makes the cashierless Amazon Go supermarket possible; and offering reasonably accurate transcription and translation of speech for business meetings -- the list goes on and on.

Deep learning could eventually pave the way for robots that can learn directly from humans, with researchers from Nvidia recently creating a deep-learning system designed to teach a robot how to carry out a task simply by observing that job being performed by a human.

As you'd expect, the choice and breadth of data used to train systems will influence the tasks they are suited to.

For example, in 2016 Rachael Tatman, a National Science Foundation Graduate Research Fellow in the Linguistics Department at the University of Washington, found that Google's speech-recognition system performed better for male voices than female ones when auto-captioning a sample of YouTube videos, a result she ascribed to 'unbalanced training sets' with a preponderance of male speakers.

As machine-learning systems move into new areas, such as aiding medical diagnosis, the possibility of systems being skewed towards offering a better service or fairer treatment to particular groups of people will likely become more of a concern.

A heavily recommended course for beginners to teach themselves the fundamentals of machine learning is this free Stanford University and Coursera lecture series by AI expert and Google Brain founder Andrew Ng.

Another highly-rated free online course, praised for both the breadth of its coverage and the quality of its teaching, is this EdX and Columbia University introduction to machine learning, although students do mention it requires a solid knowledge of math up to university level.

Technologies designed to allow developers to teach themselves about machine learning are increasingly common, from AWS' deep-learning enabled camera DeepLens to Google's Raspberry Pi-powered AIY kits.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to the hardware needed to train and run machine-learning models, with Google letting Cloud Platform users test out its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

This cloud-based infrastructure includes the data stores needed to hold the vast amounts of training data, services to prepare that data for analysis, and visualization tools to display the results clearly.

Newer services even streamline the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise, similar to Microsoft's Azure Machine Learning Studio. In a similar vein, Amazon recently unveiled new AWS offerings designed to accelerate the process of training up machine-learning models.

For data scientists, Google's Cloud ML Engine is a managed machine-learning service that allows users to train, deploy and export custom machine-learning models based either on Google's open-sourced TensorFlow ML framework or the open neural network framework Keras, and which now can be used with the Python library scikit-learn and XGBoost.

Database admins without a background in data science can use Google's BigQuery ML, a beta service that allows admins to call trained machine-learning models using SQL commands, allowing predictions to be made in-database, which is simpler than exporting data to a separate machine-learning and analytics environment.

For firms that don't want to build their own machine-learning models, the cloud platforms also offer AI-powered, on-demand services -- such as voice, vision, and language recognition. Microsoft Azure stands out for the breadth of on-demand services on offer, closely followed by Google Cloud Platform and then AWS.

Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella.

Early in 2018, Google expanded its machine-learning driven services to the world of advertising, releasing a suite of tools for making more effective ads, both digital and physical.

While Apple doesn't enjoy the same reputation for cutting-edge speech recognition, natural language processing and computer vision as Google and Amazon, it is investing in improving its AI services, recently putting Google's former AI chief in charge of machine learning and AI strategy across the company, including the development of its assistant Siri and its Core ML machine-learning framework.

In September 2018, NVIDIA launched a combined hardware and software platform designed to be installed in datacenters that can accelerate the rate at which trained machine-learning models can carry out voice, video and image recognition, as well as other ML-related services.

The NVIDIA TensorRT Hyperscale Inference Platform uses NVIDIA Tesla T4 GPUs, which deliver up to 40x the performance of CPUs when using machine-learning models to make inferences from data, and the TensorRT software platform, which is designed to optimize the performance of trained neural networks.

There are a wide variety of software frameworks for getting started with training and running machine-learning models, typically for the programming languages Python, R, C++, Java and MATLAB.

Famous examples include Google's TensorFlow, the open-source library Keras, the Python library Scikit-learn, the deep-learning framework Caffe and the machine-learning library Torch.

Read more here:
What is machine learning? Everything you need to know | ZDNet

Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages – Business Wire

TAMPA, Fla. & SEATTLE--(BUSINESS WIRE)--Syniverse, the world's most connected company, and RealNetworks, a leader in digital media software and services, today announced they have incorporated sophisticated machine learning (ML) features into their integrated offering that gives carriers visibility and control over mobile messaging traffic. By integrating RealNetworks' Kontxt application-to-person (A2P) message categorization capabilities into Syniverse Messaging Clarity, mobile network operators (MNOs), internet service providers (ISPs), and messaging aggregators can identify and block spam, phishing, and malicious messages while prioritizing legitimate A2P traffic, better monetizing their services.

Syniverse Messaging Clarity, the first end-to-end messaging visibility solution, utilizes a best-in-class grey-route firewall and clearing and settlement tools to maximize messaging revenue streams, better control spam traffic, and partner closely with enterprises. The solution analyzes the delivery of messages before categorizing them into specific groupings, including messages being sent from one person to another person (P2P), A2P messages, or outright spam. Through its existing clearing and settlement capabilities, Messaging Clarity can transform upcoming technologies like Rich Communication Services (RCS) and chatbots into revenue-generating products and services without the clutter and cost of spam or fraud.

The foundational Kontxt technology adds natural language processing and deep learning techniques to Messaging Clarity to continually update and improve its understanding and classification of messages. This new feature adds to Messaging Clarity's ability to identify, categorize, and ascribe a monetary value to the immense volume and complexity of messages that are delivered through text messaging, chatbots, and other channels.

The Syniverse and RealNetworks Kontxt message classification gives companies the ability to ensure that urgent messages, like one-time passwords, are sent at a premium rate compared with lower-priority notifications, such as promotional offers. The Syniverse Messaging Clarity solution also helps eliminate instances of SMS phishing (smishing). This type of attack recently occurred with a global shipping company, when spam texts were sent to consumers asking them to click a link to receive an update on a package delivery for a phantom order.

Supporting Quotes

Syniverse offers companies the capability to use machine learning technologies to gain insight into what traffic is flowing through their networks, while simultaneously ensuring consumer privacy and keeping the actual contents of the messages hidden. The Syniverse Messaging Clarity solution can generate statistics examining the type of traffic sent and whether it deviates from the sender's traffic pattern. From there, the technology analyzes if the message is a valid one or spam and blocks the spam.

The self-learning Kontxt algorithms within the Syniverse Messaging Clarity solution allow its threat-assessment techniques to evolve with changes in message traffic. Our analytics also verify that sent messages conform to network standards pertaining to spam and fraud. By deploying Messaging Clarity, MNOs and ISPs can help ensure their compliance with local regulations across the world, including the U.S. Telephone Consumer Protection Act, while also avoiding potential costs associated with violations. And, ultimately, the consumer -- who is the recipient of more appropriate text messages and less spam -- wins as well, as our Kontxt technology within the Messaging Clarity solution works to enhance customer trust and improve the overall customer experience.

About Syniverse

As the world's most connected company, Syniverse helps mobile operators and businesses manage and secure their mobile and network communications, driving better engagements and business outcomes. For more than 30 years, Syniverse has been the trusted spine of mobile communications, delivering the industry-leading innovations in software and services that now connect more than 7 billion devices globally and process over $35 billion in mobile transactions each year. Syniverse is headquartered in Tampa, Florida, with global offices in Asia Pacific, Africa, Europe, Latin America and the Middle East.

About RealNetworks

Building on a legacy of digital media expertise and innovation, RealNetworks has created a new generation of products that employ best-in-class artificial intelligence and machine learning to enhance and secure our daily lives. Kontxt (www.kontxt.com) is the foremost platform for categorizing A2P messages to help mobile carriers build customer loyalty and drive new revenue through text message classification and antispam. SAFR (www.safr.com) is the world's premier facial recognition platform for live video. Leading in real-world performance and accuracy as tested by NIST, SAFR enables new applications for security, convenience, and analytics. For information about our other products, visit http://www.realnetworks.com.

RealNetworks, Kontxt, SAFR and the company's respective logos are trademarks, registered trademarks, or service marks of RealNetworks, Inc. Other products and company names mentioned are the trademarks of their respective owners.

Results shown from NIST do not constitute an endorsement of any particular system, product, service, or company by NIST: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt-ongoing.

Continued here:
Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages - Business Wire

Pluto7, a Google Cloud Premier Partner, Achieved the Machine Learning Specialization and is Recognized by Google Cloud as a Machine Learning…

Pluto7 is a services and solutions company focused on accelerating business transformation. As a Google Cloud Premier Partner, we service the retail, manufacturing, healthcare, and hi-tech industries.

Pluto7 just achieved the Google Cloud Machine Learning Specialization for combining business consultancy and unique machine learning solutions built on Google Cloud.

Pluto7 brings unique capabilities for machine learning, artificial intelligence, and analytics, delivered by a company that contains some of the finest minds in data science and is able to draw on its surroundings in the very heart of Silicon Valley, California.

Businesses are looking for practical solutions to real-world challenges. And by that, we do not just mean providing the tech and leaving you to stitch it all together. Instead, Pluto7's approach is to apply innovation to your desired outcome, alongside the experience needed to make it all happen. This is where their range of consultancy services comes into play. These are designed to create an interconnected tech stack and to champion data empowerment through ML/AI.

Pluto7's services and solutions allow businesses to speed up and scale out sophisticated machine learning models. They have successfully guided many businesses through the digital transformation process by leveraging the power of artificial intelligence, analytics, and IoT solutions.

What does it mean for a partner to be specialized?

When you see a Google Cloud partner with a Specialization, it indicates proficiency and experience with Google Cloud. Pluto7 is recognized by Google Cloud as a machine learning specialist with deep technical capabilities. Organizations that receive this distinction demonstrate their ability to lead a customer through the entire AI journey. Pluto7 designs, builds, migrates, tests, and operates industry-specific solutions for its customers.

Pluto7 has a plethora of previous experience in deploying accelerated solutions and custom applications in machine learning and AI. The many proven success stories from industry leaders like AB InBev, DxTerity, L-Nutra, CDD, USC and UNM are publicly available on their website. These customers have leveraged Pluto7 and Google Cloud technology to see tangible and transformative results.

On top of all this, Pluto7 has a business plan that aligns with the Specialization. Because of their design, build, and implementation methodologies, they are able to successfully drive innovation, accelerate business transformation, and boost human creativity.

ML Services and Solutions

Pluto7 has created industry-specific use cases for marketing, sales, and supply chains and integrated these to deliver a game-changing customer experience. These capabilities are brought to life through their partnership with Google Cloud, one of the most innovative platforms for AI and ML out there. The following solution suites are created to solve some of the most difficult problems through a combination of innovative technology and deep industry expertise.

Demand ML - Increase efficiency and lower costs

Pluto7 helps supply chain leaders manage unpredictable fluctuations. These solutions allow businesses to achieve demand forecast accuracy of more than 90%, manage complex and unpredictable fluctuations while delivering the right product at the right time -- all using AI to predict and recommend based on real-time data at scale.

Preventive Maintenance - Improve quality, production and reduce associated costs

Pluto7 improves the efficiency of production plants from 45-80% to reduce downtime and maintain quality. They leverage machine learning and predictive analytics to determine the remaining value of assets and accurately determine when a manufacturing plant, machine, component or part is likely to fail, and thus needs to be replaced.

Marketing ML - Increase marketing ROI

Pluto7's marketing solutions improve click-through rates and predict traffic rates accurately. Pluto7 can help you analyze marketing data in real time to transform prospect and customer engagement with hyper-personalization. Businesses are able to leverage machine learning for better customer segmentation, campaign targeting, and content optimization.

Contact Pluto7

If you would like to begin your AI journey, Pluto7 recommends starting with a discovery workshop. This workshop is co-driven by Pluto7 and Google Cloud to understand business pain points and set up a strategy to begin solving them. Visit the website at http://www.pluto7.com and contact us to get started today!

View source version on businesswire.com: https://www.businesswire.com/news/home/20200219005054/en/

Contacts

Sierra Shepard, Global Marketing Team, marketing@pluto7.com

See more here:
Pluto7, a Google Cloud Premier Partner, Achieved the Machine Learning Specialization and is Recognized by Google Cloud as a Machine Learning...

Mind Analytics creates the first technology platform for the travel industry that combines AI, Machine Learning and Big Data – Travel Daily News…

BARCELONA - Mind Analytics, a Spanish start-up specialising in data analytics to optimise decision making in the tourism industry, has launched the first tool in the travel industry that combines AI, Machine Learning and Big Data, designed to improve the conversion of hotel distribution wholesalers. The Travel Intelligence Engine (Travel/ie) solution captures, processes and analyses data in real time and uses that knowledge to improve distribution, detect errors and behaviour patterns, in order to improve the distribution of available product inventory and adapt offers to their customers.

This is technology developed in Spain which combines the advantages of advanced descriptive analytics, artificial intelligence and automated learning. The combination of these three functionalities allows you to better understand the tourism market and give an immediate response, optimising conversion by up to 30%.

Thanks to the analysis of customers and supplier data in real time, Travel/ie is a powerful tool to optimise the management of products offered by wholesalers. For example, it allows you to know the most requested destinations and dates, analyse the remaining rooms and, in turn, measure the infrastructure performance in detail or even detect integration and data mapping errors through an alarm system.

In this way, distributors can identify when a product is not being displayed correctly, detect a problem with a customer's request for a reservation, even a failure in network performance, and act immediately to avoid losing revenue.

For Joaquin Orono, CEO of the company, "Decisions based on real data are key to addressing the challenges of the tourism industry. Up to now, this process of analysis and interpretation of the data offered by Travel Intelligence Engine was done manually, an inefficient practice in terms of resource consumption that also generates errors. Therefore, we wanted to develop a state-of-the-art technological product that was the lever that companies in the tourism industry needed to optimise their profitability."

Mind Analytics has developed Travel/ie so wholesalers such as bed banks can manage large volumes of data. However, the data intelligence platform is expected to diversify to other segments of the travel industry such as hotels, travel agencies, car rental companies and airlines.

Integration with the company system

The implementation of Travel/ie is carried out in a short time frame and does not affect each distributor's individual platform. Therefore, it integrates naturally with the system. First, the information relevant to the company is identified and a data collector is set up. Travel/ie obtains only the data necessary to optimise the business and does so in a non-invasive way, so that a panel adapted to the needs of the company is created.

To develop the integration, comparison and analysis of data, Travel/ie uses market leading technologies such as Google Cloud and Looker.

Read more here:
Mind Analytics creates the first technology platform for the travel industry that combines AI, Machine Learning and Big Data - Travel Daily News...

Deploying Machine Learning to Handle Influx of IoT Data – Analytics Insight

The Internet of Things is gradually penetrating every aspect of our lives. With the growth in numbers of internet-connected sensors built into cars, planes, trains, and buildings, we can say it is everywhere. Be it smart thermostats or smart coffee makers, IoT devices are marching ahead into mainstream adoption.

But, these devices are far from perfect. Currently, there is a lot of manual input required to achieve optimal functionality -- there is not a lot of intelligence built in. You must set your alarm, tell your coffee maker when to start brewing, and manually set schedules for your thermostat, all independently and precisely.

These machines rarely communicate with each other, and you are left playing the role of master orchestrator, a labor-intensive job.

Every time the IoT sensors gather data, there has to be someone at the backend to classify the data, process it and ensure information is sent back to the device for decision making. If the data set is massive, how could an analyst handle the influx? Driverless cars, for instance, have to make rapid decisions when on autopilot, and relying on humans is completely out of the picture. Here, machine learning comes into play.

Tapping into that data to extract useful information is a challenge that's starting to be met using the pattern-matching abilities of machine learning. Firms are increasingly feeding data collected by Internet of Things (IoT) sensors situated everywhere from farmers' fields to train tracks into machine-learning models and using the resulting information to improve their business processes, products, and services.

In this regard, one of the most significant leaders is Siemens, whose Internet of Trains project has enabled it to move from simply selling trains and infrastructure to offering a guarantee its trains will arrive on time.

Through this project, the company has embedded sensors in trains and tracks in selected locations in Spain, Russia, and Thailand, and then used the data to train machine-learning models to spot tell-tale signs that tracks or trains may be failing. Having granular insights into which parts of the rail network are most likely to fail, and when, has allowed repairs to be targeted where they are most needed -- a process called predictive maintenance. That, in turn, has allowed Siemens to start selling what it calls "outcome as a service" -- a guarantee that trains will arrive on time close to 100 percent of the time.

Thyssenkrupp, which runs 1.1 million elevators worldwide, was also one of the earliest firms to pair IoT sensor data with machine-learning models, and has been feeding data collected by internet-connected sensors throughout its elevators into trained models for several years. Such models provide real-time updates on the status of elevators and predict which are likely to fail and when, allowing the company to target maintenance where it's needed, reducing elevator outages and saving money on unnecessary servicing. Similarly, Rolls-Royce collects more than 70 trillion data points from its engines, feeding that data into machine-learning systems that predict when maintenance is required.
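A heavily simplified sketch of such a predictive-maintenance model is shown below; the CSV file and column names are invented for illustration, and none of the firms above have published their actual pipelines.

```python
# Predictive maintenance: a classifier trained on historical sensor readings
# flags equipment likely to fail soon, so servicing can be targeted.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_history.csv")     # hypothetical file and columns
X = df[["vibration", "temperature", "door_cycles"]]
y = df["failed_within_30d"]                # 1 if the unit failed within 30 days

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# new sensor readings can then be scored to prioritise maintenance visits
```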

In a recent report, IDC analysts Andrea Minonne, Marta Muñoz and Andrea Siviero say that applying artificial intelligence -- the wider field of study that encompasses machine learning -- to IoT data is already delivering proven benefits for firms.

"Given the huge amount of data IoT-connected devices collect and analyze, AI finds fertile ground across IoT deployments and use cases, taking analytics to the next level to uncover insights that help lower operational costs, provide better customer service and support, and create product and service innovation," they say.

According to IDC, the most common use cases for machine learning and IoT data will be predictive maintenance, followed by analyzing CCTV surveillance, smart home applications, in-store contextualized marketing and intelligent transportation systems.

That said, companies using AI and IoT today are outliers, with many firms neither collecting large amounts of data nor using it to train machine-learning models to extract useful information.

"We're definitely still in the very early stages," says Mark Hung, research VP at analyst Gartner.

"Historically, in a lot of these use cases in the industrial space, smart cities, in agriculture, people have either not been gathering data or gathered a large trove of data and not really acted on it," Hung says. "It's only fairly recently that people understand the value of that data and are finding out what's the best way to extract that value."

The IDC analysts agree that most firms are yet to exploit IoT data using machine learning, pointing out that a large portion of IoT users are struggling to go beyond mere data collection due to a lack of analytics skills, security concerns, or simply because they don't have a forward-looking strategic vision.

The reason machine learning is currently so prominent is because of advances over the past decade in the field of deep learning, a subset of ML. These breakthroughs were applied to areas from computer vision to speech and language recognition, allowing computers to see the world around them and understand human speech at a level of accuracy not previously possible.

Machine learning uses different approaches for harnessing trainable mathematical models to analyze data, and for all the headlines ML receives, it's also only one of many different methods available for interrogating data, and not necessarily the best option.

Dan Bieler, principal analyst at Forrester, says: "We need to recognize that AI is currently being hyped quite a bit. You need to look very carefully at whether it'd generate the benefits you're looking for -- whether it'd create the value that justifies the investment in machine learning."

Visit link:
Deploying Machine Learning to Handle Influx of IoT Data - Analytics Insight

Manchester Digital unveils 72% growth for digital businesses in the region – Education Technology

Three quarters of Greater Manchester's digital tech businesses have experienced significant growth in the last 12 months

New figures from Manchester Digital, the independent trade body for digital and tech businesses in Greater Manchester, have revealed that 72% of businesses in the region have experienced growth in the last year, up from 54% in 2018.

Despite such prosperous results, companies are still calling out for talent, with developer roles standing out as the most in-demand for the seventh consecutive year. The other most sought-after skills in the next three years include data science (15%), UX (15%), and AI and machine learning (11%).

In the race to acquire top talent, almost 25% of Manchester vacancies advertised in the last 12 months remained unfilled, largely due to a lack of suitable candidates and inflated salary demands.

Unveiled at Manchester Digital's annual Skills Festival last week, the Annual Skills Audit, which evaluates data from 250 digital and tech companies and employees across the region, also analysed the various professional pathways into the sector.

The majority (77%) of candidates entering the sector hold a degree of some sort; however, of the respondents who possessed a degree, almost a quarter claimed it was not relevant to tech, while a further 22% reported moving into the sector from another career.

On top of this, almost one in five respondents said they had self-taught or upskilled their way into the sector -- a positive step towards boosting diversity in terms of both the people and experience pools entering the sector.

"It's positive to see a higher number of businesses reporting growth this year, particularly from SMEs. While the political and economic landscape is by no means settled, it seems that businesses have strategies in place to help them navigate through this uncertainty," said Katie Gallagher, managing director of Manchester Digital.

"What's particularly interesting in this year's audit are the data sets around pathways into the tech sector," added Gallagher. "While a lot of people still do report having degrees -- and we'd like to see more variation here in terms of more people taking up apprenticeships, work experience placements and so on -- it's interesting to see that a fair percentage are retraining, self-training or moving to the sector with a degree that's not directly related. Only by creating a talent pool from a wide and diverse range of people and backgrounds can we ensure that the sector continues to grow and thrive sustainably."

When asked what they liked about working for their current employer, employees across the region mentioned flexible work as the number one perk they value (40%). Career progression was also a crucial factor to those aged 18-21, with these respondents also identifying brand prestige as a reason to choose a particular employer.

"For the first time this year, we've expanded the Skills Audit to include opinions from employees, as well as businesses. With the battle for talent still one of the biggest challenges employers face, we're hoping that this part of the data set provides some valuable insights into why people choose employers and what they value most -- and consequently helps businesses set successful recruitment and retention strategies," Gallagher concluded.

See the original post here:
Manchester Digital unveils 72% growth for digital businesses in the region - Education Technology

How AI Is Tracking the Coronavirus Outbreak – WIRED

With the coronavirus growing more deadly in China, artificial intelligence researchers are applying machine-learning techniques to social media, web, and other data for subtle signs that the disease may be spreading elsewhere.

The new virus emerged in Wuhan, China, in December, triggering a global health emergency. It remains uncertain how deadly or contagious the virus is, and how widely it might have already spread. Infections and deaths continue to rise. More than 31,000 people have now contracted the disease in China, and 630 people have died, according to figures released by authorities there Friday.

John Brownstein, chief innovation officer at Harvard Medical School and an expert on mining social media information for health trends, is part of an international team using machine learning to comb through social media posts, news reports, data from official public health channels, and information supplied by doctors for warning signs the virus is taking hold in countries outside of China.

The program is looking for social media posts that mention specific symptoms, like respiratory problems and fever, from a geographic area where doctors have reported potential cases. Natural language processing is used to parse the text posted on social media, for example, to distinguish between someone discussing the news and someone complaining about how they feel. A company called BlueDot used a similar approach -- minus the social media sources -- to spot the coronavirus in late December, before Chinese authorities acknowledged the emergency.
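The real systems rely on trained NLP models, but a crude keyword-plus-first-person heuristic conveys the kind of filtering involved; the sketch below is illustrative only and is not the team's actual approach.

```python
# Flag posts that mention symptoms and read like first-person reports
# rather than news commentary. A deliberately naive illustration.
SYMPTOMS = {"fever", "cough", "shortness of breath", "respiratory"}
FIRST_PERSON = {"i", "i'm", "i've", "my", "me"}

def looks_like_symptom_report(post: str) -> bool:
    text = post.lower()
    mentions_symptom = any(s in text for s in SYMPTOMS)
    sounds_first_person = any(w in text.split() for w in FIRST_PERSON)
    return mentions_symptom and sounds_first_person

print(looks_like_symptom_report("I have a fever and my cough is getting worse"))  # True
print(looks_like_symptom_report("News report: fever clinic opens downtown"))      # False
```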

"We are moving to surveillance efforts in the US," Brownstein says. It is critical to determine where the virus may surface if the authorities are to allocate resources and block its spread effectively. "We're trying to understand what's happening in the population at large," he says.

The rate of new infections has slowed slightly in recent days, from 3,900 new cases on Wednesday to 3,700 cases on Thursday to 3,200 cases on Friday, according to the World Health Organization. Yet it isn't clear if the spread is really slowing or if new infections are simply becoming more difficult to track.

So far, other countries have reported far fewer cases of coronavirus. But there is still widespread concern about the virus spreading. The US has imposed a travel ban on China, even though experts question the effectiveness and ethics of such a move. Researchers at Johns Hopkins University have created a visualization of the virus's progress around the world based on official numbers and confirmed cases.

Health experts did not have access to such quantities of social, web, and mobile data when seeking to track previous outbreaks such as severe acute respiratory syndrome (SARS). But finding signs of the new virus in a vast soup of speculation, rumor, and posts about ordinary cold and flu symptoms is a formidable challenge. "The models have to be retrained to think about the terms people will use and the slightly different symptom set," Brownstein says.

Even so, the approach has proven capable of spotting a coronavirus needle in a haystack of big data. Brownstein says colleagues tracking Chinese social media and news sources were alerted to a cluster of reports about a flu-like outbreak on December 30. This was shared with the WHO, but it took time to confirm the seriousness of the situation.

Beyond identifying new cases, Brownstein says the technique could help experts learn how the virus behaves. It may be possible to determine the age, gender, and location of those most at risk more quickly than using official medical sources.

Alessandro Vespignani, a professor at Northeastern University who specializes in modeling contagion in large populations, says it will be particularly challenging to identify new instances of the coronavirus from social media posts, even using the most advanced AI tools, because its characteristics still aren't entirely clear. "It's something new. We don't have historical data," Vespignani says. "There are very few cases in the US, and most of the activity is driven by the media, by people's curiosity."

Excerpt from:
How AI Is Tracking the Coronavirus Outbreak - WIRED

Artnome Wants to Predict the Price of a Masterpiece. The Problem? There’s Only One. – Built In

Buying a Picasso is like buying a mansion.

There's not that many of them, so it can be hard to know what a fair price should be. In real estate, if the house last sold in 2008, right before the lending crisis devastated the real estate market, basing today's price on the last sale doesn't make sense.

Paintings are also affected by market conditions and a lack of data. Kyle Waters, a data scientist at Artnome, explained to us how his Boston-area firm is addressing this dilemma and, in doing so, aims to do for the art world what Zillow did for real estate.

"If only 3 percent of houses are on the market at a time, we only see the prices for those 3 percent. But what about the rest of the market?" Waters said. "It's similar for art, too. We want to price the entire market and give transparency."

Artnome is building the world's largest database of paintings by blue-chip artists like Georgia O'Keeffe, including her super famous works, lesser-known items, privately held pieces and publicly displayed artworks. Waters is tinkering with the data to create a machine learning model that predicts how much people will pay for these works at auction. Because this model includes an artist's entire collection, and not just those works that have been publicly sold before, Artnome claims its machine learning model will be more accurate than the auction industry's previous practice of simply basing current prices on previous sales.

The company's goal is to bring transparency to the auction house industry. But Artnome's new model faces an old problem: its machine learning system performs poorly on the works that typically sell for the most, the ones people are most interested in, since it's hard to predict the price of a one-of-a-kind masterpiece.

"With a limited dataset, it's just harder to generalize," Waters said.

We talked to Waters about how he compiled, cleaned and created Artnome's machine learning model for predicting auction prices, which launched in late January.

Most of the information about artists included in Artnome's model comes from the dusty basement libraries of auction houses, where they store their catalogues raisonnés, books that serve as complete records of an artist's work. Artnome is compiling and digitizing these records, representing the first time these books have ever been brought online, Waters said.

Artnome's model currently includes information from about 5,000 artists whose works have been sold over the last 15 years. Prices in the dataset range from $100 at the low end to Leonardo da Vinci's record-breaking Salvator Mundi, a painting that sold for $450.3 million in 2017, making it the most expensive work of art ever sold.

How hard was it to predict what da Vinci's 500-year-old Mundi would sell for? Before the sale, Christie's auction house estimated his portrait of Jesus Christ was worth around $100 million, less than a quarter of the eventual price.

"It was unbelievable," Alex Rotter, chairman of Christie's postwar and contemporary art department, told The Art Newspaper after the sale. Rotter reported the winning phone bid.

"I tried to look casual up there, but it was very nerve-wracking. All I can say is, the buyer really wanted the painting and it was very adrenaline-driven."

A piece like Salvator Mundi could come to market in 2017 and then not go up for auction again for 50 years. And because a machine learning model is only as good as the quality and quantity of the data it is trained on, market conditions, a work's physical condition and changes in availability all make it hard to predict a future price for a painting.

These variables are categorized into two types of data: structured and unstructured. And cleaning all of it represents a major challenge.

Structured data includes information like which artist painted which painting, on what medium and in which year.

Waters intentionally limited the types of structured information he included in the model to keep the system from becoming too unruly to work with. But defining paintings as solely two-dimensional works on only certain mediums proved difficult, since there are so many different types of paintings (Salvador Dali famously painted on a cigar box, after all). Artnome's problem represents an issue of high cardinality, Waters said, since there are so many different categorical variables he could include in the machine learning system.

"You want the model to be narrow enough so that you can figure out the nuances between really specific mediums, but you also don't want it to be so narrow that you're going to overfit," Waters said, adding that large models also become more unruly to work with.

Other structured data focuses on the artists themselves, denoting details like when the creator was born or whether they were alive at the time of auction. Waters also built a natural language processing system that analyzes the type and frequency of the words an artist used in their paintings' titles, noting trends like Georgia O'Keeffe using the word "white" in many of her famous works.
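Artnome has not published this part of its pipeline, but turning titles into word-frequency features is a standard trick; the sketch below, with example titles, shows one minimal way it could be done. It is an assumption, not the company's actual title-analysis system.

```python
# Hypothetical sketch: count word frequencies across an artist's painting
# titles so they can be used as model features. Titles are examples only.
from sklearn.feature_extraction.text import CountVectorizer

titles = [
    "Black Iris",
    "White Flower No. 1",
    "Cow's Skull: Red, White, and Blue",
    "Sky Above Clouds IV",
]

vectorizer = CountVectorizer(lowercase=True, stop_words="english")
counts = vectorizer.fit_transform(titles)

# Per-word totals across the titles, e.g. how often "white" appears
totals = dict(zip(vectorizer.get_feature_names_out(), counts.sum(axis=0).A1))
print(totals)
```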

Including information on market conditions, like current stock prices or real estate data, was important from a structured perspective too.

"How popular is an artist, are they exhibiting right now? How many people are interested in this artist? What's the state of the market?" Waters said. "Really getting those trends and quantifying those could be just as important as more data."

Another type of data included in the model is unstructured data, which, as the name might suggest, is a little less concrete than the structured items. This type of data is mined from the actual painting, and includes information like the artwork's dominant color, number of corner points and whether faces are pictured.

Waters used a pre-trained convolutional neural network to look for these variables, modeling the project after ResNet-50, the architecture that won the ImageNet Large Scale Visual Recognition Challenge in 2015, a benchmark built on the ImageNet database of more than 14 million labeled images.

Including unstructured data helps quantify the complexity of an image, Waters said, giving it what he called an edge score.

An edge score helps the machine learning system quantify the subjective points of a painting that seem intuitive to humans, Waters said. An example might be Vincent van Gogh's series of paintings of red-haired men posing in front of a blue background. When you're looking at the paintings, it's not hard to see you're looking at self-portraits of Van Gogh, by Van Gogh.
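Waters does not spell out how the edge score is computed. One plausible reading, sketched below with OpenCV, is the fraction of pixels an edge detector fires on, used as a rough proxy for visual complexity; treat it as a guess rather than Artnome's implementation.

```python
# A guess at an "edge score": the share of pixels marked by a Canny edge
# detector, as a crude proxy for how visually busy a painting is.
# Not Artnome's actual implementation; the file name is a placeholder.
import cv2
import numpy as np

def edge_score(image_path: str) -> float:
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(image, threshold1=100, threshold2=200)
    return float(np.count_nonzero(edges)) / edges.size

# print(edge_score("self_portrait.jpg"))  # higher value = busier image
```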

Including unstructured data in Artnome's system helps the machine spot visual cues that suggest images are part of a series, which has an impact on their value, Waters said.

"Knowing that that's a self-portrait would be important for that artist," Waters said. "When you start interacting with different variables, then you can start getting into more granular details that, for some paintings by different artists, might be more important than others."

Artnome's convolutional neural network is good at analyzing paintings for data that tells a deeper story about the work. But sometimes, there are holes in the story being told.

In its current iteration, Artnome's model includes both paintings with and without frames; it doesn't specify which work falls into which category. Not identifying the frame could affect the dominant color the system discovers, Waters said, adding an error to its results.

"That could maybe skew your results and say, like, the dominant color was yellow when really the painting was a landscape and it was green," Waters said.

Interested in convolutional neural networks? Convolutional Neural Networks Explained: Using PyTorch to Understand CNNs

The model also lacks information on the condition of the painting, which, again, could impact the artwork's price. If the model can't detect a crease in the painting, it might overestimate its value. Also missing is data on an artwork's provenance, or its ownership history. Some evidence suggests that paintings that have been displayed by prominent institutions sell for more. There's also the issue of popularity. Waters hasn't found a concrete way to tell the system that people like the work of Georgia O'Keeffe more than the paintings by artist and actor James Franco.

"I'm trying to think of a way to come up with a popularity score for these very popular artists," Waters said.

An auctioneer hits the hammer to indicate a sale has been made. But the last price the bidder shouts isn't what they actually pay.

Buyers also must pay the auction house a commission, which varies between auction houses and has changed over time. Waters has had to dig up the commission rates for these outlets over the years and add them to the listed sales prices. He's also had to make sure all sales prices are listed in dollars, converting those listed in other currencies. Standardizing each sale ensures the predictions the model makes are accurate, Waters said.
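The normalization Waters describes boils down to two steps: add the buyer's premium and convert every sale to a common currency. A minimal sketch, with placeholder commission rates and exchange rates rather than real auction-house figures, might look like this.

```python
# Minimal sketch of standardizing a sale: add the buyer's premium, then
# convert to US dollars. Commission and FX rates below are placeholders.
def standardize_price(hammer_price: float, currency: str,
                      commission_rate: float, fx_to_usd: dict) -> float:
    """Return the all-in sale price in USD."""
    total = hammer_price * (1 + commission_rate)   # buyer pays hammer + premium
    return total * fx_to_usd[currency]             # convert to dollars

fx_to_usd = {"USD": 1.0, "GBP": 1.30, "EUR": 1.10}           # placeholder rates
print(standardize_price(1_000_000, "GBP", 0.25, fx_to_usd))  # 1,625,000.0
```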

"You'd introduce a lot of bias into the model if some things didn't have the commission, but some things did," Waters said. "It would be clearly wrong to start comparing the two."

Once Artnome's data has been gleaned and cleaned, the information is fed into the machine learning system, which Waters structured as a random forest model, an algorithm that builds and merges multiple decision trees to arrive at an accurate prediction. Waters said using a random forest model keeps the system from overfitting paintings into one category, and also offers a level of explainability through its permutation score, a metric that essentially identifies the most important aspects of a painting.

Waters doesn't weight the data he puts into the model by hand. Instead, he lets the machine learning system tell him what's important, with the model weighting factors like today's S&P prices more heavily than the dominant color of a work.

"That's kind of one way to get the feature importance, for kind of a black box estimator," Waters said.
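The article names the ingredients, a random forest regressor plus permutation importance, without showing code. A hedged sketch in scikit-learn, using invented features and synthetic data rather than Artnome's dataset, could look like the following.

```python
# Hedged sketch of a random forest price model with permutation importance.
# Features ("year", "area", "sp500") and all data are synthetic inventions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(1900, 2020, n),   # year painted
    rng.uniform(0.1, 5.0, n),      # canvas area in square meters
    rng.uniform(2000, 3500, n),    # S&P 500 level at sale date
])
y = 10_000 + 50_000 * X[:, 1] + 30 * (X[:, 2] - 2000) + rng.normal(0, 5_000, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the score drops -- a rough ranking of what the "black box" relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["year", "area", "sp500"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```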

Although Artnome has been approached by private collectors, gallery owners and startups in the art tech world interested in its machine learning system, Waters said it's important this dataset and model remain open to the public.

His aim is for Artnome's machine learning model to eventually function like Zillow's Zestimate, which estimates real estate prices for homes on and off the market, and act as a general starting point for those interested in finding out the price of an artwork.

"We might not catch a specific genre, or era, or point in the art history movement," Waters said. "I don't think it'll ever be perfect. But when it gets to the point where people see it as a respectable starting point, then that's when I'll be really satisfied."

Want to learn more about machine learning? A Tour of the Top 10 Algorithms for Machine Learning Newbies

See the original post here:
Artnome Wants to Predict the Price of a Masterpiece. The Problem? There's Only One. - Built In

The 17 Best AI and Machine Learning TED Talks for Practitioners – Solutions Review

The editors at Solutions Review curated this list of the best AI and machine learning TED talks for practitioners in the field.

TED Talks are influential videos from expert speakers in a variety of verticals. TED began in 1984 as a conference where Technology, Entertainment and Design converged, and today covers almost all topics, from business to technology to global issues, in more than 110 languages. TED is building a clearinghouse of free knowledge from the world's top thinkers, and their library of videos is expansive and rapidly growing.

Solutions Review has curated this list of AI and machine learning TED talks to watch if you are a practitioner in the field. Talks were selected based on relevance, ability to add business value, and individual speaker expertise. We've also curated TED talk lists for topics like data visualization and big data.

Erik Brynjolfsson is the director of the MIT Center for Digital Business and a research associate at the National Bureau of Economic Research. He asks how IT affects organizations, markets and the economy. His books include Wired for Innovation and Race Against the Machine. Brynjolfsson was among the first researchers to measure the productivity contributions of information and communication technology (ICT) and the complementary role of organizational capital and other intangibles.

In this talk, Brynjolfsson argues that machine learning and intelligence are not the end of growth; it's simply the growing pains of a radically reorganized economy. A riveting case for why big innovations are ahead of us, if we think of computers as our teammates. Be sure to watch the opposing viewpoint from Robert Gordon.

Jeremy Howard is the CEO of Enlitic, an advanced machine learning company in San Francisco. Previously, he was the president and chief scientist at Kaggle, a community and competition platform of over 200,000 data scientists. Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on Jobs for the Machines.

Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis.

Nick Bostrom is a professor at the University of Oxford, where he heads the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists tasked with investigating the big picture for the human condition and its future. Bostrom was honored as one of Foreign Policy's 2015 Global Thinkers. His book Superintelligence advances the ominous idea that the first ultraintelligent machine is the last invention that man need ever make.

In this talk, Nick Bostrom calls machine intelligence the last invention that humanity will ever need to make. Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values, or will they have values of their own?

Li's work with neural networks and computer vision (with Stanford's Vision Lab) marks a significant step forward for AI research, and could lead to applications ranging from more intuitive image searches to robots able to make autonomous decisions in unfamiliar situations. Fei-Fei was honored as one of Foreign Policy's 2015 Global Thinkers.

This talk digs into how computers are getting smart enough to identify simple elements. Computer vision expert Fei-Fei Li describes the state of the art, including the database of 15 million photos her team built to teach a computer to understand pictures, and the key insights yet to come.

Anthony Goldbloom is the co-founder and CEO of Kaggle. Kaggle hosts machine learning competitions, where data scientists download data and upload solutions to difficult problems. Kaggle has a community of over 600,000 data scientists. In 2011 and 2012, Forbes named Anthony one of the 30 under 30 in technology; in 2013 the MIT Tech Review named him one of the top 35 innovators under the age of 35, and the University of Melbourne awarded him an Alumni of Distinction Award.

This talk by Anthony Goldbloom describes some of the current use cases for machine learning, far beyond simple tasks like assessing credit risk and sorting mail.

Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society. Her book, Twitter and Tear Gas, was published in 2017 by Yale University Press.

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns, and in ways we won't expect or be prepared for.

In his book The Business Romantic, Tim Leberecht invites us to rediscover romance, beauty and serendipity by designing products, experiences, and organizations that make us fall back in love with our work and our life. The book inspired the creation of the Business Romantic Society, a global collective of artists, developers, designers and researchers who share the mission of bringing beauty to business.

In this talk, Tim Leberecht makes the case for a new radical humanism in a time of artificial intelligence and machine learning. For the self-described business romantic, this means designing organizations and workplaces that celebrate authenticity instead of efficiency and questions instead of answers. Leberecht proposes four principles for building beautiful organizations.

Grady Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

Grady Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.

Tom Gruber is a product designer, entrepreneur, and AI thought leader who uses technology to augment human intelligence. He was co-founder, CTO, and head of design for the team that created the Siri virtual assistant. At Apple for over 8 years, Tom led the Advanced Development Group that designed and prototyped new capabilities for products that bring intelligence to the interface.

This talk introduces the idea of Humanistic AI. Gruber shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function, from turbocharging our design skills to helping us remember everything we've ever read. The idea of an AI-powered personal memory also extends to relationships, with the machine helping us reflect on our interactions with people over time.

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty.

His talk centers around the question of whether we can harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover. As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.

Dr. Pratik Shah's research creates novel intersections between engineering, medical imaging, machine learning, and medicine to improve health and diagnose and cure diseases. Research topics include: medical imaging technologies using unorthodox artificial intelligence for early disease diagnoses; novel ethical, secure and explainable artificial intelligence based digital medicines and treatments; and point-of-care medical technologies for real world data and evidence generation to improve public health.

TED Fellow Pratik Shah is working on a clever system to do just that. Using an unorthodox AI approach, Shah has developed a technology that requires as few as 50 images to develop a working algorithm, and can even use photos taken on doctors' cell phones to provide a diagnosis. Learn more about how this new way to analyze medical information could lead to earlier detection of life-threatening illnesses and bring AI-assisted diagnosis to more health care settings worldwide.

Margaret Mitchell's research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. Her work combines computer vision, natural language processing and social media, as well as many statistical methods and insights from cognitive science. Before Google, Mitchell was a founding member of Microsoft Research's Cognition group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.

Margaret Mitchell helps develop computers that can communicate about what they see and understand. She tells a cautionary tale about the gaps, blind spots and biases we subconsciously encode into AI and asks us to consider what the technology we create today will mean for tomorrow.

Kriti Sharma is the Founder of AI for Good, an organization focused on building scalable technology solutions for social good. Sharma was recently named in the Forbes 30 Under 30 list for advancements in AI. She was appointed a United Nations Young Leader in 2018 and is an advisor to both the United Nations Technology Innovation Labs and to the UK Government's Centre for Data Ethics and Innovation.

AI algorithms make important decisions about you all the time, like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.

Matt Beane does field research on work involving robots to help us understand the implications of intelligent machines for the broader world of work. Beane is an Assistant Professor in the Technology Management Program at the University of California, Santa Barbara and a Research Affiliate with MIT's Institute for the Digital Economy. He received his PhD from the MIT Sloan School of Management.

The path to skill around the globe has been the same for thousands of years: train under an expert and take on small, easy tasks before progressing to riskier, harder ones. But right now, we're handling AI in a way that blocks that path, and sacrificing learning in our quest for productivity, says organizational ethnographer Matt Beane. Beane shares a vision that flips the current story into one of distributed, machine-enhanced mentorship that takes full advantage of AI's amazing capabilities while enhancing our skills at the same time.

Leila Pirhaji is the founder of ReviveMed, an AI platform that can quickly and inexpensively characterize large numbers of metabolites from the blood, urine and tissues of patients. This allows for the detection of molecular mechanisms that lead to disease and the discovery of drugs that target these disease mechanisms.

Biotech entrepreneur and TED Fellow Leila Pirhaji shares her plan to build an AI-based network to characterize metabolite patterns, better understand how disease develops and discover more effective treatments.

Janelle Shane is the owner of AIweirdness.com. Her book, You Look Like a Thing and I Love You, uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining.

The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems, like creating new ice cream flavors or recognizing cars on the road, Shane shows why AI doesn't yet measure up to real brains.

Sylvain Duranton is the global leader of BCG GAMMA, a unit dedicated to applying data science and advanced analytics to business. He manages a team of more than 800 data scientists and has implemented more than 50 custom AI and analytics solutions for companies across the globe.

In this talk, business technologist Sylvain Duranton advocates for a "human plus AI" approach, using AI systems alongside humans rather than instead of them, and shares the specific formula companies can adopt to successfully employ AI while keeping humans in the loop.

For more AI and machine learning TED talks, browse TED's complete topic collection.

Timothy is Solutions Review's Senior Editor. He is a recognized thought leader and influencer in enterprise BI and data analytics. Timothy has been named a top global business journalist by Richtopia. Scoop? First initial, last name at solutionsreview dot com.

See original here:
The 17 Best AI and Machine Learning TED Talks for Practitioners - Solutions Review

Adventures With Artificial Intelligence and Machine Learning – Toolbox

Since October of last year I have had the opportunity to work with a startup working on automated machine learning, and I thought that I would share some thoughts on the experience and the details of what one might want to consider around the start of a journey with a "data scientist in a box."

I'll start by saying that machine learning and artificial intelligence have almost forced themselves into my work several times in the past eighteen months, all in slightly different ways.

The first brush was back in June 2018, when one of the developers I was working with wanted to demonstrate to me a scoring model for loan applications, based on the analysis of other transactional data that described loans that had previously been granted. The model had no explanation and no details other than the fact that it allowed you to stitch together a transactional dataset, which it assessed using a naïve Bayes algorithm. We had a run at showing this to a wider audience, but the appetite for examination seemed low, and I suspect that in the end the real reason was we didn't have real data and only had a conceptual problem to be solved.
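That demo is not described in any detail here, but the general shape of a naïve Bayes scoring model on historical loan outcomes is easy to sketch. The features, figures and labels below are made up for illustration and are not the developer's actual model.

```python
# Generic sketch of a naive Bayes loan-scoring model on labelled historical
# outcomes. Features and numbers are invented; this is not the demo described.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Columns: applicant income (k), loan amount (k), years in current job
X = np.array([[60, 10, 5], [25, 20, 1], [80, 15, 10],
              [30, 25, 2], [55, 12, 4], [28, 30, 1]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = repaid, 0 = defaulted

model = GaussianNB().fit(X, y)
# Columns of the output: P(default), P(repaid) for the new applicant
print(model.predict_proba([[45, 18, 3]]))
```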

The second go was about six months later, when another colleague on the same team came up with a way to classify data sets, developing a flexible training engine and data tagging approach to determine whether certain columns in data sets were likely to be names, addresses, phone numbers or email addresses. On face value you would think this to be something simple, but in reality it is of course only as good as the training data, and in this instance we could easily confuse the system and the data tagging with things like social security numbers that looked like phone numbers, postcodes that were simply numbers, and so on. Names were only as good as the locality from which the names training data was sourced, and cities, towns, streets and provinces all proved to mostly work OK but almost always needed region-specific training data. At any rate, this method of classifying contact data for the most part met the rough objectives of the task at hand, and so we soldiered on.

A few months later, I was called over to a developer's desk and asked for my opinion on a side project that one of the senior developers and architects had been working on. The objective was ambitious but impressive. The solution had been built in response to three problems in the field. The first problem to be solved was decoding why certain records were deemed to be related to one another when, to the naked eye, they seemed not to be, or vice versa. While this piece didn't involve any ML per se, the second part of the solution did, in that it self-configured thousands of combinations of alternative fuzzy matching criteria to determine an optimal set of duplicate record matching rules.

This was understandably more impressive and practically self-explanatory. It would serve as a great utility for a consultant, a data analyst or a relative layperson looking for explainability in how potential duplicate records were determined to have a relationship. This was specifically important because it could immediately provide value to field services personnel and clients. In addition, the developer had cunningly introduced a manual matching option that allowed a user to evaluate two records and decide through visual assessment whether the two could potentially be considered related to one another.

In some respects, what was produced was exactly the way that I like to see products produced. The field describes the problem; the product management organization translates that into more elaborate stories and looks for parallels in other markets, across other business areas, and for ubiquity. Once those initial requirements have been gathered, it is then up to engineering and development to come up with a prototype that works toward solving the issue.

The more experienced the developer, of course, the more comprehensive the result may be, and the more mature the initial iteration may be. Product is then in a position to pitch the concept back at the field, to clients and to a selective audience, to get their perspective on the solution and how well it matches the previously articulated problem.

The challenge comes when you have a less tightly honed intent, a less specific message and a more general problem to solve, and this brings us to the latest aspect of machine learning and artificial intelligence that I picked up.

One of the elements of dealing with data validation and data preparation is the last mile of action that you have in mind for that data. If your intent is as simple as "let's evaluate our data sources, clean them up and make them suitable for online transaction processing," then that's a very specific mission. You need to know what you want to evaluate, what benchmark you wish to evaluate the sources against, and then have some sort of remediation plan so that they support the use case for which they're intended, say, supporting customer calls into a call centre. The only area where you might consider artificial intelligence and machine learning for applicability in this instance might be determining matches against the baseline, but then the question is whether you simply have a Boolean decision or whether, in fact, some sort of stack ranking is relevant at all. It could be argued either way, depending on the application.

When you're preparing data for something like a decision beyond data quality, though, the mission is perhaps a little different. Effectively, your goal may be to cut the cream of opportunities off the top of a pile of contacts, leads, opportunities or accounts. As such, you want to use some combination of traits within the data set to determine influencing factors that would lead to a better (or worse) outcome. Here, linear regression analysis for scoring may be sufficient. The devil, of course, lies in the details, and unless you're intimately familiar with the data and the proposition you're trying to resolve, you have to do a lot of trial-and-error experimentation and validation. For statisticians and data scientists this is all very obvious and, you could say, a natural part of the work that they do. Effectively, the challenge here is feature selection: a way of reducing complexity in the model that you will ultimately apply to the scoring.

The journey I am on right now with a technology partner focuses on ways to optimise the features so that only the most necessary and informative ones need to be considered. This, in turn, makes the model potentially simpler and faster to execute, particularly at scale. So while the regression analysis still needs to be done, determining what matters, what has significance and what should be retained versus discarded in the model design is all factored into the model building in an automated way. This doesn't necessarily apply to all kinds of AI and ML work, but for this specific objective it is perhaps more than adequate, and it doesn't require a data scientist to start delivering a rapid yield.
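The partner's method is not public, so the sketch below stands in with a standard technique, recursive feature elimination with cross-validation, to show how feature selection for a regression-based scoring model can be automated. The data is synthetic.

```python
# Hedged sketch of automated feature selection for a regression-based score,
# using recursive feature elimination with cross-validation (RFECV). This is
# a stand-in technique, not the partner's proprietary approach.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                                    # 10 candidate features
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=300)   # only two matter

selector = RFECV(LinearRegression(), step=1, cv=5).fit(X, y)
print("features kept:", np.flatnonzero(selector.support_))        # ideally [0, 3]
```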

More here:
Adventures With Artificial Intelligence and Machine Learning - Toolbox

How Machine Learning Is Changing The Future Of Fiber Optics – DesignNews

The high bandwidth demands created by our mobile and smart devices, data storage, and cloud computing centers are growing by leaps and bounds, and the ubiquity of fiber optics is a big part of this. Analysts are predicting the global fiber optics market will be worth $9 billion USD by 2025. Much of this will be driven by the aforementioned technologies, but also by new technologies such as VR/AR.

But none will have more impact than machine learning. The compute power needed and the demand for machine learning performance are driving more and more developers to move AI applications to the edge and away from the cloud. One of those companies is Luminous Computing, a machine learning startup that has set itself the lofty goal of leveraging photonics to fit the computing power of the world's largest supercomputers onto a single chip for AI processing.

Ahead of his DesignCon 2020 keynote, "The Future of Fiber Optic Communications: Datacenter & Mobile," Chris Cole, vice president of systems engineering at Luminous Computing, spoke with DesignCon brand director Suzanne Deffree about the rapid changes coming to data centers and mobile.

Check out the video interview below, where Cole discusses how fiber optics and machine learning are transforming each other, how new technologies like Silicon Photonics (SiPh) and co-packaging play into the communications landscape, why you can't be religious about technology, and more.

Read the original here:
How Machine Learning Is Changing The Future Of Fiber Optics - DesignNews

Seton Hall Announces New Courses in Text Mining and Machine Learning – Seton Hall University News & Events

Professor Manfred Minimair, Data Science, Seton Hall University

As part of its online M.S. in Data Science program, Seton Hall University in South Orange, New Jersey, has announced new courses in Text Mining and Machine Learning.

Seton Hall's master's program in Data Science is the first 100% online program of its kind in New Jersey and one of very few in the nation.

Quickly emerging as a critical field in a variety of industries, data science encompasses activities ranging from collecting raw data and processing and extracting knowledge from that data, to effectively communicating those findings to assist in decision making and implementing solutions. Data scientists have extensive knowledge in the overlapping realms of business needs, domain knowledge, analytics, and software and systems engineering.

"We're in the midst of a pivotal moment in history," said Professor Manfred Minimair, director of Seton Hall's Data Science program. "We've moved from being an agrarian society through to the industrial revolution and now squarely into the age of information," he noted. "The last decade has been witness to a veritable explosion in data informatics. Where once business could only look at dribs and drabs of customer and logistics dataas through a glass darklynow organizations can be easily blinded by the sheer volume of data available at any given moment. Data science gives students the tools necessary to collect and turn those oceans of data into clear and readily actionable information."

These tools will be provided by Seton Hall in new ways this spring, when Text Mining and Machine Learning make their debut.

Text Mining: Taught by Professor Nathan Kahl, text mining is the process of extracting high-quality information from text, which is typically done by developing patterns and trends through means such as statistical pattern learning. Professor Nathan Kahl is an Associate Professor in the Department of Mathematics and Computer Science. He has extensive experience in teaching data analytics at Seton Hall University. Some of his recent research lies in the area of network analysis, another important topic which is also taught in the M.S. program.

Professor Kahl notes, "The need for people with these skills in business, industry and government service has never been greater, and our curriculum is specifically designed to prepare our students for these careers." According to EAB (formerly known as the Education Advisory Board), the national growth in demand for data science practitioners over the last two years alone was 252%. According to Glassdoor, the median base salary for these jobs is $108,000.

Machine Learning: In many ways, machine learning represents the next wave in data science. It is the scientific study of the algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. The course will be taught by Sophine Clachar, a data engineer with more than 10 years of experience. Her past research has focused on aviation safety and large-scale and complex aviation data repositories at the University of North Dakota. She was also a recipient of the Airport Cooperative Research Program Graduate Research Award, which fostered the development of machine learning algorithms that identify anomalies in aircraft data.

"Machine learning is profoundly changing our society," Professor Clachar remarks. "Software enhanced with artificial intelligence capabilities will benefit humans in many ways, for example, by helping design more efficient treatments for complex diseases and improve flight training to make air travel more secure."

Active Relationships with Google, Facebook, Celgene, Comcast, Chase, B&N and Amazon: Students in the Data Science program, with its strong focus on computer science, statistics and applied mathematics, learn skills in cloud computing technology and Tableau, which allows them to pursue certification in Amazon Web Services and Tableau. The material is continuously updated to deliver the latest skills in artificial intelligence/machine learning for automating data science tasks. Their education is bolstered by real-world projects and internships, made possible through the program's active relationships with such leading companies as Google, Facebook, Celgene, Comcast, Chase, Barnes and Noble and Amazon. The program also fosters relationships with businesses and organizations through its advisory board, which includes members from WarnerMedia, Highstep Technologies, Snowflake Computing, Compass and Celgene. As a result, students are immersed in the knowledge and competencies required to become successful data science and analytics professionals.

"Among the members of our Advisory Board are Seton Hall graduates and leaders in the field," said Minimair. "Their expertise at the cutting edge of industry is reflected within our curriculum and coupled with the data science and academic expertise of our professors. That combination will allow our students to flourish in the world of data science and informatics."

Learn more about the M.S. in Data Science at Seton Hall

See the rest here:
Seton Hall Announces New Courses in Text Mining and Machine Learning - Seton Hall University News & Events

Break into the field of AI and Machine Learning with the help of this training – Boing Boing

It seems like AI is everywhere these days, from the voice recognition software in our personal assistants to the ads that pop up seemingly at just the right time. But believe it or not, the field is still in its infancy.

That means there's no better time to get in on the ground floor. The Essential AI & Machine Learning Certification Training Bundle is a four-course package that can give you a broad overview of AI's many uses in the modern marketplace and how to implement them.

The best place to dive into this four-course master class is with the Artificial Intelligence (AI) & Machine Learning (ML) Foundation Course. This walkthrough gives you all the terms and concepts that underpin the entire science of AI.

Later courses let you get your hands dirty with some coding, as in the data visualization class that focuses on the role of Python in the interpretive side of data analytics. There are also separate courses on computer vision (the programming that lets machines "see" their surroundings) and natural language processing (the science of getting computers to understand speech).

The entire package is now available for Boing Boing readers at 93% off the MSRP.

Read the original:
Break into the field of AI and Machine Learning with the help of this training - Boing Boing

What is the role of machine learning in industry? – Engineer Live

In 1950, Alan Turing developed the Turing test to answer the question "can machines think?" Since then, machine learning has gone from being just a concept to a process relied on by some of the world's biggest companies. Here Sophie Hand, UK country manager at industrial parts supplier EU Automation, discusses the applications of the different types of machine learning that exist today.

Machine learning is a subset of artificial intelligence (AI) where computers independently learn to do something they were not explicitly programmed to do. They do this by learning from experience, leveraging algorithms and discovering patterns and insights from data. This means machines don't need to be programmed to perform exact tasks on a repetitive basis.

Machine learning is rapidly being adopted across several industries. According to Research and Markets, the market is predicted to grow to US$8.81 billion by 2022, at a compound annual growth rate of 44.1 per cent. One of the main reasons for its growing use is that businesses are collecting Big Data, from which they need to obtain valuable insights. Machine learning is an efficient way of making sense of this data, for example the data sensors collect on the condition of machines on the factory floor.

As the market develops and grows, new types of machine learning will emerge and allow new applications to be explored. However, many examples of current machine learning applications fall into two categories: supervised learning and unsupervised learning.

A popular type of machine learning is supervised learning, which is typically used in applications where historical data is used to train models to predict future events, such as fraudulent credit card transactions. This is a form of machine learning which identifies inputs and outputs and trains algorithms using labelled examples. Supervised learning uses methods like classification, regression, prediction and gradient boosting for pattern recognition. It then uses these patterns to predict the values of the labels on unlabelled data.
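As a concrete illustration of that description, the sketch below trains a gradient boosting classifier on labelled, synthetic transaction data; it is a toy example, not a production fraud model, and the "fraud" rule used to label the data is invented.

```python
# Toy supervised-learning example: gradient boosting on labelled historical
# transactions. All data and the labelling rule below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.uniform(1, 5000, n),    # transaction amount
    rng.integers(0, 24, n),     # hour of day
    rng.integers(0, 2, n),      # card present (1) or not (0)
])
# Invented rule: large, late-night, card-not-present payments count as fraud
y = ((X[:, 0] > 3000) & (X[:, 1] < 6) & (X[:, 2] == 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```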

This form of machine learning is currently being used in drug discovery and development with applications including target validation, identification of biomarkers and the analysis of digital pathology data in clinical trials. Using machine learning in this way promotes data-driven decision making and can speed up the drug discovery and development process while improving success rates.

Unlike supervised learning, unsupervised learning works with datasets that have no labelled historical outcomes. Instead, it explores the collected data to find a structure and identify patterns. Unsupervised machine learning is now being used in factories for predictive maintenance purposes. Machines can learn the patterns in the data that precede faults in the system and use this information to identify problems before they arise.

Using machine learning in this way leads to a decrease in unplanned downtime, as manufacturers are able to order replacement parts from an automation equipment supplier before a breakdown occurs, saving time and money. According to a survey by Deloitte, using machine learning technologies in the manufacturing sector reduces unplanned machine downtime by 15 to 30 per cent and cuts maintenance costs by 30 per cent.
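One common way to implement this kind of unsupervised fault detection is an isolation forest trained only on normal sensor readings; the sketch below uses invented vibration and temperature data to show the idea, and is not any specific vendor's system.

```python
# Illustrative unsupervised anomaly detection for predictive maintenance:
# fit an isolation forest on normal sensor readings, then flag outliers.
# Sensor names and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Normal operation: vibration (mm/s) and bearing temperature (deg C)
normal = np.column_stack([rng.normal(2.0, 0.3, 500), rng.normal(60, 3, 500)])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_readings = np.array([[2.1, 61.0], [5.8, 85.0]])   # second reading looks faulty
print(detector.predict(new_readings))                 # 1 = normal, -1 = anomaly
```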

It's no longer just humans that can think for themselves: machines, such as Google's Duplex, are now able to pass the Turing test. Manufacturers can make use of machine learning to improve maintenance processes and enable them to make real-time, intelligent decisions based on data.

The rest is here:
What is the role of machine learning in industry? - Engineer Live

JG Wentworth Welcomes Andrey Zelenovsky as their Vice President of Artificial Intelligence and Machine Learning – PRNewswire

"We are thrilled to have Andrey's leadership and experience and believe he will be instrumental in continuing to expand the use of systems and technology within the company," said Ajai Nair, CIO. "His extensive background in application development and business robotic automation software brings a wealth of knowledge to the team that is necessary to accelerate a successful digital transformation, allowing us to faster determine measurable business benefits and better serve our customers."

Andrey joins the JG Wentworth team from UiPath, where he served as Director on their Competitive and Market Intelligence team. During his tenure at UiPath he utilized data mining techniques to analyze marketplaces, enable sales and predict cashflows.

"I am excited to join a market leader focused on helping customers improve their financial health. I look forward to this unique opportunity to be part of the evolution of JG Wentworth by leveraging AI and automation to positively impact our customers' lives," said Andrey.

Andrey earned his Bachelor of Science in both Information & Systems Engineering and Analytical Finance from Lehigh University, and holds a Master of Science from The George Washington University and a Master of Business Administration from New York University's Leonard N. Stern School of Business.

About JG Wentworth: JG Wentworth is a financial services company that focuses on helping customers who are experiencing financial hardship or need to quickly access cash. Its services include debt relief, structured settlement payment purchasing, annuity payment purchasing, and lottery and casino payment purchasing. JG Wentworth was founded in 1991 and currently has offices in Chesterbrook, Pennsylvania; Radnor, Pennsylvania; and Rockville, Maryland. For more information about JG Wentworth, visit http://www.jgwentworth.com or use the information provided below.

SOURCE The JG Wentworth Company

Read more:
JG Wentworth Welcomes Andrey Zelenovsky as their Vice President of Artificial Intelligence and Machine Learning - PRNewswire

CMSWire’s Top 10 AI and Machine Learning Articles of 2019 – CMSWire

Would you believe me if I told you artificial intelligence (AI) wrote this article?

With 2020 on the horizon, and with all the progress made in AI and machine learning (ML) already, it probably wouldn't surprise you if that were indeed the case, which is bad news for writers like me (or not).

As we transition into a new year, it's worth noting that 73% of global consumers say they are open to businesses using AI if it makes life easier, and 83% of businesses say that AI is a strategic priority for their businesses already. If that's not a recipe for even more progress in 2020 and beyond, then my name isn't CMSWire-Bot-927.

Today, we're looking back at the AI and ML articles which resonated with CMSWire's audience in 2019. Strap yourself in, because this list is about to blast you into the future.

ML and, more broadly, AI have become the tech industry's most important trends over the past 18 months. And despite the hype and, to some extent, fear surrounding the technology, many businesses are now embracing AI at an impressive speed.

Despite this progress, many of the pilot schemes are still highly experimental, and some organizations are struggling to understand how they can really embrace the technology.

As the business world grapples with the potential of AI and machine learning, new ethical challenges arise on a regular basis related to its use.

One area where tensions are being played out is in talent management: a struggle between relying on human expertise or in deferring decisions to machines so as to better understand employee needs, skills and career potential.

Marketing technology has evolved rapidly over the past decade, with one of the most exciting developments being the creation of publicly-available, cost-effective cognitive APIs by companies like Microsoft, IBM, Alphabet, Amazon and others. These APIs make it possible for businesses and organizations to tap into AI and ML technology for both customer-facing solutions as well as internal operations.

The workplace chatbots are coming! The workplace chatbots are coming!

OK, well, they're already here. And in a few years, there will be even more. According to Gartner, by 2021 the daily use of virtual assistants in the workplace will climb to 25%. That will be up from less than 2% this year. Gartner also identified a workplace chatbot landscape of more than 1,000 vendors, so choosing a workplace chatbot won't be easy. IT leaders need to determine the capabilities they need from such a platform in the short term and select a vendor on that basis, according to Gartner.

High-quality metadata plays an outsized role in improving enterprise search results. But convincing people to consistently apply quality metadata has been an uphill battle for most companies. One solution that has been around for a long time now is to automate metadata's creation, using rules-based content auto-classification products.

Although enterprise interest in bots seems to be at an all-time high, Gartner reports that 68% of customer service leaders believe bots and virtual assistants will become even more important in the next two years. As bots are called upon to perform a greater range of tasks, chatbots will increasingly rely on back-office bots to find information and complete transactions on behalf of customers.

If digital workplaces are being disrupted by the ongoing development of AI-driven apps, by 2021 those disruptors could in their turn end up being disrupted. The emergence of a new form of AI, or a second wave of AI, known as augmented AI, is so significant that Gartner predicts by 2021 it will be creating up to $2.9 trillion of business value and 6.2 billion hours of worker productivity globally.

AI and ML took center stage at IBM Think this year, and the show's major AI announcements served as a reminder that the company has some of the most differentiated and competitive services for implementing AI in enterprise operational processes in the market. But if Big Blue is to win the AI race against AWS, Microsoft and Google Cloud in 2019 and beyond, it must improve its developer strategy and strengthen its communications, especially in areas such as trusted AI and governance.

Sentiment analysis is the kind of tool a marketer dreams about. By gauging the public's opinion of an event or product through analysis of data on a scale no human could achieve, it gives your team the ability to figure out what people really think. Backed by a growing body of innovative research, sentiment-analysis tools have the ability to dramatically improve your ROI, yet many companies are overlooking it.

Pop quiz: Can you define the differences between AI and automation?

I won't judge you if the answer is no. There's a blurry line between AI and automation, with the terms often used interchangeably, even in tech-forward professions. But there's a very real difference between the two, and it's one that's becoming ever more critical for organizations to understand.

View post:

CMSWire's Top 10 AI and Machine Learning Articles of 2019 - CMSWire

What Is Machine Learning? | How It Works, Techniques …

Supervised Learning

Supervised machine learning builds a model that makes predictions based on evidence in the presence of uncertainty. A supervised learning algorithm takes a known set of input data and known responses to the data (output) and trains a model to generate reasonable predictions for the response to new data. Use supervised learning if you have known data for the output you are trying to predict.

Supervised learning uses classification and regression techniques to develop predictive models.

Classification techniques predict discrete responses, for example, whether an email is genuine or spam, or whether a tumor is cancerous or benign. Classification models classify input data into categories. Typical applications include medical imaging, speech recognition, and credit scoring.

Use classification if your data can be tagged, categorized, or separated into specific groups or classes. For example, applications for handwriting recognition use classification to recognize letters and numbers. In image processing and computer vision, supervised pattern recognition techniques are used for object detection and image segmentation.

Common algorithms for performing classification include support vector machine (SVM), boosted and bagged decision trees, k-nearest neighbor, Naïve Bayes, discriminant analysis, logistic regression, and neural networks.
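MathWorks' own examples are in MATLAB; for consistency with the rest of this round-up, here is a minimal classification sketch in Python using scikit-learn's SVM on a built-in labelled dataset. It illustrates the general idea rather than any particular product workflow.

```python
# Minimal classification sketch (scikit-learn rather than MATLAB): an SVM
# trained on a labelled benign-vs-malignant dataset bundled with the library.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```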

Regression techniques predict continuous responses, for example, changes in temperature or fluctuations in power demand. Typical applications include electricity load forecasting and algorithmic trading.

Use regression techniques if you are working with a data range or if the nature of your response is a real number, such as temperature or the time until failure for a piece of equipment.

Common regression algorithms include linear model, nonlinear model, regularization, stepwise regression, boosted and bagged decision trees, neural networks, and adaptive neuro-fuzzy learning.
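And a matching regression sketch, again in Python rather than MATLAB, predicting a continuous response such as electricity load from temperature with a simple linear model. The relationship and data below are synthetic.

```python
# Minimal regression sketch: a linear model predicting electricity load (MW)
# from temperature (deg C). The relationship and data below are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
temperature = rng.uniform(-5, 35, 200).reshape(-1, 1)
load = 900 - 8 * temperature.ravel() + rng.normal(0, 25, 200)

model = LinearRegression().fit(temperature, load)
print(model.predict([[20.0]]))   # forecast load for a 20-degree day
```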

Here is the original post:
What Is Machine Learning? | How It Works, Techniques ...

Call for netizens to demand scraped pics from Clearview, ML weather forecasts, and Star Trek goes high def with AI – The Register

Roundup Hello Reg readers. Here's a quick roundup of bits and pieces from the worlds of machine learning and AI.

Are you in Clearview's database? Probably: Folks covered by the EU's GDPR, the California Consumer Privacy Act, and similar laws, can ask Clearview, the controversial face-recognition startup that scraped three billion images of people from the internet, to reveal what images it may have of you in its database and delete them.

That's what Thomas Smith, co-founder and CEO of the computer-vision startup Gado Images, did for OneZero. As a resident of America's Golden State, Smith filled out a California Consumer Privacy Act (CCPA) form demanding Clearview send him the profile it had on him, so he could see what images the company had managed to scrape from the internet and where it got them from.

He had to provide Clearview with a picture of himself along with a copy of his driver's license. Clearview had collected 10 images of Smith; some were taken from social media, such as Facebook, but it also went as far as downloading snaps from his and his wife's personal blog and from a Python meetup group in San Francisco. One of the 10 images, however, looks like a case of mistaken identity.

The images in Smith's profile are accompanied by URLs pointing to where each photo was nabbed. By clicking through these links, a Clearview customer, typically the police, running a search using Smith's photo would be able to figure out personal details such as where he works, where he went to university, whom he's married to, and who some of his friends are. That means something like a still from CCTV could be used to pull up the entire life of the person pictured in the image.

The app has been served cease-and-desist letters from Google, YouTube, Twitter, and Facebook to stop lifting images from their platforms, and to delete any existing ones it has in its database.

If you want to get your data from Clearview, and are eligible under the CCPA or GDPR, Smith recommends emailing Clearview at privacy@clearview.ai to request your profile. "Follow any instructions you receive," he said.

Expect your request to take up to two months to process. Be persistent in following up. And remember that once you receive your data, you have the option to demand that Clearview delete it or amend it if you'd like them to do so.

But if you don't live in California or the European Union, or somewhere with similar laws, the best thing you can do to prevent startups like Clearview from snaffling your data is to make your social media profiles private. Don't post snaps of your mug anywhere on the internet where they are available for anyone to see.

This isn't totally avoidable, however. If your friends upload pictures of you, Clearview can still scrape them as long as they're public.

Hey AI, is it going to rain today? Training machine learning models to predict whether it's going to rain or not by looking at the movement of clouds gathered by weather stations or satellites is all the rage at the moment.

Researchers over at Google have developed MetNet, a deep neural network that can forecast where it's going to rain in the US up to eight hours before it happens. The team claims its system is more accurate at forecasting rain than the predictive tools employed by the National Oceanic and Atmospheric Administration (NOAA), the US federal scientific agency that monitors the weather, oceans, and atmosphere.

MetNet inspects data recorded by the radar stations in the Multi-Radar/Multi-Sensor System (MRMS) and by the Geostationary Operational Environmental Satellite system, both operated by NOAA. Top-down images of clouds and atmospheric measurements are given as inputs, and MetNet spits out a probability distribution of precipitation over an area spanning 64 square kilometers, covering the entire US at one-kilometer resolution.
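The article includes no code, but the data flow it describes (stacked radar and satellite channels in, a per-location probability distribution over precipitation out) can be sketched with a small convolutional network. This is a toy stand-in, not MetNet's actual architecture; every layer size, channel count, and bin count below is invented:

# Toy sketch of a precipitation-nowcasting network in the spirit of MetNet.
# Shapes and layer sizes are invented; this is NOT the real MetNet architecture.
import torch
import torch.nn as nn

class ToyNowcaster(nn.Module):
    def __init__(self, in_channels=8, rain_bins=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # one categorical distribution over precipitation-rate bins per grid cell
        self.head = nn.Conv2d(64, rain_bins, kernel_size=1)

    def forward(self, x):
        # x: (batch, channels, height, width) stack of radar/satellite inputs
        logits = self.head(self.features(x))
        return torch.softmax(logits, dim=1)  # per-pixel probability distribution

patch = torch.randn(1, 8, 64, 64)   # e.g. one 64 km x 64 km patch at 1 km resolution
probs = ToyNowcaster()(patch)
print(probs.shape)                   # (batch, rain_bins, height, width)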

There are advantages and disadvantages to using neural networks like MetNet to forecast the weather. Although machine learning models provide a cheap alternative to supercomputers, which have to carry out complex calculations, they are generally less accurate and don't deal well with freak weather events that they haven't been trained on.

"We are actively researching how to improve global weather forecasting, especially in regions where the impacts of rapid climate change are most profound," the researchers said.

"While we demonstrate the present MetNet model for the continental US, it could be extended to cover any region for which adequate radar and optical satellite data are available."

You can read more about how MetNet works here.

Star Trek Voyager and Deep Space Nine get an AI makeover: Here's something that will please Star Trek fans: you can now watch clips from Star Trek Voyager and Deep Space Nine in much better quality now that they've been revamped with the help of AI algorithms.

A YouTube user, going by the name Billy Reichard, has posted a series of videos for Trekkies to watch. Old clips taken from both TV series have been run through Gigapixel AI, a commercial AI tool developed by Topaz Labs, a computer vision company based in Texas, to increase their quality. This is necessary because, it appears, portions of the Voyager and DS9 archives are NTSC-grade and it would be too much faff to restore them in full high definition.

Reichard explained his work on Reddit's r/StarTrek group and compared the AI-generated quality to 4K. He said he planned to play around with the Gigapixel AI software more and would produce more Star Trek clips for people to enjoy.

Here's one from Voyager, and one from Deep Space Nine. Enjoy. [Embedded YouTube videos]


Go here to read the rest:
Call for netizens to demand scraped pics from Clearview, ML weather forecasts, and Star Trek goes high def with AI - The Register

The Top Machine Learning WR Prospect Will Surprise You – RotoExperts

What Can Machine Learning Tell Us About WR Prospects?

One of my favorite parts of draft season is trying to model the incoming prospects. This year, I wanted to try something new, so I dove into the world of machine learning models. Using machine learning to gauge the value of a WR prospect is very useful for dynasty fantasy football.

Machine learning leverages artificial intelligence to identify patterns in the data (that's the learning) and build an appropriate model. I took over 60 different variables for 366 receiving prospects from the 2004 through 2016 NFL Drafts and let the machine do its thing. As with any machine, some human intervention is necessary, and I fine-tuned everything down to a 24-model ensemble built upon different logistic regressions.
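The author doesn't publish his code or feature list, so the following is only a sketch of the general shape of such an ensemble: many logistic regressions, each fit slightly differently, averaged into one hit probability. The features, data, and bootstrap scheme below are placeholders, not the article's actual model:

# Toy sketch of an ensemble of logistic regressions producing a hit probability.
# Features, data, and resampling scheme are invented; not the author's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(366, 9))   # e.g. 366 prospects, 9 numeric components
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=366) > 1.0).astype(int)  # 1 = "hit"

models = []
for _ in range(24):             # 24-model ensemble, each fit on a bootstrap sample
    idx = rng.integers(0, len(X), size=len(X))
    models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

new_prospect = rng.normal(size=(1, 9))
hit_prob = np.mean([m.predict_proba(new_prospect)[0, 1] for m in models])
print(f"estimated probability of 200+ PPR points: {hit_prob:.1%}")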

Just like before, the model presents the likelihood of a WR hitting 200 or more PPR points in at least one of his first three seasons. Here are the nine different components featured, in order of significance:

This obviously represents a massive change from the original model, proving once again that machines are smarter than humans. I decided to move over to ESPN grades and ranks instead of NFL Draft Scout for a few reasons:

Those changes alone made strong improvements to the model, and it should be noted that the ESPN overall ranks have been very closely tied to actual NFL Draft position.

Having an idea of draft position will always help a model since draft position usually begets a bunch of opportunity at the NFL level.

Since the model is built on drafts up until 2016, I figured perhaps you'd want to see the results from the last three drafts before seeing the 2020 outputs.

It is encouraging to see some hits towards the top of the model, but there are obviously some misses as well. Your biggest takeaway here should be just how difficult it is to hit that 200-point threshold. Only two prospects in the last three years have even a 40% chance of success. The model is telling us not to be overconfident, and that is a good thing.

Now that you've already seen some results, here are the 2020 model outputs.

Tee Higgins as the top WR is likely surprising for a lot of people, but it shouldn't be. Higgins had a fantastic career at Clemson, arguably the best school in the country over the course of his career. He is a proven touchdown scorer and is just over 21 years old with a prototypical body type.

Nobody is surprised that the second WR on this list is from Alabama, but they are likely shocked to see that a data-based model has Henry Ruggs over Jerry Jeudy. The pair is honestly a lot closer than many people think in a lot of the peripheral statistics. The major edge for Ruggs comes on the ground: he had a 75-yard rushing touchdown, which really underlines his special athleticism and play-making ability.

The name that likely stands out the most is Geraud Sanders, who comes in ahead of Jerry Jeudy despite being a relative unknown out of Air Force. You can mentally bump him down a good bit. The academy schools are a bit of a glitch in the system, as their offensive approach usually yields some outrageous efficiency. Since 2015, 12 of the top 15 seasons in adjusted receiving yards per pass attempt came from either an academy school or Georgia Tech's triple-option attack. Sanders isn't a total zero; his profile looks very impressive, but I would have him closer to a 10% chance of success given his likely Day 3 or undrafted outcome in the NFL Draft.

Read more here:
The Top Machine Learning WR Prospect Will Surprise You - RotoExperts