Combating the coronavirus with Twitter, data mining, and machine learning – TechRepublic

Social media can send up an early warning sign of illness, and data analysis can predict how it will spread.

The novel coronavirus (2019-nCoV) outbreak is now an international public health emergency, already bigger than the SARS outbreak of 2003. Unlike with SARS, this time around scientists have better genome sequencing, machine learning, and predictive analysis tools to understand and monitor the outbreak.

During the SARS outbreak, it took scientists five months to sequence the virus's genome. By contrast, the first 2019-nCoV case was reported in December, and scientists had the genome sequenced by January 10, only about a month later.

Researchers have been using mapping tools to track the spread of disease for several years. Ten European countries started Influenza Net in 2003 to track flu symptoms as reported by individuals, and the American version, Flu Near You, started a similar service in 2011.

Lauren Gardner, a civil engineering professor at Johns Hopkins and the co-director of the Center for Systems Science and Engineering, led the effort to launch a real-time map of the spread of the 2019-nCoV. The site displays statistics about deaths and confirmed cases of coronavirus on a worldwide map.

Este Geraghty, MD, MS, MPH, GISP, chief medical officer and health solutions director at Esri, said that since the SARS outbreak in 2003 there has been a revolution in applied geography through web-based tools.

"Now as we deploy these tools to protect human lives, we can ingest real-time data and display results in interactive dashboards like the coronavirus dashboard built by Johns Hopkins University using ArcGIS," she said.


With this outbreak, scientists have a source of data that did not exist in 2003: Twitter and Facebook. In 2014, Chicago's Department of Innovation and Technology built an algorithm that used social media mining and illness-prediction technologies to target restaurant inspections. It worked: the algorithm found violations about 7.5 days before the normal inspection routine did.

Theresa Do, MPH, leader of the Federal Healthcare Advisory and Solutions team at SAS, said that social media can be used as an early indicator that something is going on.

"When you're thinking on a world stage, a lot of times they don't have a lot of these technological advances, but what they do have is cell phones, so they may be tweeting out 'My whole village is sick, something's going on here,' she said.

Do said an analysis of social media posts can be combined with other data sources to predict who is most likely to develop an illness like 2019-nCoV.

"You can use social media as a source but then validate it against other data sources," she said. "It's not always generalizable (is generalizable a word?), but it can be a sentinel source."

Do said predictive analytics has made significant advances since 2003, including refining the ability to combine multiple data sources. For example, algorithms can look at names on plane tickets and compare that information with data from other sources to predict who has been traveling to certain areas.

"Algorithms can allow you to say 'with some likelihood' it's likely to be the same person," she said.

The current challenge is identifying gaps in the data. Researchers, she said, have to balance the need for real-time data against privacy concerns.

"If you think about the different smartwatches that people wear, you can tell if people are active or not and use that as part of your model, but people aren't always willing to share that because then you can track where someone is at all times," she said.

Do said that the coronavirus outbreak resembles the SARS outbreak, but that governments are sharing data more openly this time.

"We may be getting a lot more positives than they're revealing and that plays a role in how we build the models," she said. "A country doesn't want to be looked at as having the most cases but that is how you save lives."


This map from Johns Hopkins shows reported cases of 2019-nCoV as of January 30, 2020 at 9:30 pm. The yellow line in the graph is cases outside of China while the orange line shows reported cases inside the country.

Image: 2019-nCoV Global Cases by Johns Hopkins Center for Systems Science and Engineering


The ML Times Is Growing: A Letter from the New Editor in Chief – Machine Learning Times

Dear Reader,

As of the beginning of January 2020, it's my great pleasure to join The Machine Learning Times as editor in chief! I've taken over the main editorial duties from Eric Siegel, who founded the ML Times (and also founded the Predictive Analytics World conference series). As you've likely noticed, we've renamed what until recently was The Predictive Analytics Times to The Machine Learning Times. In addition to a shiny new name, this rebranding corresponds with new efforts to expand and intensify our breadth of coverage. As editor in chief, I'm taking the lead in this growth initiative. We're growing the ML Times both quantitatively and qualitatively: more articles, more writers, and more topics. One particular area of focus will be to increase our coverage of deep learning.

And speaking of deep learning, please consider joining me at this summer's Deep Learning World 2020, May 31 to June 4 in Las Vegas, the co-located sister conference of Predictive Analytics World and part of Machine Learning Week. For the third year, I am chairing and moderating a broad-ranging lineup of the latest industry use cases and applications in deep learning. This year, DLW features a new track on large-scale deep learning deployment. You can view the full agenda here. In the coming months, the ML Times will be featuring interviews with the speakers, giving you sneak peeks into the upcoming conference presentations.

In addition to supporting the community in these two roles with the ML Times and Deep Learning World, I am a fellow analytics practitioner (yes, I practice what I preach!). To learn more about my work leading and executing on advanced data science projects for high-tech firms and major research universities in Silicon Valley, click here.

And finally, Attention All Writers: whether you've published with us in the past or are considering publishing for the very first time, we'd love to see original content submissions from you. Published articles gain strong exposure on our site, as well as within the monthly ML Times email. If you currently publish elsewhere, such as on a personal blog, consider publishing items as an article with us first, and then on your own blog two weeks thereafter (per our editorial guidelines). Doing so would give you the opportunity to gain our readers' eyes in addition to those you already reach.

I'm excited to lead the ML Times into a strong year. We've already got a good start, with more exciting original content lined up for this and coming months. Please feel free to reach out to me with any feedback on our published content or if you are interested in submitting articles for consideration. For general inquiries, see the contact information on our editorial page. And to reach me directly, connect with me on LinkedIn.

Thanks for reading!

Best Regards,

Luba Gloukhova
Editor in Chief, The Machine Learning Times
Founding Chair, Deep Learning World


The Human-Powered Companies That Make AI Work – Forbes

Machine learning models require human labor for data labeling

The hidden secret of artificial intelligence is that much of it is actually powered by humans. Well, to be specific, the supervised learning algorithms that have gained much of the attention recently depend on humans to provide well-labeled training data that can be used to train machine learning algorithms. Since machines can't (yet) teach themselves, they first have to be taught, and that training falls to humans. This is the secret Achilles' heel of AI: the need for humans to teach machines the things that they are not yet able to do on their own.

Machine learning is what powers today's AI systems. Organizations are implementing one or more of the seven patterns of AI, including computer vision, natural language processing, predictive analytics, autonomous systems, pattern and anomaly detection, goal-driven systems, and hyperpersonalization, across a wide range of applications. However, for these systems to create accurate generalizations, they must be trained on data. The more advanced forms of machine learning, especially deep learning neural networks, require significant volumes of data to build models with the desired levels of accuracy. It goes without saying, then, that machine learning data needs to be clean, accurate, complete, and well-labeled so the resulting models are accurate. "Garbage in, garbage out" has always been true in computing, and it is especially true of machine learning data.
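
To make the dependence on labels concrete, here is a minimal, hypothetical scikit-learn sketch; the toy texts and labels are invented, and the point is simply that the model only "knows" spam from non-spam because a human supplied those labels.

```python
# Toy supervised learning example: the labels are the human contribution.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "meeting moved to 3pm",
         "free money click here", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # human-provided labels: 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)                         # learn only from labeled examples
print(model.predict(["claim your free prize"]))  # likely [1]
```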

According to analyst firm Cognilytica, over 80% of AI project time is spent preparing and labeling data for use in machine learning projects:

Percentage of time allocated to machine learning tasks (Source: Cognilytica)

(Disclosure: I'm a principal analyst at Cognilytica)

Fully one quarter of this time is spent providing the necessary labels on data so that supervised machine learning approaches can actually achieve their learning objectives. Customers have the data, but they don't have the resources to label large data sets, nor do they have a mechanism to ensure accuracy and quality. Raw labor is easy to come by, but it's much harder to guarantee any level of quality from a random, mostly transient labor force. Third-party managed labeling providers address this gap by supplying the labor force to do the labeling, combined with expertise in large-scale data labeling efforts and an infrastructure for managing labeling workloads and achieving desired quality levels.

According to a recent report from research firm Cognilytica, over 35 companies are currently engaged in providing human labor to add labels and annotation to data to power supervised learning algorithms. Some of these firms use general, crowdsourced approaches to data labeling, while others bring their own, managed and trained labor pools that can address a wide range of general and domain-specific data labeling needs.

As detailed in the Cognilytica report, the tasks for data labeling and annotation depend heavily on the sort of data to be labeled for machine learning purposes and the specific learning task that is needed. The primary use cases for data labeling fall into a handful of major categories covered in the report.

These labeling tasks are getting increasingly complicated and domain-specific as machine learning models are developed for more specialized use cases. For example, innovative medical technology companies are building machine learning models that can identify all manner of concerns within medical images, such as clots, fractures, tumors, and obstructions. Building these models requires first training machine learning algorithms to identify those issues within images. Training the models requires lots of data labeled with the specific areas of concern. And producing those labels requires knowing how to identify a particular issue and how to label it appropriately. This is not a task for a random, off-the-street individual. It requires domain expertise.

Consequently, labeling firms have evolved to provide more domain-specific capabilities and have expanded the footprint of their offerings. As machine learning is applied to ever more specific areas, the need for this sort of domain-specific data labeling will only increase. According to the Cognilytica report, demand for data labeling services from third parties will grow from $1.7 billion in 2019 to over $4.1 billion by 2024. This is a significant market, much larger than most might be aware of.

Increasingly, machines are doing some of this data labeling work as well. Data labeling providers are applying machine learning to their own labeling efforts to perform some of the labeling, run quality-control checks on human labor, and optimize the labeling process. These firms use machine learning inference to identify data types, spot values that don't match the structure of a data column, flag potential data quality or formatting issues, and provide recommendations to users on how to clean the data. In this way, machine learning is helping the process of improving machine learning. AI applied to AI. Quite interesting.
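
As a rough illustration of the kind of automated checks described, here is a small, hypothetical pandas sketch; real providers use learned models and far richer rules, so treat this as a stand-in only.

```python
# Hypothetical data-quality checks: type mismatches and suspect labels.
import pandas as pd

df = pd.DataFrame({"age": ["34", "29", "unknown", "41"],
                   "label": ["cat", "dog", "dgo", "cat"]})

# Flag values that fail to parse as the column's expected numeric type.
bad_age = df[pd.to_numeric(df["age"], errors="coerce").isna()]

# Flag rare labels as possible annotation typos (a crude stand-in for a model).
counts = df["label"].value_counts()
suspect = df[df["label"].map(counts) == 1]

print(bad_age)   # the row with "unknown"
print(suspect)   # the row with the "dgo" typo
```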

For the foreseeable future, the need for human-based data labeling for machine learning will not diminish. If anything, the use of machine learning continues to grow into new domains that require new knowledge to be built and learned by systems. This in turn requires well-labeled data to learn in those new domains, and in turn, requires the services of the hidden army of human laborers making AI work as well as it does today.


3 books to get started on data science and machine learning – TechTalks

Image credit: Depositphotos

This post is part of AI education, a series of posts that review and explore educational content on data science and machine learning.

With data science and machine learning skills in high demand, there's increasing interest in careers in both fields. But with so many educational books, video tutorials, and online courses on data science and machine learning, finding the right starting point can be quite confusing.

Readers often ask me for advice on the best roadmap for becoming a data scientist. To be frank, there's no one-size-fits-all approach; it all depends on the skills you already have. In this post, I will review three very good introductory books on data science and machine learning.

Based on your background in math and programming, the two fundamental skills required for data science and machine learning, you'll surely find one of these books a good place to start.

Data scientists and machine learning engineers sit at the intersection of math and programming. To become a good data scientist, you don't need to be a crack coder who knows every single design pattern and code optimization technique. Neither do you need an MSc in math. But you must know just enough of both to get started. (You do need to up your skills in both fields as you climb the ladder of learning data science and machine learning.)

If you remember your high school mathematics, then you have a strong base to begin the data science journey. You dont necessarily need to recall every formula they taught you in school. But concepts of statistics and probability such as medians and means, standard deviations, and normal distributions are fundamental.

On the coding side, knowing the basics of popular programming languages (C/C++, Java, JavaScript, C#) should be enough. You should have a solid understanding of variables, functions, and program flow (if-else, loops) and a bit of object-oriented programming. Python knowledge is a strong plus for a few reasons: First, most data science books and courses use Python as their language of choice. Second, the most popular data science and machine learning libraries are available for Python. And finally, Python's syntax and coding conventions are different from those of other languages such as C and Java. Getting used to it takes a bit of practice, especially if you're used to coding with curly brackets and semicolons.

Written by Sinan Ozdemir, Principles of Data Science is one of the best intros to data science that I've read. The book keeps the right balance between math and coding, theory and practice.

Using examples, Ozdemir takes you through the fundamental concepts of data science, such as the different types of data and the stages of data science. You will learn what it means to clean your data, normalize it, and split it between training and test datasets.

The book also contains a refresher on basic mathematical concepts such as vector math, matrices, logarithms, Bayesian statistics, and more. Every mathematical concept is interspersed with coding examples and introductions to relevant Python data science libraries for analyzing and visualizing data. But you have to bring your own Python skills: the book doesn't have a Python crash course or an introductory chapter on the language.

What makes the learning curve of this book especially smooth is that it doesn't go too deep into the theory. It gives you just enough knowledge to make optimal use of Python libraries such as Pandas and NumPy, and classes such as DataFrame and LinearRegression.
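
For readers who haven't met those names, here is a minimal sketch of that Pandas-plus-scikit-learn workflow; the data is invented for illustration and is not an example from the book.

```python
# Tiny end-to-end example: DataFrame in, fitted LinearRegression out.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame({"hours_studied": [1, 2, 3, 4, 5, 6],
                   "exam_score": [52, 55, 61, 64, 70, 74]})

X_train, X_test, y_train, y_test = train_test_split(
    df[["hours_studied"]], df["exam_score"], test_size=0.33, random_state=0)

model = LinearRegression().fit(X_train, y_train)  # fit on the training split
print(model.score(X_test, y_test))                # R^2 on the held-out split
```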

Granted, this is not a deep dive. If you're the kind of person who wants to get to the bottom of every data science and machine learning concept and learn the logic behind every library and function, Principles of Data Science will leave you a bit disappointed.

But again, as I mentioned, this is an intro, not a book that will take you to career level in data science. It's meant to familiarize you with what this growing field is, and it does a great job of that, bringing together all the important aspects of a complex field in less than 400 pages.

At the end of the book, Ozdemir introduces you to machine learning concepts. Compared to other data science textbooks, this section of Principles of Data Science falls a bit short, both in theory and practice. The basics are there, such as the difference between supervised and unsupervised learning, but I would have liked a bit more detail on how different models work.

The book does give you a taste of different ML algorithms such as regression models, decision trees, K-means, and more advanced topics such as ensemble techniques and neural networks. The coverage is enough to whet your appetite to learn more about machine learning.

As the name suggests, Data Science from Scratch takes you through data science from the ground up. The author, Joel Grus, does a great job of showing you all the nitty-gritty details of coding data science. And the book has plenty of examples and exercises to go with the theory.

The book provides a Python crash course, which is good for programmers who know another programming language well but don't have any background in Python. What's really good about Grus's intro to Python is that, aside from the very basic stuff, he takes you through some of the advanced features for handling arrays and matrices that you won't find in general Python tutorial textbooks, such as list comprehensions, assertions, iterables and generators, and other very useful tools.
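
For anyone who hasn't met those features, here are standalone snippets of a list comprehension, an assertion, and generators; these are my own examples, not code from the book.

```python
squares = [x * x for x in range(5)]          # list comprehension: [0, 1, 4, 9, 16]

def countdown(n):
    """A generator: yields values lazily instead of building a full list."""
    while n > 0:
        yield n
        n -= 1

assert list(countdown(3)) == [3, 2, 1]       # assertions document expectations
evens = (x for x in squares if x % 2 == 0)   # generator expression (an iterable)
print(list(evens))                           # [0, 4, 16]
```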

Moreover, the second edition of Data Science from Scratch, published in 2019, leverages some of the newer features of Python 3.6, including type annotations (which you'll love if you come from a strongly typed language like C++).

What makes Data Science from Scratch a bit different from other data science textbooks is the way it does everything from scratch. Instead of introducing you to NumPy and Pandas functions that will calculate coefficients and, say, mean absolute error (MAE) and mean squared error (MSE), Grus shows you how to code them yourself.
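
To give a flavor of that from-scratch style, here is my own short sketch of MAE and MSE in plain Python; it is written in the spirit of the book, not copied from it.

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between predictions and targets."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_squared_error(y_true, y_pred):
    """Average squared difference between predictions and targets."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 2.5]
y_pred = [2.5, 5.0, 4.0]
print(mean_absolute_error(y_true, y_pred))  # 0.666...
print(mean_squared_error(y_true, y_pred))   # 0.833...
```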

He does, of course, remind you that the book's sample code is meant for practice and education and will not match the speed and efficiency of professional libraries. At the end of each chapter, he provides references to documentation and tutorials for the Python libraries that correspond to the topic you have just learned. But the from-scratch approach is fun nonetheless, especially if you're one of those I-have-to-know-what-goes-on-under-the-hood types.

One thing you'll have to consider before diving into this book: you'll need to bring your math skills with you. In the book, Grus codes fundamental math functions, starting from simple vector math and moving to more advanced statistical concepts such as standard deviations, errors, and gradient descent. However, he assumes that you already know how the math works. I guess it's okay if you're fine with just copy-pasting the code and seeing it work. But if you've picked up this book because you want to make sense of everything, then have your calculus textbook handy.
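
As a taste of what that math looks like in code, here is a tiny, self-contained gradient descent example of my own; the data and learning rate are arbitrary, and it fits a single-parameter model.

```python
def gradient_descent(xs, ys, lr=0.01, steps=1000):
    """Fit y = w * x by minimizing mean squared error."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # d/dw of (1/n) * sum((w*x - y)^2) is (2/n) * sum((w*x - y) * x)
        grad = (2 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # step against the gradient
    return w

print(gradient_descent([1, 2, 3], [2, 4, 6]))  # converges toward 2.0
```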

After the basics, Data Science from Scratch goes into machine learning, covering various algorithms, including the different flavors of regression models and decision trees. You also get to delve into the basics of neural networks followed by a chapter on deep learning and an introduction to natural language processing.

In short, I would describe Data Science with Python as a fully hands-on introduction to data science and machine learning. It's the most practice-driven book on data science and machine learning that I've read. The authors have done a great job of bringing together the right data samples and practice code to get you acquainted with the principles of data science and machine learning.

The book contains minimal theoretical content and mostly teaches you by taking you through coding labs. If you have a decent computer and an installation of Anaconda or another Python distribution that comes bundled with Jupyter Notebooks, you can probably go through all the exercises with minimal effort. I highly recommend writing the code yourself and avoiding copy-pasting it from the book or sample files, since the entire goal of the book is to learn through practice.

You'll find no Python intro here. You'll dive straight into NumPy, Pandas, and scikit-learn. There's also no deep dive into mathematical concepts such as correlations, error calculations, z-scores, etc., so you'll need to get help from your math book whenever you need a refresher on any of the topics.

Alternatively, you can just type in the code and see Python's libraries work their magic. Data Science with Python does a decent job of showing you how to put together the right pieces for any data science and machine learning project.

Data Science with Python provides a solid intro to data preparation and visualization, and then takes you through a rich assortment of machine learning algorithms as well as deep learning. There are plenty of good examples and templates you can use for other projects. The book also gives an intro to XGBoost, a very useful gradient-boosting library, and the Keras neural network library. You'll also get to fiddle around with convolutional neural networks (CNNs), the cornerstone of current advances in computer vision.

Before starting this book, I strongly recommend that you go through a gentler introductory book that covers more theory, such as Ozdemir's Principles of Data Science. It will make the ride less confusing. The combination of the two will leave you with a very strong foundation to tackle more advanced topics.

These are just three of the many data science books out there. If you've read other awesome books on the topic, please share your experience in the comments section. There are also plenty of great interactive online courses, like Udemy's Machine Learning A-Z: Hands-On Python & R In Data Science (I will be reviewing this one in the coming weeks).

While an intro to data science will give you a good foothold in the world of machine learning and the broader field of artificial intelligence, there's a lot of room for expanding that knowledge.

To build on this foundation, you can take a deeper dive into machine learning. There are plenty of good books and courses out there. One of my favorites is Aurélien Géron's Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow (also scheduled for review in the coming months). You can also go deeper into one of the sub-disciplines of ML and deep learning, such as CNNs, NLP, or reinforcement learning.

Artificial intelligence is complicated, confusing, and exciting at the same time. The best way to understand it is to never stop learning.


New Project at Jefferson Lab Aims to Use Machine Learning to Improve Up-Time of Particle Accelerators – HPCwire

NEWPORT NEWS, Va., Jan. 30, 2020 – More than 1,600 nuclear physicists worldwide depend on the Continuous Electron Beam Accelerator Facility for their research. Located at the Department of Energy's Thomas Jefferson National Accelerator Facility in Newport News, Va., CEBAF is a DOE User Facility that is scheduled to conduct research for limited periods each year, so it must perform at its best during each scheduled run.

But glitches in any one of CEBAF's tens of thousands of components can cause the particle accelerator to temporarily fault and interrupt beam delivery, sometimes for mere seconds but other times for many hours. Now, accelerator scientists are turning to machine learning in hopes that they can more quickly recover CEBAF from faults and one day even prevent them.

Anna Shabalina is a Jefferson Lab staff member and principal investigator on the project, which has been funded by the Laboratory Directed Research & Development program for fiscal year 2020. The program provides the resources for Jefferson Lab personnel to make rapid and significant contributions to critical science and technology problems of mission relevance to the lab and the DOE.

Shabalina says her team is specifically concerned with the types of faults that most often bring CEBAF grinding to a halt: those that concern the superconducting radiofrequency acceleration cavities.

"Machine learning is quickly gaining popularity, particularly for optimizing, automating and speeding up data analysis," Shabalina says. "This is exactly what is needed to reduce the workload for SRF cavity fault classification."

SRF cavities are the backbone of CEBAF. They configure electromagnetic fields to add energy to the electrons as they travel through the CEBAF accelerator. If an SRF cavity faults, the cavity is turned off, disrupting the electron beam and potentially requiring a reconfiguration that limits the energy of the electrons that are being accelerated for experiments.

Shabalina and her team plan to use a recently deployed data acquisition system that records data from individual cavities. The system records 17 parameters from a cavity that faults; it also records the 17 parameters from a cavity if one of its near neighbors faults.

At present, system experts visually inspect each data set by hand to identify the type of fault and which component caused it. That information is a valuable tool that helps CEBAF operators determine how to mitigate the fault.

"Each cavity fault leaves a unique signature in the data," Shabalina says. "Machine learning is particularly well suited for finding patterns, even in noisy data."

The team plans to build on this strength of machine learning to develop a model that recognizes the various types of faults. When shown enough input signals and corresponding fault types, the model is expected to be able to identify the fault patterns in CEBAF's complex signals. The next step would then be to run the model during CEBAF operations so that it can classify, in real time, the different kinds of faults that cause the machine to automatically trip off.
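
As a rough illustration of this kind of supervised fault classifier, here is a hypothetical sketch; the feature count matches the 17 recorded parameters mentioned above, but the data, fault labels, and model choice are invented and are not Jefferson Lab's actual approach.

```python
# Hypothetical fault classifier over 17-parameter cavity signatures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 17))      # stand-in for 17 recorded cavity parameters
y = rng.integers(0, 3, size=500)    # stand-in for three fault types

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# In operation, a fresh fault's 17-parameter signature could be classified
# in real time, pointing operators at the likely cause:
new_fault = rng.normal(size=(1, 17))
print(clf.predict(new_fault))
```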

"We plan to develop machine learning models to identify the type of the fault and the cavity causing instability. This will give operators the ability to apply pointed measures to quickly bring the cavities back online for researchers," Shabalina explains.

If successful, the project would also open the possibility of extending the model to identify precursors to cavity trips, so that operators would have an early warning system of possible faults and can take action to prevent them from ever occurring.

About Jefferson Science Associates, LLC

Jefferson Science Associates, LLC, a joint venture of the Southeastern Universities Research Association, Inc. and PAE, manages and operates the Thomas Jefferson National Accelerator Facility, or Jefferson Lab, for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Source: Thomas Jefferson National Accelerator Facility (Jefferson Lab)


Iguazio Deployed by Payoneer to Prevent Fraud with Real-time Machine Learning – Yahoo Finance

Payoneer uses Iguazio to move from detection to prevention of fraud with predictive machine learning models served in real-time.

Iguazio, the data science platform for real-time machine learning applications, today announced that Payoneer, the digital payment platform empowering businesses around the world to grow globally, has selected Iguazio's platform to provide its 4 million customers with a safer payment experience. By deploying Iguazio, Payoneer moved from a reactive fraud detection method to proactive prevention with real-time machine learning and predictive analytics.

Payoneer tackles the challenge of detecting fraud within complex networks with sophisticated algorithms that track multiple parameters, including account creation times and name changes. Prior to using Iguazio, however, fraud was detected retroactively, so users could only be blocked after the damage had already been done. Payoneer is now able to take the same sophisticated machine learning models built offline and serve them in real time against fresh data. This enables immediate prevention of fraud and money laundering, with predictive machine learning models continuously identifying suspicious patterns. The cooperation was facilitated by Belocal, a leading data and IT solution integrator for mid-size and enterprise companies.

"Weve tackled one of our most elusive challenges with real-time predictive models, making fraud attacks almost impossible on Payoneer" noted Yaron Weiss, VP Corporate Security and Global IT Operations (CISO) at Payoneer. "With Iguazios Data Science Platform, we built a scalable and reliable system which adapts to new threats and enables us to prevent fraud with minimum false positives".

"Payoneer is leading innovation in the industry of digital payments and we are proud to be a part of it" said Asaf Somekh, CEO, Iguazio. "Were glad to see Payoneer accelerating its ability to develop new machine learning based services, increasing the impact of data science on the business."

"Payoneer and Iguazio are a great example of technology innovation applied in real-world use-cases and addressing real market gaps" said Hugo Georlette, CEO, Belocal. "We are eager to continue selling and implementing Iguazios Data Science Platform to make business impact across multiple industries."

Iguazio's Data Science Platform enables Payoneer to bring its most intelligent data science strategies to life. Designed to provide a simple cloud experience deployed anywhere, it includes a low-latency serverless framework, a real-time multi-model data engine, and a modern Python ecosystem running over Kubernetes.

Earlier today, Iguazio also announced having raised $24M from existing and new investors, including Samsung SDS and Kensington Capital Partners. The new funding will be used to drive future product innovation and support global expansion into new and existing markets.

About Iguazio

The Iguazio Data Science Platform enables enterprises to develop, deploy and manage AI applications at scale. With Iguazio, companies can run AI models in real time and deploy them anywhere (multi-cloud, on-prem or edge), bringing to life their most ambitious data-driven strategies. Enterprises spanning a wide range of verticals, including financial services, manufacturing, telecoms and gaming, use Iguazio to create business impact through a multitude of real-time use cases. Iguazio is backed by top financial and strategic investors including Samsung, Verizon, Bosch, CME Group, and Dell. The company is led by serial entrepreneurs and a diverse team of innovators in the USA, UK, Singapore and Israel. Find out more at http://www.iguazio.com

About Belocal

Since its inception in 2006, Belocal has experienced consistent and sustainable growth by developing strong long-term relationships with its technology partners and by providing tremendous value to its clients. We pride ourselves on delivering the most innovative technology solutions, enabling our customers to lead their market segments and stay ahead of the competition. At Belocal, we pride ourselves on our ability to listen, our attention to detail and our expertise in innovation. These strengths have enabled us to develop new solutions and services to suit the changing needs of our clients, and to acquire new business by tailoring all our solutions and services to the specific needs of each client.


Contacts

Iguazio Media Contact: Sahar Dolev-Blitental, +972.73.321.0401, press@iguazio.com


AI snatches jobs from DJs and warehouse workers, plus OpenAI and PyTorch sittin’ in a tree, AI, AI, AI for you and me – The Register

Roundup: Let's catch you up on the latest goings-on in the world of AI beyond what we've already written about.

Hurrah for reinforcement learning! A machine learning robot trained with reinforcement learning algorithms and armed with three suction cups has been sorting through light sockets and switches in a German warehouse, proof that AI robots are now finally useful enough for real-world warehouse applications.

Reinforcement learning, a technique used to train bots to perform a specific task through trial and error, was all the rage in the machine learning world. In 2016, DeepMind's agent AlphaGo beat Lee Sedol, one of the best professional Go players, using a combination of advanced search and reinforcement learning.

Splashy headlines described other games that AI had managed to master: chess, Ms. Pac-Man, Montezuma's Revenge, Dota 2; the list goes on. At first, it was pretty exciting, but it started to get a little disenchanting when it was just bots playing video games.

Getting physical AI robots to navigate the real world, however, seemed out of reach. The real world was too messy; robots would be thrown off by tiny details in their environment, like lighting conditions.

It took considerable effort to bridge the simulation-to-reality gap. Folks over at OpenAI managed it, but even then it was for simple tasks that weren't very useful, like manipulating a Rubik's Cube.

Now, finally, a real, commercial robot trained in simulation using reinforcement learning really is practical. Engineers over at Covariant AI have successfully developed a robot arm capable of sorting through equipment for Knapp, a German logistics and production company, faster than its human workers.

Dirk Jandura, the managing director of the Obeta warehouse outside Berlin, told The New York Times that its latest all-metal employee doesn't smoke, is always in good health, isn't chatting with its neighbors, and takes no toilet breaks. Not only is it more efficient, he gushed, it's also much cheaper than the company's all-meat employees.

Knapp isn't about to fire all of its human staff, however; they're still much smarter and better than Covariant AI's robot at jobs beyond the sorting tasks, for the moment at least.

Radio broadcasting giant fires hundreds of staff in favor of machines: Here's another story about AI software taking people's jobs: iHeartMedia, the radio broadcasting conglomerate that owns the popular station iHeartRadio, announced it was laying off hundreds of employees in an attempt to take advantage of... AI.

The restructuring will see AI technology being used for every market, apparently. iHeartMedia listed programming, marketing, digital, podcasts, sales and sales support as some of the main areas it hoped to modernize.

"iHeart is the rare example of a major traditional media company that has made the successful transformation into a 21st century media company, one with unparalleled scale, reaching 91 per cent of Americans each month with our broadcast assets alone, more than any other media company," Bob Pittman, chairman and CEO of iHeartMedia, declared in a statement earlier this month.

"We are now using our considerable investments in technology to modernize our operations and infrastructure, further setting us apart from traditional media companies; improving our services to our consumers and advertising partners; and enhancing the work environment for our employees."

What's not mentioned in the announcement, however, is that iHeartMedia's effort to revamp itself comes at the cost of hundreds of employees' jobs. The cuts have been described as a bloodbath, according to The Washington Post.

Ex-staff said iHeartMedia has been playing around with AI software capable of mixing music, a job traditionally left for DJs.

"Theyve decided to replace a lot of workers, a lot of live shows, with AI and another DJ in another state, another city, not in Fresno, dont know nothing about Fresno, Monisha Mann, a former DJ at B95, an hip-hop radio station owned by iHeartMedia, said during a live video feed on Instagram and Facebook. I just sat in my car, like, damn. This is it for me. I just got laid off from something I loved so much.

OpenAI switches to PyTorch: OpenAI has chosen to build all its deep learning models in PyTorch, the popular framework developed by Facebook.

The Python-based library is arguably much more flexible to use and deploy than other frameworks like Google's TensorFlow. Both are big bonuses in research, where ease and speed are more important than code optimized for production.

"The main reason we chose PyTorch is to increase our research productivity at scale on GPUs," the San Francisco research lab said this week.

"It is very easy to try and execute new research ideas in PyTorch; for example, switching to PyTorch decreased our iteration time on research ideas in generative modeling from weeks to days. We're also excited to be joining a rapidly-growing developer community, including organizations like Facebook and Microsoft, in pushing scale and performance on GPUs."
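
For readers unfamiliar with PyTorch, here is a minimal training step showing the define-by-run style behind that kind of fast iteration; it is a generic toy example, not OpenAI code.

```python
import torch

model = torch.nn.Linear(10, 1)                # a tiny model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)                       # a batch of 32 examples
y = torch.randn(32, 1)

pred = model(x)                               # forward pass builds the graph on the fly
loss = torch.nn.functional.mse_loss(pred, y)
opt.zero_grad()
loss.backward()                               # autograd computes gradients
opt.step()                                    # update parameters
print(loss.item())
```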



'Made in India' phone, artificial intelligence, machine learning and all that Budget 2020 has for the technology sector – Gadgets Now

GadgetsNow & Agencies | Feb 1, 2020, 05:28 PM IST


The Union Budget 2020-21 talks about several policies to leverage technology to boost economic growth in India going forward. Finance minister Nirmala Sitharaman proposed steps to allow private players to build data centre parks, along with several measures to help startups. Here's everything about tech that the FM announced in Budget 2020.

New scheme to make mobile phones, electronics in India

The government will introduce a new scheme to encourage domestic manufacturing of mobile phones, electronic equipment and semiconductor packaging in order to make India a part of the global manufacturing chain and boost employment opportunities.

New policy to enable the private sector to build data centre parks throughout the country

Fibre to the Home (FTTH) connections through Bharatnet to link 100,000 Gram Panchayats by 2020

Finance minister Nirmala Sitharaman has proposed Rs 6,000 crore for the Bharatnet programme in 2020-21

For startups, a digital platform to be promoted to facilitate seamless application and capture of IPRs

Knowledge Translation Clusters to be set up across different technology sectors, including new and emerging areas, to help startups

This budget has announced steps to help startups build manufacturing facilities

For designing, fabrication and validation of proof of concept, and for further scaling up, Technology Clusters harbouring test beds and small-scale manufacturing facilities are to be established.

Two new national-level science schemes to be initiated to create a comprehensive database to map India's genetic landscape

Early-life funding proposed, including a seed fund to support ideation and development of early-stage startups

Budget 2020 proposes Rs 8,000 crore over five years for the National Mission on Quantum Technologies and Applications

NABARD to map and geo-tag agri-warehouses, cold storages, reefer van facilities, etc.

Targeting diseases with an appropriately designed preventive regime using machine learning and AI

Up to 1-year internships for fresh engineers to be provided by Urban Local Bodies

150 higher educational institutions to start apprenticeship-embedded degree/diploma courses by March 2021

Financing on Negotiable Warehousing Receipts (e-NWR) to be integrated with e-NAM.

The Budget 2020 proposes that financing on negotiable warehousing receipts (e-NWR) be integrated with e-NAM. This is likely to help facilitate seamless trading of agricultural products across the country and help realize the full potential of the national agriculture market (e-NAM).


CyberMAK Partners With Kore.ai Offering Conversational AI-powered Chatbots for Digital Transformation – AiThority

Through this partnership, CyberMAK, along with Kore.ai, will be providing an enterprise-grade, end-to-end conversational AI platform that can be deployed on-premises or in the cloud, enabling companies to rapidly and easily build and deploy advanced chatbots.

Kore.ai offers an all-in-one conversational AI platform (as a service) that allows enterprises to build and deploy out-of-the-box or customized chatbots/virtual assistants for customers and the workforce. It combines natural language processing, machine learning, and AI into enterprise-wide collaboration and automation through conversational interfaces, supporting the growing mandate for digital transformation. Kore.ai's platform has a multi-pronged NLP engine that supports 30+ channels, makes websites and mobile apps more human-like, integrates with multiple enterprise backend applications, and helps global enterprises across multiple verticals leverage the power of conversational AI without compromising privacy, security or compliance. These bots make digital interactions faster and more human, triggering conditional flows and steering user-bot conversations with sentiment analysis and tone processing. Bots built with Kore.ai analyze the emotional state of users and callers and adapt their responses to provide excellent service.
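
As a rough, generic illustration of sentiment-steered dialog (not the Kore.ai platform's actual API), here is a sketch using NLTK's VADER analyzer; the threshold and routing labels are invented.

```python
# Generic sentiment routing sketch; requires: pip install nltk, then running
# nltk.download("vader_lexicon") once before first use.
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

def route_reply(user_message: str) -> str:
    """Steer the conversation based on message sentiment."""
    score = sia.polarity_scores(user_message)["compound"]  # -1 (neg) .. +1 (pos)
    if score < -0.3:
        return "escalate_to_human"   # frustrated user: hand off to an agent
    return "continue_bot_flow"       # neutral/positive: stay in the bot flow

print(route_reply("This is the third time my payment failed!"))
```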


In addition to its market-leading conversational AI platform, Orlando-based Kore.ai brings targeted conversational AI solutions to cater to exploding demand in a number of areas.

In its article on chatbots, Gartner states that by 2022, 70% of white-collar workers will interact with conversational platforms on a daily basis. "There has been more than a 160% increase in client interest around implementing chatbots and associated technologies in 2018 from previous years," says Van Baker, VP Analyst at Gartner. "This increase has been driven by customer service, knowledge management and user support."

"The partnership with Kore.ai is strategic to our portfolio and will further augment CyberMAK's leadership and expertise in IT Service Management, Customer Care and Service Quality with BMC Software, and Robotic Process Automation with Automation Anywhere, and brings immense value and competitive advantage to our customers in several industry verticals," said Colin Miranda, President & CEO, CyberMAK Information Systems.


CyberMAK and Kore.ai will enable clients to leverage the power of AI chatbots by providing a platform to connect with customers and have valuable conversations, providing solutions to their problems. This not only improves customer experience but also allows customers to efficiently obtain the desired information in real time, leading to customer loyalty and retention. The AI+ML+NLP-based chat platform will automatically facilitate smarter conversations between the client and its stakeholders.

"With this partnership, we believe we will jointly deliver exceptional value for global enterprises: complementing CyberMAK's robust IT service delivery with Kore's end-to-end and comprehensive conversational AI technology capabilities," said Raj Koneru, Founder and CEO of Kore.ai. "CyberMAK's expertise in helping large organizations realize the fruits of digital transformation through increased customer satisfaction and enhanced efficiency is aligned to our own philosophy of creating intelligent enterprises through conversational AI."



Itiviti Partners With AI Innovator Imandra to Integrate Machine Learning Into Client Onboarding and Testing Tools – PRNewswire

NEW YORK, Jan. 30, 2020 /PRNewswire/ -- Itiviti, a leading technology and service provider to financial institutions worldwide, has signed an exclusive partnership agreement with Imandra Inc., the AI pioneer behind the Imandra automated reasoning engine.

Imandra's technology will initially be applied to improving the onboarding process for our clients to Itiviti's Managed FIX global connectivity platform, with further plans to swiftly expand the AI capabilities across a number of our software solutions and services.

Imandra is the world leader in cloud-scale automated reasoning and has pioneered scalable symbolic AI for financial algorithms. Imandra's technology brings deep advances relied upon in safety-critical industries such as avionics and autonomous vehicles to the financial markets. Imandra is relied upon by top investment banks for the design, testing and governance of highly regulated trading systems. In 2019, the company expanded outside financial services and is currently under contract with the US Department of Defense for applications of Imandra to safety-critical algorithms.

"Partnerships are integral to Itiviti's overall strategy, by partnering with cutting edge companies like Imandra we can remain at the forefront of technology innovation and continue to develop quality solutions to support our clients. Generally, client onboarding has been a neglected area within the industry for many years, but we believe working with Imandra we can raise the level of automation for testing and QA, while significantly reducing onboarding bottlenecks for our clients. Other areas we are actively exploring to benefit from AI are within the Compliance and Analytics space. We are very excited to be working with Imandra." said Linda Middleditch, EVP, Head of Product Strategy, Itiviti Group.

"This partnership will capture the tremendous opportunities within financial markets for removing manual work and applying much-needed rigorous scientific techniques toward testing of safety critical infrastructure," said Denis Ignatovich, co-founder and co-CEO of Imandra. "We look forward to helping Itiviti empower clients to take full advantage of their solutions, while adding key capabilities." Dr Grant Passmore, co-founder and co-CEO of Imandra, further added, "This partnership is the culmination of many years of deep R&D and we're thrilled to partner with Itiviti to bring our technology to global financial markets on a massive scale."

About Itiviti

Itiviti enables financial institutions worldwide to transform their trading and capture tomorrow. With innovative technology, deep expertise and a dedication to service, we help customers seize market opportunities and guide them through regulatory change.

Top-tier banks, brokers, trading firms and institutional investors rely on Itiviti's solutions to service their clients, connect to markets, trade smarter in all asset classes by consolidating trading platforms and leverage automation to move faster.

A global technology and service provider, we offer the most innovative, consistent, and reliable connectivity and trading solutions available.

With presence in all major financial centres and serving around 2,000 clients in over 50 countries, Itiviti delivers on a global scale.

For more information, please visit www.itiviti.com.

Itiviti is owned by Nordic Capital.

About Imandra

Imandra Inc. (www.imandra.ai) is the world leader in cloud-scale automated reasoning, democratizing deep advances in algorithm analysis and symbolic AI to make algorithms safe, explainable and fair. Imandra has been deep in R&D and industrial pilots over the past five years and recently closed its $5M seed round, led by several top deep-tech investors in the US and UK. Imandra is headquartered in Austin, TX, and has offices in the UK and continental Europe.

For further information, please contact:

Itiviti

Linda Middleditch, EVP, Head of Product Strategy
Tel: +44 796 82 126 24
Email: linda.middleditch@itiviti.com

George Rosenberger, Head of Product Strategy, Client Connectivity Service
Tel: +
Email: george.rosenberger@itiviti.com

Christine Blinke, EVP, Head of Marketing & Communications
Tel: +46 739 01 02 01
Email: christine.blinke@itiviti.com

Imandra

Denis Ignatovich, co-CEO
Tel: +44 20 3773 6225
Email: denis@imandra.ai

Grant Passmore, co-CEO
Tel: +1 512 629 4038
Email: grant@imandra.ai



SOURCE Itiviti Group AB
