Artificial intelligence for fraud detection is bound to save billions – ZME Science

Fraud mitigation is one of the most sought-after artificial intelligence (AI) services because it can provide an immediate return on investment. Already, many companies are reaping substantial returns thanks to AI and machine learning (ML) systems that detect and prevent fraud in real time.

According to a new report, Highmark Inc.'s Financial Investigations and Provider Review (FIPR) department generated $260 million in savings in 2019 that would have otherwise been lost to fraud, waste, and abuse. Over the last five years, the company has saved $850 million.

"We know the overwhelming majority of providers do the right thing. But we also know year after year millions of health care dollars are lost to fraud, waste and abuse," said Melissa Anderson, executive vice president and chief audit and compliance officer at Highmark Health. "By using technology and working with other Blue Plans and law enforcement, we have continually evolved our processes and are proud to be among the best nationally."

FIPR detects fraud across its clients' services with the help of an internal team made up of investigators, accountants, and programmers, as well as seasoned professionals with an eye for unusual activity, such as registered nurses and former law enforcement agents. Human audits performed to detect unusual claims and assess the appropriateness of provider payments are used as training data for AI systems, which can adapt and react more rapidly to suspicious changes in consumer behavior.

As fraudulent actors have become increasingly aggressive and cunning with their tactics, organizations are looking to AI to mitigate rising threats.

"We know it is much easier to stop these bad actors before the money goes out the door than to pay and have to chase them," said Kurt Spear, vice president of financial investigations at Highmark Inc.

Elsewhere, Teradata, an AI firm that sells fraud detection solutions to banks, claims in a case study that it helped Danske Bank reduce its false positives by 60% and increase real fraud detection by 50%.

Other service operators are looking to AI fraud detection with a keen eye, especially in the health care sector. A recent survey performed by Optum found that 43% of health industry leaders said they strongly agree that AI will become an integral part of detecting telehealth fraud, waste, or abuse in reimbursement.

In fact, AI spending is growing tremendously, with total operating spending set to reach $15 billion by 2024, the most sought-after solutions being network optimization and fraud mitigation. According to the Association of Certified Fraud Examiners (ACFE) inaugural Anti-Fraud Technology Benchmarking Report, the amount organizations spend on AI and machine learning to reduce online fraud is expected to triple by 2021.

Mitigating fraud in healthcare would be a boon for an industry that is plagued with many structural inefficiencies.

The United States spends about $3.5 trillion on healthcare-related services every year. This staggering sum corresponds to about 18% of the country's GDP and is more than twice the average among developed countries. However, despite this tremendous spending, healthcare service quality is lacking. According to a now-famous 2017 study, the U.S. has fewer hospital beds and doctors per capita than any other developed country.

A 2019 study found that the country's healthcare system is incredibly inefficient, burning through roughly 25% of its spending on what basically amounts to waste: $760 billion annually in the best-case scenario, and up to $935 billion annually.

Most of this money is wasted on unnecessary administrative complexity, including billing and coding waste, which alone is responsible for $265.6 billion annually. Drug pricing is another major source of waste, accounting for around $240 billion. Finally, over-treatment and failures of care delivery incurred another $300 billion in wasted costs.

And even these astronomical costs may be underestimated. According to management firm Numerof and Associates, the 25% waste estimate might be conservative. Instead, the firm believes that as much as 40% of the country's healthcare spending is wasted, mostly due to administrative complexity. The firm adds that fraud and abuse account for roughly 8% of waste in healthcare.

Most cases of fraud in the healthcare sector are committed by organized crime groups and by a small fraction of dishonest healthcare providers.

According to the National Healthcare Anti-Fraud Association, the most common types of healthcare fraud in the United States are:

Traditionally, the most prevalent method for fraud management has been human-generated rule sets. To this day, this is the most common practice, but thanks to a quantum leap in computing and big data, AI-based solutions built on machine learning algorithms are becoming increasingly appealing and, most importantly, practical.

But what is machine learning anyway? Machine learning refers to algorithms that are designed to learn the way humans do and to continuously refine that learning over time without human supervision. The algorithms' output accuracy can be improved continuously by feeding them data and information in the form of observations and real-world interactions.

In other words, machine learning is the science of getting computers to act without being explicitly programmed.
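
As a toy illustration of that idea (not from the article, and using made-up transaction data): the fraud rule below is never written into the program; the model recovers it from labeled examples.

```python
# A toy sketch of "learning without explicit programming", using invented data:
# the fraud rule is never coded by hand; it is recovered from labeled examples.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform([0, 0], [1000, 24], size=(500, 2))     # [amount, hour_of_day]
y = ((X[:, 0] > 800) & (X[:, 1] < 6)).astype(int)      # hidden rule: large late-night charges

model = DecisionTreeClassifier(max_depth=3).fit(X, y)  # learns the rule from the examples
print(model.predict([[950, 3], [50, 14]]))             # likely [1 0], with no hand-coded rules
```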

There are all sorts of machine learning algorithms, depending on the requirements of each situation and industry. Hundreds of new machine learning algorithms are published on a daily basis. They're typically grouped by:

In a healthcare fraud analytics context, machine learning eliminates the need for preprogrammed rule sets, even those of phenomenal complexity.

Machine learning enables companies to efficiently determine what transactions or set of behaviors are most likely to be fraudulent, while reducing false positives.

In an industry where there can be billions of different transactions on a daily basis, AI-based analytics can be an amazing fit thanks to their ability to automatically discover patterns across large volumes of data.
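
As a rough sketch of that pattern-discovery idea (not Highmark's or any vendor's actual system; the claim features below are invented), an unsupervised anomaly detector can surface claims whose patterns deviate from the bulk of historical data:

```python
# Hedged sketch: unsupervised anomaly detection over hypothetical claim features.
from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(1)

# Invented claim features: [billed_amount, num_procedures, visits_per_month]
normal_claims = rng.normal(loc=[200, 2, 1], scale=[50, 1, 0.5], size=(10_000, 3))
suspicious = np.array([[5_000, 40, 20]])               # an implausible outlier

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_claims)
print(detector.predict(suspicious))                    # -1 means "anomalous"
print(detector.predict(normal_claims[:3]))             # mostly 1 ("normal")
```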

The process itself can be complex since the algorithms have to interpret patterns in the data and apply data science in real time in order to distinguish between normal and abnormal behavior.

This can be a problem since an improper understanding of how AI works and fraud-specific data science techniques can lead you to develop algorithms that essentially learn to do the wrong things. Just like people can learn bad habits, so too can a poorly designed machine learning model.

In order for online fraud detection based on AI technology to succeed, these platforms need to check three very important boxes.

First, supervised machine learning algorithms have to be trained and fine-tuned on decades' worth of transaction data to keep false positives to a minimum and improve reaction time. This is easier said than done because the data needs to be structured and properly labeled; depending on the size of the project, this could take staff years.

Second, unsupervised machine learning needs to keep up with increasingly sophisticated forms of online fraud. After all, AI is used by both auditors and fraudsters. And finally, AI fraud detection platforms require a large-scale, universal network of activity data (i.e., transactions, filed documents, etc.) to scale the ML algorithms and improve the accuracy of fraud detection scores.
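
A minimal sketch of the first requirement above, using synthetic stand-in data rather than real transactions: a supervised model scores each transaction, and raising the alert threshold trades recall for fewer false positives.

```python
# Hedged sketch of supervised fraud scoring with a threshold tuned to limit false positives.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(2)
X = rng.normal(size=(20_000, 5))                       # stand-in transaction features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=20_000) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)     # trained on labeled history

scores = clf.predict_proba(X_te)[:, 1]
for threshold in (0.5, 0.9):                           # stricter threshold -> fewer false alarms
    flagged = scores > threshold
    print(threshold,
          "precision", round(precision_score(y_te, flagged, zero_division=0), 2),
          "recall", round(recall_score(y_te, flagged, zero_division=0), 2))
```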

According to a new market research report released earlier this year, the healthcare fraud analytics market is projected to reach $4.6 billion by 2025 from $1.2 billion in 2020.

This growth is attributed to the rising number and complexity of fraudulent activities in the healthcare sector.

In order to tackle rising healthcare fraud, companies offer various analytics solutions that flag fraudulent activity. Some are rule-based models, but AI-based technologies are expected to form the backbone of all types of analytics used in the future. These include descriptive, predictive, and prescriptive analytics.

Some of the most important companies operating today in the healthcare fraud analytics market include IBM Corporation (US), Optum (US), SAS Institute (US), Change Healthcare (US), EXL Service Holdings (US), Cotiviti (US), Wipro Limited (Wipro) (India), Conduent (US), HCL (India), Canadian Global Information Technology Group (Canada), DXC Technology Company (US), Northrop Grumman Corporation (US), LexisNexis Group (US), and Pondera Solutions (US).

That being said, there is a wide range of options in place today to prevent fraud. However, the evolving landscape of e-commerce and hacking poses new challenges all the time, and keeping up requires innovation that can respond rapidly to fraud. The common denominator, from payment fraud to abuse, seems to be machine learning, which can easily scale to meet the demands of big data with far more flexibility than traditional methods.


University Students Are Learning To Collaborate on AI Projects – Forbes

Penn State's Nittany AI Challenge is teaching students the true meaning of collaboration in the age of Artificial Intelligence.


This year, artificial intelligence is the buzzword. On university campuses, students who just graduated high school are checking out the latest computer science course offerings to see if they can take classes in machine learning. The reality of the age of Artificial Intelligence has caught many university administrators' attention: to be successful in this new era, everyone, no matter their job, skill set, or major, will at some point encounter AI in their work and their life. Penn State saw the benefits of working on AI projects early, specifically when it comes to teamwork and collaboration. Since 2017, its successful Nittany AI Challenge has helped teach students each year what it means to collaborate in the age of Artificial Intelligence.

Every university has challenges. Students bring a unique perspective and understanding of these challenges. The Nittany AI Challenge was created to provide a framework and support structure to enable students to form teams and collaborate on ideas that could address a problem or opportunity, using AI technology as the enabler. The Nittany AI Challenge is our innovation engine, ultimately putting students on stage to demonstrate how AI and machine learning can be leveraged to have a positive impact on the university.

The Nittany AI Challenge runs for 8 months each year. It has multiple phases, such as the idea phase, the prototype phase, and the MVP phase. In the end, there's a pitch competition among 5 to 10 teams competing for a pool of $25,000. The challenge incentivizes students to keep going by awarding the best teams at each phase of the competition another combined total of $25,000 during the 8 months of competition. By the time pitching comes around for the top 5 to 10 teams, these teams have not only figured out how they can work together as a team, but they have also experienced what it means to receive funding.

This year, the Nittany AI Challenge has expanded from asking students to solve the university's problems using AI to broader categories based on the theme of AI for Good. Students are competing in additional categories such as humanitarianism, healthcare, and sustainability/climate change.

In the last two years, students formed teams amongst friends within their circles. But as the competition has matured, there is now an online system that allows students to sign up for teams.

Students often form teams with students from different backgrounds majoring in different disciplines, based on their shared interest in a project. Christie Warren, the app designer from the LionPlanner team, helped her team create a 4-year degree planning tool that won the 2018 competition. She credits the competition for giving her a clear path to a career in app design and teaching her how to collaborate with developers.

For me the biggest learning curve is to learn to work alongside developers, as far as when to start to go into the high fidelity designs, wait for people to figure out the features that need to be developed, etc. Just looking at my designs and being really open to improvements and going through iterations of the design with the team helped me overcome the learning curve.

Early on, technology companies such as Microsoft, Google Cloud, IBM Watson, and Amazon Web Services recognized the value of an on-campus AI competition such as the Nittany AI Challenge to provide teamwork education to students before they embark on internships with technology companies. They've been sponsoring the competition since its inception.

Both the students and us from Microsoft benefit from the time working together, in that we learn about each other's culture, needs and aspirations. Challenges like the Nittany AI Challenge highlight that studying in Higher Education should be a mix of learning and fun. If we can help the students learn and enjoy the experience then we also help them foster a positive outlook about their future of work.

While having fun, some students, like Michael D. Roos, project manager and backend developer on the LionPlanner team, have seen synergy between their internships and their Nittany AI Challenge projects. He credits the competition with giving him a pathway to success beyond simply a college education. He's a lot more confident stepping out into the real world, whether it's working for a startup or a large technology company, because of the experience gained.

I was doing my internship with Microsoft during a part of the competition. Some of the technology skills I learned at my internship I could then apply to my project for the competition. Also, having the cumulative experience of working on the project for the Nittany AI competition before going into my internship helped me with my internship. Even though I was interning at Microsoft, my team had similar startup vibes as the competition, my role on the team was similar to my role on the project. I felt I had a headstart in that role because of my experience in the competition.

One of the biggest myths that the Nittany AI Challenge has helped debunk is that AI implementations require only the skills of technologists. While computer science students who take a keen interest in machine learning and AI are central to every project inside the Nittany AI Challenge, it's often the visionary project managers, creative designers, and students majoring in other disciplines such as healthcare, biological sciences, and business who end up making the most impactful contributions to the team.

The AI Alliance makes the challenge really accessible. For people like me who don't know AI, we can learn AI along the way.

The LionPlanner team that won the competition in 2018 attributes its success mainly to the outstanding design that won over the judges. Christie, the app designer on the team, credits her success to the way the team collaborated, which enabled her to communicate with developers effectively.


Every member of the Nyansapo team, which is trying to bring English education to remote parts of Kenya via NLP learning software, attributes its success to the energy and motivation behind the vision of the project. Because everyone feels strongly about the vision, even though they have one of the biggest teams in the competition, everyone is pulling together and collaborating.

I really like to learn by doing. Everybody on the team joined, not just because they had something to offer, but because the vision was exciting. We are all behind this problem of education inequality in Kenya. We all want to get involved to solve this problem. We are this excited to want to go the extra step.

Not only does the Nittany AI challenge teach students the art of interdisciplinary collaboration, but it also teaches students time management, stress management, and how to overcome difficulties. During the competition, students are often juggling difficult coursework, internships, and other extracurricular activities. They often feel stressed and overwhelmed. This can pose tremendous challenges for team communication. But, as many students pointed out to me, these challenges are opportunities to learn how to work together.

There was a difficult moment yesterday in between my classes, where I had to schedule a meeting with Edward to discuss the app interface later during the day; at times, everything can feel a bit chaotic. In the back of my head, when I think about the vision of our project, how much I'm learning on the project, and how I'm working with all my friends, these are the things that keep me going even through hard times.

One of the projects from the Nittany AI Challenge that the university is integrating into its systems is the LionPlanner tool. It uses AI algorithms to help students match their profiles with clubs and extracurricular activities they might be interested in. It also helps students plan their courses and customize their degree so they can complete it on time while keeping the cost of their degree as low as possible.

The students who worked on the project are now working to create a Prospective Student Planning Tool that can integrate into the University Admissions Office systems to be used by transfer students.

Currently, in the U.S., there's a skills gap of almost 1.5 million high-tech jobs. Companies are having a hard time hiring people who have the skills to work in innovative companies. We now have coding camps, apprenticeships, and remote coding platforms.

Why not also have university-sponsored AI challenges where students can demonstrate their potential and abilities to collaborate?

The Nittany AI Challenge from Penn State presents a unique solution to a problem many employers are trying to solve in the age of innovation. By sitting in the audience as judges, companies can follow the teams' progress and watch students shine in their respective areas. Students are not pitching their skills. Students are pitching their work products. They are showing what they can do in real time over 8 months.

This could be a new way for companies to recruit. We have NFL drafts. Why not have drafts for star players on these AI teams that work especially well with others?

This year, Penn State introduced the Nittany AI Associates program where students can continue their work from the Nittany AI Challenge so that they can develop their ideas further.

So while the challenge is the "Innovation Engine", the Nittany AI Associates program provides students the opportunity to work on managed projects with an actual client; funding to the students to reduce their debt (paid internships); and a low-cost, low-risk avenue for the university (and other clients) to innovate, while providing AI knowledge transfer to client staff (the student becomes the teacher).

In the age of AI, education is becoming more multidisciplinary. When higher education institutions can evolve the way that they teach their students to enable both innovation and collaboration, then the potential they unleash in their graduates can have an exponential effect on their career and the companies that hire them. Creating competitions and collaborative work projects such as the Nittany AI Challenge within the university that fosters win-win thinking might just be the key to the type of innovations we need in higher education to keep up in the age of AI.


Doing machine learning the right way – MIT News

The work of MIT computer scientist Aleksander Madry is fueled by one core mission: doing machine learning the right way.

Madry's research centers largely on making machine learning, a type of artificial intelligence, more accurate, efficient, and robust against errors. In his classroom and beyond, he also worries about questions of ethical computing as we approach an age where artificial intelligence will have great impact on many sectors of society.

"I want society to truly embrace machine learning," says Madry, a recently tenured professor in the Department of Electrical Engineering and Computer Science. "To do that, we need to figure out how to train models that people can use safely, reliably, and in a way that they understand."

Interestingly, his work with machine learning dates back only a couple of years, to shortly after he joined MIT in 2015. In that time, his research group has published several critical papers demonstrating that certain models can be easily tricked to produce inaccurate results and showing how to make them more robust.

In the end, he aims to make each model's decisions more interpretable by humans, so researchers can peer inside to see where things went awry. At the same time, he wants to enable nonexperts to deploy the improved models in the real world for, say, helping diagnose disease or control driverless cars.

"It's not just about trying to crack open the machine-learning black box. I want to open it up, see how it works, and pack it back up, so people can use it without needing to understand what's going on inside," he says.

For the love of algorithms

Madry was born in Wroclaw, Poland, where he attended the University of Wroclaw as an undergraduate in the mid-2000s. While he harbored an interest in computer science and physics, "I actually never thought I'd become a scientist," he says.

An avid video gamer, Madry initially enrolled in the computer science program with intentions of programming his own games. But in joining friends in a few classes in theoretical computer science and, in particular, theory of algorithms, he fell in love with the material. Algorithm theory aims to find efficient optimization procedures for solving computational problems, which requires tackling difficult mathematical questions. "I realized I enjoy thinking deeply about something and trying to figure it out," says Madry, who wound up double-majoring in physics and computer science.

When it came to delving deeper into algorithms in graduate school, he went to his first choice: MIT. Here, he worked under both Michel X. Goemans, who was a major figure in applied math and algorithm optimization, and Jonathan A. Kelner, who had just arrived at MIT as a junior faculty member working in that field. For his PhD dissertation, Madry developed algorithms that solved a number of longstanding problems in graph algorithms, earning the 2011 George M. Sprowls Doctoral Dissertation Award for the best MIT doctoral thesis in computer science.

After his PhD, Madry spent a year as a postdoc at Microsoft Research New England before teaching for three years at the Swiss Federal Institute of Technology Lausanne, which Madry calls "the Swiss version of MIT." But his alma mater kept calling him back: "MIT has the thrilling energy I was missing. It's in my DNA."

Getting adversarial

Shortly after joining MIT, Madry found himself swept up in a novel science: machine learning. In particular, he focused on understanding the re-emerging paradigm of deep learning. That's an artificial-intelligence application that uses multiple computing layers to extract high-level features from raw input, such as using pixel-level data to classify images. MIT's campus was, at the time, buzzing with new innovations in the domain.

But that raised the question: Was machine learning all hype or solid science? "It seemed to work, but no one actually understood how and why," Madry says.

Answering that question set his group on a long journey, running experiment after experiment on deep-learning models to understand the underlying principles. A major milestone in this journey was an influential paper they published in 2018, developing a methodology for making machine-learning models more resistant to adversarial examples. Adversarial examples are slight perturbations to input data that are imperceptible to humans, such as changing the color of one pixel in an image, but that cause a model to make inaccurate predictions. They illuminate a major shortcoming of existing machine-learning tools.
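
For illustration only, here is one common way such a perturbation can be crafted, the fast gradient sign method (FGSM); it is a generic textbook technique, not the specific defense methodology developed in the 2018 paper, and the model, image, and label names are assumed placeholders.

```python
# Illustrative sketch of the fast gradient sign method (FGSM), a standard way to
# craft an adversarial perturbation; not the 2018 paper's defense methodology.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Tiny, human-imperceptible step along the sign of the gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage sketch (placeholders): `model` is any image classifier, `image` a batched
# tensor, `label` the true class indices. The prediction often flips even though
# the perturbed image looks identical to a human.
# adv = fgsm_perturb(model, image, label)
```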

Continuing this line of work, Madry's group showed that the existence of these mysterious adversarial examples may contribute to how machine-learning models make decisions. In particular, models designed to differentiate images of, say, cats and dogs, make decisions based on features that do not align with how humans make classifications. Simply changing these features can make the model consistently misclassify cats as dogs, without changing anything in the image that's really meaningful to humans.

Results indicated that some models, which may be used to, say, identify abnormalities in medical images or help autonomous cars identify objects in the road, aren't exactly up to snuff. "People often think these models are superhuman, but they didn't actually solve the classification problem we intend them to solve," Madry says. "And their complete vulnerability to adversarial examples was a manifestation of that fact. That was an eye-opening finding."

That's why Madry seeks to make machine-learning models more interpretable to humans. New models he's developed show how much certain pixels in the images the system is trained on can influence the system's predictions. Researchers can then tweak the models to focus on pixel clusters more closely correlated with identifiable features, such as detecting an animal's snout, ears, and tail. In the end, that will help make the models more humanlike, or superhumanlike, in their decisions. To further this work, Madry and his colleagues recently founded the MIT Center for Deployable Machine Learning, a collaborative research effort within the MIT Quest for Intelligence that is working toward building machine-learning tools ready for real-world deployment.

"We want machine learning not just as a toy, but as something you can use in, say, an autonomous car, or health care. Right now, we don't understand enough to have sufficient confidence in it for those critical applications," Madry says.

Shaping education and policy

Madry views artificial intelligence and decision making (AI+D is one of the three new academic units in the Department of Electrical Engineering and Computer Science) as the interface of computing that's going to have the biggest impact on society.

In that regard, he makes sure to expose his students to the human aspect of computing. In part, that means considering the consequences of what they're building. Often, he says, students will be overly ambitious in creating new technologies, but they haven't thought through potential ramifications on individuals and society. "Building something cool isn't a good enough reason to build something," Madry says. "It's about thinking about not if we can build something, but if we should build something."

Madry has also been engaging in conversations about laws and policies to help regulate machine learning. A point of these discussions, he says, is to better understand the costs and benefits of unleashing machine-learning technologies on society.

"Sometimes we overestimate the power of machine learning, thinking it will be our salvation. Sometimes we underestimate the cost it may have on society," Madry says. "To do machine learning right, there's still a lot left to figure out."


RIT professor explores the art and science of statistical machine learning – RIT University News Services

Statistical machine learning is at the core of modern-day advances in artificial intelligence, but a Rochester Institute of Technology professor argues that applying it correctly requires equal parts science and art. Professor Ernest Fokou of RIT's School of Mathematical Sciences emphasized the human element of statistical machine learning in his primer on the field that graced the cover of a recent edition of Notices of the American Mathematical Society.

"One of the most important commodities in your life is common sense," said Fokou. "Mathematics is beautiful, but mathematics is your servant. When you sit down and design a model, data can be very stubborn. We design models with assumptions of what the data will show or look like, but the data never looks exactly like what you expect. You may have a nice central tenet, but there's always something that's going to require your human intervention. That's where the art comes in. After you run all these statistical techniques, when it comes down to drawing the final conclusion, you need your common sense."

Statistical machine learning is a field that combines mathematics, probability, statistics, computer science, cognitive neuroscience and psychology to create models that learn from data and make predictions about the world. One of its earliest applications was when the United States Postal Service used it to accurately learn and recognize handwritten letters and digits to autonomously sort letters. Today, we see it applied in a variety of settings, from facial recognition technology on smartphones to self-driving cars.

Researchers have developed many different learning machines and statistical models that can be applied to a given problem, but there is no one-size-fits-all method that works well for all situations. Fokou said selecting the appropriate method requires mathematical and statistical rigor along with practical knowledge. His paper explains the central concepts and approaches, which he hopes will get more people involved in the field and harvesting its potential.

"Statistical machine learning is the main tool behind artificial intelligence," said Fokou. "It's allowing us to construct extensions of the human being so our lives, transportation, agriculture, medicine and education can all be better. Thanks to statistical machine learning, you can understand the processes by which people learn and slowly and steadily help humanity access a higher level."

This year, Fokou has been on sabbatical traveling the world exploring new frontiers in statistical machine learning. Fokou's full article is available on the AMS website.


Grok combines Machine Learning and the Human Brain to build smarter AIOps – Diginomica

A few weeks ago I wrote a piece here about Moogsoft, which has been making waves in the service assurance space by applying artificial intelligence and machine learning to the arcane task of keeping critical IT up and running and lessening the business impact of service interruptions. It's a hot area for startups, and I've since gotten article pitches from several other AIOps firms at varying levels of development.

The most intriguing of these is a company called Grok, which was formed by a partnership between Numenta and Avik Partners. Numenta is a pioneering AI research firm co-founded by Jeff Hawkins and Donna Dubinsky, who are famous for having started two classic mobile computing companies, Palm and Handspring. Avik is a company formed by brothers Casey and Josh Kindiger, two veteran entrepreneurs who have successfully started and grown multiple technology companies in service assurance and automation over the past two decades, most recently Resolve Systems.

Josh Kindiger told me in a telephone interview how the partnership came about:

Numenta is primarily a research entity started by Jeff and Donna about 15 years ago to support Jeff's ideas about the intersection of neuroscience and data science. About five years ago, they developed an algorithm called HTM and a product called Grok for AWS, which monitors servers on a network for anomalies. They weren't interested in developing a company around it, but we came along and saw a way to link our deep domain experience in the service management and automation areas with their technology. So, we licensed the name and the technology and built part of our Grok AIOps platform around it.

Jeff Hawkins has spent most of his post-Palm and Handspring years trying to figure out how the human brain works and then reverse engineering that knowledge into structures that machines can replicate. His model or theory, called hierarchical temporal memory (HTM), was originally described in his 2004 book On Intelligence written with Sandra Blakeslee. HTM is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain. For a little light reading, I recommend a peer-reviewed paper called A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex.

Grok AIOps also uses traditional machine learning, alongside HTM. Said Kindiger:

When I came in, the focus was purely on anomaly detection, and I immediately engaged with a lot of my old customers--large Fortune 500 companies, very large service providers--and quickly found out that while anomaly detection was extremely important, that first signal wasn't going to be enough. So, we transformed Grok into a platform. And essentially what we do is we apply the correct algorithm, whether it's HTM or something else, to the proper streams of events, logs and performance metrics. Grok can enable predictive, self-healing operations within minutes.

The Grok AIOps platform uses multiple layers of intelligence to identify issues and support their resolution:

Anomaly detection

The HTM algorithm has proven exceptionally good at detecting and predicting anomalies and reducing noise, often up to 90%, by providing the critical context needed to identify incidents before they happen. It can detect anomalies in signals beyond low and high thresholds, such as signal frequency changes that reflect changes in the behavior of the underlying systems. Said Kindiger:

We believe HTM is the leading anomaly detection engine in the market. In fact, it has consistently been the best performing anomaly detection algorithm in the industry resulting in less noise, less false positives and more accurate detection. It is not only best at detecting an anomaly with the smallest amount of noise but it also scales, which is the biggest challenge.
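
HTM itself is beyond a short snippet, but the following toy sketch (not Grok's implementation) illustrates the underlying idea of scoring each new metric reading against behavior learned from recent history rather than against fixed low/high thresholds:

```python
# Toy illustration only (not HTM, not Grok): score readings against learned recent
# behavior instead of static thresholds.
from collections import deque
import statistics

class RollingAnomalyScorer:
    def __init__(self, window=500):
        self.history = deque(maxlen=window)

    def score(self, value):
        """Return how many standard deviations `value` sits from recent behavior."""
        if len(self.history) < 30:            # not enough context learned yet
            self.history.append(value)
            return 0.0
        mean = statistics.fmean(self.history)
        stdev = statistics.pstdev(self.history) or 1e-9
        self.history.append(value)
        return abs(value - mean) / stdev

scorer = RollingAnomalyScorer()
for reading in [100, 102, 98, 101, 99] * 20 + [250]:   # steady signal, then a spike
    s = scorer.score(reading)
print("last anomaly score:", round(s, 1))               # large score for the spike
```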

Anomaly clustering

To help reduce noise, Grok clusters anomalies that belong together through the same event or cause.

Event and log clustering

Grok ingests all the events and logs from the integrated monitors and then applies event and log clustering algorithms to them, including pattern recognition and dynamic time warping, which also reduce noise.
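
As a hedged sketch of one of those building blocks (classic dynamic time warping, not Grok's implementation), the function below computes a DTW distance between two time series, such as per-minute event counts from two log sources; series with small DTW distances could then be clustered together.

```python
# Classic dynamic time warping (DTW) distance between two sequences; a generic
# textbook version, shown only to illustrate the technique named above.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Best of stepping from the three neighboring cells.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

burst_early = [0, 0, 9, 8, 7, 0, 0, 0]   # hypothetical per-minute event counts
burst_late  = [0, 0, 0, 0, 9, 8, 7, 0]
flat        = [1, 1, 1, 1, 1, 1, 1, 1]
print(dtw_distance(burst_early, burst_late))   # small: same shape, shifted in time
print(dtw_distance(burst_early, flat))         # larger: genuinely different behavior
```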

IT operations have become almost impossible for humans alone to manage. Many companies struggle to meet the high demand due to increased cloud complexity. Distributed apps make it difficult to track where problems occur during an IT incident. Every minute of downtime directly impacts the bottom line.

In this environment, the relatively new solution to reduce this burden of IT management, dubbed AIOps, looks like a much needed lifeline to stay afloat. AIOps translates to "Algorithmic IT Operations" and its premise is that algorithms, not humans or traditional statistics, will help to make smarter IT decisions and help ensure application efficiency. AIOps platforms reduce the need for human intervention by using ML to set alerts and automation to resolve issues. Over time, AIOps platforms can learn patterns of behavior within distributed cloud systems and predict disasters before they happen.

Grok detects latent issues with cloud apps and services and triggers automations to troubleshoot these problems before requiring further human intervention. Its technology is solid, its owners have lots of experience in the service assurance and automation spaces, and who can resist the story of the first commercial use of an algorithm modeled on the human brain?


What is machine learning? – Brookings

In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term artificial intelligence to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations: to demonstrate, that is, an innate intelligence.

The question was how to achieve that goal. Early efforts focused primarily on what's known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s (one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as machine learning), it wasn't until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI or DeepMind announcing stunning new advances seemingly every week.

The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it's not possible to tease out the implications of AI without understanding how machine learning works, as well as how it doesn't.

The core insight of machine learning is that much of what we recognize as intelligence hinges on probability rather than reason or logic. If you think about it long enough, this makes sense. When we look at a picture of someone, our brains unconsciously estimate how likely it is that we have seen their face before. When we drive to the store, we estimate which route is most likely to get us there the fastest. When we play a board game, we estimate which move is most likely to lead to victory. Recognizing someone, planning a trip, plotting a strategy: each of these tasks demonstrates intelligence. But rather than hinging primarily on our ability to reason abstractly or think grand thoughts, they depend first and foremost on our ability to accurately assess how likely something is. We just don't always realize that that's what we're doing.

Back in the 1950s, though, McCarthy and his colleagues did realize it. And they understood something else too: Computers should be very good at computing probabilities. Transistors had only just been invented, and had yet to fully supplant vacuum tube technology. But it was clear even then that with enough data, digital computers would be ideal for estimating a given probability. Unfortunately for the first AI researchers, their timing was a bit off. But their intuition was spot on, and much of what we now know as AI is owed to it. When Facebook recognizes your face in a photo, or Amazon Echo understands your question, they're relying on an insight that is over sixty years old.

The machine learning algorithm that Facebook, Google, and others all use is something called a deep neural network. Building on the prior work of Warren McCulloch and Walter Pitts, Frank Rosenblatt coded one of the first working neural networks in the late 1950s. Although today's neural networks are a bit more complex, the main idea is still the same: The best way to estimate a given probability is to break the problem down into discrete, bite-sized chunks of information, or what McCulloch and Pitts termed a neuron. Their hunch was that if you linked a bunch of neurons together in the right way, loosely akin to how neurons are linked in the brain, then you should be able to build models that can learn a variety of tasks.

To get a feel for how neural networks work, imagine you wanted to build an algorithm to detect whether an image contained a human face. A basic deep neural network would have several layers of thousands of neurons each. In the first layer, each neuron might learn to look for one basic shape, like a curve or a line. In the second layer, each neuron would look at the first layer, and learn to see whether the lines and curves it detects ever make up more advanced shapes, like a corner or a circle. In the third layer, neurons would look for even more advanced patterns, like a dark circle inside a white circle, as happens in the human eye. In the final layer, each neuron would learn to look for still more advanced shapes, such as two eyes and a nose. Based on what the neurons in the final layer say, the algorithm will then estimate how likely it is that an image contains a face. (For an illustration of how deep neural networks learn hierarchical feature representations, see here.)

The magic of deep learning is that the algorithm learns to do all this on its own. The only thing a researcher does is feed the algorithm a bunch of images and specify a few key parameters, like how many layers to use and how many neurons should be in each layer, and the algorithm does the rest. At each pass through the data, the algorithm makes an educated guess about what type of information each neuron should look for, and then updates each guess based on how well it works. As the algorithm does this over and over, eventually it learns what information to look for, and in what order, to best estimate, say, how likely an image is to contain a face.
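
A toy sketch of that process, under assumed placeholders (random numbers stand in for images, and the "labels" are synthetic, so this is not a real face detector): layers of neurons make a guess, the error is measured, and every neuron is nudged, over and over.

```python
# Minimal PyTorch sketch of the loop described above: guess, measure error, update.
import torch
from torch import nn

model = nn.Sequential(            # stacked layers of "neurons"
    nn.Linear(64, 32), nn.ReLU(), # first layer: simple features
    nn.Linear(32, 16), nn.ReLU(), # second layer: combinations of those features
    nn.Linear(16, 1),             # final layer: "how likely is a face?"
)

images = torch.rand(256, 64)                                # stand-in for flattened pixels
labels = (images.mean(dim=1, keepdim=True) > 0.5).float()   # synthetic face / no-face labels

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):                 # pass over the data again and again
    optimizer.zero_grad()
    guess = model(images)            # educated guess
    loss = loss_fn(guess, labels)    # how wrong was it?
    loss.backward()                  # work out how each neuron should change
    optimizer.step()                 # update the guesses
print(float(loss))                   # the error shrinks as the network learns
```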

What's remarkable about deep learning is just how flexible it is. Although there are other prominent machine learning algorithms too (albeit with clunkier names, like gradient boosting machines), none are nearly so effective across nearly so many domains. With enough data, deep neural networks will almost always do the best job at estimating how likely something is. As a result, they're often also the best at mimicking intelligence too.

Yet as with machine learning more generally, deep neural networks are not without limitations. To build their models, machine learning algorithms rely entirely on training data, which means both that they will reproduce the biases in that data, and that they will struggle with cases that are not found in that data. Further, machine learning algorithms can also be gamed. If an algorithm is reverse engineered, it can be deliberately tricked into thinking that, say, a stop sign is actually a person. Some of these limitations may be resolved with better data and algorithms, but others may be endemic to statistical modeling.

To glimpse how the strengths and weaknesses of AI will play out in the real-world, it is necessary to describe the current state of the art across a variety of intelligent tasks. Below, I look at the situation in regard to speech recognition, image recognition, robotics, and reasoning in general.

Ever since digital computers were invented, linguists and computer scientists have sought to use them to recognize speech and text. Known as natural language processing, or NLP, the field once focused on hardwiring syntax and grammar into code. However, over the past several decades, machine learning has largely surpassed rule-based systems, thanks to everything from support vector machines to hidden Markov models to, most recently, deep learning. Apple's Siri, Amazon's Alexa, and Google's Duplex all rely heavily on deep learning to recognize speech or text, and represent the cutting edge of the field.

The specific deep learning algorithms at play have varied somewhat. Recurrent neural networks powered many of the initial deep learning breakthroughs, while hierarchical attention networks are responsible for more recent ones. What they all share in common, though, is that the higher levels of a deep learning network effectively learn grammar and syntax on their own. In fact, when several leading researchers recently set a deep learning algorithm loose on Amazon reviews, they were surprised to learn that the algorithm had not only taught itself grammar and syntax, but a sentiment classifier too.

Yet for all the success of deep learning at speech recognition, key limitations remain. The most important is that because deep neural networks only ever build probabilistic models, they don't understand language in the way humans do; they can recognize that the sequences of letters k-i-n-g and q-u-e-e-n are statistically related, but they have no innate understanding of what either word means, much less the broader concepts of royalty and gender. As a result, there is likely to be a ceiling to how intelligent speech recognition systems based on deep learning and other probabilistic models can ever be. If we ever build an AI like the one in the movie Her, which was capable of genuine human relationships, it will almost certainly take a breakthrough well beyond what a deep neural network can deliver.

When Rosenblatt first implemented his neural network in 1958, he initially set it loose on images of dogs and cats. AI researchers have been focused on tackling image recognition ever since. By necessity, much of that time was spent devising algorithms that could detect pre-specified shapes in an image, like edges and polyhedrons, using the limited processing power of early computers. Thanks to modern hardware, however, the field of computer vision is now dominated by deep learning instead. When a Tesla drives safely in autopilot mode, or when Google's new augmented-reality microscope detects cancer in real time, it's because of a deep learning algorithm.

Convolutional neural networks, or CNNs, are the variant of deep learning most responsible for recent advances in computer vision. Developed by Yann LeCun and others, CNNs don't try to understand an entire image all at once, but instead scan it in localized regions, much the way a visual cortex does. LeCun's early CNNs were used to recognize handwritten numbers, but today the most advanced CNNs, such as capsule networks, can recognize complex three-dimensional objects from multiple angles, even those not represented in training data. Meanwhile, generative adversarial networks, the algorithm behind deep fake videos, typically use CNNs not to recognize specific objects in an image, but instead to generate them.
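
A minimal sketch of that "scan in localized regions" idea (assumed PyTorch layer sizes, not LeCun's original design): each convolutional layer slides small filters over patches of the image, and later layers see progressively larger patterns.

```python
# Hedged sketch of a small CNN: convolutional layers look at local patches,
# pooling summarizes neighborhoods, and a final layer scores object classes.
import torch
from torch import nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # slide 3x3 filters over local regions
    nn.ReLU(),
    nn.MaxPool2d(2),                             # summarize each neighborhood
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # higher layers see larger patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                           # e.g. scores for 10 object classes
)

image_batch = torch.rand(4, 3, 64, 64)           # four random 64x64 RGB "images"
print(cnn(image_batch).shape)                    # torch.Size([4, 10])
```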

As with speech recognition, cutting-edge image recognition algorithms are not without drawbacks. Most importantly, just as all that NLP algorithms learn are statistical relationships between words, all that computer vision algorithms learn are statistical relationships between pixels. As a result, they can be relatively brittle. A few stickers on a stop sign can be enough to prevent a deep learning model from recognizing it as such. For image recognition algorithms to reach their full potential, they'll need to become much more robust.

What makes our intelligence so powerful is not just that we can understand the world, but that we can interact with it. The same will be true for machines. Computers that can learn to recognize sights and sounds are one thing; those that can learn to identify an object as well as how to manipulate it are another altogether. Yet if image and speech recognition are difficult challenges, touch and motor control are far more so. For all their processing power, computers are still remarkably poor at something as simple as picking up a shirt.

The reason: Picking up an object like a shirt isn't just one task, but several. First you need to recognize a shirt as a shirt. Then you need to estimate how heavy it is, how its mass is distributed, and how much friction its surface has. Based on those guesses, then you need to estimate where to grasp the shirt and how much force to apply at each point of your grip, a task made all the more challenging because the shirt's shape and distribution of mass will change as you lift it up. A human does this trivially and easily. But for a computer, the uncertainty in any of those calculations compounds across all of them, making it an exceedingly difficult task.

Initially, programmers tried to solve the problem by writing programs that instructed robotic arms how to carry out each task step by step. However, just as rule-based NLP can't account for all possible permutations of language, there also is no way for rule-based robotics to run through all the possible permutations of how an object might be grasped. By the 1980s, it became increasingly clear that robots would need to learn about the world on their own and develop their own intuitions about how to interact with it. Otherwise, there was no way they would be able to reliably complete basic maneuvers like identifying an object, moving toward it, and picking it up.

The current state of the art is something called deep reinforcement learning. As a crude shorthand, you can think of reinforcement learning as trial and error. If a robotic arm tries a new way of picking up an object and succeeds, it rewards itself; if it drops the object, it punishes itself. The more the arm attempts its task, the better it gets at learning good rules of thumb for how to complete it. Coupled with modern computing, deep reinforcement learning has shown enormous promise. For instance, by simulating a variety of robotic hands across thousands of servers, OpenAI recently taught a real robotic hand how to manipulate a cube marked with letters.
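
The following is a deliberately tiny sketch of that trial-and-error idea (tabular Q-learning on an invented grasping task, far simpler than the deep reinforcement learning OpenAI used): reward success, penalize failure, and keep adjusting until good rules of thumb emerge.

```python
# Toy Q-learning sketch of trial and error; the task and success rates are invented.
import random

actions = ["gentle_grip", "firm_grip", "too_hard"]
q = {a: 0.0 for a in actions}                 # the agent's current estimate of each action
alpha, epsilon = 0.1, 0.2

def attempt(action):
    """Hypothetical environment: firm grips usually succeed, crushing never does."""
    success = {"gentle_grip": 0.4, "firm_grip": 0.8, "too_hard": 0.0}[action]
    return 1.0 if random.random() < success else -1.0

for _ in range(2000):
    # Mostly exploit what seems to work, but sometimes explore something new.
    a = random.choice(actions) if random.random() < epsilon else max(q, key=q.get)
    reward = attempt(a)                       # reward itself or punish itself
    q[a] += alpha * (reward - q[a])           # nudge the estimate toward the outcome

print(max(q, key=q.get))                      # almost always "firm_grip"
```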

Compared with prior research, OpenAI's breakthrough is tremendously impressive. Yet it also shows the limitations of the field. The hand OpenAI built didn't actually feel the cube at all, but instead relied on a camera. For an object like a cube, which doesn't change shape and can be easily simulated in virtual environments, such an approach can work well. But ultimately, robots will need to rely on more than just eyes. Machines with the dexterity and fine motor skills of a human are still a ways away.

When Arthur Samuel coined the term machine learning, he wasn't researching image or speech recognition, nor was he working on robots. Instead, Samuel was tackling one of his favorite pastimes: checkers. Since the game had far too many potential board moves for a rule-based algorithm to encode them all, Samuel devised an algorithm that could teach itself to efficiently look several moves ahead. The algorithm was noteworthy for working at all, much less being competitive with human players. But it also anticipated the astonishing breakthroughs of more recent algorithms like AlphaGo and AlphaGo Zero, which have surpassed all human players at Go, widely regarded as the most intellectually demanding board game in the world.

As with robotics, the best strategic AI relies on deep reinforcement learning. In fact, the algorithm that OpenAI used to power its robotic hand also formed the core of its algorithm for playing Dota 2, a multi-player video game. Although motor control and gameplay may seem very different, both involve the same process: making a sequence of moves over time, and then evaluating whether they led to success or failure. Trial and error, it turns out, is as useful for learning to reason about a game as it is for manipulating a cube.

From Samuel on, the success of computers at board games has posed a puzzle to AI optimists and pessimists alike. If a computer can beat a human at a strategic game like chess, how much can we infer about its ability to reason strategically in other environments? For a long time, the answer was "very little." After all, most board games involve a single player on each side, each with full information about the game, and a clearly preferred outcome. Yet most strategic thinking involves cases where there are multiple players on each side, most or all players have only limited information about what is happening, and the preferred outcome is not clear. For all of AlphaGo's brilliance, you'll note that Google didn't then promote it to CEO, a role that is inherently collaborative and requires a knack for making decisions with incomplete information.

Fortunately, reinforcement learning researchers have recently made progress on both of those fronts. One team outperformed human players at Texas Hold 'Em, a poker game where making the most of limited information is key. Meanwhile, OpenAI's Dota 2 player, which coupled reinforcement learning with what's called a Long Short-Term Memory (LSTM) algorithm, has made headlines for learning how to coordinate the behavior of five separate bots so well that they were able to beat a team of professional Dota 2 players. As the algorithms improve, humans will likely have a lot to learn about optimal strategies for cooperation, especially in information-poor environments. This kind of information would be especially valuable for commanders in military settings, who sometimes have to make decisions without having comprehensive information.

Yet there's still one challenge no reinforcement learning algorithm can ever solve. Since the algorithm works only by learning from outcome data, it needs a human to define what the outcome should be. As a result, reinforcement learning is of little use in the many strategic contexts in which the outcome is not always clear. Should corporate strategy prioritize growth or sustainability? Should U.S. foreign policy prioritize security or economic development? No AI will ever be able to answer such higher-order strategic questions, because, ultimately, those are moral or political questions rather than empirical ones. The Pentagon may lean more heavily on AI in the years to come, but it won't be taking over the situation room and automating complex tradeoffs any time soon.

From autonomous cars to multiplayer games, machine learning algorithms can now approach or exceed human intelligence across a remarkable number of tasks. The breakout success of deep learning in particular has led to breathless speculation about both the imminent doom of humanity and its impending techno-liberation. Not surprisingly, all the hype has led several luminaries in the field, such as Gary Marcus or Judea Pearl, to caution that machine learning is nowhere near as intelligent as it is being presented, or that perhaps we should defer our deepest hopes and fears about AI until it is based on more than mere statistical correlations. Even Geoffrey Hinton, a researcher at Google and one of the godfathers of modern neural networks, has suggested that deep learning alone is unlikely to deliver the level of competence many AI evangelists envision.

Where the long-term implications of AI are concerned, the key question about machine learning is this: How much of human intelligence can be approximated with statistics? If all of it can be, then machine learning may well be all we need to get to a true artificial general intelligence. But it's very unclear whether that's the case. As far back as 1969, when Marvin Minsky and Seymour Papert famously argued that neural networks had fundamental limitations, even leading experts in AI have expressed skepticism that machine learning would be enough. Modern skeptics like Marcus and Pearl are only writing the latest chapter in a much older book. And it's hard not to find their doubts at least somewhat compelling. The path forward from the deep learning of today, which can mistake a rifle for a helicopter, is by no means obvious.

Yet the debate over machine learning's long-term ceiling is to some extent beside the point. Even if all research on machine learning were to cease, the state-of-the-art algorithms of today would still have an unprecedented impact. The advances that have already been made in computer vision, speech recognition, robotics, and reasoning will be enough to dramatically reshape our world. Just as happened in the so-called Cambrian explosion, when animals simultaneously evolved the ability to see, hear, and move, the coming decade will see an explosion in applications that combine the ability to recognize what is happening in the world with the ability to move and interact with it. Those applications will transform the global economy and politics in ways we can scarcely imagine today. Policymakers need not wring their hands just yet about how intelligent machine learning may one day become. They will have their hands full responding to how intelligent it already is.

View original post here:
What is machine learning? - Brookings

Twitter says AI tweet recommendations helped it add millions of users – The Verge

Twitter had 152 million daily users during the final months of 2019, and it says the latest spike was thanks in part to improved machine learning models that put more relevant tweets in people's timelines and notifications. The figure was released in Twitter's Q4 2019 earnings report this morning.

Daily users grew from 145 million the prior quarter and 126 million during the same period a year earlier. Twitter says this was primarily driven by product improvements, such as the increased relevance of what people are seeing in their main timeline and their notifications.

By default, Twitter shows users an algorithmic timeline that highlights what it thinks they'll be most interested in; for users following few accounts, it also surfaces likes and replies by the people they follow, giving them more to scroll through. Twitter's notifications will also highlight tweets that are being liked by people you follow, even if you missed that tweet on your timeline.

Twitter has continually been trying to reverse concerns about its user growth. The service's monthly user count shrank for a full year going into 2019, leading it to stop reporting that figure altogether. Instead, it now shares daily users, a metric that looks much rosier.

Compared to many of its peers, though, Twitter still has an enormous amount of room to grow. Snapchat, for comparison, reported 218 million daily users during its final quarter of 2019. Facebook reported 1.66 billion daily users over the same time period.

Twitter also announced a revenue milestone this quarter: it brought in more than $1 billion in quarterly revenue for the first time. The total was just over the milestone, at $1.01 billion for the final quarter, up from $909 million in the same quarter the prior year.

Last quarter, Twitter said that its ad revenue took a hit due to bugs that limited its ability to target ads and share advertising data with partners. At the time, the company said it had taken steps to remediate the issue, but it didn't say whether it was resolved. In this quarter's update, Twitter says it has since shipped remediations to those issues.

View post:
Twitter says AI tweet recommendations helped it add millions of users - The Verge

Google's Machine Learning Is Making You More Effective In 2020 – Forbes

The collection of web-based software that Google offers to businesses and consumers is officially known as G Suite. Most people are familiar with Gmail and Google Docs, but quite a few do not realize that Google offers a whole range of productivity and collaboration tools via your computer or mobile device.

[Image: A woman using a MacBook Pro with Google G Suite, Hong Kong, November 27, 2017. (Photo by studioEAST/Getty Images)]

I have been working on another post about consumer-level uses of artificial intelligence (AI), not the media-hyped creepiness, but the practical, useful ways that AI is helping us do more and be more. Google started me thinking about this as I have watched it add various smart functions (think AI) to email, as well as increasing ways to help me complete or enhance a document, spreadsheet, or presentation with the Explore function. It keeps learning from you and adjusting to you with these features.

Draft and send email responses more quickly: Two relatively new, intelligent features are Smart Compose and Smart Reply. Gmail will suggest ways to complete your sentences while drafting an email and suggest responses to incoming messages as one-click buttons (at the bottom of the newly received message). This works best in relatively simple messages that call for short, straightforward answers.

Enable Smart Compose and Smart Reply by going to Settings (that little gear icon in the upper right of your email inbox). Smart Reply is automatically enabled when users switch to the new Gmail.

On mobile and desktop or web, Smart Reply utilizes machine learning to give you better responses the more you use it. So if you're more of a "thanks!" than a "thanks." person, it'll suggest the response that is more authentic to you. Subtle difference, for sure, but I have noticed that with certain people I interact with, the punctuation does change to show more emotion. I have not seen any emojis popping up, however. That may be a good thing.

For some of the newest features, you must go to Settings, then click Experimental Access. Features that are under test have a special little chemistry bottle icon or emoji. Most of the features in this post have already been fully tested and released to the general public.

Auto-reminders to respond: Gmail's new Nudging function reportedly will now automatically bump both incoming and outgoing messages to the top of your inbox after a few days if neither party has responded. You can turn this feature on/off in Settings. However, I have not had this work properly, but maybe I am simply too efficient. Not. Either way, I have not noticed these reminders yet.

Machine Learning in Google Docs, Google Sheets, and Google Slides

The Explore button in the lower right corner of Docs, Sheets, or Slides is machine learning (ML) in action. You can visualize data in Sheets without using a formula. The Explore button looks like a compass-style star, and it expands as you hover over it. Once clicked, it serves as a search tool within these products.

Explore in Sheets helps you decipher data easily by letting you ask questions in words, not formulas. Questions like "How many units were sold on Black Friday?", "What is my best-selling product?", or "How much was spent on payroll last month?" can be asked directly instead of creating formulas to get an answer. Explore in Sheets is available on the web, Android, and iOS. On Android, you click the three vertical dots to get to the menu, where Explore is listed. When you first click it, it offers a "try an example" option and creates a new spreadsheet showcasing various examples.
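
To make concrete what such a natural-language query replaces, here is a small Python sketch of the equivalent explicit computation; the spreadsheet columns and values are hypothetical and have nothing to do with Google's actual implementation.

```python
# Hypothetical spreadsheet data: what a question like "how many units were sold
# on Black Friday?" corresponds to when written out explicitly as code/formulas.
import pandas as pd

sales = pd.DataFrame({
    "date": ["2019-11-28", "2019-11-29", "2019-11-29", "2019-11-30"],
    "product": ["widget", "widget", "gadget", "widget"],
    "units": [10, 120, 85, 15],
    "revenue": [100.0, 1200.0, 1275.0, 150.0],
})

# "How many units were sold on Black Friday?" (Nov 29, 2019)
black_friday_units = sales.loc[sales["date"] == "2019-11-29", "units"].sum()
# "What is my best-selling product?"
best_seller = sales.groupby("product")["units"].sum().idxmax()
print(black_friday_units, best_seller)
```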

Explore in Docs gives you a way to stay focused in the same tab. Using Explore, you get a little sidebar with Web, Images, and Drive results. It provides instant suggestions based on the content in your document including related topics to learn about, images to insert, or more content to check out in Docs. You can also find a related document from Drive or search Google right within Explore. Explore in Docs is available in a web browser, but I did not find it on my mobile apps for Android or iOS.

Explore in Slides makes designing a presentation simple. I think there's some AI/ML going on here, as Explore dynamically generates design suggestions based on the content of your slides. Then, pick a recommendation and apply it with a single click, without having to crop, resize or reformat.

Are all of these features going to single-handedly make you the most productive person on the planet? No, but they are definitely small and constant improvements that point the way to a more customized and helpful use of artificial intelligence and machine learning.

If you are looking for other creative ways that people and organizations are using G Suite, Google shares tons of great customer stories about how organizations and companies, big and small, use its free and enterprise-level products; these may give you ideas for how you can leverage its cloud software. I find many of these case studies inspiring, mostly because of how the organizations are responding to community needs.

Check out this one from Eagle County, Colorado during a wildfire there and this one from the City of Los Angeles with a real-time sheet to show police officers which homeless shelters have available beds.

Here is the original post:
Google's Machine Learning Is Making You More Effective In 2020 - Forbes

ValleyML Is Launching a Series of 3 Unique AI Expo Events Focused on Hardware, Enterprise and Robotics in Silicon Valley – AiThority

With the great success of the recent State of AI and ML event at Intel in January 2020, Valley Machine Learning and Artificial Intelligence (ValleyML.ai, or simply ValleyML) is organizing AI Hardware Expo on May 5th-6th 2020, AI Enterprise Expo on August 25th-26th 2020, and AI Robotics Expo on November 12th-13th 2020 at SEMI in Milpitas, Silicon Valley.

ValleyML is the most active and important community of ML and AI companies and start-ups, data practitioners, executives and researchers in Silicon Valley. The goal is Advancing AI to Empower People.

ValleyML sponsors include UL, MINDBODY Inc., Ambient Scientific Inc., SEMI, Intel, Western Digital, Texas Instruments, Google, Facebook, Cadence, Xilinx.

These highly focused events welcome a community of CTOs, CEOs, chief data scientists, product management executives and delegates from some of the world's top technology companies. Companies interested in sponsoring ValleyML events can follow the instructions in the sponsor brochure document as soon as possible, as there are limited sponsorship opportunities available. A unified call for proposals for the 3 AI Expo events is now open.

AI Hardware Expo May 5th-6th 2020

Submit by March 1st.

AI Enterprise Expo August 25th-26th 2020

Submit by May 1st.

AI Robotics Expo November 12th-13th 2020

Submit by August 1st.

ValleyML.ai's recent event, State of AI and ML - January 2020, held at Intel, Santa Clara on January 14th-15th, was a great success with more than 36 speakers and 250+ attendees. Event updates, videos and pictures are at the ValleyML website. These highly content-oriented conferences are curated by an expert program committee that included Dr. Koji Seto, Dr. Osso Vahabzadeh, Marc J. Mar-Yohana, Promila Agarwal, and Dr. Mehran Nekuii, assisted by an industry advisory board under the leadership of Dr. Kiran Gunnam, a Distinguished Machine Learning and Computer Vision Engineer with more than 100 inventions. The event featured prominent speakers such as Dr. Prasad Saripalli from MINDBODY, Dr. Ted Selker from C3.chat, Gajendra Prasad Singh from Ambient Scientific, Janet George from Oracle, and panel chair John Currie from UL. This event received publicity help from local IEEE chapters and SF Bay ACM, as well as great post-event coverage in a Forbes article titled Silicon Valley Event On Machine Learning Tackles The Latest Riddles Vexing AI Self-Driving Cars. In addition, registered attendees are eligible to receive an IEEE PDH (Professional Development Hours) certificate.

More:
ValleyML Is Launching a Series of 3 Unique AI Expo Events Focused on Hardware, Enterprise and Robotics in Silicon Valley - AiThority

Putting the Humanity Back Into Technology: 10 Skills to Future Proof Your Career – HR Technologist

Dave Coplin believes that the key to success for all of our futures is how we rise to the challenge and unleash the potential that AI and machine learning will bring us. We all need to evolve, and fast. He looks at the 10 skills we can nurture to future-proof our careers.

I have been working with global technology companies for more than 30 years, helping people to truly understand the amazing potential on offer when humans work in harmony with machines.

I have written two books, I've worked with businesses and governments all over the world, and recently I've been inspiring and engaging kids and adults alike, all with one single goal in mind, which is simply to help everyone get the absolute best from technology.

The key to success for all of our futures is how we rise to the challenge and unleash the potential that AI and machine learning will bring us. We all need to evolve, and fast.

In an age of algorithms and robots, we need to find a way to combine the best of technological capability with the best of human ability and find that sweet spot where humans and machines complement each other perfectly. With this in mind, here are my top ten skills that will enable humans to rise, to achieve more than ever before, not just at work but across all aspects of our lives:

When it comes to creativity, I absolutely believe that technology is one of the most creative forces that we will ever get to enjoy. But creativity needs to be discovered and it needs to be nurtured. Our future will be filled with complex, challenging problems, the likes of which we will never have encountered before. We're going to need a society of creative thinkers to help navigate it.

While the machines are busy crunching numbers, it will be the humans left to navigate the complicated world of emotions, motivation, and reason. In a world of the dark, cold logic of algorithms, the ability for individuals to understand and share the feelings of others is going to become a crucial skill. Along with creativity, empathy will be one of the most critical attributes that define the border between human and machine.

As well as teaching ourselves and our families to be confident with technology we also need to be accountable for how we use it.

Just because the computer gives you an answer doesn't make it right. We all need to learn to take the computer's valuable input but, crucially, combine it with our own human intuition in order to discover the best course of action. Our future is all about being greater than the sum of our parts.

One of creativity's most important companions is curiosity; it is the gateway to being truly creative with technology. We now have at our fingertips access to every fact and opinion that has ever been expressed, but we take this for granted. And what do we choose to do with all that knowledge? Two words: cat videos. I'm being playful, of course, but part of the solution is to help all of us, especially kids, be curious about the world around us and to use technology to explore it.

Critical thinking will be the 21st-century human's superpower. If we can help individuals both understand and apply it, we can, over time, unleash the full potential of our connected world. We should constantly question the content we see, hear and read, and not just assume it's true.

One of digital technology's key purposes is to connect humans with each other. Communicating with others is as essential to our future survival as breathing, and yet we're often just not that good at it, especially when we're communicating with others who aren't in the same physical space.

Learning to communicate well (and that includes really effective listening), whether online or offline, is one of the basic literacies of our digital world.

Building on our communication skills, collaboration is much of the reason we need to communicate well in the first place. Technology enables large numbers of people to come together, aligned around a common cause, but we can only harness the collective power of people if we can find the best way to work together and unleash our collective potential.

The future doesn't stand still and, now more than ever, that means neither can we. While we used to think about education as a single phase early on in most people's lives, the reality is that learning needs to be an everyday occurrence, regardless of our age or stage of life. Thanks to new technologies like artificial intelligence, skills that are new today will be automated tomorrow, and this means we can never afford to stand still.

The by-product of a rapidly changing world is that we need to help people learn to embrace the ambiguity such a world presents. More traditional mindsets of single domains of skills and single careers will have to give way to the much more nebulous world of multiple skillsets for multiple careers. In order to make the transition, people are going to need to find a way to preserve and develop enough energy to be able to embrace every new change and challenge so that they can both offer value and be valued by the ever-changing society they are a part of.

As a technologist and an optimist, I believe we need to remember that the machines and algorithms are here to help. The success of our future will depend entirely on our ability to grasp the potential they offer us. Regardless of the career we choose, our lives, and our children's lives, will be better, more successful, happier and more rewarding if we are confident in how we can use technology to help us achieve more at work, in our relationships and in how we enjoy ourselves.

None of these skills were picked by chance. They were specifically picked because they are the very qualities that will complement the immensely powerful gift that technology brings us. Better still, these are the skills that will remain fundamentally human for decades to come.

Now is the time to think differently about our relationship with technology and its potential. We owe it to ourselves and our children to help ensure we don't just learn to survive in the 21st century but instead learn how to thrive. If we can get this right for ourselves and our kids, we are going to get some amazing rewards as a result.

The rise of the humans starts with us, and it starts now.

The rest is here:
Putting the Humanity Back Into Technology: 10 Skills to Future Proof Your Career - HR Technologist

Vectorspace AI Datasets are Now Available to Power Machine Learning (ML) and Artificial Intelligence (AI) Systems in Collaboration with Elastic -…

SAN FRANCISCO, Jan. 22, 2020 /PRNewswire/ -- Vectorspace AI (VXV) announces datasets that power data engineering, machine learning (ML) and artificial intelligence (AI) systems. Vectorspace AI alternative datasets are designed for predicting unique hidden relationships between objects including current and future price correlations between equities.

Vectorspace AI enables data, ML and Natural Language Processing/Understanding (NLP/NLU) engineers and scientists to save time by testing a hypothesis or running experiments faster, to achieve an improvement in bottom-line revenue and information discovery. Vectorspace AI datasets underpin ML and AI systems, improving returns for the R&D divisions of any company, for example by helping discover hidden relationships in drug development.

"We are happy to be working with Vectorspace AI based on their most recent collaboration with us based on the article we published titled 'Generating and visualizing alpha with Vectorspace AI datasets and Canvas'. They represent the tip of the spear when it comes to advances in machine learning and artificial intelligence. Our customers and partners will certainly benefit from our continued joint development efforts in ML and AI," Shaun McGough, Product Engineering, Elastic.

Increasing the speed of discovery in every industry remains the aim of Vectorspace AI, along with a particular goal which relates to engineering machines to trade information with one another, connected to exchanging and transacting data in a way that minimizes a selected loss function. Data vendors such as Neudata.co, asset management companies and hedge funds including WorldQuant, use Vectorspace AI datasets to improve and protect 'alpha'.

Limited releases of Vectorspace AI datasets will be available in partnership with Amazon and Microsoft.

About Vectorspace AI (vectorspace.ai)

Vectorspace AI focuses on context-controlled NLP/NLU (Natural Language Processing/Understanding) and feature engineering for hidden relationship detection in data for the purpose of powering advanced approaches in Artificial Intelligence (AI) and Machine Learning (ML). Our platform powers research groups, data vendors, funds and institutions by generating on-demand NLP/NLU correlation matrix datasets. We are particularly interested in how we can get machines to trade information with one another or exchange and transact data in a way that minimizes a selected loss function. Our objective is to enable any group analyzing data to save time by testing a hypothesis or running experiments with higher throughput. This can increase the speed of innovation, novel scientific breakthroughs and discoveries. For a little more on who we are, see our latest reddit AMA on r/AskScience or join our 24 hour communication channel here. Vectorspace AI offers NLP/NLU services and alternative datasets consisting of correlation matrices, context-controlled sentiment scoring, and other automatically engineered feature attributes. These services are available utilizing the VXV token and VXV wallet-enabled API. Vectorspace AI is a spin-off from Lawrence Berkeley National Laboratory (LBNL) and the U.S. Dept. of Energy (DOE). The team holds patents in the area of hidden relationship discovery.
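
As a rough illustration of what a correlation-matrix dataset over entities can look like, here is a generic Python sketch built from document co-occurrence. It is not Vectorspace AI's actual pipeline, and the entities and documents are invented for the example.

```python
# Generic sketch of a correlation matrix over entities, derived from how often
# entities co-occur in the same documents. Illustrates the general idea of
# relationship-detection datasets only; not Vectorspace AI's method.
import numpy as np

documents = [
    {"gold", "mining", "inflation"},
    {"gold", "mining"},
    {"inflation", "bonds"},
    {"bonds", "gold"},
]
entities = sorted({e for doc in documents for e in doc})

# Rows = entities, columns = documents; 1 if the entity appears in the document.
occurrence = np.array([[1 if e in doc else 0 for doc in documents] for e in entities])

# Pearson correlation between entity occurrence patterns across documents.
corr = np.corrcoef(occurrence)
for i, e in enumerate(entities):
    print(e, dict(zip(entities, np.round(corr[i], 2))))
```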

SOURCE Vectorspace AI

vectorspace.ai

More here:
Vectorspace AI Datasets are Now Available to Power Machine Learning (ML) and Artificial Intelligence (AI) Systems in Collaboration with Elastic -...

Iguazio Deployed by Payoneer to Prevent Fraud with Real-time Machine Learning – Yahoo Finance

Payoneer uses Iguazio to move from detection to prevention of fraud with predictive machine learning models served in real-time.

Iguazio, the data science platform for real time machine learning applications, today announced that Payoneer, the digital payment platform empowering businesses around the world to grow globally, has selected Iguazio's platform to provide its 4 million customers with a safer payment experience. By deploying Iguazio, Payoneer moved from a reactive fraud detection method to proactive prevention with real-time machine learning and predictive analytics.

Payoneer overcomes the challenge of detecting fraud within complex networks with sophisticated algorithms tracking multiple parameters, including account creation times and name changes. However, prior to using Iguazio, fraud was detected retroactively, so offending users could only be blocked after damage had already been done. Payoneer is now able to take the same sophisticated machine learning models built offline and serve them in real-time against fresh data. This ensures immediate prevention of fraud and money laundering, with predictive machine learning models identifying suspicious patterns continuously. The cooperation was facilitated by Belocal, a leading data and IT solution integrator for mid-size and enterprise companies.
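
The pattern described here, training a model offline on historical cases and then serving that same model in the real-time payment path, can be sketched generically in Python. The features, threshold, and model choice below are assumptions for illustration, not Payoneer's or Iguazio's actual system.

```python
# Generic sketch of "train offline, score in real time": a fraud model is fit on
# historical transactions, then the same fitted model scores each new event as it
# arrives so suspicious payments can be blocked before they complete.
from sklearn.ensemble import RandomForestClassifier

# Offline: historical features, e.g. [amount, account_age_days, name_changes]
X_hist = [[20, 400, 0], [5000, 2, 3], [75, 900, 0], [9000, 1, 4]]
y_hist = [0, 1, 0, 1]  # 1 = confirmed fraud
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_hist, y_hist)

def score_transaction(features, threshold=0.8):
    """Called in the real-time path for every incoming transaction."""
    prob_fraud = model.predict_proba([features])[0][1]
    return ("block" if prob_fraud >= threshold else "allow", prob_fraud)

print(score_transaction([7500, 3, 2]))
```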

"Weve tackled one of our most elusive challenges with real-time predictive models, making fraud attacks almost impossible on Payoneer" noted Yaron Weiss, VP Corporate Security and Global IT Operations (CISO) at Payoneer. "With Iguazios Data Science Platform, we built a scalable and reliable system which adapts to new threats and enables us to prevent fraud with minimum false positives".

"Payoneer is leading innovation in the industry of digital payments and we are proud to be a part of it" said Asaf Somekh, CEO, Iguazio. "Were glad to see Payoneer accelerating its ability to develop new machine learning based services, increasing the impact of data science on the business."

"Payoneer and Iguazio are a great example of technology innovation applied in real-world use-cases and addressing real market gaps" said Hugo Georlette, CEO, Belocal. "We are eager to continue selling and implementing Iguazios Data Science Platform to make business impact across multiple industries."

Iguazio's Data Science Platform enables Payoneer to bring its most intelligent data science strategies to life. Designed to provide a simple cloud experience deployed anywhere, it includes a low latency serverless framework, a real-time multi-model data engine and a modern Python ecosystem running over Kubernetes.

Earlier today, Iguazio also announced having raised $24M from existing and new investors, including Samsung SDS and Kensington Capital Partners. The new funding will be used to drive future product innovation and support global expansion into new and existing markets.

About Iguazio

The Iguazio Data Science Platform enables enterprises to develop, deploy and manage AI applications at scale. With Iguazio, companies can run AI models in real time, deploy them anywhere; multi-cloud, on-prem or edge, and bring to life their most ambitious data-driven strategies. Enterprises spanning a wide range of verticals, including financial services, manufacturing, telecoms and gaming, use Iguazio to create business impact through a multitude of real-time use cases. Iguazio is backed by top financial and strategic investors including Samsung, Verizon, Bosch, CME Group, and Dell. The company is led by serial entrepreneurs and a diverse team of innovators in the USA, UK, Singapore and Israel. Find out more on http://www.iguazio.com

About Belocal

Since its inception in 2006, Belocal has experienced consistent and sustainable growth by developing strong long-term relationships with its technology partners and by providing tremendous value to its clients. We pride ourselves on delivering the most innovative technology solutions, enabling our customers to lead their market segments and stay ahead of the competition. At Belocal, we pride ourselves on our ability to listen, our attention to detail and our expertise in innovation. Such strengths have enabled us to develop new solutions and services to suit the changing needs of our clients and acquire new businesses by tailoring all our solutions and services to the specific needs of each client.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200127005311/en/

Contacts

Iguazio Media Contact: Sahar Dolev-Blitental, +972.73.321.0401, press@iguazio.com

See the rest here:
Iguazio Deployed by Payoneer to Prevent Fraud with Real-time Machine Learning - Yahoo Finance

SwRI, SMU fund SPARKS program to explore collaborative research and apply machine learning to industry problems – TechStartups.com

Southwest Research Institute (SwRI) and the Lyle School of Engineering at Southern Methodist University (SMU) announced the Seed Projects Aligning Research, Knowledge, and Skills (SPARKS) joint program, which aims to strengthen and cultivate long-term research collaboration between the organizations.

Research topics will vary for the annual funding cycles. The inaugural program selections will apply machine learning a subset of artificial intelligence (AI) to solve industry problems. A peer review panel selected two proposals for the 2020 cycle, with each receiving $125,000 in funding for a one-year term.

Our plan for the SPARKS program is not only to foster a close collaboration between our two organizations but, more importantly, to also make a long-lasting impact in our collective areas of research, said Lyle Dean Marc P. Christensen. With the growing demand for AI tools in industry, machine learning was an obvious theme for the program's inaugural year.

The first selected project is a proof of concept that will lay the groundwork for drawing relevant data from satellite and other sources to assess timely surface moisture conditions applicable to other research. SwRI will extract satellite, terrain and weather data that will be used by SMU Lyle to develop machine learning functions that can rapidly process these immense quantities of data. The interpreted data can then be applied to research for municipalities, water management authorities, agricultural entities and others to produce, for example, fire prediction tools and maps of soil or vegetation water content. Dr. Stuart Stothoff of SwRI and Dr. Ginger Alford of SMU Lyle are principal investigators of Enhanced Time-resolution Backscatter Maps Using Satellite Radar Data and Machine Learning.
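
As a rough sketch of the kind of machine learning function described, the Python snippet below maps hypothetical satellite backscatter, terrain, and weather features to a soil-moisture estimate. It is illustrative only; the feature names and values are assumptions, and this is not the SwRI-SMU Lyle model.

```python
# Toy regression from satellite radar backscatter plus terrain and weather
# features to a surface-moisture estimate. All values are hypothetical.
from sklearn.ensemble import GradientBoostingRegressor

# Each row: [radar_backscatter_db, slope_deg, rainfall_mm_last_7d, air_temp_c]
X_train = [[-12.0, 2.0, 30.0, 18.0],
           [-18.5, 5.0, 2.0, 30.0],
           [-10.2, 1.0, 55.0, 15.0],
           [-16.0, 8.0, 5.0, 27.0]]
y_train = [0.32, 0.08, 0.41, 0.12]  # volumetric soil moisture (fraction)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(model.predict([[-14.0, 3.0, 12.0, 22.0]]))  # moisture estimate for a new pixel
```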

The second project tackles an issue related to the variability of renewable energy from wind and solar power systems: effective management of renewable energy supplies to keep the power grid stable. To help resolve this challenge, the SwRI-SMU Lyle team will use advanced machine learning techniques to model and control battery energy storage systems. These improved battery storage systems, which would automatically and strategically push or draw power instantly in response to grid frequency deviations, could potentially be integrated with commercial products and tools to help regulate the grid. Principal investigators of Machine Learning-powered Battery Storage Modeling and Control for Fast Frequency Regulation Service are Dr. Jianhui Wang of SMU Lyle and Yaxi Liu of SwRI.
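
For intuition about the control signal involved, here is a toy Python sketch of frequency-responsive battery dispatch using a simple proportional rule. The project described above uses machine-learning-based control; the constants below are hypothetical and only illustrate how a battery pushes or draws power in response to grid frequency deviations.

```python
# Toy sketch: when grid frequency drops below nominal the battery discharges
# (pushes power), when it rises the battery charges (draws power).
NOMINAL_HZ = 60.0        # 50.0 in many grids
MAX_POWER_KW = 500.0     # hypothetical inverter rating
GAIN_KW_PER_HZ = 5000.0  # hypothetical droop gain

def dispatch_kw(measured_hz):
    """Positive = discharge to the grid, negative = charge from the grid."""
    deviation = NOMINAL_HZ - measured_hz
    power = GAIN_KW_PER_HZ * deviation
    return max(-MAX_POWER_KW, min(MAX_POWER_KW, power))

for hz in (59.95, 60.00, 60.03):
    print(hz, dispatch_kw(hz))
```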

To some extent, the SPARKS program complements our internal research efforts, which are designed to advance technologies and processes so they can be directly applied to industry programs, said Executive Vice President and COO Walt Downing of SwRI. We expect the 2020 selections to do just that, greatly advancing the areas of environmental management and energy storage and supply.

The program will fund up to three projects each year, seeking to bridge the gap between basic and applied research.

See the rest here:
SwRI, SMU fund SPARKS program to explore collaborative research and apply machine learning to industry problems - TechStartups.com

Pricing – Machine Learning | Microsoft Azure

Azure Machine Learning is offered in two editions. Basic is aimed at open source development at cloud scale with a code-first experience; Enterprise adds UI capabilities plus secure and comprehensive machine learning lifecycle management for all skill levels. Feature availability is listed below as Basic / Enterprise.

Automated machine learning
- Create and run experiments in notebooks: Available / Available
- Create and run experiments in studio web experience: Not available / Available
- Industry leading forecasting capabilities: Not available / Available
- Support for deep learning and other advanced learners: Not available / Available
- Large data support (up to 100GB): Not available / Available
- Interpretability in UI: Not available / Available

Machine Learning Pipelines
- Create, run, and publish pipelines using the Azure ML SDK: Available / Available
- Create pipeline endpoints using the Azure ML SDK: Available / Available
- Create, edit, and delete scheduled runs of pipelines using the Azure ML SDK: Available / Available
- Create and publish custom modules using the Azure ML SDK: Available / Available
- View pipeline run details in studio: Available / Available
- Create, run, visualize, and publish pipelines in Azure ML designer: Not available / Available
- Create pipeline endpoints in Azure ML designer: Not available / Available
- Create, edit, and delete scheduled runs of pipelines in Azure ML designer: Not available / Available
- Create and publish custom modules in Azure ML designer: Not available / Available

Integrated notebooks
- Workspace notebook and file sharing: Available / Available
- R and Python support: Available / Available
- Notebook collaboration: Available / Available

Compute instance
- Managed compute instances for integrated notebooks: Available / Available
- Sharing of compute instances: Available / Available
- Collaborative debugging of models: Available / Available
- Jupyter, JupyterLab, Visual Studio Code: Available / Available
- Virtual Network (VNet) support for deployment: Available / Available

SDK support
- R and Python SDK support: Available / Available

Security
- Role Based Access Control (RBAC) support: Available / Available
- Virtual Network (VNet) support for training: Available / Available
- Virtual Network (VNet) support for inference: Available / Available
- Scoring endpoint authentication: Available / Available

Compute
- Cross workspace capacity sharing and quotas: Not available / Available

Data for machine learning
- Create, view or edit datasets and datastores from the SDK: Available / Available
- Create, view or edit datasets and datastores from the UI: Available / Available
- View, edit, or delete dataset drift monitors from the SDK: Available / Available
- View, edit, or delete dataset drift monitors from the UI: Not available / Available

MLOps
- Create ML pipelines in SDK: Available / Available
- Batch inferencing: Available / Available
- Model profiling: Available / Available
- Interpretability in UI: Not available / Available

Labeling
- Labeling Project Management Portal: Available / Available
- Labeler Portal: Available / Available
- Labeling using private workforce: Available / Available

Link:
Pricing - Machine Learning | Microsoft Azure

Uncover the Possibilities of AI and Machine Learning With This Bundle – Interesting Engineering

If you want to be competitive in an increasingly data-driven world, you need to have at least a baseline understanding of AI and machine learning, the driving forces behind some of today's most important technologies.

The Essential AI & Machine Learning Certification Training Bundle will introduce you to a wide range of popular methods and tools that are used in these lucrative fields, and it's available for over 90 percent off at just $39.99.

This 4-course bundle is packed with over 280 lessons that will introduce you to NLP, computer vision, data visualization, and much more.

After an introduction to the basic terminology of the field, you'll explore the interconnected worlds of AI and machine learning through instruction that focuses on neural networks, deep architectures, large-scale data analysis, and much more.

The lessons are easy to follow regardless of your previous experience, and there are plenty of real-world examples to keep you on track.

Don't get left behind during the AI and machine learning revolution. The Essential AI & Machine Learning Certification Training Bundle will get you up to speed for just $39.99, over 90 percent off for a limited time.

Prices are subject to change.

This is a promotional article about one of Interesting Engineering's partners. By shopping with us, you not only get the materials you need, but you're also supporting our website.

View post:
Uncover the Possibilities of AI and Machine Learning With This Bundle - Interesting Engineering

Machine Learning Overview | What is Machine Learning?

Machines, most often computers, are given rules to follow known as algorithms. They are also given an initial set of data to explore when they first begin learning. That data is called training data.

Computers start to recognize patterns and make decisions based on algorithms and training data. Depending on the type of machine learning being used, they are also given targets to hit or they receive rewards when they make the right decision or take a positive step towards their end goal.

As they build this understanding or learn, they work through a series of steps to transform new inputs into outputs which may consist of brand-new datasets, labeled data, decisions, or even actions.
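
A minimal Python example of that loop, using an assumed toy dataset, shows an algorithm, training data, and the resulting ability to turn a new input into an output without anyone writing an explicit rule for it.

```python
# Minimal illustration: an algorithm (a decision tree), labeled training data,
# and a prediction for a new, previously unseen input.
from sklearn.tree import DecisionTreeClassifier

# Training data: [height_cm, weight_kg] labeled "cat" or "dog" (toy example).
X_train = [[25, 4], [30, 5], [60, 25], [70, 30]]
y_train = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier().fit(X_train, y_train)  # learn patterns
print(model.predict([[28, 4.5]]))  # new input -> predicted output: ['cat']
```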

The idea is that they learn enough to operate without any human intervention. In this way they start to develop and demonstrate what we call artificial intelligence. Machine learning is one of the main ways artificial intelligence is created.

Other examples of artificial intelligence include robotics, speech recognition, and natural language generation, all of which also require some element of machine learning.

There are many different reasons to implement machine learning and ways to go about it. There are also a variety of machine learning algorithms and types and sources of training data.

Follow this link:
Machine Learning Overview | What is Machine Learning?

Machine Learning Definition

What Is Machine Learning?

Machine learning is the concept that a computer program can learn and adapt to new data without human interference. Machine learning is a field of artificial intelligence (AI) that keeps a computer's built-in algorithms current regardless of changes in the worldwide economy.

Various sectors of the economy are dealing with huge amounts of data available in different formats from disparate sources. The enormous amount of data, known as big data, is becoming easily available and accessible due to the progressive use of technology. Companies and governments realize the huge insights that can be gained from tapping into big data but lack the resources and time required to comb through its wealth of information. As such, artificial intelligence measures are being employed by different industries to gather, process, communicate, and share useful information from data sets. One method of AI that is increasingly utilized for big data processing is machine learning.

The various data applications of machine learning are formed through a complex algorithm or source code built into the machine or computer. This programming code creates a model that identifies the data and builds predictions around the data it identifies. The model uses parameters built into the algorithm to form patterns for its decision-making process. When new or additional data becomes available, the algorithm automatically adjusts the parameters to check for a pattern change, if any. However, the model shouldn't change.
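
One way to picture parameters adjusting while the model itself stays the same is an incrementally trained linear classifier. The sketch below is a generic Python illustration with invented data, not a description of any particular vendor's system.

```python
# The coefficients (parameters) of this linear classifier update as new data
# arrives, while the model itself (a linear decision rule) is unchanged.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# Initial batch of data: two features per example, binary label.
X0 = np.array([[0.1, 1.0], [0.9, 0.2], [0.2, 0.8], [0.8, 0.1]])
y0 = np.array([0, 1, 0, 1])
model.partial_fit(X0, y0, classes=[0, 1])
print("initial coefficients:", model.coef_)

# New data arrives later; only the parameters are adjusted.
X1 = np.array([[0.3, 0.9], [0.7, 0.3]])
y1 = np.array([0, 1])
model.partial_fit(X1, y1)
print("updated coefficients:", model.coef_)
```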

Machine learning is used in different sectors for various reasons. Trading systems can be calibrated to identify new investment opportunities. Marketing and e-commerce platforms can be tuned to provide accurate and personalized recommendations to their users based on the users' internet search history or previous transactions. Lending institutions can incorporate machine learning to predict bad loans and build a credit risk model. Information hubs can use machine learning to cover huge amounts of news stories from all corners of the world. Banks can create fraud detection tools from machine learning techniques. The incorporation of machine learning in the digital-savvy era is endless as businesses and governments become more aware of the opportunities that big data presents.

How machine learning works can be better explained by an illustration in the financial world. Traditionally, investment players in the securities market, such as financial researchers, analysts, asset managers, and individual investors, scour through a lot of information from different companies around the world to make profitable investment decisions. However, some pertinent information may not be widely publicized by the media and may be privy to only a select few who have the advantage of being employees of the company or residents of the country where the information stems from. In addition, there's only so much information humans can collect and process within a given time frame. This is where machine learning comes in.

An asset management firm may employ machine learning in its investment analysis and research area. Say the asset manager only invests in mining stocks. The model built into the system scans the web and collects all types of news events from businesses, industries, cities, and countries, and this information gathered makes up the data set. The asset managers and researchers of the firm would not have been able to get the information in the data set using their human powers and intellects. The parameters built alongside the model extract only data about mining companies, regulatory policies on the exploration sector, and political events in select countries from the data set. Say a mining company, XYZ, just discovered a diamond mine in a small town in South Africa; the machine learning app would highlight this as relevant data. The model could then use an analytics tool called predictive analytics to make predictions on whether the mining industry will be profitable for a time period, or which mining stocks are likely to increase in value at a certain time. This information is relayed to the asset manager to analyze and make a decision for their portfolio. The asset manager may make a decision to invest millions of dollars into XYZ stock.

In the wake of an unfavorable event, such as South African miners going on strike, the computer algorithm adjusts its parameters automatically to create a new pattern. This way, the computational model built into the machine stays current even with changes in world events and without needing a human to tweak its code to reflect the changes. Because the asset manager received this new data on time, they are able to limit their losses by exiting the stock.

Read more here:
Machine Learning Definition

Are machine-learning-based automation tools good enough for storage management and other areas of IT? Let us know – The Register

Reader survey: We hear a lot these days about IT automation. Yet whether it's labelled intelligent infrastructure, AIOps, self-driving IT, or even private cloud, the aim is the same.

And that aim is: to use the likes of machine learning, workflow automation, and infrastructure-as-code to automatically make changes in real-time, eliminating as much as possible of the manual drudgery associated with routine IT administration.

Are the latest AI/ML-powered intelligent automation solutions trustworthy and ready for mainstream deployment, particularly in areas such as storage management?

Should we go ahead and implement the technology now on offer?

This controversial topic is the subject of our latest reader survey, and we are eager to hear your views.

Please complete our short survey, here.

As always, your responses will be anonymous and your privacy assured.

The rest is here:
Are machine-learning-based automation tools good enough for storage management and other areas of IT? Let us know - The Register

Machine Learning Engineer Interview Questions: What You Need to Know – Dice Insights

Along with artificial intelligence (A.I.), machine learning is regarded as one of the most in-demand areas for tech employment at the moment. Machine learning engineers develop algorithms and models that can adapt and learn from data. As a result, those who thrive in this discipline are generally skilled not only in computer science and programming, but also statistics, data science, deep learning, and problem solving.

According to Burning Glass, which collects and analyzes millions of job postings from across the country, the prospects for machine learning as an employer-desirable skill are quite good, with jobs projected to rise 36.5 percent over the next decade. Moreover, even those with relatively little machine-learning experience can pull down quite a solid median salary.

Dice Insights spoke with Oliver Sulley, director of Edge Tech Headhunters, to figure out how you should prepare, what you'll be asked during an interview, and what you should say to grab the gig.

You're going to be faced potentially by bosses who don't necessarily know what it is that you're doing, or don't understand ML and have just been [told] they need to get it in the business, Sulley said. They're being told by the transformation guys that they need to bring it on board.

As he explained, that means one of the key challenges facing machine learning engineers is determining what technology would be most beneficial to the employer, and being able to work as a cohesive team that may have been put together on very short notice.

What a lot of companies are looking to do is take data they've collected and stored, and try and get them to build some sort of model that helps them predict what they can be doing in the future, Sulley said. For example, how to make their stock leaner, or predicting trends that could come up over the year that would change their need for services that they offer.

Sulley notes that machine learning engineers are in rarefied air at the moment; it's a high-demand position, and lots of companies are eager to show they've brought machine learning specialists onboard.

If they're confident in their skills, then a lot of the time they have to make sure the role is right for them, Sulley said. It's more about the soft skills that are going to be important.

Many machine learning engineers are strong on the technical side, but they often have to interact with teams such as operations; as such, they need to be able to translate technical specifics into layman's terms and express how this data is going to benefit other areas of the company.

Building those soft skills, and making sure people understand how you will work in a team, is just as important at this moment in time, Sulley added.

There are quite a few different roles for machine learning engineers, and so it's likely that all these questions could come up, but it will depend on the position. We find questions with more practical experience are more common, and therefore will ask questions related to past work and the individual contributions engineers have made, Sulley said.

For example:

A lot of data engineering and machine learning roles involve working with different tech stacks, so it's hard to nail down a hard and fast set of skills, as much depends on the company you're interviewing with. (If you're just starting out with machine learning, here are some resources that could prove useful.)

For example, if it's a cloud-based role, a machine learning engineer is going to want to have experience with AWS and Azure; and for languages alone, Python and R are the most important, because that's what we see more and more in machine learning engineering, Sulley said. For deployment, I'd say Docker, but it really depends on the person's background and what they're looking to get into.

Sulley said ideal machine learning candidates possess a really analytical mind, as well as a passion for thinking about the world in terms of statistics.

Someone who can connect the dots and has a statistical mind, someone who has a head for numbers and who is interested in that outside of work, rather than someone who just considers it their job and what they do, he said.

As Burning Glass data shows, quite a few jobs now ask for machine-learning skills; if not essential, they're often a "nice to have" for many employers that are thinking ahead.

Sulley suggests the questions you ask should be all about the technology: it's about understanding what the companies are looking to build, what their vision is (and your potential contribution to it), and looking to see where your career will grow within that company.

You want to figure out whether you'll have a clear progression forward, he said. From that, you will understand how much work they're going to do with you. Find out what they're really excited about, and that will help you figure out whether you'll be a valued member of the team. It's a really exciting space, and they should be excited by the opportunities that come with bringing you onboard.

Continued here:
Machine Learning Engineer Interview Questions: What You Need to Know - Dice Insights

Navigating the New Landscape of AI Platforms – Harvard Business Review

Executive Summary

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tooling for AI systems than they do building the AI systems themselves. Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling, and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies.

Nearly two years ago, Seattle Sport Sciences, a company that provides data to soccer club executives, coaches, trainers and players to improve training, made a hard turn into AI. It began developing a system that tracks ball physics and player movements from video feeds. To build it, the company needed to label millions of video frames to teach computer algorithms what to look for. It started out by hiring a small team to sit in front of computer screens, identifying players and balls on each frame. But it quickly realized that it needed a software platform in order to scale. Soon, its expensive data science team was spending most of its time building a platform to handle massive amounts of data.

These are heady days when every CEO can see or at least sense opportunities for machine-learning systems to transform their business. Nearly every company has processes suited for machine learning, which is really just a way of teaching computers to recognize patterns and make decisions based on those patterns, often faster and more accurately than humans. Is that a dog on the road in front of me? Apply the brakes. Is that a tumor on that X-ray? Alert the doctor. Is that a weed in the field? Spray it with herbicide.

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tools for AI systems than they do building the systems themselves. A recent survey of 500 companies by the firm Algorithmia found that expensive teams spend less than a quarter of their time training and iterating machine-learning models, which is their primary job function.

Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies, like Seattle Sports Science.

Frustrated that its data science team was spinning its wheels, Seattle Sports Sciences' AI architect John Milton finally found a commercial solution that did the job. I wish I had realized that we needed those tools, said Milton. He hadn't factored the infrastructure into their original budget, and having to go back to senior management and ask for it wasn't a pleasant experience for anyone.

The AI giants, Google, Amazon, Microsoft and Apple, among others, have steadily released tools to the public, many of them free, including vast libraries of code that engineers can compile into deep-learning models. Facebook's powerful object-recognition tool, Detectron, has become one of the most widely adopted open-source projects since its release in 2018. But using those tools can still be a challenge, because they don't necessarily work together. This means data science teams have to build connections between each tool to get them to do the job a company needs.

The newest leap on the horizon addresses this pain point. New platforms are now allowing engineers to plug in components without worrying about the connections.

For example, Determined AI and Paperspace sell platforms for managing the machine-learning workflow. Determined AI's platform includes automated elements to help data scientists find the best architecture for neural networks, while Paperspace comes with access to dedicated GPUs in the cloud.

If companies don't have access to a unified platform, they're saying, Here's this open source thing that does hyperparameter tuning. Here's this other thing that does distributed training, and they are literally gluing them all together, said Evan Sparks, cofounder of Determined AI. The way they're doing it is really with duct tape.
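
For readers unfamiliar with the term, hyperparameter tuning is the search over model configurations that such platforms automate and distribute. The single-machine Python sketch below shows the bare version of the idea.

```python
# Bare-bones hyperparameter tuning: try several model configurations and keep
# the one that cross-validates best. Platforms automate and distribute this.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
param_grid = {"n_estimators": [10, 50, 100], "max_depth": [2, 4, None]}

search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```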

Labelbox is a training data platform, or TDP, for managing the labeling of data so that data science teams can work efficiently with annotation teams across the globe. (The author of this article is the company's co-founder.) It gives companies the ability to track their data, spot and fix bias in the data, and optimize the quality of their training data before feeding it into their machine-learning models.

It's the solution that Seattle Sports Sciences uses. John Deere uses the platform to label images of individual plants, so that smart tractors can spot weeds and deliver pesticide precisely, saving money and sparing the environment unnecessary chemicals.

Meanwhile, companies no longer need to hire experienced researchers to write machine-learning algorithms, the steam engines of today. They can find them for free or license them from companies who have solved similar problems before.

Algorithmia, which helps companies deploy, serve and scale their machine-learning models, operates an algorithm marketplace so data science teams don't duplicate other people's efforts by building their own. Users can search through the 7,000 different algorithms on the company's platform and license one or upload their own.

Companies can even buy complete off-the-shelf deep learning models ready for implementation.

Fritz.ai, for example, offers a number of pre-trained models that can detect objects in videos or transfer artwork styles from one image to another, all of which run locally on mobile devices. The company's premium services include creating custom models and more automation features for managing and tweaking models.

And while companies can use a TDP to label training data, they can also find pre-labeled datasets, many for free, that are general enough to solve many problems.

Soon, companies will even offer machine-learning as a service: Customers will simply upload data and an objective and be able to access a trained model through an API.
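
Such a service might be used along the following lines. The endpoint, field names, and response format in this Python sketch are entirely hypothetical and are meant only to illustrate the shape of an upload-data-and-objective API.

```python
# Hypothetical machine-learning-as-a-service flow: upload data plus an objective,
# then query the trained model over HTTP. Nothing here refers to a real provider.
import requests

BASE = "https://ml-service.example.com/v1"  # hypothetical provider

# 1. Upload a dataset and state the objective (which column to predict).
job = requests.post(f"{BASE}/train", json={
    "dataset_url": "https://example.com/transactions.csv",
    "objective": "predict:is_fraud",
}).json()

# 2. Later, score new records against the trained model.
result = requests.post(f"{BASE}/models/{job['model_id']}/predict", json={
    "records": [{"amount": 7500, "account_age_days": 3}],
}).json()
print(result)
```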

In the late 18th century, Maudslay's lathe led to standardized screw threads and, in turn, to interchangeable parts, which spread the industrial revolution far and wide. Machine-learning tools will do the same for AI, and, as a result of these advances, companies are able to implement machine-learning with fewer data scientists and less senior data science teams. That's important given the looming machine-learning human-resources crunch: According to a 2019 Dun & Bradstreet report, 40 percent of respondents from Forbes Global 2000 organizations say they are adding more AI-related jobs. And the number of AI-related job listings on the recruitment portal Indeed.com jumped 29 percent from May 2018 to May 2019. Most of that demand is for supervised-learning engineers.

But C-suite executives need to understand the need for those tools and budget accordingly. Just as Seattle Sports Sciences learned, it's better to familiarize yourself with the full machine-learning workflow and identify necessary tooling before embarking on a project.

That tooling can be expensive, whether the decision is to build or to buy. As is often the case with key business infrastructure, there are hidden costs to building. Buying a solution might look more expensive up front, but it is often cheaper in the long run.

Once you've identified the necessary infrastructure, survey the market to see what solutions are out there and build the cost of that infrastructure into your budget. Don't fall for a hard sell. The industry is young, both in terms of the time that it's been around and the age of its entrepreneurs. The ones who are in it out of passion are idealistic and mission driven. They believe they are democratizing an incredibly powerful new technology.

The AI tooling industry is facing more than enough demand. If you sense someone is chasing dollars, be wary. The serious players are eager to share their knowledge and help guide business leaders toward success. Successes benefit everyone.

See the rest here:
Navigating the New Landscape of AI Platforms - Harvard Business Review