What is Machine Learning? | Emerj

Typing "what is machine learning?" into a Google search opens a Pandora's box of forums, academic research, and misinformation. The purpose of this article is to simplify the definition and understanding of machine learning with direct help from our panel of machine learning researchers.

At Emerj, the AI Research and Advisory Company, many of our enterprise clients feel as though they should be investing in machine learning projects, but they don't have a strong grasp of what it is. We often direct them to this resource to get them started with the fundamentals of machine learning in business.

In addition to an informed, working definition of machine learning (ML), we detail the challenges and limitations of getting machines to think, some of the issues being tackled today in deep learning (the frontier of machine learning), and key takeaways for developing machine learning applications for business use-cases.

This article will be broken up into the following sections:

We put together this resource to help with whatever your area of curiosity about machine learning may be, so scroll to your section of interest, or read the article in order, starting with our machine learning definition below:

* Machine Learning is the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions.

The above definition encapsulates the ideal objective or ultimate aim of machine learning, as expressed by many researchers in the field. The purpose of this article is to provide a business-minded reader with expert perspective on how machine learning is defined and how it works. Machine learning and artificial intelligence share the same definition in the minds of many; however, there are some distinct differences readers should recognize as well. References and related researcher interviews are included at the end of this article for further digging.

(Our aggregate machine learning definition can be found at the beginning of this article)

As with any concept, machine learning may have a slightly different definition, depending on whom you ask. We combed the Internet to find five practical definitions from reputable sources:

We sent these definitions to experts whom we've interviewed and/or included in one of our past research consensuses, and asked them to respond with their favorite definition or to provide their own. Our introductory definition is meant to reflect the varied responses. Below are some of their responses:

Dr. Yoshua Bengio, Université de Montréal:

ML should not be defined by negatives (thus ruling out definitions 2 and 3). Here is my definition:

Machine learning research is part of research on artificial intelligence, seeking to provide knowledge to computers through data, observations and interacting with the world. That acquired knowledge allows computers to correctly generalize to new settings.

Dr. Danko Nikolic, CSC and Max-Planck Institute:

(edit of number 2 above): Machine learning is the science of getting computers to act without being explicitly programmed, but instead letting them learn a few tricks on their own.

Dr. Roman Yampolskiy, University of Louisville:

Machine Learning is the science of getting computers to learn as well as humans do or better.

Dr. Emily Fox, University of Washington:

My favorite definition is #5.

There are many different types of machine learning algorithms, with hundreds published each day, and they're typically grouped by either learning style (i.e. supervised learning, unsupervised learning, semi-supervised learning) or by similarity in form or function (i.e. classification, regression, decision tree, clustering, deep learning, etc.). Regardless of learning style or function, all combinations of machine learning algorithms consist of the following: representation (a model, or a language the computer can understand), evaluation (a scoring or objective function for judging candidate models), and optimization (a search method for finding the highest-scoring model).

Image credit: Dr. Pedro Domingos, University of Washington

The fundamental goal of machine learning algorithms is to generalize beyond the training samples, i.e. to successfully interpret data they have never seen before.

Concepts and bullet points can only take one so far in understanding. When people ask "What is machine learning?", they often want to see what it is and what it does. Below are some visual representations of machine learning models, with accompanying links for further information. Even more resources can be found at the bottom of this article.

Decision tree model (see the code sketch after this list)

Gaussian mixture model

Dropout neural network

Merging chrominance and luminance using Convolutional Neural Networks
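To make the first of these concrete, here is a minimal sketch of training a decision tree classifier (assuming Python with scikit-learn installed, and using its bundled Iris data set rather than anything from the models pictured above):

```python
# A minimal decision tree sketch using scikit-learn's bundled Iris data set.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)            # 150 flower samples, 4 features each
model = DecisionTreeClassifier(max_depth=3)  # limit depth to keep the tree readable
model.fit(X, y)                              # learn split rules from the data

# Predict the species of a new, unseen measurement.
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))
```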

There are different approaches to getting machines to learn, from using basic decision trees to clustering to layers of artificial neural networks (the latter of which has given way to deep learning), depending on what task you're trying to accomplish and the type and amount of data you have available. This dynamic plays out in applications as varied as medical diagnostics and self-driving cars.

While emphasis is often placed on choosing the best learning algorithm, researchers have found that some of the most interesting questions arise when none of the available machine learning algorithms performs up to par. Most of the time this is a problem with training data, but it also occurs when working with machine learning in new domains.

Research done when working on real applications often drives progress in the field, for two reasons: 1. the tendency to discover boundaries and limitations of existing methods, and 2. researchers and developers working with domain experts and leveraging their time and expertise to improve system performance.

Sometimes this also occurs by accident. We might consider model ensembles, or combinations of many learning algorithms to improve accuracy, to be one example. Teams competing in the 2009 Netflix Prize found that they got their best results when combining their learners with other teams' learners, resulting in an improved recommendation algorithm (read Netflix's blog for more on why they didn't end up using this ensemble).

One important point (based on interviews and conversations with experts in the field), in terms of application within business and elsewhere, is that machine learning is not just, or even primarily, about automation, an often misunderstood concept. If you think this way, you're bound to miss the valuable insights that machines can provide and the resulting opportunities (rethinking an entire business model, for example, as has happened in industries like manufacturing and agriculture).

Machines that learn are useful to humans because, with all of their processing power, they're able to more quickly highlight or find patterns in big (or other) data that would otherwise have been missed by human beings. Machine learning is a tool that can be used to enhance humans' ability to solve problems and make informed inferences on a wide range of problems, from helping diagnose diseases to coming up with solutions for global climate change.

"Machine learning can't get something from nothing; what it does is get more from less." - Dr. Pedro Domingos, University of Washington

The two biggest historical (and ongoing) problems in machine learning have involved overfitting (in which the model fits the training data so closely that it fails to generalize to new data, whether through bias toward the training set or through variance, i.e. learning random noise) and the curse of dimensionality (algorithms with more features must work in higher dimensions, making the data harder to understand). Having access to a large enough data set has in some cases also been a primary problem.

One of the most common mistakes among machine learning beginners is testing on the training data and having the illusion of success; Domingos (and others) emphasize the importance of keeping part of the data set separate when testing models, using only that reserved data to test a chosen model, and then learning on the whole data set.
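As a hedged illustration of that holdout discipline (a minimal sketch assuming Python and scikit-learn; the data set is a stand-in, not one discussed in this article):

```python
# Hold out a test set so the model is scored on data it never saw in training.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)   # reserve 20% for final testing

model = DecisionTreeClassifier().fit(X_train, y_train)

# A large gap between these two scores is the classic signature of overfitting.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))
```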

When a learning algorithm (i.e. a learner) is not working, often the quicker path to success is to feed the machine more data, the availability of which is by now well known as a primary driver of progress in machine and deep learning algorithms in recent years; however, this can lead to issues with scalability, in which we have more data but the time to learn from that data remains an issue.

In terms of purpose, machine learning is not an end or a solution in and of itself. Furthermore, attempting to use it as a blanket solution (i.e. BLANK) is not a useful exercise; instead, coming to the table with a problem or objective is often best driven by a more specific question BLANK.

Deep learning involves the study and design of machine learning algorithms for learning good representations of data at multiple levels of abstraction (ways of arranging computer systems). Recent publicity of deep learning through DeepMind, Facebook, and other institutions has highlighted it as the next frontier of machine learning.

The International Conference on Machine Learning (ICML) is widely regarded as one of the most important conferences in the world. This year's took place in June in New York City, and it brought together researchers from all over the world who are working on addressing current challenges in deep learning:

Deep-learning systems have made great gains over the past decade in domains like object detection and recognition, text-to-speech, information retrieval, and others. Research is now focused on developing data-efficient machine learning, i.e. deep learning systems that can learn more efficiently, with the same performance in less time and with less data, in cutting-edge domains like personalized healthcare, robot reinforcement learning, sentiment analysis, and others.

Below is a selection of best practices and concepts for applying machine learning that we've collated from our interviews for our podcast series, and from select sources cited at the end of this article. We hope that some of these principles will clarify how ML is used, and how to avoid some of the common pitfalls that companies and researchers might be vulnerable to when starting off on an ML-related project.

Emerj helps businesses get started with artificial intelligence and machine learning. Using our AI Opportunity Landscapes, clients can discover the largest opportunities for automation and AI at their companies and pick the highest-ROI first AI projects. Instead of wasting money on pilot projects that are destined to fail, Emerj helps clients do business with the right AI vendors for them and increase their AI project success rate.

1 http://homes.cs.washington.edu/~pedrod/papers/cacm12.pd

2 http://videolectures.net/deeplearning2016_precup_machine_learning/

3 http://www.aaai.org/ojs/index.php/aimagazine/article/view/2367/2272

4 https://research.facebook.com/blog/facebook-researchers-focus-on-the-most-challenging-machine-learning-questions-at-icml-2016/

5 https://sites.google.com/site/dataefficientml/

6 http://www.cl.uni-heidelberg.de/courses/ws14/deepl/BengioETAL12.pdf

One of the best ways to learn about artificial intelligence concepts is to learn from the research and applications of the smartest minds in the field. Below is a brief list of some of our interviews with machine learning researchers, many of which may be of interest for readers who want to explore these topics further:

Read the original post:
What is Machine Learning? | Emerj

Management Styles And Machine Learning: A Case Of Life Imitating Art – Forbes

Oscar Wilde coined a great phrase in saying that life imitates art far more than art imitates life. What he meant by that is that through art, we appreciate life more. Art is an expression of life, and it helps us better understand ourselves and our surroundings. In business, learning how life imitates art, and even how art imitates life, is an intriguing way to capitalize on the value they both can bring.

When looking at organizations, there is a lot of art and life in how the business is run. Leadership styles range from specific and precise to open and adaptable; often this is based on a manager's own personal life experience. An example on the art side of the equation is machine learning. It's something we have created as an expression of what it means to be human. Both have common benefits that have the capability to propel the business forward.

The Art Of Machine Learning

Although it's been around for many years, it's only in the past 10 years that machine learning has become more mainstream. In the past, technology used traditional programming, where programmers hard-coded rigid sets of instructions that tried to accommodate every possible scenario. Outputs were predetermined and could not respond to new scenarios without additional coding. Not only does this require continuous updates, costing lost time and money, but it can also result in outputs that are inaccurate when the program meets problems it could not predict.

Computer programming has advanced to where programs are now capable of learning and evolving on their own. With traditional programming, an input was run through a program and created an output. Now, with machine learning, an input is run through a trainable data model, so the output evolves as the model is continually learning and adapting, much in the same way a human brain does.

Take a game of checkers. Using traditional programming, it is unlikely that the computer will ever beat the programmer, who is the trainer of that software. Traditional programming limits the game to a set of rules based on that programmer's knowledge of the game. With machine learning, by contrast, the program is based on a self-improving model that is not only able to make decisions for each move, but also evolves the program, learning which moves are most likely to lead to a win.

After several matches, the program will learn from its "experience" in prior matches how to play even better than a human. In the same way humans learn which moves to take and which to avoid, the program also keeps track of that information and eventually becomes a better player.
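A heavily simplified sketch of that idea (assuming Python; the board encoding, exploration rate, and update rule are illustrative placeholders, not a real checkers engine):

```python
import random
from collections import defaultdict

# Track an estimated value for each (board_state, move) pair seen in self-play.
move_value = defaultdict(float)

def choose_move(state, legal_moves, exploration=0.1):
    """Usually pick the highest-valued move; sometimes explore a random one."""
    if random.random() < exploration:
        return random.choice(legal_moves)
    return max(legal_moves, key=lambda m: move_value[(state, m)])

def learn_from_game(history, won, step=0.1):
    """Nudge the value of every move played toward the game's outcome."""
    outcome = 1.0 if won else -1.0
    for state, move in history:
        key = (state, move)
        move_value[key] += step * (outcome - move_value[key])
```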

A Different Take On Management Styles

Now let's consider management styles as part of life. In the business world, you typically find two types of managers: those who give explicit instructions that need to be followed in order to accomplish a goal, and those who provide the goal without detailed instructions, but offer guidance and counseling as needed to get there.

In the first scenario, the worker needs only to apply the instructions received to get the work done, which makes the manager happy with the result, similar to traditional programming. But the results are limited and don't take into account unexpected variables and changes that happen as part of the business. In this micromanagement style, the worker often has to go back to the manager for additional instructions, costing both time and money, again much like traditional programming.

In the second scenario, the worker is expected to find their own path to achieving the goal by learning, trying, and testing different options. From that experience, the worker modifies their process based on the results of their efforts, much like machine learning. The process is far more flexible and accommodating, as it can be adjusted easily, on the fly. Like machine learning, this strategic style of management provides flexibility and autonomy, saving time and producing much better results.

Leverage Common Benefits For Optimal Results

Both machine learning and strategic management styles have some common benefits, if done well. One benefit is scalability. It's nearly impossible to scale an organization with a micromanagement style. As the company grows, managers will have increasingly less time to spend with workers. The same is true of traditional programming. Unless the program can learn and change on its own, it will never be able to scale to keep pace with the business.

Another common benefit is the ability to outsmart the competition. Companies that embrace machine learning's intelligent algorithms and better analytical power will have a leg up on those organizations that do not. They can take advantage of the automated learning capabilities built into machine learning. In the same way, companies that take advantage of a strategic management style over micromanaging will enable workers to be self-sufficient and to contribute the full power of their wisdom.

Oscar Wilde was surprisingly prophetic when he talked about life imitating art. The best organizations are those that leverage the commonalities of both life and art: of machine learning and strategic management styles. As life can be fuller by imitating art, art and life together help organizations realize their greatest potential.

More:
Management Styles And Machine Learning: A Case Of Life Imitating Art - Forbes

Alibaba using machine learning to fight coronavirus with AI – Gigabit Magazine – Technology News, Magazine and Website

Chinese ecommerce giant Alibaba has announced a breakthrough in natural language processing (NLP) through machine learning.

NLP is a key technology in language and speech applications such as machine translation and automatic speech recognition. The company's DAMO Academy, a global research program, has made a breakthrough in machine reading techniques with applications in the fight against coronavirus.

Alibaba not only topped the GLUE Benchmark rankings, a leaderboard measuring the performance of competing NLP models, despite competition from the likes of Google, Facebook, and Microsoft, but also beat the human baselines, signifying that its model could outperform a human at understanding language. Applications include sentiment analysis, textual entailment (i.e. determining whether one sentence logically follows from another), and question answering.
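For a sense of what one such application looks like in practice, here is a minimal sentiment-analysis sketch (assuming Python with the Hugging Face transformers library installed; this pulls a generic default model, not Alibaba's system):

```python
# Sentiment analysis with an off-the-shelf pretrained model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model
result = classifier("The new treatment guidelines were a welcome relief.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```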


With the solution already deployed in technologies ranging from AI chatbots to search engines, it is now finding use in the analysis of healthcare records by centers for disease control in cities across China.

"We are excited to achieve a new breakthrough in driving research of NLP development," said Si Luo, head of NLP Research at Alibaba DAMO Academy. "Not only is NLP a core technology underpinning Alibaba's various businesses, which serve hundreds of millions of customers, but it has also become a critical technology in fighting the coronavirus. We hope we can continue to leverage our leading technologies and contribute to the community during this difficult time."

Other AI initiatives put forth by the company for use in containing the coronavirus epidemic include technology to assist in the diagnosis of the virus. The company also made its Alibaba Cloud computing platform free for research organisations seeking to sequence the virus genome.

Read more:
Alibaba using machine learning to fight coronavirus with AI - Gigabit Magazine - Technology News, Magazine and Website

Machine learning could speed the arrival of ultra-fast-charging electric car – Chemie.de

Using machine learning, a Stanford-led research team has slashed battery testing times - a key barrier to longer-lasting, faster-charging batteries for electric vehicles.

Battery performance can make or break the electric vehicle experience, from driving range to charging time to the lifetime of the car. Now, artificial intelligence has made dreams like recharging an EV in the time it takes to stop at a gas station a more likely reality, and could help improve other aspects of battery technology.

For decades, advances in electric vehicle batteries have been limited by a major bottleneck: evaluation times. At every stage of the battery development process, new technologies must be tested for months or even years to determine how long they will last. But now, a team led by Stanford professors Stefano Ermon and William Chueh has developed a machine learning-based method that slashes these testing times by 98 percent. Although the group tested their method on battery charge speed, they said it can be applied to numerous other parts of the battery development pipeline and even to non-energy technologies.

"In battery testing, you have to try a massive number of things, because the performance you get will vary drastically," said Ermon, an assistant professor of computer science. "With AI, we're able to quickly identify the most promising approaches and cut out a lot of unnecessary experiments."

The study, published by Nature on Feb. 19, was part of a larger collaboration among scientists from Stanford, MIT and the Toyota Research Institute that bridges foundational academic research and real-world industry applications. The goal: finding the best method for charging an EV battery in 10 minutes that maximizes the battery's overall lifetime. The researchers wrote a program that, based on only a few charging cycles, predicted how batteries would respond to different charging approaches. The software also decided in real time what charging approaches to focus on or ignore. By reducing both the length and number of trials, the researchers cut the testing process from almost two years to 16 days.

"We figured out how to greatly accelerate the testing process for extreme fast charging," said Peter Attia, who co-led the study while he was a graduate student. "What's really exciting, though, is the method. We can apply this approach to many other problems that, right now, are holding back battery development for months or years."

Designing ultra-fast-charging batteries is a major challenge, mainly because it is difficult to make them last. The intensity of the faster charge puts greater strain on the battery, which often causes it to fail early. To prevent this damage to the battery pack, a component that accounts for a large chunk of an electric car's total cost, battery engineers must test an exhaustive series of charging methods to find the ones that work best.

The new research sought to optimize this process. At the outset, the team saw that fast-charging optimization amounted to many trial-and-error tests - something that is inefficient for humans, but the perfect problem for a machine.

"Machine learning is trial-and-error, but in a smarter way," said Aditya Grover, a graduate student in computer science who co-led the study. "Computers are far better than us at figuring out when to explore - try new and different approaches - and when to exploit, or zero in, on the most promising ones."

The team used this power to their advantage in two key ways. First, they used it to reduce the time per cycling experiment. In a previous study, the researchers found that instead of charging and recharging every battery until it failed - the usual way of testing a battery's lifetime - they could predict how long a battery would last after only its first 100 charging cycles. This is because the machine learning system, after being trained on a few batteries cycled to failure, could find patterns in the early data that presaged how long a battery would last.
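That early-outcome prediction can be framed as a supervised regression problem (a hedged sketch assuming Python and scikit-learn; the features and numbers below are invented placeholders, not the study's data):

```python
# Predict total battery lifetime from summary features of the first 100 cycles.
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training data: rows are batteries cycled to failure,
# columns are early-cycle features (e.g. capacity fade slope, resistance drift).
early_features = np.array([[0.002, 0.11], [0.005, 0.19], [0.001, 0.08]])
observed_lifetimes = np.array([1100, 620, 1500])  # total cycles until failure

model = Ridge().fit(early_features, observed_lifetimes)

# Estimate lifetime for a new battery after only its first 100 cycles.
print(model.predict([[0.003, 0.14]]))
```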

Second, machine learning reduced the number of methods they had to test. Instead of testing every possible charging method equally, or relying on intuition, the computer learned from its experiences to quickly find the best protocols to test.
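To make the explore/exploit logic concrete, here is a minimal epsilon-greedy sketch (assuming Python; the protocol names and rewards are hypothetical stand-ins, and the study itself used a more sophisticated closed-loop optimization than this):

```python
import random

protocols = ["fast-then-slow", "constant", "slow-then-fast"]  # hypothetical
estimates = {p: 0.0 for p in protocols}  # running estimate of each protocol's payoff
counts = {p: 0 for p in protocols}

def pick_protocol(epsilon=0.2):
    """Explore a random protocol sometimes; otherwise exploit the best so far."""
    if random.random() < epsilon:
        return random.choice(protocols)
    return max(protocols, key=estimates.get)

def record_result(protocol, measured_lifetime):
    """Update the running average estimate for the tested protocol."""
    counts[protocol] += 1
    estimates[protocol] += (measured_lifetime - estimates[protocol]) / counts[protocol]
```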

By testing fewer methods for fewer cycles, the study's authors quickly found an optimal ultra-fast-charging protocol for their battery. In addition to dramatically speeding up the testing process, the computer's solution was also better - and much more unusual - than what a battery scientist would likely have devised, said Ermon.

"It gave us this surprisingly simple charging protocol - something we didn't expect," Ermon said. Instead of charging at the highest current at the beginning of the charge, the algorithm's solution uses the highest current in the middle of the charge. "That's the difference between a human and a machine: The machine is not biased by human intuition, which is powerful but sometimes misleading."

The researchers said their approach could accelerate nearly every piece of the battery development pipeline: from designing the chemistry of a battery to determining its size and shape, to finding better systems for manufacturing and storage. This would have broad implications not only for electric vehicles but for other types of energy storage, a key requirement for making the switch to wind and solar power on a global scale.

"This is a new way of doing battery development," said Patrick Herring, co-author of the study and a scientist at the Toyota Research Institute. "Having data that you can share among a large number of people in academia and industry, and that is automatically analyzed, enables much faster innovation."

The study's machine learning and data collection system will be made available for future battery scientists to freely use, Herring added. By using this system to optimize other parts of the process with machine learning, battery development - and the arrival of newer, better technologies - could accelerate by an order of magnitude or more, he said.

The potential of the study's method extends even beyond the world of batteries, Ermon said. Other big data testing problems, from drug development to optimizing the performance of X-rays and lasers, could also be revolutionized by the use of machine learning optimization. And ultimately, he said, it could even help to optimize one of the most fundamental processes of all.

"The bigger hope is to help the process of scientific discovery itself," Ermon said. "We're asking: Can we design these methods to come up with hypotheses automatically? Can they help us extract knowledge that humans could not? As we get better and better algorithms, we hope the whole scientific discovery process may drastically speed up."

Read the original here:
Machine learning could speed the arrival of ultra-fast-charging electric car - Chemie.de

Machine Learning on AWS

Amazon SageMaker enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. It removes the complexity that gets in the way of successfully implementing machine learning across use cases and industries: from running models for real-time fraud detection, to virtually analyzing the biological impacts of potential drugs, to predicting stolen-base success in baseball.
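As a rough sketch of what that build-train-deploy workflow looks like in code (assuming Python with the sagemaker SDK configured against an AWS account; the role ARN, container image, and S3 paths are placeholders to replace with your own):

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# Placeholder values: substitute your own role, container image, and buckets.
estimator = Estimator(
    image_uri="<your-training-container-image>",
    role="arn:aws:iam::123456789012:role/<your-sagemaker-role>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://<your-bucket>/model-output",
    sagemaker_session=session,
)

# Launch a managed training job, then deploy the result behind an endpoint.
estimator.fit({"train": "s3://<your-bucket>/training-data"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```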

Amazon SageMaker Studio: Experience the first fully integrated development environment (IDE) for machine learning with Amazon SageMaker Studio, where you can perform all ML development steps. You can quickly upload data, create and share new notebooks, train and tune ML models, move back and forth between steps to adjust experiments, debug and compare results, and deploy and monitor ML models, all in a single visual interface, making you much more productive.

Amazon SageMaker Autopilot: Automatically build, train, and tune models with full visibility and control, using Amazon SageMaker Autopilot. It is the industry's first automated machine learning capability that gives you complete control and visibility into how your models were created and what logic was used in creating them.

Link:
Machine Learning on AWS

Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1 – Lexology

Introduction

This article is the first of a five-part series dealing with what patentability of machine learning looks like in 2019. It begins the series by describing the USPTO's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) in the context of the U.S. patent system. Then, this article and the four following articles will each describe one of five cases in which Examiners' rejections under Section 101 were reversed by the PTAB under this new 2019 PEG. Each of the five cases discussed deals with machine-learning patents, and may provide some insight into how the 2019 PEG affects the patentability of machine learning, as well as software more broadly.

Patent Eligibility Under the U.S. Patent System

The US patent laws are set out in Title 35 of the United States Code (35 U.S.C.). Section 101 of Title 35 focuses on several things, including whether the invention is classified as patent-eligible subject matter. As a general rule, an invention is considered to be patent-eligible subject matter if it falls within one of the four enumerated categories of patentable subject matter recited in 35 U.S.C. 101 (i.e., process, machine, manufacture, or composition of matter).[1] This, on its own, is an easy hurdle to overcome. However, there are exceptions (judicial exceptions). These include (1) laws of nature; (2) natural phenomena; and (3) abstract ideas. If the subject matter of the claimed invention fits into any of these judicial exceptions, it is not patent-eligible, and a patent cannot be obtained. The machine-learning and software aspects of a claim face 101 issues based on the abstract idea exception, and not the other two.

Section 101 is applied by Examiners at the USPTO in determining whether patents should be issued; by district courts in determining the validity of existing patents; by the Patent Trial and Appeal Board (PTAB) in appeals from Examiner rejections, in post-grant-review (PGR) proceedings, and in covered-business-method-review (CBM) proceedings; and by the Federal Circuit on appeals. The PTAB is part of the USPTO, and may hear an appeal of an Examiner's rejection of claims of a patent application when the claims have been rejected at least twice.

In determining whether a claim fits into the abstract-idea category at the USPTO, the Examiners and the PTAB must apply the 2019 PEG, which is described in the following section of this paper. In determining whether a claim is patent-ineligible as an abstract idea, however, the district courts and the Federal Circuit apply the Alice/Mayo test, not the 2019 PEG. The definition of "abstract idea" was formulated by the Alice and Mayo Supreme Court cases. These two cases have been interpreted by a number of Federal Circuit opinions, which has led to a complicated legal framework that the USPTO and the district courts must follow.[2]

The 2019 PEG

The USPTO, which governs the issuance of patents, decided that it needed a more practical, predictable, and consistent method for its over 8,500 patent examiners to apply when determining whether a claim is patent-ineligible as an abstract idea.[3] Previously, the USPTO synthesized and organized, for its examiners to compare to an applicant's claims, the facts and holdings of each Federal Circuit case dealing with Section 101. However, the large and still-growing number of cases, and the confusion arising from similar subject matter [being] described both as abstract and not abstract in different cases,[4] led to issues. Accordingly, the USPTO issued its 2019 Revised Patent Subject Matter Eligibility Guidance on January 7, 2019 (2019 PEG), which shifted from the case-comparison structure to a new examination structure.[5] The new examination structure, described below, is more patent-applicant friendly than the prior structure,[6] and thereby has the potential to result in a higher rate of patent issuances. The 2019 PEG does not alter the federal statutory law or case law that make up the U.S. patent system.

The 2019 PEG has a structure consisting of four parts: Step 1, Step 2A Prong 1, Step 2A Prong 2, and Step 2B. Step 1 refers to the statutory categories of patent-eligible subject matter, while Step 2 refers to the judicial exceptions. In Step 1, the Examiners must determine whether the subject matter of the claim is a process, machine, manufacture, or composition of matter. If it is, the Examiner moves on to Step 2.

In Step 2A, Prong 1, the Examiners are to determine whether the claim recites a judicial exception: laws of nature, natural phenomena, or abstract ideas. For abstract ideas, the Examiners must determine whether the claim falls into at least one of three enumerated categories: (1) mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations); (2) certain methods of organizing human activity (fundamental economic principles or practices, commercial or legal interactions, managing personal behavior or relationships or interactions between people); and (3) mental processes (concepts performed in the human mind, encompassing acts people can perform using their mind, or using pen and paper). These three enumerated categories are not mere examples, but are fully encompassing. The Examiners are directed that [i]n the rare circumstance in which they believe[] a claim limitation that does not fall within the enumerated groupings of abstract ideas should nonetheless be treated as reciting an abstract idea, they are to follow a particular procedure involving providing justifications and getting approval from the Technology Center Director.

Next, if the claim limitation recites one of the enumerated categories of abstract ideas under Prong 1 of Step 2A, the Examiner is instructed to proceed to Prong 2 of Step 2A. In Step 2A, Prong 2, the Examiners are to determine if the claim is directed to the recited abstract idea. In this step, the claim does not fall within the exception, despite reciting the exception, if the exception is integrated into a practical application. The 2019 PEG provides a non-exhaustive list of examples for this, including, among others: (1) an improvement in the functioning of a computer; (2) a particular treatment for a disease or medical condition; and (3) an application of the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

Finally, even if the claim recites a judicial exception under Step 2A Prong 1, and the claim is directed to the judicial exception under Step 2A Prong 2, it might still be patent-eligible if it satisfies the requirement of Step 2B. In Step 2B, the Examiner must determine whether there is an inventive concept: whether the additional elements recited in the claims provide[] significantly more than the recited judicial exception. This step attempts to distinguish between whether the elements combined with the judicial exception (1) add[] a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field; or alternatively (2) simply append[] well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality. Furthermore, the 2019 PEG indicates that where an additional element was insignificant extra-solution activity, [the Examiner] should reevaluate that conclusion in Step 2B. If such reevaluation indicates that the element is unconventional . . . this finding may indicate that an inventive concept is present and that the claim is thus eligible.

In summary, the 2019 PEG provides an approach, involving steps and prongs, for the Examiners to apply in determining whether a claim is patent-ineligible as an abstract idea. Conceptually, the 2019-PEG method begins with categorizing the type of claim involved (process, machine, etc.); proceeds to determining whether an exception applies (e.g., abstract idea); and then, if an exception applies, proceeds to determining whether an exclusion applies (i.e., practical application or inventive concept). Interestingly, the PTAB not only applies the 2019 PEG in appeals from Examiner rejections, but also applies it in its other Section-101 decisions, including CBM reviews and PGRs.[7] However, the 2019 PEG applies only to the Examiners and the PTAB (both part of the USPTO); it does not apply to the district courts or to the Federal Circuit.

Case 1: Appeal 2018-007443[8] (Decided October 10, 2019)

This case involves the PTAB reversing the Examiner's Section 101 rejections of claims of the 14/815,940 patent application. This patent application relates to applying AI classification technologies and combinational logic to predict whether machines need to be serviced, and whether equipment failure is likely in a system. The Examiner contended that the claims fit into the judicial exception of abstract idea because monitoring the operation of machines is a fundamental economic practice. The Examiner explained that the limitations in the claims that set forth the abstract idea are: a method for reading data; assessing data; presenting data; classifying data; collecting data; and tallying data. The PTAB disagreed with the Examiner. The PTAB stated:

Specifically, we do not find monitoring the operation of machines, as recited in the instant application, is a fundamental economic principle (such as hedging, insurance, or mitigating risk). Rather, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning.

As explained in the previous section of this paper, the 2019 PEG set forth three possible categories of abstract ideas: mathematical concepts, certain methods of organizing human activity, and mental processes. Here, the PTAB addressed the second of these categories. The PTAB found that the claims do not recite a fundamental economic principle (one method of organizing human activity) because the claims recite AI components like neural networks in the context of monitoring machines. Clearly, economic principles and AI components are not always mutually exclusive concepts.[9] For example, there may be situations where these algorithms are applied directly to mitigating business risks. Accordingly, the PTAB was likely focusing on the distinction between monitoring machines and mitigating risk; and not solely on the recitation of the AI components. However, the recitation of the AI components did not seem to hurt.

Then, moving on to another category of abstract ideas, the PTAB stated:

Claims 1 and 8 as recited are not practically performed in the human mind. As discussed above, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning. . . . [Also,] claim 8 recites an output device that transforms the composite prediction output into human-readable form.

. . . .

In other words, the classifying steps of claims 1 and modules of claim 8 when read in light of the Specification, recite a method and system difficult and challenging for non-experts due to their computational complexity. As such, we find that one of ordinary skill in the art would not find it practical to perform the aforementioned classifying steps recited in claim 1 and function of the modules recited in claim 8 mentally.

In the language above, the PTAB addressed the third category of abstract ideas: mental processes. The PTAB provided that the claim does not recite a mental process because the AI algorithms, based on the context in which they are applied, are computationally complex.

The PTAB also addressed the first of the three categories of abstract ideas (mathematical concepts), and found that it does not apply because the specific mathematical algorithm or formula is not explicitly recited in the claims. Requiring that a mathematical concept be explicitly recited seems to be a narrow interpretation of the 2019 PEG. The 2019 PEG does not require that the recitation be explicit, and leaves the math category open to relationships, equations, or calculations. From this, the PTAB might have meant that the claims list a mathematical concept (the AI algorithm) by its name, as a component of the process, rather than trying to claim the steps of the algorithm itself. Clearly, the names of the algorithms are explicitly recited; the steps of the AI algorithms, however, are not recited in the claims.

Notably, reciting only the name of an algorithm, rather than reciting the steps of the algorithm, seems to indicate that the claims are not directed to the algorithms (i.e., the claims have a practical application for the algorithms). It indicates that the claims include an algorithm, but that there is more going on in the claim than just the algorithm. However, instead of determining that there is a practical application of the algorithms, or an inventive concept, the PTAB determined that the claim does not even recite the mathematical concepts.

Additionally, the PTAB found that even if the claims had been classified as reciting an abstract idea, as the Examiner had contended, the claims are not directed to that abstract idea, but integrate it into a practical application. The PTAB stated:

Appellant's claims address a problem specifically using several artificial intelligence classification technologies to monitor the operation of machines and to predict preventative maintenance needs and equipment failure.

The PTAB seems to say that because the claims solve a problem using the abstract idea, they integrate it into a practical application. The PTAB did not specify why the additional elements are sufficient to integrate the invention; the opinion does not even specifically identify additional elements. Instead, the PTAB's conclusion might have been that, based on the totality of the circumstances, the claims are not directed to the algorithms, but actually just apply the algorithms in a meaningful way. The PTAB could have fit this reasoning into the 2019 PEG structure through one of the Step 2A, Prong 2 examples (e.g., that the claim applies additional elements in some other meaningful way), but did not expressly do so.

Conclusion

This case illustrates:

(1) the monitoring of machines was held not to be an abstract idea, in this context;
(2) the recitation of AI components such as neural networks in the claims did not seem to hurt in arguing any of the three categories of abstract ideas;
(3) the complexity of the algorithms implemented can help with the mental-processes category of abstract ideas; and
(4) the PTAB might not always explicitly state how the rule for practical application applies, but seems to apply it consistently with the examples from the 2019 PEG.

The next four articles will build on this background, and will provide different examples of how the PTAB approaches reversing Examiner 101 rejections of machine-learning patents under the 2019 PEG. Stay tuned for the analysis and lessons of the next case, which includes methods for overcoming rejections based on the mental-processes category of abstract ideas, on an application for a probabilistic programming compiler that performs the seemingly 101-vulnerable function of generat[ing] data-parallel inference code.

Read more:
Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1 - Lexology

Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 – The Register

Microsoft has announced a new application, Dynamics 365 Project Operations, as well as additional AI-driven features for its Dynamics 365 range.

If you are averse to buzzwords, look away now. Microsoft Business Applications President James Phillips announced the new features in a post which promises "AI-driven insights," a "holistic 360-degree view of a customer," "personalized customer experiences across every touchpoint," and "real-time actionable insights."

Dynamics 365 is Microsoft's cloud-based suite of business applications covering sales, marketing, customer service, field service, human resources, finance, supply chain management and more. There are even mixed-reality offerings for product visualisation and remote assistance.

Dynamics is a growing business for Microsoft, thanks in part to integration with Office 365, even though some of the applications are quirky and awkward to use in places. Licensing is complex too and can be expensive.

Keeping up with what is new is a challenge. If you have a few hours to spare, you could read the 546-page 2019 Release Wave 2 [PDF] document, for features which have mostly been delivered, or the 405-page 2020 Release Wave 1 [PDF], about what is coming from April to September this year.

Many of the new features are small tweaks, but the company is also putting its energy into connecting data, both from internal business sources and from third parties, to drive AI analytics.

The updated Dynamics 365 Customer Insights includes data sources such as demographics and interests, firmographics, market trends, and product and service usage data, says Phillips. AI is also used in new forecasting features in Dynamics 365 Sales and in Dynamics 365 Finance Insights, coming in preview in May.


The company is also introducing a new application, Dynamics 365 Project Operations, with general availability promised for October 1, 2020. This looks like a business-oriented take on project management, with the ability to generate quotes, track progress, allocate resources, and generate invoices.

Microsoft already offers project management through its Project products, though this is part of Office rather than Dynamics. What can you do with Project Operations that you could not do before with a combination of Project and Dynamics 365?

There is not a lot of detail in the overview, but rest assured that it has AI-powered business insights and seamless interoperability with Microsoft Teams, so it must be great, right? More will no doubt be revealed at the May Business Applications Summit in Dallas, Texas.


The rest is here:
Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 - The Register

How Machine Learning Will Lead to Better Maps – Popular Mechanics

Despite Qatar being one of the richest countries in the world, its digital maps are lagging behind. While the country is adding new roads and constantly improving old ones in preparation for the 2022 FIFA World Cup, Qatar isn't a high priority for the companies that actually build out maps, like Google.

"While visiting Qatar, weve had experiences where our Uber driver cant figure out how to get where hes going, because the map is so off," Sam Madden, a professor at MIT's Department of Electrical Engineering and Computer Science, said in a prepared statement. "If navigation apps dont have the right information, for things such as lane merging, this could be frustrating or worse."

Madden's solution? Quit waiting around for Google and feed machine learning models a whole buffet of satellite images. It's faster, cheaper, and way easier to obtain satellite images than it is for a tech company to drive around grabbing street-view photos. The only problem: Roads can be occluded by buildings, trees, or even street signs.

So Madden, along with a team composed of computer scientists from MIT and the Qatar Computing Research Institute, came up with RoadTagger, a new piece of software that can use neural networks to automatically predict what roads look like behind obstructions. It's able to guess how many lanes a given road has and whether it's a highway or residential road.

RoadTagger uses a combination of two kinds of neural nets: a convolutional neural network (CNN), which is mostly used in image processing, and a graph neural network (GNN), which helps to model relationships and is useful with social networks. This system is what the researchers call "end-to-end," meaning it's only fed raw data and there's no human intervention.

First, raw satellite images of the roads in question are input to the convolutional neural network. Then, the graph neural network divides the roadway into 20-meter sections called "tiles." The CNN pulls out relevant road features from each tile and then shares that data with the other nearby tiles. That way, information about the road is sent to each tile. If one of these is covered up by an obstruction, RoadTagger can look to the other tiles to predict what's included in the one that's obfuscated.
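A heavily simplified sketch of that two-network pattern (assuming Python with PyTorch; the feature dimensions and the mean-over-neighbors message passing below are illustrative guesses, not RoadTagger's actual architecture):

```python
import torch
import torch.nn as nn

class TileEncoder(nn.Module):
    """CNN: extract a feature vector from one tile's satellite image patch."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to 1x1
        )
        self.fc = nn.Linear(16, 32)

    def forward(self, img):            # img: (num_tiles, 3, H, W)
        x = self.conv(img).flatten(1)  # (num_tiles, 16)
        return self.fc(x)              # (num_tiles, 32)

def propagate(features, adjacency, steps=2):
    """GNN-style message passing: each tile averages its neighbors' features,
    so information flows into tiles whose imagery is occluded."""
    deg = adjacency.sum(1, keepdim=True).clamp(min=1)
    for _ in range(steps):
        features = features + adjacency @ features / deg
    return features

encoder = TileEncoder()
lane_head = nn.Linear(32, 6)  # classify 1-6 lanes per tile

tiles = torch.randn(5, 3, 64, 64)  # 5 consecutive 20-meter tiles
# Neighbors: previous and next tile (cyclic here purely for simplicity).
adjacency = torch.eye(5).roll(1, 0) + torch.eye(5).roll(-1, 0)
lane_logits = lane_head(propagate(encoder(tiles), adjacency))
print(lane_logits.shape)  # torch.Size([5, 6])
```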

Parts of the roadway may only have two lanes in a given tile. While a human can easily tell that a four-lane road, shrouded by trees, may be blocked from view, a computer normally couldn't make such an assumption. RoadTagger creates a more human-like intuition in a machine learning model, the research team says.

"Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks cant do that," Madden said. "Our approach tries to mimic the natural behavior of humans ... to make better predictions."

The results are impressive. In testing out RoadTagger on occluded roads in 20 U.S. cities, the model correctly counted the number of lanes 77 percent of the time and inferred the correct road types 93 percent of the time. In the future, the team hopes to include other new features, like the ability to identify parking spots and bike lanes.

Read more here:
How Machine Learning Will Lead to Better Maps - Popular Mechanics

How to handle the unexpected in conversational AI – ITProPortal

One of the biggest challenges for developers of natural language systems is accounting for the many and varied ways people express themselves. There is a reason many technology companies would rather we all spoke in simple terms: it makes humans easier to understand and narrows down the chances of machines getting it wrong.

But it's hardly the engaging conversational experience that people expect of AI.

Language has evolved over many centuries. As various nations colonised and traded with other nations, so our language, whatever your native tongue, changed. And thanks to radio, TV, and the internet, it continues to expand every day.

Among the hundreds of new words added to the Merriam-Webster dictionary in 2019 were "vacay," a shortening of vacation; "haircut," with a new sense meaning a reduction in the value of an asset; and "dad joke," a corny pun normally told by fathers.

In a conversation, we as humans would probably be able to deduce what someone meant, even if we'd never heard a word or expression before. Machines? Not so much. Or at least, not if they rely solely on machine learning for their natural language understanding.

While adding domain specialism, such as a product name or industry terminology, helps a machine recognise some specific words, understanding all of the general everyday phrases people use in between those words is where the real challenge lies.

Most commercial natural language development tools today don't offer the intelligent, humanlike experience that customers expect in automated conversations. One of the reasons is that they rely on pattern-matching words using machine learning.

Although humans, at a basic level, pattern-match words too, our brains add a much higher level of reasoning that lets us do a better job of interpreting what a person meant, by considering the words used, their order, synonyms and more, as well as understanding when a word such as "book" is being used as a verb or a noun. One might say we add our own more flexible form of linguistic modelling.

As humans, we can zoom in on the vocabulary that is relevant to the current discussion. So, when someone asks a question using a phrasing we've not heard before, we can extrapolate from what we do know to understand what is meant. Even if we've never heard a particular word before, we can guess with a high degree of accuracy what it means.

But when it comes to machines, most statisticians will tell you that accuracy isn't a great metric; it's too easily skewed by the data it's based on. Instead of accuracy, they use precision and recall. In simple terms, precision is about quality: of the predictions you made, how many were correct. Recall is about quantity: of all the cases you should have caught, how many you actually did.
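Concretely, both metrics fall out of simple counts (a minimal sketch in Python; the counts are invented for illustration):

```python
def precision(true_positives, false_positives):
    """Of everything the model flagged, what fraction was actually correct?"""
    return true_positives / (true_positives + false_positives)

def recall(true_positives, false_negatives):
    """Of everything that should have been flagged, what fraction was found?"""
    return true_positives / (true_positives + false_negatives)

# Hypothetical intent classifier: 80 correct matches, 20 false alarms,
# 40 user questions it failed to recognise.
print(precision(80, 20))  # 0.8   -- high quality of matches
print(recall(80, 40))     # 0.666... -- but it misses a third of real cases
```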

The vast majority of conversational AI development tools available today rely purely on machine learning. However, machine learning isn't great at precision, not without massive amounts of data on which to build its model. The end result is that the developer has to code in each and every way someone might ask a question. Not a task for the faint-hearted when you consider there are at least 22 ways to say "yes" in the English language.

Some development tools rely on linguistic modelling, which is great at precision because it understands sentence constructs and the common ways a particular type of question is phrased, but often doesn't stack up to machine learning's recall ability. This is because linguistic modelling is based on binary rules: they either match or they don't, which means inputs with minor deviations, such as different word ordering or spelling mistakes, will be missed.

Machine learning, on the other hand, provides a probability of how well the input matches the training data for a particular intent class, and is therefore less sensitive to minor variations. Used alone, neither approach is conducive to delivering a highly engaging conversation.

However, by taking a hybrid approach to conversational AI development, enterprises can benefit from the best of both worlds: rules increase the precision of understanding, while machine learning delivers greater recall by recovering the inputs missed by the rules.
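One way such a hybrid might be wired together (a hedged sketch in Python; the rule patterns, classifier interface, and confidence threshold are all invented for illustration, not Artificial Solutions' implementation):

```python
import re

# High-precision rules: exact, hand-written patterns for a couple of intents.
RULES = [
    (re.compile(r"\bbook\b.*\bflight\b", re.I), "book_flight"),
    (re.compile(r"\bcancel\b.*\b(booking|reservation)\b", re.I), "cancel_booking"),
]

def rule_match(utterance):
    for pattern, intent in RULES:
        if pattern.search(utterance):
            return intent
    return None

def classify(utterance, ml_model, threshold=0.7):
    """Try the precise rules first; fall back to the ML model's recall."""
    intent = rule_match(utterance)
    if intent is not None:
        return intent
    intent, confidence = ml_model.predict(utterance)  # hypothetical interface
    if confidence >= threshold:
        return intent
    return "handoff_to_live_agent"  # the safety net mentioned below
```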

Not only does this significantly speed up the development process, it also allows the application to deal with examples it has never seen before. In addition, it reduces the number of customers sent to a safety net, such as a live chat agent, merely because they've phrased their question slightly differently.

By enabling the conversational AI development platform to decide where each model is used, the performance of the conversational system can be optimised even further, making it easier for the developer to build robust applications by automatically mixing and matching the underlying technology to achieve the best results, while allowing technology to more easily understand humans no matter what words we choose to use.

Andy Peart, CMSO, Artificial Solutions

Read the original here:
How to handle the unexpected in conversational AI - ITProPortal

Difference between AI, Machine Learning and Deep Learning

Having reached the digital era, where computers have become an integral part of everyday life, people cannot help but be amazed at how far we have come since time immemorial. The creation of computers, as well as the internet, has led us into more complex thinking, making information available to us with just a click. You just type in the words, and the information is readily available for you.

However, as we approached this era, a lot of inventions and terms became confusing. Have you heard about artificial intelligence? How about deep learning? Or machine learning? These three terms are familiar to us and are often used interchangeably; however, their exact meanings remain uncertain to many. The more people use them, the more confusing it gets.

Also Read: Top 5 Data Science and Machine Learning Courses

Deep learning and machine learning are terms that followed after artificial intelligence was created. It is like breaking down the functions of AI and naming them deep learning and machine learning. But before this gets more confusing, let us differentiate the three, starting with artificial intelligence.

AI is, simply put, intelligence created artificially. Artificial intelligence is the broad umbrella term for attempting to make computers think the way humans think, be able to simulate the kinds of things that humans do, and ultimately solve problems in a better and faster way than we do. AI itself is a rather generic term for solving tasks that are easy for humans but hard for computers. It includes all kinds of tasks, such as doing creative work, planning, moving around, speaking, recognizing objects and sounds, performing social or business transactions, and a lot more.

The digital era brought an explosion of data in all forms and from every region of the world. This data, known simply as Big Data, is drawn from sources like social media, internet search engines, e-commerce platforms, online cinemas, etc. This enormous amount of data is readily accessible and can be shared through various applications like cloud computing. However, the data, which normally is unstructured, is so vast that it could take decades for humans to comprehend it and extract relevant information. Companies realize the incredible potential that can result from unraveling this wealth of information and are increasingly adopting Artificial Intelligence (AI) systems for automated support.

Growing efforts to apply AI in different ways lead to its most promising and relevant area: Machine Learning. The most common way to process Big Data is machine learning, a self-adaptive approach whose analysis and pattern detection get better and better with experience or with newly added data.

For example, if a digital payments company wanted to detect the occurrence of or potential for fraud in its system, it could employ machine learning tools for this purpose. The computational algorithm built into a computer model will process all transactions happening on the digital platform, find patterns in the data set, and point out any anomaly detected by the pattern.
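As a hedged illustration of that pattern-and-anomaly idea, the sketch below uses scikit-learn's IsolationForest on toy transactions; the features (amount, hour of day) and contamination rate are assumptions for the example, not a production fraud model:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Toy transaction features: [amount, hour_of_day] -- assumed for the example
    normal = np.column_stack([rng.lognormal(3, 0.5, 1000),
                              rng.integers(8, 22, 1000)])
    odd = np.array([[5000.0, 3], [7200.0, 4]])  # large amounts at odd hours
    X = np.vstack([normal, odd])

    # contamination = assumed share of anomalies in the data
    model = IsolationForest(contamination=0.01, random_state=0).fit(X)
    flags = model.predict(X)   # -1 marks an anomaly, 1 marks normal
    print(X[flags == -1])      # the transactions the model points out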

Deep learning, on the other hand, is a subset of machine learning that utilizes a hierarchy of artificial neural networks to carry out the process of machine learning. The artificial neural networks are built like the human brain, with neuron nodes connected together like a web. While traditional programs analyze data in a linear way, the hierarchical function of deep learning systems enables machines to process data with a non-linear approach.

A traditional approach to detecting fraud or money laundering might rely on the transaction amount alone, while a deep learning, non-linear technique for weeding out a fraudulent transaction would include the time, geographic location, IP address, type of retailer, and any other feature likely to signal fraudulent activity.
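A minimal sketch of that multi-feature, non-linear approach, using a small scikit-learn MLPClassifier as a stand-in for a deep network; the features, thresholds, and labels are synthetic assumptions:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 2000
    # Assumed features: amount, hour, km from home location, retailer risk score
    X = np.column_stack([rng.lognormal(3, 1, n),
                         rng.integers(0, 24, n),
                         rng.exponential(10, n),
                         rng.random(n)])
    # Synthetic labels: fraud only when several weak signals combine
    y = ((X[:, 0] > 50) & (X[:, 2] > 15) & (X[:, 3] > 0.5)).astype(int)

    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16, 8),
                                      max_iter=1000, random_state=0))
    clf.fit(X, y)
    # Flagged because the features combine, not because of the amount alone
    print(clf.predict([[250.0, 3, 80.0, 0.9]]))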

Thus, the three form a pyramid, with AI at the top leading to the creation of Machine Learning, which in turn has Deep Learning as a subset. These three have made our lives easier as time goes by, providing a faster and better way of gathering information that humans alone could not manage given the enormous amount of information available.

Humans could take forever to retrieve a single piece of information, while AI takes only minutes. The more comfortable we become using these technologies, the better we can develop them.

See original here:
Difference between AI, Machine Learning and Deep Learning

Jenkins Creator Launches Startup To Speed Software Testing with Machine Learning — ADTmag – ADT Magazine

Jenkins Creator Launches Startup To Speed Software Testing with Machine Learning

Kohsuke Kawaguchi, creator of the open source Jenkins continuous integration/continuous delivery (CI/CD) server, and Harpreet Singh, former head of the product group at Atlassian, have launched a startup that's using machine learning (ML) to speed up the software testing process.

Their new company, Launchable, which emerged from stealth mode on Thursday, is developing a software-as-a-service (SaaS) product with the ability to predict the likelihood of a failure for each test case, given a change in the source code. The service will use ML to extract insights from the massive and growing amount of data generated by the increasingly automated software development process to make its predictions.

"As a developer, I've seen this problem of slow feedback from tests first-hand," Kawaguchi told ADTmag. "And as the guy who drove automation in the industry with Jenkins, it seemed to me that we could make use of all that data the automation is generating by applying machine learning to the problem. I thought we should be able to train the machine on the model and apply quantifiable metrics, instead of relying on human experience and gut instinct. We believe we can predict, with meaningful accuracy, what tests are more likely to catch a regression, given what has changed, and that translates to faster feedback to developers."

The strategy here is to run only a meaningful subset of tests, in the order that minimizes the feedback delay.
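The article doesn't detail the model, but the general technique, often called predictive test selection, can be sketched as ranking tests by predicted failure probability for a given change and running the riskiest first; the features, test names, and model below are assumed for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Assumed per-(change, test) features: files touched near the test,
    # the test's historical failure rate, days since it last failed.
    X_train = np.array([[3, 0.20, 2], [0, 0.01, 90], [1, 0.05, 30], [5, 0.40, 1]])
    y_train = np.array([1, 0, 0, 1])  # 1 = the test failed on that change

    model = LogisticRegression().fit(X_train, y_train)

    tests = ["test_login", "test_billing", "test_search"]  # hypothetical names
    X_new = np.array([[2, 0.15, 5], [0, 0.02, 60], [4, 0.30, 3]])
    p_fail = model.predict_proba(X_new)[:, 1]

    # Run the most failure-prone tests first to shorten feedback delay
    for name, p in sorted(zip(tests, p_fail), key=lambda t: -t[1]):
        print(f"{name}: predicted failure probability {p:.2f}")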

Kawaguchi (known as "KK") and Singh worked together at CloudBees, the chief commercial supporter of Jenkins. Singh left that company in 2018 to serve as GM of Atlassian's Bitbucket cloud group. Kawaguchi became an elite developer and architect at CloudBees, and he's been a part of the community throughout the evolution of this technology. His departure from the company was amicable: Its CEO and co-founder Sacha Labourey is an investor in the startup, and Kawaguchi will continue to be involved with the Jenkins community, he said.

Software testing has been a passion of Kawaguchi's since his days at Sun Microsystems, where he created the Hudson CI server; Jenkins emerged as a fork of Hudson in 2011. Singh also worked at Sun and served as the first product manager for Hudson before working on Jenkins. They will serve as co-CEOs of the new company. They reportedly snagged $3.2 million in seed funding to get the ball rolling.

"KK and I got to talking about how the way we test now impacts developer productivity, and how machine learning could be used to address the problem," Singh said. "And then we started talking about doing a startup. We sat next to each other at CloudBees for eight years; it was an opportunity I couldn't pass up."

An ML engine is at the heart of the Launchable SaaS, but it's really all about the data, Singh said.

"We saw all these sales and marketing guys making data-driven decisions -- even more than the engineers, which was kind of embarrassing," Singh said. "So it became a mission for us to change that. It's kind of our north star."

The co-execs are currently talking with potential partners and recruiting engineers and data scientists. They offered no hard release date, but they said they expect a version of the Launchable SaaS to become generally available later this year.

Posted by John K. Waters on 01/23/2020 at 8:41 AM

Read more from the original source:
Jenkins Creator Launches Startup To Speed Software Testing with Machine Learning -- ADTmag - ADT Magazine

I Know Some Algorithms Are Biased–because I Created One – Scientific American

Artificial intelligence and machine learning are becoming common in research and everyday life, raising concerns about how these algorithms work and the predictions they make. For example, when Apple released its credit card over the summer, there were claims that women were given lower credit limits than otherwise identical men. In response, Sen. Elizabeth Warren warned that women might have been discriminated against by an unknown algorithm.

On its face, her statement appears to contradict the way algorithms work. Algorithms are logical mathematical functions and processes, so how can they discriminate against a person or a certain demographic?

Creating an algorithm that discriminates or shows bias isn't as hard as it might seem, however. When I was a first-year graduate student, my advisor asked me to create a machine-learning algorithm to analyze a survey sent to United States physics instructors about teaching computer programming in their courses. While programming is an essential skill for physicists, many undergraduate physics programs do not offer programming courses, leaving individual instructors to decide whether to teach programming.

The task seemed simple enough. I'd start with an algorithm in Python's scikit-learn library to create my algorithm to predict whether a survey respondent had experience teaching programming. I'd supply the physics instructors' responses to the survey and run the algorithm. My algorithm would then tell me whether the instructors taught programming and which questions on the survey were most useful in making that prediction.
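The essay doesn't say which algorithm was used; as an assumed illustration of that workflow, here is a sketch with a scikit-learn random forest and its feature importances, on a synthetic survey encoding:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Assumed encoding: rows are instructors, columns are survey questions
    # scored 0-4; here the label happens to depend on questions 2 and 7
    X = rng.integers(0, 5, size=(200, 10)).astype(float)
    y = (X[:, 2] + X[:, 7] > 4).astype(int)  # 1 = teaches programming

    clf = RandomForestClassifier(random_state=0).fit(X, y)

    # Which questions mattered most for the prediction?
    for q in np.argsort(clf.feature_importances_)[::-1][:3]:
        print(f"question {q}: importance {clf.feature_importances_[q]:.3f}")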

When I did that, however, I noticed a problem. My algorithm kept finding that only the written-response questions (and none of the multiple-choice questions) differentiated the two groups of instructors. When I analyzed those questions using a different technique, I didn't find any differences between the instructors who taught programming and those who did not! It turned out that I had been using the wrong algorithm the whole time.

My example may seem silly. So what if I chose the wrong algorithm to predict which instructors teach programming? But what if I had instead been creating a model to predict which patients should receive extra care? Then using the wrong algorithm could be a significant problem.

Yet this isn't hypothetical, as a recent study in Science showed. In the study, researchers examined an algorithm created to find patients who might be good fits for a high-risk care management program. Among white and black patients the algorithm identified as having equal risk, the black patients were in fact sicker than the white patients. Thus, even though the black patients were sicker, the algorithm saw the two groups as having equal needs.

Just as in my research, the health care company had used the wrong algorithm. The designers of the algorithm created it to predict health care costs rather than the severity of the illness. As a result, since white patients have better access to care and hence spend more on health care, the algorithm assigned white patients, who were less ill, the same level of risk as more ill black patients. The researchers claim that similar algorithms are applied to around 200 million Americans each year, so who knows how many lives may have been lost to what the study authors called a racial bias in an algorithm?

What, then, can we do to combat this bias? I learned that I had used an incorrect algorithm because I visualized my data, saw that my algorithm's predictions were not aligned with what my data or previous research said, and could not remove the discrepancy regardless of how I changed my algorithm. Likewise, to combat any bias, policy ideas need to focus on both the algorithms and the data.

To address issues with the algorithm, we can push for algorithmic transparency, where anyone could see how an algorithm works and contribute improvements. Given that most commercial machine learning algorithms are considered proprietary information, however, companies may not be willing to share their algorithms.

A more practical route may be to periodically test algorithms for potential bias and discrimination. The companies themselves could conduct this testing, as the House of Representatives' Algorithmic Accountability Act would require, or the testing could be performed by an independent nonprofit accreditation board, such as the proposed Forum for Artificial Intelligence Regularization (FAIR).

To make sure the testing is fair, the data themselves need to be fair. For example, crime-predicting algorithms analyze historical crime data, in which people from racial and ethnic minority groups are overrepresented, and hence an algorithm may make biased predictions even if it is constructed correctly. Therefore, we need to ensure that representative data sets are available for testing.
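One concrete shape such testing could take is a disparity check on error rates across groups; the sketch below compares false positive rates on assumed toy data:

    import numpy as np

    def false_positive_rate(y_true, y_pred):
        negatives = y_true == 0
        return (y_pred[negatives] == 1).mean()

    # Assumed toy audit data: true outcomes, model predictions, group labels
    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 1, 0, 0])
    group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    for g in np.unique(group):
        mask = group == g
        print(g, false_positive_rate(y_true[mask], y_pred[mask]))
    # A large gap between groups is a signal to audit the data and the model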

Getting these changes to occur will not come easily. As machine learning and artificial intelligence become more essential to our lives, we must ensure our laws and regulations keep pace. Machine learning is already revolutionizing entire industries, and we are only at the beginning of that revolution. We as citizens need to hold algorithm developers and users accountable to ensure that the benefits of machine learning are equitably distributed. By taking appropriate precautions, we can ensure that algorithmic bias is a bug and not a feature of future algorithms.

Read more from the original source:
I Know Some Algorithms Are Biased--because I Created One - Scientific American

Essential AI & Machine Learning Certification Training Bundle Is Available For A Limited Time 93% Discount Offer Avail Now – Wccftech

Machine learning and AI are the future of technology. If you wish to become part of the world of technology, this is the place to begin. The world is becoming more dependent on technology every day, and it wouldn't hurt to embrace that. If you resist it, you will just become obsolete and have trouble keeping up. Wccftech is offering an amazing discount on the Essential AI & Machine Learning Certification Training Bundle. The offer expires in less than a week, so grab it right away.

The bundle includes 4 extensive courses on NLP, computer vision, data visualization and machine learning. Each course will help you understand the technology world a bit better, and you will not regret investing your time and money in it. The courses have been created by experts, so you are in safe hands. Here are highlights of what the Essential AI & Machine Learning Certification Training Bundle has in store for you:

The bundle has been brought to you by GreyCampus, known for providing learning solutions to professionals in various fields including project management, data science, big data, quality management and more. They offer different kinds of teaching platforms, including e-learning and live-online. All these courses have been specifically designed to meet the market's changing needs.

Original Price Essential AI & Machine Learning Certification Training Bundle: $656
Wccftech Discount Price Essential AI & Machine Learning Certification Training Bundle: $39.99


More:
Essential AI & Machine Learning Certification Training Bundle Is Available For A Limited Time 93% Discount Offer Avail Now - Wccftech

Machine Learning: Higher Performance Analytics for Lower …

Faced with mounting compliance costs and regulatory pressures, financial institutions are rapidly adopting Artificial Intelligence (AI) solutions, including machine learning and robotic process automation (RPA) to combat sophisticated and evolving financial crimes.

Over one third of financial institutions have deployed machine learning solutions, recognizing that AI has the potential to improve the financial services industry by aiding with fraud identification, AML transaction monitoring, sanctions screening and know your customer (KYC) checks (Financier Worldwide Magazine).

When deployed in financial crime management solutions, analytical agents that leverage machine learning can help to reduce false positives, without compromising regulatory or compliance needs.

It is well known that conventional, rules-based fraud detection and AML programs generate large volumes of false positive alerts. In 2018, Forbes reported: "With false positive rates sometimes exceeding 90%, something is awry with most banks' legacy compliance processes to fight financial crimes such as money laundering."

Such high false positive rates force investigators to waste valuable time and resources working through large alert queues, performing needless investigations, and reconciling disparate data sources to piece together evidence.

"The highly regulated environment makes AML a complex, persistent and expensive challenge for FIs but increasingly, AI can help FIs control not only the complexity of their AML provisions, but also the cost" (Financier Worldwide Magazine).

In an effort to reduce the costs of fraud prevention and BSA/AML compliance efforts, financial institutions should consider AI solutions, including machine learning analytical agents, for their financial crime management programs.

Machine learning agents use mathematical and statistical models to learn from data without being explicitly programmed. Financial institutions can deploy dynamic machine learning solutions to:

To effectively identify patterns, machine learning agents must process and train with a large amount of quality data. Institutions should augment data from core banking systems with:

When fighting financial crime, a single financial institution may not have enough data to effectively train high-performance analytical agents. By gathering large volumes of properly labeled data in a cloud-based environment, machine learning agents can continuously improve and evolve to accurately detect fraud and money laundering activities, and significantly improve compliance efforts for institutions.

Importing and analyzing over a billion transactions every week in our Cloud environment, Verafin's big data intelligence approach allows us to build, train, and refine a proven library of machine learning agents. Leveraging this immense data set, Verafin's analytical agents outperform conventional detection analytics, reducing false positives and allowing investigators to focus their efforts on truly suspicious activity. For example:

With proven behavior-based fraud detection capabilities, Verafin's Deposit Fraud analytics consistently deliver 1-in-7 true positive alerts.

By deploying machine learning, Verafin was able to further improve upon these high-performing analytics resulting in an additional 66% reduction in false positives. Training our machine learning agents on check returns mapped as true fraud in the Cloud, the Deposit Fraud detection rate improved to 1-in-3 true positive alerts, while maintaining true fraud detection.
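Those two figures are consistent with each other: at 1-in-7, each true positive alert is accompanied by six false positives; at 1-in-3, by two. Going from six to two false positives per true positive is a reduction of 4/6, or roughly 66%, matching the stated improvement.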

These results clearly outline the benefits of applying machine learning analytics to a large data set in a Cloud environment. In today's complex and costly financial crime landscape, financial institutions should deploy financial crime management solutions with machine learning to significantly reduce false positives while maintaining regulatory compliance.

In an upcoming article, we will explore how and when robotic process automation can benefit financial crime management solutions.

Continued here:
Machine Learning: Higher Performance Analytics for Lower ...

Tiny Machine Learning On The Attiny85 – Hackaday

We tend to think that the lowest point of entry for machine learning (ML) is on a Raspberry Pi, which it definitely is not. [EloquentArduino] has been pushing the limits to the low end of the scale, and managed to get a basic classification model running on the ATtiny85.

Using his experience running ML models on an old Arduino Nano, he had created a generator that can export C code from a scikit-learn model. He tried using this generator to compile a support-vector colour classifier for the ATtiny85, but ran into a problem: the Arduino ATtiny85 compiler did not support a variadic function used by the generator. Fortunately, he had already experimented with an alternative approach that uses a non-variadic function, so he was able to dust that off and get it working. The classifier accepts inputs from an RGB sensor to identify a set of objects by colour. The model ended up easily fitting into the capabilities of the diminutive ATtiny85, using only 41% of the available flash and 4% of the available RAM.
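The training side of such a project can be sketched in scikit-learn; the RGB readings and labels below are assumptions, and exporting the fitted classifier to C is left to a generator like the one [EloquentArduino] built:

    from sklearn.svm import SVC

    # Assumed RGB sensor readings (0-255) and the object each came from
    X = [[200, 30, 30], [210, 40, 25],   # red object
         [30, 190, 40], [25, 205, 50],   # green object
         [35, 40, 210], [28, 35, 198]]   # blue object
    y = ["red", "red", "green", "green", "blue", "blue"]

    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    print(clf.predict([[190, 45, 35]]))  # -> ['red']
    # The fitted support vectors are what a generator exports as C arrays
    # for the ATtiny85 to evaluate at runtime.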

It's important to note what [EloquentArduino] isn't doing here: running an artificial neural network. They're just too inefficient in terms of memory and computation time to fit on an ATtiny. But neural nets aren't the only game in town, and if your task is classifying something based on a few inputs, like reading a gesture from accelerometer data or naming a color from a color sensor, the approach here will serve you well. We wonder if this wouldn't be a good solution to the pesky problem of identifying bats by their calls.

We really like how approachable machine learning has become, and if you're keen to give ML a go, have a look at the rest of the EloquentArduino blog; it's a small goldmine.

We're getting more and more machine learning-related hacks, like basic ML on an Arduino Uno, and Lego sorting using ML on a Raspberry Pi.

See the original post here:
Tiny Machine Learning On The Attiny85 - Hackaday

Being human in the age of Artificial Intelligence – Deccan Herald

After a while, everything is overhyped and underwhelming. Even Artificial Intelligence has not been able to escape the inevitable reduction that follows such excessive hype. AI is everything and everywhere now, and most of us won't even blink if we are told AI is powering someone's toothbrush. (It probably is.)

The phrase is undoubtedly being misused, but is the technology too? One thing is certain: whether we like it or not, whether we understand it or not, for good or bad, AI is playing a huge part in our everyday life today, huger than we imagine. AI is being employed in health, wellness and warfare; it is scrutinizing you, helping you take better photos, making music, books and even love. (No, really. The first fully robotic sex doll is being created even as you are reading this.)

However, there is a sore lack of understanding of what AI really is, how it is shaping our future and why it is likely to alter our very psyche sooner or later. There is misinformation galore, of course. Either media coverage of AI is exaggerated (as if androids will take over the world tomorrow) or too specific and technical, creating further confusion and fuelling sci-fi-inspired imaginations of computers smarter than human beings.

So what is AI? No, we are not talking dictionary definitions here; those you can Google yourself. Neither are we promising to explain everything; that would need a book. We are only hoping to give you a glimpse into "the extraordinary promise and peril of this single transformative technology," as Prof Stuart Russell, one of the world's pre-eminent AI experts, puts it.

Prof Russell has spent decades on AI research and is the author of Artificial Intelligence: A Modern Approach, which is used as a textbook on AI in over 1,400 universities around the world.

Machine learning first

Other experts believe our understanding of artificial intelligence should begin with comprehending machine learning, the so-called sub-field of AI, but one that actually encompasses pretty much everything happening in AI at present.

In its very simplest definition, machine learning is enabling machines to learn on their own. The advantages of this are easy to see. After a while, you need not tell it what to do; it is your workhorse. All you need is to provide it data, and it will keep coming up with smarter ways of digesting that data, spotting patterns and creating opportunities; in short, doing your work better than you perhaps ever could. This is the point where you need to scratch the surface. Scratch, and you will stare into a dissolving ethical conundrum about what machines might end up learning. Because, remember, they do not (cannot) explain their thinking process. Not yet, at least. That is precisely why the professor has a cautionary take.

The concept of intelligence is central to who we are. After more than 2,000 years of self-examination, we have arrived at a characterization of intelligence that can be boiled down to this: Humans are intelligent to the extent that our actions can be expected to achieve our objectives. Intelligence in machines has been defined in the same way: Machines are intelligent to the extent that their actions can be expected to achieve their objectives.

Whose objectives?

The problem, writes the professor, is in this very definition of machine intelligence. We say that machines are intelligent to the extent that their actions can be expected to achieve their objectives, but we have no reliable way to make sure that their objectives are the same as our objectives. He believes what we should have done all along is tweak this definition to: Machines are beneficial to the extent that their actions can be expected to achieve our objectives.

The difficulty here, of course, is that our objectives are in us, all eight billion of us, and not in the machines. Machines will be uncertain about our objectives; after all, we are uncertain about them ourselves. But this is a good thing; this is a feature, not a bug. Uncertainty about objectives implies that machines will necessarily defer to humans: they will ask permission, they will accept correction, and they will allow themselves to be switched off.

Spilling out of the lab

This might mean a complete rethinking and rebuilding of the AI superstructure, perhaps something that is indeed inevitable if we do not want this big event in human history to be the last, says the prof wryly. As Kai-Fu Lee, another AI researcher, said in an interview a while ago, we are at a moment where the technology is spilling out of the lab and into the world. Time to strap up, then!

(With inputs from Human Compatible: AI and the Problem of Control by Stuart Russell, published by Penguin, UK. Extracted with permission.)

Visit link:
Being human in the age of Artificial Intelligence - Deccan Herald

10 Business Functions That Are Ready To Use Artificial Intelligence – Forbes

In the grand scheme of things, artificial intelligence (AI) is still in the very early stages of adoption by most organizations. However, most leaders are quite excited to implement AI into their company's business functions to start realizing its extraordinary benefits. While we have no way of knowing all the ways artificial intelligence and machine learning will ultimately impact business functions, here are 10 business functions that are ready to use artificial intelligence.


Marketing

If your company isn't using artificial intelligence in marketing, it's already behind. Not only can AI help develop marketing strategies, it's also instrumental in executing them. Already, AI sorts customers according to interest or demographic, targets ads to them based on browsing history, powers recommendation engines, and is a critical tool for giving customers what they want exactly when they want it. Another way AI is used in marketing is through chatbots, which can help solve problems, suggest products or services, and support sales. Artificial intelligence also supports marketers by analyzing data on consumer behavior faster and more accurately than humans can. These insights can help businesses adjust marketing campaigns to make them more effective or plan better for the future.

Sales

There is definitely a side of selling products and services that is uniquely human, but artificial intelligence can arm sales professionals with insights that improve the sales function. AI helps improve sales forecasting, predict customer needs, and improve communication. Intelligent machines can also help sales professionals manage their time and identify whom they need to follow up with and when, as well as which customers might be ready to convert.

Research and Development (R&D)

What about artificial intelligence as a tool of innovation? It can help us build a deeper understanding of nearly any industry, including healthcare and pharmaceuticals, finance, automotive, and more, while collecting and analyzing tremendous amounts of information efficiently and accurately. AI and machine learning can help us research problems and develop solutions we've never thought of before. AI can automate many tasks, but it also opens the door to novel discoveries and new ways of improving products and services as well as accomplishing tasks. Artificial intelligence helps R&D activities become more strategic and effective.

IT Operations

Also called AIOps, AI for IT operations is often the first experience many organizations have with implementing artificial intelligence internally. Gartner defines AIOps as the application of machine learning and data science to IT operations problems. AI is commonly used for IT system log file error analysis and IT systems management functions, as well as to automate many routine processes. It can help identify issues so the IT team can proactively fix them before any IT systems go down. As the IT systems supporting our businesses become more complex, AIOps helps IT teams improve system performance and services.

Human Resources

In a business function with human in the name, is there a place for machines? Yes! Artificial intelligence really has the potential to transform many human resources activities, from recruitment to talent management. AI can certainly help improve efficiency and save money by automating repetitive tasks, but it can do much more. PepsiCo used a robot, Robot Vera, to phone and interview candidates for open sales positions. Talent will expect a personalized experience from their employer, just as they have become accustomed to when shopping or seeking entertainment, and machine learning and AI solutions can help provide that. In addition, AI can help human resources departments with data-based decision-making and make candidate screening and the recruitment process easier. Chatbots can also answer many common questions about company policies and benefits.

Contact Centers

The contact center of an organization is another business area where artificial intelligence is already in use. Organizations that use AI technology to enhance rather than replace humans with these tasks are the ones that are incorporating artificial intelligence in the right way. These centers collect a tremendous amount of data that can be used to learn more about customers, predict customer intent, and improve the "next best action" for the customer for better customer engagement. The unstructured data collected from contact centers can also be analyzed by machine learning to uncover customer trends and then improve products and services.

Building Maintenance

Another way AI is already at work in businesses today is helping facilities managers optimize energy use and the comfort of occupants. Building automation, the use of artificial intelligence to help manage buildings and control lighting and heating/cooling systems, uses internet-of-things devices and sensors as well as computer vision to monitor buildings. Based upon the data that is collected, the AI system can adjust the building's systems to accommodate for the number of occupants, time of day, and more. AI helps facilities managers improve energy efficiency of the building. An additional component of many of these systems is building security as well.

Manufacturing

Heineken, along with many other companies, uses data analytics at every stage of the manufacturing process from the supply chain to tracking inventory on store shelves. Predictive intelligence can not only anticipate demand and ramp production up or down, but sensors on equipment can predict maintenance needs. AI helps flag areas of concern in the manufacturing process before costly issues erupt. Machine vision can also support the quality control process at manufacturing facilities.

Accounting and Finance

Many organizations are finding the promise of cost reductions and more efficient operations to be the major appeal of artificial intelligence in the workplace, and according to Accenture Consulting, robotic process automation can produce amazing results in these areas for accounting and finance departments. Human finance professionals will be freed up from repetitive tasks to focus on higher-level activities, while the use of AI in accounting will reduce errors. AI can also provide organizations with the real-time status of financial matters because it can monitor communication through natural language processing.

Customer Experience

Another way artificial intelligence technology and big data are used in business today is to improve the customer experience. Luxury fashion brand Burberry uses big data and AI to enhance sales and customer relationships. The company gathers shopper's data through loyalty and reward programs that they then use to offer tailored recommendations whether customers are shopping online or in brick-and-mortar stores. Innovative uses of chatbots during industry events are another way to provide a stellar customer experience.

For more on AI and technology trends, see Bernard Marr's book Artificial Intelligence in Practice: How 50 Companies Used AI and Machine Learning To Solve Problems and his forthcoming book Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution, which is available to pre-order now.

View original post here:
10 Business Functions That Are Ready To Use Artificial Intelligence - Forbes

Machine Learning: The Real Buzzword Of 2020 – Forbes

Artificial intelligence (AI) is a hot topic. Skim tech journals or sites, and you'll undoubtedly see articles focused on how AI is the big technology for 2020. CIOs are discussing how to bring AI into their organizations, and CX leaders are listing AI as a must-have.

But here's the funny thing: AI doesn't really exist, not yet anyway. I know many will be surprised to hear this, but before you decide that I'm wrong, consider Merriam-Webster.com's definition: "The capability of a machine to imitate intelligent human behavior."

If you believe this is the right definition of AI, then I ask you: Are there machines imitating intelligent human behavior today? The answer right now is no. If there is a machine that seems smart on its own, the truth is that AI isn't the driver; machine learning (ML) is. ML is alive and thriving, yet AI gets all the credit.

It's time to get familiar with ML.

ML powers programs and machines to take data, analyze it in real time, and then learn and adapt based on that information. This is happening today. Think of the recommendations you get for products on Amazon or the shows Netflix suggests you watch. This is all due to ML. It learns your preferences based on your browsing, purchasing, and viewing behaviors and then makes intelligent recommendations. The ability to synthesize massive amounts of data in nanoseconds makes machines smart. There's actually nothing artificial about it; it's real and at play in our lives already.

Without a doubt, ML is a game-changer for many industries, including contact centers. Similar to the way automation revolutionized manufacturing, ML can be the missing link to revolutionizing the customer service industry. When leveraged correctly, ML offers enormous productivity gains in customer-facing interactions, empowering contact centers to use bots to perform basic, repetitive tasks. By offloading straightforward work to bots, human agents are free to do work that requires empathy and thought that only they can deliver. This can create an exponentially scalable customer experience workforce; in other words, it could solve the industry's oldest and most expensive problem.

ML's potential is big.

Once you know how ML works, I'm sure you can think of ways it has touched your life. But ML's potential is greater than how we're using it. In fact, I don't think we've scratched the surface of its benefits. I believe one of the biggest untapped possibilities for ML lies inside organizations around internal processes. I believe that in 2020, we'll start seeing organizations using ML's data and analysis capabilities to make more informed workforce management decisions.

Instead of contact center managers having to manually sort through data to find out which agents are doing well on a particular day, they can use the insight delivered via ML to see who is providing great service and can take on additional customers and issues, and, conversely, who is struggling and might need a break. This is an effect of ML's ability to use sentiment analysis and natural language processing (NLP) to identify patterns, including patterns in an employee's productivity. ML gives managers informative, real-time data to help them support their staff, which helps employees succeed and helps deliver an exceptional experience to every customer. Win-win.

When you have machines that can learn about your processes, customers' and employees' needs, and goals, you have the knowledge to make iterative, positive changes to your business. This can lead to:

Better employee experiences and a more engaged workforce with less turnover.

Better, more personalized, lower-effort customer experiences.

Reduced staffing expenses and higher revenue potential.

Streamlined operations by partnering humans with bots.

If you're not a computer science nerd, the concept of ML might feel unrealistic, expensive or difficult to deploy. In short, it seems risky. However, I believe this is a technology your business should be using. Here are some tips to make the transition to ML less intimidating:

1. Do your research. While you should feel a sense of urgency to integrate ML into your business, don't make hasty decisions. Take the time to get a solid understanding of your customers' needs. You don't want to start using just any solution, but one that best matches your business needs.

2. Choose the right ML-powered bot. Just like any other technology, there are options. Make sure you find a bot that meets the needs of your business and offers the services that make life better for your customers and your employees. Not every bot is built alike.

3. Don't forget about your people. Leveraging the right technology innovation is critical to your business, but so is investing in your people and ensuring that the tech and the humans are working together harmoniously.

4. Realize that you're never done. It's important for leaders across all businesses to realize that customer experience is constantly evolving and that we must always be watching, evaluating and tweaking. Don't be afraid to make changes or modifications to your ML plans. If something isn't producing the results you want, find the issue, and make a change. Learn, and keep going. If you have a win, isolate what worked, and replicate it. Similar to the first tip, this isn't a race, so be thoughtful about what you're doing, and ensure it resonates with your business objectives as well as your customers' and employees' needs.

ML isn't the way of the future; it's the way of the present, and I can't think of one reason you would knowingly decide to be late to the game. Your business deserves to work smarter, and this is the power of ML. Are you ready?

More here:
Machine Learning: The Real Buzzword Of 2020 - Forbes

2020 Supply Chain Planning Value Matrix Underscores Benefits of Machine Learning and Customizable Integrations – Yahoo Finance

Nucleus Research identifies Blue Yonder, E2Open, Infor, Kinaxis, One Network and Vanguard as SCP Leaders

Nucleus Research today released the 2020 Supply Chain Planning (SCP) Technology Value Matrix, its assessment of the SCP market. For the report, Nucleus evaluated SCP vendors based on their products' usability, functionality and overall value.

While other firms' market reports position vendors based on analyst opinions, the Nucleus Value Matrix segments competitors based on usability, functionality and the value that customers realized from each product's capabilities, measured with Nucleus' rigorous ROI methodologies.

Nucleus named Blue Yonder, E2Open, Infor, Kinaxis, One Network and Vanguard as SCP leaders.

Supply chain planning has become critical for success as companies must maintain service levels in the face of resource constraints and external disturbances. Tight solution integrations and robust embedded analytics have become table stakes for supply chain planning systems, which can now differentiate based on go-to-market strategy and tactical focuses. Leading vendors have undertaken a "platform approach" to product delivery, providing solution flexibility that enables customers to drive long-term value by configuring deployments with their preferred blend of best practices and customizations.

"To support a broad range of planning capabilities, supply chain planning vendors must provide comprehensive product roadmaps," says Ian Campbell, CEO of Nucleus Research. "Now more than ever, customers demand the capability to prioritize tactical focuses and personalize SCP solutions with their own differentiators."

"In order to be resilient enough to handle external challenges, organizations must have robust plans in place for their supply chains," says Andrew MacMillen, analyst at Nucleus Research. "Proactive resource management has become essential for sustainable success and requires a greater level of collaboration across an organizations departments. Leading SCP solutions realize this, and can consolidate siloed data into a unified view to deliver value."

See the full report at: https://nucleusresearch.com/research/single/scp-technology-value-matrix-2020/

About Nucleus Research

Nucleus Research is a global provider of investigative, case-based technology research and advisory services. We deliver the numbers that drive business decisions. For more information, visit NucleusResearch.com or follow us on Twitter @NucleusResearch.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200324005437/en/

Contacts

Adam Ouellet, InkHouse, nucleus@inkhouse.com, 978-413-4341

See more here:
2020 Supply Chain Planning Value Matrix Underscores Benefits of Machine Learning and Customizable Integrations - Yahoo Finance

dotData Receives APN Machine Learning Competency Partner of the Year Award – Yahoo Finance

Award Recognizes Company's Rapid Growth and Success in the AutoML 2.0 Market

SAN MATEO, Calif., March 25, 2020 /PRNewswire/ -- dotData, focused on delivering full-cycle data science automation and operationalization for the enterprise, today announced that Amazon Web Services (AWS) has awarded dotData with the APN Machine Learning (ML) Competency Partner of the Year Award for 2019.

The award recognizes dotData's rapid growth and success in the enterprise AI/ML market and its contribution to the AWS business in 2019. The award is a testament to the dotData platform's ability to significantly accelerate and simplify the development of new AI/ML use cases and deliver insights to enterprise customers. It was announced today at the AWS Partner Summit Tokyo, taking place virtually from March 25 to April 10, 2020.

dotData announced in February 2020 that it had achieved AWS ML Competency status, only eight months after joining the AWS Partner Network (APN). The certification recognizes dotData as an APN Partner that accelerates the full-cycle ML and data science process and provides validation that dotData has deep expertise in artificial intelligence (AI) and ML on AWS and can deliver their organization's solutions seamlessly on AWS.

dotData provides solutions designed to improve the productivity of data science projects, which traditionally require extensive manual efforts from valuable and scarce enterprise resources. The platform automates the full life-cycle of the data science process, from business raw data through feature engineering to implementation of ML in production utilizing its proprietary AI technologies.

dotData's AI-powered feature engineering automatically applies data transformation, cleansing, normalization, aggregation, and combination, and transforms hundreds of tables with complex relationships and billions of rows into a single feature table, automating the most manual part of data science projects.
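The platform's internals aren't described here, but the kind of transformation named, collapsing related tables into a single flat feature table, can be sketched with pandas; the tables and column names below are assumptions:

    import pandas as pd

    customers = pd.DataFrame({"customer_id": [1, 2]})
    transactions = pd.DataFrame({
        "customer_id": [1, 1, 2, 2, 2],
        "amount": [20.0, 35.0, 5.0, 12.0, 8.0],
    })

    # Aggregate the many-rows-per-customer table into per-customer features,
    # then join everything into one flat feature table.
    features = (transactions.groupby("customer_id")["amount"]
                .agg(txn_count="count", txn_total="sum", txn_mean="mean")
                .reset_index())
    feature_table = customers.merge(features, on="customer_id", how="left")
    print(feature_table)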


"We are honored and proud to receive this award which recognizes our commitment to making AI and ML accessible to as many people in the enterprise as possible and our success in helping our enterprise customers meet their business goals," said Ryohei Fujimaki, founder and CEO of dotData. "As an APN ML Competency partner we have been able to deliver an outstanding product that dramatically accelerates the AI and ML initiatives of AWS users and maximizes their business impacts. We look forward to contributing to our customers' success bycollaborating with AWS."

AWS ML Competency Partners provide solutions that help organizations solve their data challenges and enable ML and data science workflows. The program is designed to highlight APN Partners who have demonstrated technical proficiency in specialized solution areas and helps customers find the most qualified organizations with deep expertise and proven customer success.

dotData democratizes data science by enabling existing resources to perform data science tasks, making enterprise data science scalable and sustainable. dotData automates up to 100 percent of the data science workflow, enabling users to connect directly to their enterprise data sources to discover and evaluate millions of features from complex table structures and huge data sets with minimal user input. dotData is also designed to operationalize data science by producing both feature and ML scoring pipelines in production, which IT teams can then immediately integrate with business workflow. This can further automate the time-consuming and arduous process of maintaining the deployed pipeline to ensure repeatability as data changes over time. With the dotData GUI, the data science task becomes a five-minute operation, requiring neither significant data science experience nor SQL/Python/R coding.

For more information or a demo of dotData's AI-powered full-cycle data science automation platform, please visit dotData.com.

About dotData

dotData is one of the first companies focused on full-cycle data science automation. Fortune 500 organizations around the world use dotData to accelerate their ML and AI projects and deliver higher business value. dotData's automated data science platform speeds time to value by accelerating, democratizing, augmenting and operationalizing the entire data science process, from raw business data through data and feature engineering to ML in production. With solutions designed to cater to the needs of both data scientists and citizen data scientists, dotData provides value across the entire organization.

dotData's unique AI-powered feature engineering delivers actionable business insights from relational, transactional, temporal, geo-locational, and text data. dotData was recognized as a leader by Forrester in the 2019 New Wave for AutoML platforms. dotData was also recognized as the "best machine learning platform" for 2019 by the AI Breakthrough Awards and was named an "emerging vendor to watch" by CRN in the big data space. For more information, visit http://www.dotdata.com, and join the conversation on Twitter and LinkedIn.

View original content: http://www.prnewswire.com/news-releases/dotdata-receives-apn-machine-learning-competency-partner-of-the-year-award-301029298.html

SOURCE dotData

Follow this link:
dotData Receives APN Machine Learning Competency Partner of the Year Award - Yahoo Finance