How machine learning and artificial intelligence can drive clinical innovation – PharmaLive

By:

Dr. Basheer Hawwash, Principal Data Scientist

Amanda Coogan, Risk-Based Monitoring Senior Product Manager

Rhonda Roberts, Senior Data Scientist

Remarque Systems Inc.

Everyone knows the terms machine learning and artificial intelligence. Few can define them, much less explain their inestimable value to clinical trials. So, it's not surprising that, despite their ability to minimize risk, improve safety, condense timelines, and save costs, these technology tools are not widely used by the clinical trial industry.

Basheer Hawwash

There are lots of reasons for resistance: It seems complicated. Those who are not statistically savvy may find the thought of algorithms overwhelming. Adopting new technology requires a change in the status quo.

Yet there are more compelling reasons for adoption, especially as the global pandemic has accelerated a trend toward patient-centricity and decentralized trials, and an accompanying need for remote monitoring.

Machine learning vs. artificial intelligence. What's the difference?

Let's start by understanding what the two terms mean. While many people seem to use them interchangeably, they are distinct: machine learning can be used independently or to inform artificial intelligence; artificial intelligence cannot happen without machine learning.

Machine learning is a series of algorithms that analyze data in various ways. These algorithms search for patterns and trends, which can then be used to make more informed decisions. Supervised machine learning starts with a specific type of data, for instance a particular adverse event. By analyzing the records of all the patients who have had that specific adverse event, the algorithm can predict whether a new patient is also likely to suffer from it. Conversely, unsupervised machine learning applies analysis such as clustering to a group of data; the algorithm sorts the data into groups, which researchers can then examine more closely to discern similarities they may not have considered previously.
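The distinction is easier to see in code. Below is a minimal, hypothetical sketch using scikit-learn; the feature names, data, and model choices are illustrative assumptions, not any vendor's implementation:

```python
# Supervised vs. unsupervised learning on invented patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# --- Supervised: predict a specific adverse event from labeled records.
# Hypothetical columns: age, dose_mg, baseline_lab_value
X_labeled = np.array([[54, 20, 1.1], [61, 40, 2.3], [47, 20, 0.9], [70, 40, 2.8]])
y = np.array([0, 1, 0, 1])  # 1 = patient experienced the adverse event

clf = LogisticRegression().fit(X_labeled, y)
new_patient = np.array([[65, 40, 2.5]])
print("P(adverse event):", clf.predict_proba(new_patient)[0, 1])

# --- Unsupervised: cluster unlabeled patient data so researchers can
# inspect each group for similarities they may not have considered.
X_unlabeled = np.vstack([X_labeled, [[58, 20, 1.0], [66, 40, 2.6]]])
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_unlabeled)
print("Cluster assignments:", clusters)
```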

In either case, artificial intelligence applies those data insights to mimic human problem-solving behavior. Speech recognition, self-driving cars, even forms that auto-populate all exist because of artificial intelligence. In each case, it is the vast amounts of data that have been ingested and analyzed by machine learning that make the artificial intelligence application possible.

Physicians, for instance, can use a combination of machine learning and artificial intelligence to enhance diagnostic abilities. In this way, given a set of data, machine learning tools can analyze images to find patterns of chronic obstructive pulmonary disease (COPD); artificial intelligence may be able to further identify that some patients have idiopathic pulmonary fibrosis (IPF) as well as COPD, something their physicians may neither have thought to look for, nor found unaided.

Amanda Coogan

Now, researchers are harnessing both machine learning and artificial intelligence in their clinical trial work, introducing new efficiencies while enhancing patient safety and trial outcomes.

The case of the missing data

Data is at the core of every clinical trial. If those data are not complete, then researchers are proceeding on false assumptions, which can jeopardize patient safety and even the entire trial.

Traditionally, researchers have guarded against this possibility by doing painstaking manual verification, examining every data point in the electronic data capture system to ensure that it is both accurate and complete. More automated systems may provide reports that researchers can look through, but that still requires a lot of human involvement. The reports are static, must be reviewed on an ongoing basis, and every review has the potential for human error.

Using machine learning, this process happens continually in the background throughout the trial, automatically notifying researchers when data are missing. This can make a material difference in a trial's management and outcomes.

Consider, if you will, a study in which patients are tested for a specific metric every two weeks. Six weeks into the study, 95 percent of the patients show a value for that metric; 5 percent don't. Those values are missing. The system will alert researchers, enabling them to act promptly to remedy the situation. They may be able to contact the patients in the 5 percent and get their values, or they may need to adjust those patients out of the study. The choice is left to the research team, but because they have the information in near-real time, they have a choice.
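The flagging logic itself can be straightforward; the value, as described above, lies in running it continually. A hypothetical pandas sketch of the idea, with invented column names and data:

```python
# Flag patients whose expected biweekly measurements are missing.
import pandas as pd

expected_weeks = [2, 4, 6]  # six weeks into the study
records = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3],
    "week":       [2, 4, 6, 2, 4, 2],
    "value":      [5.1, 5.3, 5.0, 4.8, 4.9, 5.6],
})

# Build the full patient-by-visit grid and find the holes.
grid = pd.MultiIndex.from_product(
    [records["patient_id"].unique(), expected_weeks],
    names=["patient_id", "week"],
)
observed = records.set_index(["patient_id", "week"]).reindex(grid)
missing = observed[observed["value"].isna()].reset_index()

# With this data: patient 2 is missing week 6; patient 3 is missing weeks 4 and 6.
print(missing[["patient_id", "week"]])
```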

As clinical trials move to new models, with greater decentralization and greater reliance on patient-reported data, missing data may become a larger issue. To counteract that possibility, researchers will need to move away from manual methods and embrace both the ease and accuracy of machine-learning-based systems.

The importance of the outlier

In research studies, not every patient nor even every site reacts the same way. There are patients whose vital signs are off the charts. Sites with results that are too perfect. Outliers.

Rhonda Roberts

Often researchers discover these anomalies deep into the trial, during the process of cleaning the data in preparation for regulatory submission. That may be too late for a patient who is having a serious reaction to a study drug. It also may mean that the patient's data are not valid and cannot be included in the end analysis. Caught earlier, there would be the possibility of a course correction. The patient might have been able to stay in the study, to continue to provide data; alternatively, they could be removed promptly along with their associated data.

Again, machine learning simplifies the process. By running an algorithm that continually searches for outliers, those irregularities are instantly identified. Researchers can then quickly drill down to ascertain whether there is an issue and, if so, determine an appropriate response.
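As a rough illustration of such an algorithm, an anomaly detector can screen per-site summaries continually; the model choice (scikit-learn's IsolationForest), features, and data below are assumptions made for the example:

```python
# Continuous outlier screening over per-site summary statistics.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-site features: mean systolic BP, std of BP, mean QoL score
sites = np.array([
    [128.4, 11.2, 61.0],
    [131.9, 10.4, 58.5],
    [126.7, 12.1, 63.2],
    [140.0,  0.0, 100.0],   # a site with "too perfect" readings
])

detector = IsolationForest(contamination=0.25, random_state=0).fit(sites)
flags = detector.predict(sites)   # -1 = flagged as an outlier
print(np.where(flags == -1)[0])   # expect site index 3, for researchers to review
```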

Of course, an anomaly doesn't necessarily flag a safety issue. In a recent case, one of the primary endpoints involved a six-minute walk test. One site showed strikingly different results; as it happened, that site was using a different measurement gauge, something that would have skewed the study results but, having been flagged, was easily corrected.

In another case, all the patients at a site were rated with maximum quality of life scores and all their blood pressure readings were whole numbers. Machine learning algorithms flagged these results because they varied dramatically from the readings at the other sites. On examination, researchers found that the site was submitting fraudulent reports. While that was disturbing to learn, the knowledge gave the trial team power to act, before the entire study was rendered invalid.

A changing landscape demands a changing approach

As quality management increasingly focuses on risk-based strategies, harnessing machine learning algorithms simplifies and strengthens the process. With parameters set based on study endpoints and study-specific risks, machine learning systems can run in the background throughout a study, providing alerts and triggers to help researchers avoid risks.

The need for such risk-based monitoring has accelerated in response to the COVID-19 pandemic. With both researchers and patients unable or unwilling to visit sites, studies have rapidly become decentralized. This has coincided with the emergence and growing importance of patient-centricity and further propelled the rise of remote monitoring. Processes are being forced online. Manual methods are increasingly insufficient and automated methods that incorporate machine learning and artificial intelligence are gaining primacy.

Marrying in-depth statistical thinking with critical analysis

The trend towards electronic systems does not replace either the need for or the value of clinical trial monitors and other research personnel; they are simply able to do their jobs more effectively. A machine-learning-based system runs unique algorithms, each analyzing data in a different way to produce visualizations, alerts, or workflows, which CROs and sponsors can use to improve patient safety and trial efficiency. Each algorithm is tailored to the specific trial, keyed to endpoints, known risks, or other relevant factors. While the algorithms offer guidance, the platform does not make any changes to the data or the trial process; it merely alerts researchers to examine the data and determine whether a flagged value is clinically significant. Trial personnel are relieved of much tedious, repetitive manual work, and are able to use their qualifications to advance the trial in other meaningful ways.

The imperative to embrace change

Machine learning and artificial intelligence have long been buzzwords in the clinical trial industry, yet these technologies have only haltingly been put to use. It's time for that pendulum to swing. We can move more quickly and more precisely than manual data verification and data cleaning allow. We can work more efficiently if we harness data to drive trial performance rather than simply to prove that the study endpoints were achieved. We can operate more safely if our systems are programmed for risk management from the outset. All this can be achieved easily, with the application of machine learning and artificial intelligence. Now is the time to move forward.

Here is the original post:
How machine learning and artificial intelligence can drive clinical innovation - PharmaLive

Machine Learning and Artificial Intelligence in Healthcare Market 2020 2026: Company Profiles, COVID 19 Outbreak, Global Trends, Profit Growth,…

Global Machine Learning and Artificial Intelligence in Healthcare Market: Trends Estimates High Demand by 2026

The Machine Learning and Artificial Intelligence in Healthcare Market report 2020 discusses various factors driving or restraining the market, which will help the future market grow with a promising CAGR. The Machine Learning and Artificial Intelligence in Healthcare Market research reports offer an extensive collection of reports on different markets covering crucial details. The report studies the competitive environment of the Machine Learning and Artificial Intelligence in Healthcare Market based on company profiles and their efforts to increase product value and production.

The Global Machine Learning and Artificial Intelligence in Healthcare market 2020 research provides a basic overview of the industry, including definitions, classifications, applications, and industry chain structure. The Global Machine Learning and Artificial Intelligence in Healthcare market report covers the international markets as well as development trends, competitive landscape analysis, and the development status of key regions. Development policies and plans are discussed, and manufacturing processes and cost structures are also analysed. This report additionally states import/export consumption, supply and demand figures, cost, price, revenue, and gross margins.

The final report will add an analysis of the impact of Covid-19 on the Machine Learning and Artificial Intelligence in Healthcare industry.

Key players in global Machine Learning and Artificial Intelligence in Healthcare market include: Intel Corporation, IBM Corporation, Nvidia Corporation, Microsoft Corporation, Alphabet Inc (Google Inc.), General Electric (GE) Company, Enlitic, Inc., Verint Systems, General Vision, Inc., Welltok, Inc., iCarbonX.

Get a Sample Copy of the Report @ https://www.reportsandmarkets.com/sample-request/global-machine-learning-and-artificial-intelligence-in-healthcare-market-analysis-and-forecast-2019-2025?utm_source=thedailychronicle&utm_medium=33

Machine Learning and Artificial Intelligence in Healthcare Market: Region-wise Outlook

Depending on the geographic region, the Machine Learning and Artificial Intelligence in Healthcare market is divided into seven key regions: North America, Eastern Europe, Latin America, Western Europe, Japan, Asia-Pacific, and the Middle East & Africa. North America dominates the market, followed by Europe and Japan, owing to high internet penetration and the establishment of key internet players such as Google and Facebook. Asia Pacific, the Middle East, and Africa hold huge potential and show substantial growth: the expanding use of electronic devices, rising innovative technologies, consumer awareness, and an expanding telecommunication sector are some of the factors that strengthen the growth of the Machine Learning and Artificial Intelligence in Healthcare market throughout the forecast period.

Questions answered in the report with respect to the regional expanse of the Machine Learning and Artificial Intelligence in Healthcare market:

The scope of the Report:

The report segments the global Machine Learning and Artificial Intelligence in Healthcare market on the basis of application, type, service, technology, and region. Each chapter under this segmentation allows readers to grasp the nitty-gritty of the market. A magnified look at the segment-based analysis is aimed at giving readers a closer look at the opportunities and threats in the market. It also addresses political scenarios that are expected to impact the market in both small and big ways. The report on the global Machine Learning and Artificial Intelligence in Healthcare market examines the changing regulatory scenario to make accurate projections about potential investments. It also evaluates the risk for new entrants and the intensity of competitive rivalry.

Reasons to read this Report:

TABLE OF CONTENT:

Chapter 1: Machine Learning and Artificial Intelligence in Healthcare Market Overview

Chapter 2: Global Economic Impact on Industry

Chapter 3: Machine Learning and Artificial Intelligence in Healthcare Market Competition by Manufacturers

Chapter 4: Global Production, Revenue (Value) by Region

Chapter 5: Global Supply (Production), Consumption, Export, Import by Regions

Chapter 6: Global Production, Revenue (Value), Price Trend by Type

Chapter 7: Global Market Analysis by Application

Chapter 8: Manufacturing Cost Analysis

Chapter 9: Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter 10: Marketing Strategy Analysis, Distributors/Traders

Chapter 11: Machine Learning and Artificial Intelligence in Healthcare Market Effect Factors Analysis

Chapter 12: Global Machine Learning and Artificial Intelligence in Healthcare Market Forecast to 2026

Complete Report @ https://www.reportsandmarkets.com/sample-request/global-machine-learning-and-artificial-intelligence-in-healthcare-market-analysis-and-forecast-2019-2025?utm_source=thedailychronicle&utm_medium=33

About Us:

Reports And Markets is part of the Algoro Research Consultants Pvt. Ltd. and offers premium progressive statistical surveying, market research reports, analysis & forecast data for industries and governments around the globe. Are you mastering your market? Do you know what the market potential is for your product, who the market players are and what the growth forecast is? We offer standard global, regional or country specific market research studies for almost every market you can imagine.

Contact Us:

Sanjay Jain

Manager, Partner Relations & International Marketing

http://www.reportsandmarkets.com

Ph: +1-352-353-0818 (US)

Here is the original post:
Machine Learning and Artificial Intelligence in Healthcare Market 2020 2026: Company Profiles, COVID 19 Outbreak, Global Trends, Profit Growth,...

Trending now: Machine Learning in Communication Market Size, Share, Industry Trends, Growth Insight, Share, Competitive Analysis, Statistics,…

Machine Learning in Communication Market 2025: The latest research report published by Alexa Reports presents an analytical study titled Global Machine Learning in Communication Market 2020. The report is a brief study of historical records along with recent trends. It studies the Machine Learning in Communication industry based on type, application, and region, and analyzes factors such as drivers, restraints, opportunities, and trends affecting market growth. It evaluates the opportunities and challenges in the market for stakeholders and provides particulars of the competitive landscape for market leaders.

Get Full PDF Sample Copy of Report (Including Full TOC, List of Tables & Figures, Chart) @ https://www.alexareports.com/report-sample/849041

This study considers the Machine Learning in Communication value generated from the sales of the following segments:

The key manufacturers covered in this report, with breakdown data in the chapters below: Amazon, IBM, Microsoft, Google, Nextiva, Nexmo, Twilio, Dialpad, Cisco, RingCentral

Segmentation by Type: Cloud-Based, On-Premise

Segmentation by Application: Network Optimization, Predictive Maintenance, Virtual Assistants, Robotic Process Automation (RPA)

The report studies micro-markets concerning their growth trends, prospects, and contributions to the total Machine Learning in Communication market. The report forecasts the revenue of the market segments concerning four major regions, namely, Americas, Europe, Asia-Pacific, and Middle East & Africa.

The report studies Machine Learning in Communication industry sections, and the current market portions will help readers arrange their business systems to design better products, enhance the user experience, and craft a marketing plan that attracts quality leads and enhances conversion rates. It likewise demonstrates future opportunities for the forecast years 2019-2025.

The report is designed to comprise both qualitative and quantitative aspects of the global industry concerning every region and country basis.

To enquire More about This Report, Click Here: https://www.alexareports.com/send-an-enquiry/849041

The report has been prepared based on the synthesis, analysis, and interpretation of information about the Machine Learning in Communication market 2020 collected from specialized sources. The competitive landscape chapter of the report provides a comprehensible insight into the market share analysis of key market players. Company overview, SWOT analysis, financial overview, product portfolio, new projects launched, and recent market development analysis are the parameters included in each profile.

Some of the key questions answered by the report are:

What was the size of the market in 2014-2019?
What will be the market growth rate and market size in the forecast period 2020-2025?
What are the market dynamics and market trends?
Which segment and region will dominate the market in the forecast period?
Who are the key market players, what is the competitive landscape, and what are their key development strategies?

The last part investigates the ecosystem of the consumer market, which consists of established manufacturers, their market share, strategies, and break-even analysis. Also, the demand and supply sides are portrayed with the help of new product launches and diverse application industries. Various primary sources from both the supply and demand sides of the market were examined to obtain qualitative and quantitative information.

Table of Contents

Section 1 Machine Learning in Communication Product Definition

Section 2 Global Machine Learning in Communication Market Manufacturer Share and Market Overview
2.1 Global Manufacturer Machine Learning in Communication Shipments
2.2 Global Manufacturer Machine Learning in Communication Business Revenue
2.3 Global Machine Learning in Communication Market Overview
2.4 COVID-19 Impact on Machine Learning in Communication Industry

Section 3 Manufacturer Machine Learning in Communication Business Introduction
3.1 Amazon Machine Learning in Communication Business Introduction
3.1.1 Amazon Machine Learning in Communication Shipments, Price, Revenue and Gross profit 2014-2019
3.1.2 Amazon Machine Learning in Communication Business Distribution by Region
3.1.3 Amazon Interview Record
3.1.4 Amazon Machine Learning in Communication Business Profile
3.1.5 Amazon Machine Learning in Communication Product Specification
3.2 IBM Machine Learning in Communication Business Introduction
3.2.1 IBM Machine Learning in Communication Shipments, Price, Revenue and Gross profit 2014-2019
3.2.2 IBM Machine Learning in Communication Business Distribution by Region
3.2.3 Interview Record
3.2.4 IBM Machine Learning in Communication Business Overview
3.2.5 IBM Machine Learning in Communication Product Specification
3.3 Microsoft Machine Learning in Communication Business Introduction
3.3.1 Microsoft Machine Learning in Communication Shipments, Price, Revenue and Gross profit 2014-2019
3.3.2 Microsoft Machine Learning in Communication Business Distribution by Region
3.3.3 Interview Record
3.3.4 Microsoft Machine Learning in Communication Business Overview
3.3.5 Microsoft Machine Learning in Communication Product Specification
3.4 Google Machine Learning in Communication Business Introduction
3.5 Nextiva Machine Learning in Communication Business Introduction
3.6 Nexmo Machine Learning in Communication Business Introduction

Section 4 Global Machine Learning in Communication Market Segmentation (Region Level)
4.1 North America Country
4.1.1 United States Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.1.2 Canada Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.2 South America Country
4.2.1 South America Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.3 Asia Country
4.3.1 China Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.3.2 Japan Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.3.3 India Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.3.4 Korea Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.4 Europe Country
4.4.1 Germany Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.4.2 UK Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.4.3 France Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.4.4 Italy Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.4.5 Europe Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.5 Other Country and Region
4.5.1 Middle East Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.5.2 Africa Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.5.3 GCC Machine Learning in Communication Market Size and Price Analysis 2014-2019
4.6 Global Machine Learning in Communication Market Segmentation (Region Level) Analysis 2014-2019
4.7 Global Machine Learning in Communication Market Segmentation (Region Level) Analysis

Section 5 Global Machine Learning in Communication Market Segmentation (Product Type Level)
5.1 Global Machine Learning in Communication Market Segmentation (Product Type Level) Market Size 2014-2019
5.2 Different Machine Learning in Communication Product Type Price 2014-2019
5.3 Global Machine Learning in Communication Market Segmentation (Product Type Level) Analysis

Section 6 Global Machine Learning in Communication Market Segmentation (Industry Level)
6.1 Global Machine Learning in Communication Market Segmentation (Industry Level) Market Size 2014-2019
6.2 Different Industry Price 2014-2019
6.3 Global Machine Learning in Communication Market Segmentation (Industry Level) Analysis

Section 7 Global Machine Learning in Communication Market Segmentation (Channel Level)
7.1 Global Machine Learning in Communication Market Segmentation (Channel Level) Sales Volume and Share 2014-2019
7.2 Global Machine Learning in Communication Market Segmentation (Channel Level) Analysis

Section 8 Machine Learning in Communication Market Forecast 2019-2024
8.1 Machine Learning in Communication Segmentation Market Forecast (Region Level)
8.2 Machine Learning in Communication Segmentation Market Forecast (Product Type Level)
8.3 Machine Learning in Communication Segmentation Market Forecast (Industry Level)
8.4 Machine Learning in Communication Segmentation Market Forecast (Channel Level)

Section 9 Machine Learning in Communication Segmentation Product Type
9.1 Cloud-Based Product Introduction
9.2 On-Premise Product Introduction

Section 10 Machine Learning in Communication Segmentation Industry
10.1 Network Optimization Clients
10.2 Predictive Maintenance Clients
10.3 Virtual Assistants Clients
10.4 Robotic Process Automation (RPA) Clients

Section 11 Machine Learning in Communication Cost of Production Analysis
11.1 Raw Material Cost Analysis
11.2 Technology Cost Analysis
11.3 Labor Cost Analysis
11.4 Cost Overview

Section 12 Conclusion

Get a discount on this report: @ https://www.alexareports.com/check-discount/849041

Thus, the Machine Learning in Communication Market report serves as valuable material for all industry competitors and individuals with a keen interest in the study.

About Us:

Alexa Reports is a globally celebrated premium market research service provider with a strong legacy of empowering businesses with years of experience. We help our clients by implementing decision support systems through progressive statistical surveying, in-depth market analysis, and reliable forecast data.

Contact Us:

Alexa Reports
Ph. no: +1-408-844-4624
Email: [emailprotected]
Site: https://www.alexareports.com

Originally posted here:
Trending now: Machine Learning in Communication Market Size, Share, Industry Trends, Growth Insight, Share, Competitive Analysis, Statistics,...

BMW, Red Hat, and Malong Share Insights on AI and Machine Learning During Transform 2020 – ENGINEERING.com

By Denrie Caila Perez, posted on August 07, 2020 | Executives from BMW, Red Hat and Malong discuss how AI is transforming manufacturing and retail.

(From left to right) Maribel Lopez of Lopez Research, Jered Floyd of Red Hat, Jimmy Nassif of BMW Group, and Matt Scott of Malong Technologies.

The VentureBeat Transform 2020 conference welcomed the likes of BMW Group's Jimmy Nassif, Red Hat's Jered Floyd, and Malong CEO Matt Scott, who shared their insights on challenges with AI in their respective industries. Nassif, who deals primarily with robotics, and Floyd, who works in retail, both agreed that edge computing and the Internet of Things (IoT) have become powerful in accelerating production while introducing new capabilities in operations. According to Nassif, BMW's car sales have already doubled over the past decade, reaching 2.5 million in 2019. With over 4,500 suppliers dealing in 203,000 unique parts, logistics problems are bound to occur. In addition, approximately 99 percent of orders are unique, which means there are over 100 end-customer options.

Thanks to platforms such as NVIDIA's Isaac, Jetson AGX Xavier, and DGX, BMW was able to come up with five navigation and manipulation robots that transport and manage parts around its warehouses. Two of the robots have already been deployed to four facilities in Germany. Using computer vision techniques, the robots are able to successfully identify parts, as well as people and potential obstacles. According to BMW, the algorithms are also constantly being optimized using NVIDIA's Omniverse simulator, which BMW engineers can access anytime from any of their global facilities.

In contrast, Malong uses machine learning in a totally different playing field: self-checkout stations in retail locations. Overhead cameras feed images of products as they pass the scanning bed to algorithms capable of detecting mis-scans. This includes mishaps such as occluded barcodes, products left in shopping carts, dissimilar products, and even ticket switching, which is when a product's barcode is literally switched with that of a cheaper product.

These algorithms also run on NVIDIA hardware and are trained with minimal supervision, allowing them to learn and identify products using various video feeds on their own. According to Scott, edge computing is particularly significant in this area because storing closed-circuit footage via the cloud is impractical. Not only that, but it enables easier scalability to thousands of stores in the long term.

"Making an AI system scalable is very different from making it run," he explained. "That's sometimes a mirage that happens when people are starting to play with these technologies."

Floyd also stressed how significant open platforms are when playing with AI and edge computing technology. "With open source, everyone can bring their best technologies forward. Everyone can come with the technologies they want to integrate and be able to immediately plug them into this enormous ecosystem of AI components and rapidly connect them to applications," he said.

Malong has been working with Open Data Hub, a platform that allows for end-to-end AI and is designed for engineers to conceptualize AI solutions without needing complicated and costly machine learning workflows. In fact, it's the very foundation of Red Hat's data science software development stack.

All three companies are looking forward to more innovation in applications and new technologies.

Visit VentureBeat's website for more information on Transform 2020. You can also watch the Transform 2020 sessions on demand here.

For more news and stories, check out how a machine learning system detects manufacturing defects using photos here.

See the original post here:
BMW, Red Hat, and Malong Share Insights on AI and Machine Learning During Transform 2020 - ENGINEERING.com

cnvrg.io Releases New Streaming Endpoints With One-click Deployment for Real-time Machine Learning Applications – PRNewswire

TEL AVIV, Israel, May 26, 2020 /PRNewswire/ -- cnvrg.io, the data science platform simplifying model management and introducing advanced MLOps to the industry, today announced its streaming endpoints solution, a new capability for deploying ML models to production with Apache Kafka in one click. cnvrg.io is the first ML platform to enable one-click streaming endpoint deployment for large-scale and real-time predictions with high throughput and low latency.

85% of machine learning models don't get to production due to the technical complexity of deploying the model in the right environment and architecture. Models can be deployed in a variety of different ways: batch deployment for offline inference, and web services for more real-time scenarios. These two approaches cover most ML use cases, but they both fall short in an enterprise setting when you need to scale and stream millions of predictions in real time. Enterprises require fast, scalable predictions to execute critical and time-sensitive business decisions.

cnvrg.io is thrilled to announce its new capability of deploying ML models to production with a streaming architecture: a producer/consumer interface with native integration to Apache Kafka and AWS Kinesis. In just one click, data scientists and engineers can deploy any kind of model as an endpoint that receives data as a stream and outputs predictions as a stream.

Deployed models will be tracked with advanced model management and model monitoring solutions including alerts, retraining, A/B testing and canary rollout, autoscaling and more.

This new capability allows engineers to support and predict millions of samples in a real-time environment. This architecture is ideal for time-sensitive or event-based predictions, recommender systems, and large-scale applications that require high throughput, low latency, and fault-tolerant environments.
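The announcement does not detail cnvrg.io's own interface, but the underlying producer/consumer pattern it describes looks roughly like the sketch below, written with the kafka-python client; the topic names, the model-loading helper, and the JSON serialization are assumptions for illustration:

```python
# Generic streaming-inference loop: consume feature records, predict, publish.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "features-in",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

model = load_model()  # hypothetical helper: load your trained model however it is persisted

for message in consumer:
    features = message.value["features"]
    prediction = model.predict([features])[0]
    producer.send("predictions-out",
                  {"id": message.value["id"], "prediction": float(prediction)})
```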

"Playtika has 10 million daily active users (DAU), 10 billion daily events and over 9TB of daily processed data for our online games. To provide our players with a personalized experience, we need to ensure our models run at peak performance at all times," says Avi Gabay, Director of Architecture at Playtika. "With cnvrg.io we were able to increase our model throughput by up to 50% and on average by 30% when comparing to RESTful APIs. cnvrg.io also allows us to monitor our models in production, set alerts and retrain with high-level automation ML pipelines."

The new cnvrg.io release extends the company's market footprint and builds on its prior announcements of the NVIDIA DGX-Ready partnership and the Red Hat unified control plane.

About cnvrg.io

cnvrg.io is an AI OS, transforming the way enterprises manage, scale and accelerate AI and data science development from research to production. The code-first platform is built by data scientists, for data scientists and offers unrivaled flexibility to run on-premise or cloud.

Logo - https://mma.prnewswire.com/media/1160338/cnvrg_io_Logo.jpg

SOURCE cnvrg.io

Full Stack Machine Learning Operating System

Go here to see the original:
cnvrg.io Releases New Streaming Endpoints With One-click Deployment for Real-time Machine Learning Applications - PRNewswire

Data Science and Machine-Learning Platforms Market (impact of COVID-19) to See Massive Growth by 2026| SAS, Alteryx, IBM, RapidMiner, KNIME,…

Global Data Science and Machine-Learning Platforms Market Size, Status and Forecast 2020-2026

This report studies the Data Science and Machine-Learning Platforms market across many aspects of the industry, like the market size, market status, market trends, and forecast; the report also provides brief information on the competitors and the specific growth opportunities with key market drivers. Find the complete Data Science and Machine-Learning Platforms market analysis segmented by companies, region, type, and applications in the report.

New vendors in the market are facing tough competition from established international vendors as they struggle with technological innovations, reliability and quality issues. The report will answer questions about the current market developments and the scope of competition, opportunity cost and more.

The major players covered in the Data Science and Machine-Learning Platforms Market: SAS, Alteryx, IBM, RapidMiner, KNIME, Microsoft, Dataiku, Databricks, TIBCO Software, MathWorks, H2O.ai, Anaconda, SAP, Google, Domino Data Lab, Angoss, Lexalytics, Rapid Insight, etc.

The final report will add an analysis of the impact of Covid-19 on the Data Science and Machine-Learning Platforms industry.

Get a Sample Copy @ https://www.reportsandmarkets.com/sample-request/global-data-science-and-machine-learning-platforms-market-size-status-and-forecast-2019-2025

Market Overview:-

The Data Science and Machine-Learning Platforms market is segmented by Type and by Application. Players, stakeholders, and other participants in the global Data Science and Machine-Learning Platforms market will be able to gain the upper hand by using the report as a powerful resource. The segmental analysis focuses on revenue and forecast by Type and by Application for the period 2015-2026.

The Data Science and Machine-Learning Platforms Market report provides an expert and in-depth analysis of key business trends and future market development prospects, key drivers and restraints, profiles of major market players, segmentation, and forecasting. It also provides an extensive view of the market's size, trends, and shape, developed to identify factors that will have a significant impact in boosting sales of the Data Science and Machine-Learning Platforms Market in the near future.

This report focuses on the global Data Science and Machine-Learning Platforms status, future forecast, growth opportunities, key markets, and key players. The study objectives are to present the Data Science and Machine-Learning Platforms development in the United States, Europe, China, Japan, Southeast Asia, India, and Central & South America.

Market segment by Type, the product can be split into

Market segment by Application, split into

The Data Science and Machine-Learning Platforms market report is a comprehensive report which offers a meticulous overview of the market share, size, trends, demand, product analysis, application analysis, regional outlook, competitive strategies, forecasts, and strategies impacting the Data Science and Machine-Learning Platforms Industry. The report includes a detailed analysis of the market's competitive landscape, with the help of detailed business profiles, SWOT analysis, project feasibility analysis, and several other details about the key companies operating in the market.

The study objectives of this report are:

Inquire More about This Report @ https://www.reportsandmarkets.com/enquiry/global-data-science-and-machine-learning-platforms-market-size-status-and-forecast-2019-2025

The Data Science and Machine-Learning Platforms market research report completely covers the vital statistics of capacity, production, value, cost/profit, supply/demand, and import/export, further divided by company and country, and by application/type, for the best possible updated data representation in figures, tables, pie charts, and graphs. These data representations provide predictive data regarding future estimations for convincing market growth. The detailed and comprehensive knowledge of our publishers makes our market analysis stand out.

Key questions answered in this report

Table of Contents

Chapter 1: Global Data Science and Machine-Learning Platforms Market Overview

Chapter 2: Data Science and Machine-Learning Platforms Market Data Analysis

Chapter 3: Data Science and Machine-Learning Platforms Technical Data Analysis

Chapter 4: Data Science and Machine-Learning Platforms Government Policy and News

Chapter 5: Global Data Science and Machine-Learning Platforms Market Manufacturing Process and Cost Structure

Chapter 6: Data Science and Machine-Learning Platforms Productions Supply Sales Demand Market Status and Forecast

Chapter 7: Data Science and Machine-Learning Platforms Key Manufacturers

Chapter 8: Up and Down Stream Industry Analysis

Chapter 9: Marketing Strategy -Data Science and Machine-Learning Platforms Analysis

Chapter 10: Data Science and Machine-Learning Platforms Development Trend Analysis

Chapter 11: Global Data Science and Machine-Learning Platforms Market New Project Investment Feasibility Analysis

About Us:

Reports and Markets is not just another company in this domain but is part of a veteran group called Algoro Research Consultants Pvt. Ltd. It offers premium progressive statistical surveying, market research reports, analysis & forecast data for a wide range of sectors, both for government and private agencies, all across the world. The database of the company is updated on a daily basis. Our database contains a variety of industry verticals that include: Food & Beverage, Automotive, Chemicals and Energy, IT & Telecom, Consumer, Healthcare, and many more. Each and every report goes through the appropriate research methodology, checked by professionals and analysts.

Contact Us:

Sanjay Jain

Manager, Partner Relations & International Marketing

http://www.reportsandmarkets.com

Ph: +1-352-353-0818 (US)

Link:
Data Science and Machine-Learning Platforms Market (impact of COVID-19) to See Massive Growth by 2026| SAS, Alteryx, IBM, RapidMiner, KNIME,...

Beware the AI winter – but can Covid-19 alter this process? – AI News

We have had a blockchain winter as the hype around the technology moves towards reality, and the same will happen with artificial intelligence (AI).

That's according to Dr Karol Przystalski, CTO at IT consulting and software development provider Codete. Przystalski founded Codete having had a significant research background in AI, with previous employers including Sabre and IBM, and a PhD exploring skin cancer pattern recognition using neural networks.

Yet what effect will the Covid-19 pandemic have on this change? Speaking with AI News, Przystalski argues, much like Dorian Selz, CEO of Squirro, in a piece published earlier this week, that while AI isn't quite there to predict or solve the current pandemic, the future can look bright.

AI News: Hi Karol. Tell us about your career to date and your current role and responsibilities as the CTO of Codete?

Dr Karol Przystalski: The experience from the previous companies I worked at and the AI background that I had from my PhD work allowed me to get Codete off the ground. At the beginning, not every potential client could see the advantages of machine learning, but that has changed in the last couple of years. We've started to implement more and more machine learning-based solutions.

Currently, my responsibilities as the CTO are not focused solely on development, as we have already grown to 160 engineers. Even though I still devote some of my attention to research and development, most of my work right now is centred on mentoring and training in the areas of artificial intelligence and big data.

AI: Tell us about the big data and data science services Codete provides and how your company aims to differ from the competitors?

KP: We offer a number of services related to big data and data science: consulting, auditing, training, and software development support. Based on our extensive experience in machine learning solutions, we provide advice to our clients. We audit already implemented solutions, as well as whole processes of product development. We also have a workshop for managers on how not to fail with a machine learning project.

All the materials are based on our own case studies. As a technological partner, we focus on the quality of the applications that we deliver, and we always aim at full transparency in relationships with our clients.

AI: How difficult is it, in your opinion, for companies to gather data science expertise? Is there a shortage of skills and a gap in this area?

KP: In the past, to become a data scientist you had to have a mathematical background or, even better, a PhD in this field. We now know it's not that hard to implement machine learning solutions, and almost every software developer can become a data scientist.

There are plenty of workshops, lectures, and many other materials dedicated to software developers who want to understand machine learning methods. Usually, the journey starts with a few proofs of concept and then moves on to building production solutions. It usually takes a couple of months at the very minimum to become a solid junior-level data scientist, even for experienced software engineers. Codete is well-known in the machine learning communities at several universities, and that's why we can easily extend our team with experienced ML engineers.

AI: What example can you provide of a client Codete has worked with throughout their journey, from research and development to choosing a solution for implementation?

KP: We don't implement all of the projects that clients bring to us. In the first stage, we distinguish between projects that are buzzword-driven and the real-world ones.

One time, a client came to us with an idea for an NLP project for their business. After some research, it turned out that ML was not the best choice for the project; we recommended a simpler, cheaper solution that was more suitable in their case.

We are transparent with our clients, even if it takes providing them with constructive criticism on the solution they want to build. Most AI projects start with a PoC, and if it works well, the project goes through the next stages to a full production solution. In our AI projects, we follow the "fail fast" approach to prevent our clients from potentially over-investing.

AI: Which industries do you think will have the most potential for machine learning and AI and why?

KP: In the Covid-19 times, for sure the health, med, and pharma industries will grow and use AI more often. We will see more use cases applied in telemedicine and medical diagnosis. For sure, the pharma industry and the development of drugs might be supported by AI. We can see how fast the vaccine for Covid-19 is being developed. In the future, the process of finding a valid vaccine can be supported by AI.

But it is not only health-related industries which will use AI more often. I think that almost every industry will invest more in digitalisation, like process automation where ML can be applied. First, we will see an increasing interest in AI in the industries that were not affected by the virus so much, but in the long run even the hospitality and travel industry, as well as many governments, will introduce AI-based solutions to prevent future lockdown.

AI: What is the greatest benefit of AI in business in your opinion and what is the biggest fear?

KP: There are plenty of ways machine learning can be applied in many industries. There is machine learning and artificial intelligence hype going on now, and many managers are becoming aware of the benefits that machine learning can bring to their companies. On the other hand, many can take AI for a solution to almost everything, but that's how buzzword-driven projects are born, not real-world use cases.

This hype may end similarly to other tech hypes that we have witnessed before, when a buzzword was popular but eventually only a limited number of companies applied the technology. Blockchain is a good example: many companies have tried using it for almost everything, and in many cases the technology didn't really prove useful, sometimes even causing new problems.

Blockchain is now being used with success in several industries. Just the same, we can have an AI winter again if we don't distinguish between the hype and the true value behind an AI solution.

Photo by Aaron Burden on Unsplash

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

Original post:
Beware the AI winter - but can Covid-19 alter this process? - AI News

How Machine Learning in Search Works: Everything You Need to Know – Search Engine Journal

In the world of SEO, it's important to understand the system you're optimizing for.

You need to understand how:

Another crucial area to understand is machine learning.

Now, the term machine learning gets thrown around a lot these days.

But how does machine learning actually impact search and SEO?

This chapter will explore everything you need to know about how search engines use machine learning.

It would be difficult to understand how search engines use machine learning without knowing what machine learning actually is.

Let's start with the definition (provided by Stanford University in their course description for Coursera) before we move on to a practical explanation:

Machine learning is the science of getting computers to act without being explicitly programmed.

Machine learning isn't the same as Artificial Intelligence (AI), but the line is starting to get a bit blurry with the applications.

As noted above, machine learning is the science of getting computers to come to conclusions based on information but without being specifically programmed in how to accomplish said task.

AI, on the other hand, is the science behind creating systems that either have, or appear to possess, human-like intelligence and process information in a similar manner.

Think of the difference this way:

Machine learning is a system designed to solve a problem. It works mathematically to produce the solution.

The solution could be programmed specifically, or worked out by humans manually, but without this need, the solutions come much faster.

A good example would be setting a machine off to pore over oodles of data outlining tumor size and location, without programming in what it's looking for. The machine would be given a list of known benign and malignant conclusions.

With this, we would then ask the system to produce a predictive model for future encounters with tumors to generate odds in advance as to which it is based on the data analyzed.
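As a toy version of that tumor example, assuming labeled records of size and location (the data and feature encoding below are fabricated for illustration), the predictive model might be trained like this:

```python
# Train a small classifier to estimate odds for future tumor encounters.
from sklearn.tree import DecisionTreeClassifier

# Features: [tumor_size_mm, location_code]; labels: 0 = benign, 1 = malignant
X = [[8, 0], [32, 1], [12, 0], [41, 2], [6, 1], [37, 2]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Odds in advance for a new tumor, based on the data analyzed:
print(model.predict_proba([[29, 1]]))  # -> [[P(benign), P(malignant)]]
```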

This is purely mathematical.

A few hundred mathematicians could do this, but it would take them many years (assuming a very large database), and hopefully none of them would make any errors.

Or, this same task could be accomplished with machine learning in far less time.

When I'm thinking of Artificial Intelligence, on the other hand, that's when I start to think of a system that touches on the creative and thus becomes less predictable.

An artificial intelligence set on the same task may simply reference documents on the subject and pull conclusions from previous studies.

Or it may add new data into the mix.

Or it may start working on a new type of electrical engine, forgoing the initial task.

It probably won't get distracted on Facebook, but you get where I'm going.

The key word is intelligence.

While artificial, to meet the criteria the intelligence would have to be real, thus producing variables and unknowns akin to what we encounter when we interact with others around us.

Right now what the search engines (and most scientists) are pushing to evolve is machine learning.

Google has a free course on it, has made its machine learning framework TensorFlow open source, and is making big investments in hardware to run it.

Basically, this is the future, so it's best to understand it.

While we can't possibly list (or even know) every application of machine learning going on over at the Googleplex, let's look at a couple of known examples:

What article on machine learning at Google would be complete without mentioning their first and still highly-relevant implementation of a machine learning algorithm into search?

That's right: we're talking RankBrain.

Essentially the system was armed only with an understanding of entities (a thing or concept that is singular, unique, well-defined, and distinguishable) and tasked with producing an understanding of how those entities connect in a query to assist in better understanding the query and a set of known good answers.

These are brutally simplified explanations of both entities and RankBrain, but they serve our purposes here.

So, Google gave the system some data (queries) and likely a set of known entities.

I'm going to guess at the next process, but logically the system would then be tasked with training itself, based on the seed set of entities, on how to recognize unknown entities it encounters.

The system would be pretty useless if it wasn't able to understand a new movie name, date, etc.

Once the system had that process down and was producing satisfactory results, they would have then tasked it with teaching itself how to understand the relationships between entities, what data is being implied or directly requested, and how to seek out appropriate results in the index.

This system solves many problems that plagued Google.

The requirement to include keywords like "How do I replace my S7 screen" on a page about replacing one should not be necessary.

You also shouldn't have to include "fix" if you've included "replace" as, in this context, they generally imply the same thing.
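A toy illustration of why that works (this is not RankBrain itself): with word embeddings, terms that imply the same thing in context land close together in vector space, so exact keyword matches become unnecessary. The vectors below are made up for the example:

```python
# Cosine similarity between toy word vectors: near-synonyms score high.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

embeddings = {
    "fix":     np.array([0.81, 0.10, 0.55]),
    "replace": np.array([0.78, 0.15, 0.58]),
    "banana":  np.array([0.05, 0.92, 0.11]),
}

print(cosine(embeddings["fix"], embeddings["replace"]))  # high: near-synonyms
print(cosine(embeddings["fix"], embeddings["banana"]))   # low: unrelated
```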

RankBrain uses machine learning to:

In its first iteration, RankBrain was tested on queries Google had not encountered before. This makes perfect sense and is a great test.

If RankBrain can improve results for queries that likely weren't optimized for, that involve a mix of old and new entities, and that serve a group of users who were likely getting lackluster results to begin with, then it should be deployed globally.

And it was, in 2016.

Let's take a look at the two results I referenced above. (Worth noting: I wrote the piece and the example first, and only then thought to get the screen capture. This is simply how it works; try it yourself. It works in almost all cases where different wording implies the same thing.)

Some very subtle differences in rankings, with the #1 and #2 sites switching places, but at its core it's the same result.

Now let's look at my automotive example:

Machine learning helps Google not just understand where there are similarities in queries; we can also see it determining that if I need my car fixed I may need a mechanic (good call, Google), whereas for replacing it I may be referring to parts or be in need of governmental documentation to replace the entire thing.

We can also see here where machine learning hasn't quite figured it all out.

When I ask it how to replace my car, I likely mean the whole thing, or I'd have listed the part I wanted.

But it'll learn; it's still in its infancy.

Also, I'm Canadian, so the DMV doesn't really apply.

So here we've seen an example of machine learning at play in determining query meaning, SERP layout, and possible necessary courses of action to fulfill my intent.

Not all of that is RankBrain, but it's all machine learning.

If you use Gmail, or pretty much any other email system, you also are seeing machine learning at work.

According to Google, they are now blocking 99.9% of all spam and phishing emails with a false-positive rate of only 0.05%.

They're doing this using the same core technique: give the machine learning system some data and let it go.

If one were to manually program in all the permutations that would yield a 99.9% success rate in spam filtering, and adjust on the fly for new techniques, it would be an onerous task, if possible at all.

When they did things this way, they sat at a 97% success rate with 1% false positives (meaning 1% of your real messages were sent to the spam folder, which is unacceptable if a message was important).

Enter machine learning: set it up with all the spam messages you can positively confirm, let it build a model around the similarities they share, enter some new messages, and give it a reward for successfully selecting spam messages on its own. Over time (and not a lot of it), it will learn far more signals and react far faster than a human ever could.

Set it to watch for user interactions with new email structures, and when it learns that a new spam technique is being used, it can add that technique to the mix and filter not just those emails but emails using similar techniques to the spam folder.
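A bare-bones sketch of that "give it confirmed spam and let it build a model" idea, using a naive Bayes text classifier; Gmail's actual system is far more sophisticated and uses many more signals than message text alone:

```python
# Train a toy spam filter from confirmed spam/legitimate messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "WIN a FREE prize, click now",     # confirmed spam
    "Cheap meds, no prescription",     # confirmed spam
    "Lunch tomorrow at noon?",         # confirmed legitimate
    "Quarterly report attached",       # confirmed legitimate
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(messages, labels)
print(model.predict(["Click now to claim your free prize"]))  # -> ['spam']
```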

This article promised to be an explanation of machine learning, not just a list of examples.

The examples, however, were necessary to illustrate a fairly easy-to-explain model.

Let's not confuse this with being easy to build; it is just simple in terms of what we need to know.

A common machine learning model follows the following sequence:

This model is referred to as supervised learning and, if my guess is right, it's the model used in the majority of the Google algorithm implementations.

Another model of machine learning is the Unsupervised Model.

To draw from the example used in a great course over on Coursera on machine learning, this is the model used to group similar stories in Google News, and one can infer that it's used in other places, like the identification and grouping of images containing the same or similar people in Google Images.

In this model, the system is not told what it's looking for but rather is simply instructed to group entities (an image, article, etc.) into groups by similar traits (the entities they contain, keywords, relationships, authors, etc.).
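A toy version of that grouping idea: vectorize headlines and cluster them without telling the system what to look for. The headlines are invented, and real news grouping uses far richer signals:

```python
# Unsupervised grouping of similar stories by shared vocabulary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

headlines = [
    "Central bank raises interest rates again",
    "Markets rally after bank raises rates",
    "Local team wins championship final",
    "Fans celebrate championship final win",
]

X = TfidfVectorizer(stop_words="english").fit_transform(headlines)
groups = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(groups)  # e.g. [0 0 1 1]: rate stories together, sports stories together
```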

Understanding what machine learning is will be crucial if you seek to understand why and how SERPs are laid out and why pages rank where they do.

It's one thing to understand an algorithmic factor, which is important to be sure, but understanding the system in which those factors are weighted is of equal, if not greater, importance.

For example, if I were working for a company that sold cars, I would pay specific attention to the lack of usable, relevant information in the SERP results for the query illustrated above.

The result is clearly not a success. Discover what content would be a success and generate it.

Pay attention to the types of content that Google feels may fulfill a user's intent (post, image, news, video, shopping, featured snippet, etc.) and work to provide it.

I like to think of machine learning and its evolution as equivalent to having a Google engineer sitting behind every searcher, adjusting what they see and how they see it before it is sent to their device.

Better still, that engineer is connected, Borg-like, to every other engineer, learning from global rules.

But we'll get more into that in our next piece, on user intent.

View post:
How Machine Learning in Search Works: Everything You Need to Know - Search Engine Journal

Industrial Asset Optimization: Connecting Machines Directly with Data Scientists – Machine Learning Times – machine learning & data science news -…

By: Terry Miller, Global Digital Strategy and Business Development, Siemens. For more from this author, attend his virtual presentation, Industrial Asset Optimization: Machine-to-Cloud/Edge Analytics, at Predictive Analytics World for Industry 4.0, May 31-June 4, 2020. For industrial firms to realize the benefits promised by embracing Industry 4.0, access to clean, quality asset data must improve. Most of a data scientist's work, in any vertical, involves cleaning and contextualizing data, or data prep. In the industrial segment, this remains true and is considerably more challenging. Enterprise-wide data ingest platforms tend to yield inefficient, incomplete versions of the data necessary to optimize assets at the application layer. In order to improve this, firms should …

Read more:
Industrial Asset Optimization: Connecting Machines Directly with Data Scientists - Machine Learning Times - machine learning & data science news -...

Determined AI makes its machine learning infrastructure free and open source – TechCrunch

Machine learning has quickly gone from niche field to crucial component of innumerable software stacks, but that doesn't mean it's easy. The tools needed to create and manage it are enterprise-grade, and often enterprise-only, but Determined AI aims to make them more accessible than ever by open-sourcing its entire AI infrastructure product.

The company created its Determined Training Platform for developing AI in an organized, reliable way: the kind of thing that large companies have created (and kept) for themselves, the team explained when they raised an $11 million Series A last year.

"Machine learning is going to be a big part of how software is developed going forward. But in order for companies like Google and Amazon to be productive, they had to build all this software infrastructure," said CEO Evan Sparks. "One company we worked for had 70 people building their internal tools for AI. There just aren't that many companies on the planet that can withstand an effort like that."

At smaller companies, ML is being experimented with by small teams using tools intended for academic work and individual research. To scale that up to dozens of engineers developing a real product, there aren't a lot of options.

"They're using things like TensorFlow and PyTorch," said Chief Scientist Ameet Talwalkar. "A lot of the way that work is done is just conventions: How do the models get trained? Where do I write down the data on which is best? How do I transform data to a good format? All these are bread-and-butter tasks. There's tech to do it, but it's really the Wild West. And the amount of work you have to do to get it set up, there's a reason big tech companies build out these internal infrastructures."

Determined AI, whose founders started out at UC Berkeley's AMPLab (home of Apache Spark), has been developing its platform for a few years, with feedback and validation from some paying customers. Now, they say, it's ready for its open source debut, with an Apache 2.0 license, of course.

"We have confidence people can pick it up and use it on their own without a lot of hand-holding," said Sparks.

You can spin up your own self-hosted installation of the platform using local or cloud hardware, but the easiest way to go about it is probably the cloud-managed version, which automatically provisions resources from AWS or wherever you prefer and tears them down when they're no longer needed.

The hope is that the Determined AI platform becomes something of a base layer that lots of small companies can agree on, providing portability of results and standards so you're not starting from scratch at every company or project.

With machine learning development expected to expand by orders of magnitude in the coming years, even a small piece of the pie is worth claiming, but with luck, Determined AI may grow to be the new de facto standard for AI development in small and medium businesses.

You can check out the platform on GitHub or at Determined AIs developer site.

Read the original here:
Determined AI makes its machine learning infrastructure free and open source - TechCrunch

Recent Research Answers the Future of Quantum Machine Learning on COVID-19 – Analytics Insight

We have all seen movies or read books about an apocalyptic world where humankind is fighting against a deadly pathogen, and researchers are in a race against time to find a cure. But COVID-19 is not a fictional chapter; it is real, and scientists all over the world are frantically looking for patterns in data by employing powerful supercomputers, with hopes of a speedier breakthrough in COVID-19 vaccine discovery.

A team of researchers from Penn State University has recently unearthed a solution that has the potential to expedite the discovery of a novel coronavirus treatment: employing an innovative hybrid branch of research known as quantum machine learning, a young field that combines machine learning and quantum physics. The team is led by Swaroop Ghosh, the Joseph R. and Janice M. Monkowski Career Development Assistant Professor of Electrical Engineering and Computer Science and Engineering.

Where a computer science-driven approach is used to identify a cure, most methodologies leverage machine learning to screen different compounds one at a time to see if they can bond with the virus's main protease, or protein. The quantum machine learning method could yield quicker results and is more economical than any current method used for drug discovery.

According to Prof. Ghosh, discovering any new drug that can cure a disease is like finding a needle in a haystack. Further, it is an incredibly expensive, laborious, and time-consuming solution. Using the current conventional pipeline for discovering new drugs can take between five and ten years from the concept stage to being released to the market and could cost billions in the process.

He further adds, "High-performance computing such as supercomputers and artificial intelligence can help accelerate this process by screening billions of chemical compounds quickly to find relevant drug candidates."

"This approach works when enough chemical compounds are available in the pipeline, but unfortunately this is not true for COVID-19. This project will explore quantum machine learning to unlock new capabilities in drug discovery by generating complex compounds quickly," he explains.

The funding from the Penn State Institute for Computational and Data Sciences, coordinated through the Penn State Huck Institutes of the Life Sciences as part of their rapid-response seed funding for research across the University to address COVID-19, is supporting this work.

Ghosh and his electrical engineering doctoral students Mahabubul Alam and Abdullah Ash Saki, along with computer science and engineering postgraduate students Junde Li and Ling Qiu, had earlier developed a toolset for solving a particular type of problem known as combinatorial optimization, using quantum computing. Drug discovery falls into a similar category, so their experience made it possible to apply the toolset they had already built to the search for a COVID-19 treatment.

Ghosh considers the use of artificial intelligence for drug discovery to be a very new area. "The biggest challenge is finding an unknown solution to the problem by using technologies that are still evolving, that is, quantum computing and quantum machine learning. We are excited about the prospects of quantum computing in addressing a current critical issue and contributing our bit toward resolving this grave challenge," he elaborates.

Based on a report by McKinsey & Partner, the field of quantum computing technology is expected to have a global market value of US$1 trillion by 2035. This exciting scope of quantum machine learning could further boost that economic value while helping the healthcare industry defeat COVID-19.

Excerpt from:
Recent Research Answers the Future of Quantum Machine Learning on COVID-19 - Analytics Insight

How To Verify The Memory Loss Of A Machine Learning Model – Analytics India Magazine

It is a known fact that deep learning models get better with diversity in the data they are fed. In a healthcare use case, for instance, data will be drawn from several sources, such as patient records, medical histories, professionals' workflows, and insurance providers, to ensure that diversity.

Data points collected through various interactions of people are fed into a machine learning model, which sits remotely in a data haven, spewing predictions without tiring.

However, consider a scenario where one of the providers ceases to offer data to the healthcare project and later requests to delete the provided information. In such a case, does the model remember or forget its learnings from this data?

To explore this, a team from the University of Edinburgh and the Alan Turing Institute asked how one could verify that a model has forgotten some of its data. In the process, they investigated the challenges and also offered solutions.

The authors write that this initiative is the first of its kind, and that the only work that comes close is the Membership Inference Attack (MIA), which also inspired this work.

To verify whether a model has forgotten specific data, the authors propose a Kolmogorov-Smirnov (K-S) distance-based method, which is used to infer whether a model was trained with the query dataset. The algorithm is sketched below:
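
The published algorithm appears as a figure in the original article; the snippet below is only a rough sketch of the core idea, using SciPy's two-sample K-S test, with all names and inputs illustrative.

    # Compare the target model's output distribution on the query data to
    # shadow models trained with and without that data (see the text below).
    from scipy.stats import ks_2samp

    def appears_forgotten(target_out, shadow_with_query_out, shadow_without_query_out):
        # Each argument is a 1-D array of model output scores on the query set
        d_with = ks_2samp(target_out, shadow_with_query_out).statistic
        d_without = ks_2samp(target_out, shadow_without_query_out).statistic
        # If the target's outputs look more like those of the shadow model that
        # never saw the query data, that information was plausibly forgotten.
        return d_with > d_without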

Based on this algorithm, the researchers ran experiments on benchmark datasets such as MNIST, SVHN and CIFAR-10 to verify the effectiveness of the new method. Later, the method was also tested on the ACDC dataset, using the pathology detection component of that challenge.

The MNIST dataset contains 60,000 images of 10 digits with image size 28 × 28. Similar to MNIST, the SVHN dataset has over 600,000 digit images obtained from house numbers in Google Street View images; the image size of SVHN is 32 × 32. Since both datasets are for the task of digit recognition/classification, they were considered to belong to the same domain. CIFAR-10 is used as a dataset to validate the method; it has 60,000 images (size 32 × 32) of 10 object classes, including aeroplane, bird, etc. To train models with the same design, the images of all three datasets are preprocessed to grey-scale and rescaled to size 28 × 28.
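
A minimal sketch of that preprocessing step, assuming PIL; the function name is illustrative.

    from PIL import Image
    import numpy as np

    def to_common_format(img: Image.Image) -> np.ndarray:
        # Convert to grey-scale ('L') and rescale to the shared 28 x 28 input size
        return np.asarray(img.convert('L').resize((28, 28)), dtype=np.float32) / 255.0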

Using the K-S distance, said the authors, statistics about the output distribution of a target model can be obtained without knowing the weights of the model. Since the model's training data are unknown, a few new models, called shadow models, were trained with the query dataset and another calibration dataset.

Then, by comparing the K-S values, one can conclude whether the training data contains information from the query dataset.

Experiments have been done before to check the ownership one has over data on the internet. One such attempt was made by researchers at Stanford, who investigated the algorithmic principles behind efficient data deletion in machine learning.

They found that for many standard ML models, the only way to completely remove an individual's data is to retrain the whole model from scratch on the remaining data, which is often not computationally practical. A trade-off between efficiency and privacy arises because algorithms that support efficient deletion need not be private, and algorithms that are private do not have to support efficient deletion.

The aforementioned experiments probe and raise new questions in the never-ending debate about AI and privacy. The objective of these works is to investigate how much authority an individual has over specific data, while also helping expose the vulnerabilities of a model when certain data is removed.

Check more about this work here.

See the original post here:
How To Verify The Memory Loss Of A Machine Learning Model - Analytics India Magazine

Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company – insideBIGDATA

Tecton.ai emerged from stealth and formally launched with its data platform for machine learning. Tecton enables data scientists to turn raw data into production-ready features, the predictive signals that feed machine learning models. Tecton is in private beta with paying customers, including a Fortune 50 company.

Tecton.ai also announced $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia. Both Martin Casado, general partner at Andreessen Horowitz, and Matt Miller, partner at Sequoia, have joined the board.

Tecton.ai founders Mike Del Balso (CEO), Kevin Stumpf (CTO) and Jeremy Hermann (VP of Engineering) worked together at Uber when the company was struggling to build and deploy new machine learning models, so they created Uber's Michelangelo machine learning platform. Michelangelo was instrumental in scaling Uber's operations to thousands of production models serving millions of transactions per second in just a few years, and today it supports a myriad of use cases ranging from generating marketplace forecasts to calculating ETAs and automating fraud detection.

Del Balso, Stumpf and Hermann went on to found Tecton.ai to solve the data challenges that are the biggest impediment to deploying machine learning in the enterprise today. Enterprises are already generating vast amounts of data, but the problem is how to harness and refine this data into predictive signals that power machine learning models. Engineering teams end up spending the majority of their time building bespoke data pipelines for each new project. These custom pipelines are complex, brittle, expensive and often redundant. The end result is that 78% of new projects never get deployed, and 96% of projects encounter challenges with data quality and quantity(1).

"Data problems all too often cause last-mile delivery issues for machine learning projects," said Mike Del Balso, Tecton.ai co-founder and CEO. "With Tecton, there is no last mile. We created Tecton to empower data science teams to take control of their data and focus on building models, not pipelines. With Tecton, organizations can deliver impact with machine learning quickly, reliably and at scale."

Tecton.ai has assembled a world-class engineering team with deep experience building machine learning infrastructure for industry leaders such as Google, Facebook, Airbnb and Uber. Tecton is the industry's first data platform designed specifically to support the requirements of operational machine learning. It empowers data scientists to build great features, serve them to production quickly and reliably, and do it at scale.

Tecton makes the delivery of machine learning data predictable for every company.

"The ability to manage data and extract insights from it is catalyzing the next wave of business transformation," said Martin Casado, general partner at Andreessen Horowitz. "The Tecton team has been at the forefront of this change, with a long history of machine learning/AI and data at Google, Facebook and Airbnb, and building the machine learning platform at Uber. We're very excited to be partnering with Mike, Kevin, Jeremy and the Tecton team to bring this expertise to the rest of the industry."

"The founders of Tecton built a platform within Uber that took machine learning from a bespoke research effort to the core of how the company operated day-to-day," said Matt Miller, partner at Sequoia. "They started Tecton to democratize machine learning across the enterprise. We believe their platform will drive a Cambrian explosion within their customers, empowering them to drive their business operations with this powerful technology paradigm and unlocking countless opportunities. We were thrilled to partner with Tecton along with a16z at the seed, and now again at the Series A. We believe Tecton has the potential to be one of the most transformational enterprise companies of this decade."

Continue reading here:
Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company - insideBIGDATA

Microsoft: This is how to protect your machine-learning applications – TechRepublic

Understanding failures and attacks can help us build safer AI applications.

Modern machine learning (ML) has become an important tool in a very short time. We're using ML models across our organisations, either rolling our own in R and Python, using tools like TensorFlow to learn and explore our data, or building on cloud- and container-hosted services like Azure's Cognitive Services. It's a technology that helps predict maintenance schedules, spots fraud and damaged parts, and parses our speech, responding in a flexible way.

The models that drive our ML applications are incredibly complex, training neural networks on large data sets. But there's a big problem: they're hard to explain or understand. Why does a model parse a red blob with white text as a stop sign and not a soft drink advert? It's that complexity which hides the underlying risks that are baked into our models, and the possible attacks that can severely disrupt the business processes and services we're building using those very models.

It's easy to imagine an attack on a self-driving car that could make it ignore stop signs, simply by changing a few details on the sign, or a facial recognition system that would detect a pixelated bandanna as Brad Pitt. These adversarial attacks take advantage of the ML models, guiding them to respond in a way that's not how they're intended to operate, distorting the input data by changing the physical inputs.

Microsoft is thinking a lot about how to protect machine learning systems. They're key to its future -- from tools being built into Office, to its Azure cloud-scale services, and managing its own and your networks, even delivering security services through ML-powered tools like Azure Sentinel. With so much investment riding on its machine-learning services, it's no wonder that many of Microsoft's presentations at the RSA security conference focused on understanding the security issues with ML and on how to protect machine-learning systems.

Attacks on machine-learning systems need access to the models used, so you need to keep your models private. That goes for small models that might be helping run your production lines as much as the massive models that drive the likes of Google, Bing and Facebook. If I get access to your model, I can work out how to affect it, either looking for the right data to feed it that will poison the results, or finding a way past the model to get the results I want.

Much of this work has been published in a paper in conjunction with the Berkman Klein Center, on failure modes in machine learning. As the paper points out, a lot of work has been done in finding ways to attack machine learning, but not much on how to defend it. We need to build a credible set of defences around machine learning's neural networks, in much the same way as we protect our physical and virtual network infrastructures.

Attacks on ML systems are failures of the underlying models. They are responding in unexpected, and possibly detrimental ways. We need to understand what the failure modes of machine-learning systems are, and then understand how we can respond to those failures. The paper talks about two failure modes: intentional failures, where an attacker deliberately subverts a system, and unintentional failures, where there's an unsafe element in the ML model being used that appears correct but delivers bad outcomes.

By understanding the failure modes we can build threat models and apply them to our ML-based applications and services, and then respond to those threats and defend our new applications.

The paper suggests 11 different attack classifications, many of which get around our standard defence models. It's possible to compromise a machine-learning system without needing access to the underlying software and hardware, so standard authorisation techniques can't protect ML-based systems and we need to consider alternative approaches.

What are these attacks? The first, perturbation attacks, modify queries to change the response to one the attackers desire. That's matched by poisoning attacks, which achieve the same result by contaminating the training data. Machine-learning models often include important intellectual property, and some attacks like model inversion aim to extract that data. Similarly, a membership inference attack will try to determine whether specific data was in the initial training set. Closely related is the concept of model stealing, using queries to extract the model.

Other attacks include reprogramming the system around the ML model, so that either results or inputs are changed. Closely related are adversarial attacks that change physical objects, adding duct tape to signs to confuse navigation or using specially printed bandanas to disrupt facial-recognition systems. Some attacks depend on the provider: a malicious provider can extract training data from customer systems. They can add backdoors to systems, or compromise models as they're downloaded.

While many of these attacks are new and targeted specifically at machine-learning systems, they are still computer systems and applications, and are vulnerable to existing exploits and techniques, allowing attackers to use familiar approaches to disrupt ML applications.

It's a long list of attack types, but understanding what's possible allows us to think about the threats our applications face. More importantly, it provides an opportunity to think about defences and how we protect machine-learning systems: building better, more secure training sets, locking down ML platforms, controlling access to inputs and outputs, and working with trusted applications and services.

Attacks are not the only risk: we must be aware of unintended failures -- problems that come from the algorithms we use or from how we've designed and tested our ML systems. We need to understand how reinforcement learning systems behave, how systems respond in different environments, if there are natural adversarial effects, or how changing inputs can change results.

If we're to defend machine-learning applications, we need to ensure that they have been tested as fully as possible, in as many conditions as possible. The apocryphal stories of early machine-learning systems that identified trees instead of tanks, because all the training images were of tanks under trees, are a sign that these aren't new problems, and that we need to be careful about how we train, test, and deploy machine learning. We can only defend against intentional attacks if we know that we've protected ourselves and our systems from mistakes we've made. The old adage "test, test, and test again" is key to building secure and safe machine learning -- even when we're using pre-built models and service APIs.

Here is the original post:
Microsoft: This is how to protect your machine-learning applications - TechRepublic

AI, machine learning and automation in cybersecurity: The time is now – GCN.com

The cybersecurity skills shortage continues to plague organizations across regions, markets and sectors, and the government sector is no exception. According to (ISC)2, there are only enough cybersecurity pros to fill about 60% of the jobs that are currently open -- which means the workforce will need to grow by roughly 145% just to meet the current global demand.

The Government Accountability Office states that the federal government needs a qualified, well-trained cybersecurity workforce to protect vital IT systems, and one senior cybersecurity official at the Department of Homeland Security has described the talent gap as a national security issue. The scarcity of such workers is one reason why securing federal systems is on GAO's High Risk list. Given this situation, chief information security officers who are looking for ways to make their existing resources more effective can make great use of automation and artificial intelligence to supplement and enhance their workforce.

The overall challenge landscape

Results of our survey, Making Tough Choices: How CISOs Manage Escalating Threats and Limited Resources, show that CISOs currently devote 36% of their budgets to response and 33% to prevention. However, as security needs change, many CISOs are looking to shift budget away from prevention without reducing its effectiveness. An optimal budget would reduce spending on prevention and increase spending on detection and response to 33% and 40% of the security budget, respectively. This shift would give security teams the speed and flexibility they need to react quickly in the face of a threat from cybercriminals who are outpacing agencies' defensive capabilities. When breaches are inevitable, it is important to stop as many as possible at the point of intrusion, but it is even more important to detect and respond to them before they can do serious damage.

One challenge to matching the speed of today's cyberattacks is that CISOs have limited personnel and budget resources. To overcome these obstacles and attain the detection and response speeds necessary for effective cybersecurity, CISOs must take advantage of AI, machine learning and automation. These technologies will help close gaps by correlating threat intelligence and coordinating responses at machine speed. Government agencies will be able to develop a self-defending security system capable of analyzing large volumes of data, detecting threats, reconfiguring devices and responding to threats without human intervention.

The unique challenges

Federal agencies deal with a number of challenges unique to the public sector, including the age and complexity of IT systems as well as the government budget cycle. IT teams for government agencies aren't just protecting intellectual property or credit card numbers; they are also tasked with protecting citizens' sensitive data and national security secrets.

Charged with this duty but constrained by limited resources, IT leaders must weigh the risks of cyber threats against the daily demands of keeping networks up and running. This balancing act becomes more difficult as agencies migrate to the cloud, adopt internet-of-things devices and transition to software-defined networks that have no perimeter. These changes mean government networks are expanding their attack surface with no additional -- or even fewer -- defensive resources. It's part of the reason why the Verizon Data Breach Investigations Report found that government agencies were subjected to more security incidents and more breaches than any other sector last year.

To change that dynamic, the typical government set-up of siloed systems must be replaced with a unified platform that can provide wider and more granular network visibility and more rapid and automated response.

How AI and automation can help

The keys to making a unified platform work are AI and automation technologies. Because organizations cannot keep pace with the growing volume of threats through manual detection and response, they need to leverage AI/ML and automation to fill these gaps. AI-driven solutions can learn what normal behavior looks like in order to detect anomalous behavior. For instance, many employees typically access a specific kind of data or only log on at certain times. If an employee's account starts to show activity outside of these normal parameters, an AI/ML-based solution can detect these anomalies and can inspect or quarantine the affected device or user account until it is determined to be safe or mitigating action can be taken.

If the device is infected with malware or is otherwise acting maliciously, that AI-based tool can also issue automated responses. Making these tactical tasks the responsibility of AI-driven solutions frees security teams to work on more strategic problems, develop threat intelligence or focus on more difficult tasks such as detecting unknown threats.
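
A minimal sketch of that learn-the-baseline-then-flag-outliers pattern, assuming scikit-learn; the features, numbers, and names are illustrative only.

    # Learn what "normal" account activity looks like, then score new events
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Historical activity: [hour of day, MB of data accessed]
    normal_activity = np.array([[9, 120], [10, 80], [11, 150], [14, 100],
                                [15, 90], [16, 130], [9, 110], [10, 95]])

    detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

    # A 3 AM login pulling 5 GB looks nothing like the learned baseline
    print(detector.predict([[3, 5000]]))   # [-1] -> anomaly: quarantine and review
    print(detector.predict([[10, 100]]))   # [ 1] -> consistent with normal behavior

In a real deployment, the anomaly score would trigger the automated responses described above rather than a print statement.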

IT teams at government agencies that want to implement AI and automation must be sure the solution they choose can scale and operate at machine speeds to keep up with the growing complexity and speed of the threat. In selecting a solution, IT managers must take time to ensure solutions have been developed using AI best practices and training techniques and that they are powered by best-in-class threat intelligence, security research and analytics technology. Data should be collected from a variety of nodes -- both globally and within the local IT environment -- to glean the most accurate and actionable information for supporting a security strategy.

Time is of the essence

Government agencies are experiencing more cyberattacks than ever before, at a time when the nation is facing a 40% cybersecurity skills talent shortage. Time is of the essence in defending a network, but time is what under-resourced and over-tasked government IT teams typically lack. As attacks come more rapidly and adapt to the evolving IT environment and new vulnerabilities, AI/ML and automation are rapidly becoming necessities.Solutions built from the ground up with these technologies will help government CISOs counter and potentially get ahead of todays sophisticated attacks.

About the Author

Jim Richberg is a Fortinet field CISO focused on the U.S. public sector.

More here:
AI, machine learning and automation in cybersecurity: The time is now - GCN.com

Machine Learning takes Robotics to the Next Level of Development – Analytics Insight

In the mid-twentieth century, when the computer and its applications were starting to change the world, sociologist David Riesman had something stuck in his mind. He wondered what people would do once machine automation took effect and humans no longer had to do daily physical chores or strain their brains to come up with solutions. He was excited to see what people would do with all the free time.

More than half a century later, when the world has exactly what Riesman wondered about, humans are still working full time. Work alleviated by industrious machines such as robotic systems has only freed humans to create more elaborate new tasks to labour over. Counter to all the predictions of the previous century, machines gave humans more time to work, not to relax.

Much as we now imagine robots taking over human society and doing all the work themselves, physical and intellectual, without human assistance, because they are well programmed and can adapt to any environment and make accurate decisions unaided, people of the previous century dreamed during the space-race era that robots would take over all physical work. Today, however, robots are used for their intelligence more than for their physical assistance. Humans can only teach robots and make them follow instructions up to a point, so where humans fall short, machine learning steps in to discipline robotics.

Machine learning is one of the most advanced and innovative technological fields influencing robotics today. It helps robots function with well-developed applications and a deep vision.

According to a recent survey published by Evans Data Corporation Global Development, machine learning and robotics sat at the top of developers' priorities for 2016: 56.4% of participants build robotic apps, and 24.7% of them indicate the use of machine learning in their projects.

Machine learning requires enormous caches of data to be taught to the robot for it to learn properly. The procedure combines algorithms and physical machines to aid the robots in the learning process.

Deep Learning educates the purpose of the robot

Deep learning has been part of the machine learning field for more than 30 years, but it was only recently recognised and brought into continuous use, once deep neural network algorithms and hardware advancements showed their potential. Deep learning is made possible by computational capacity and the required datasets, which are ultimately the most powerful assets of machine learning.

The process of teaching robots through machine learning requires engineers and scientists to decide how the AI learns. Domain experts then advise on how robots need to function and operate within the scope of the task. They also specify the features of robots meant to assist, say, logistics experts and security consultants. Deep learning focuses on the sector in which a robot needs to be specialised from the root.

Feeding robots with planning and learning

AI robots acquire two important capabilities through machine learning: planning and learning. Planning is a physical way of teaching the robot, in which the robot works out the pace at which it has to move every joint to complete a task. For example, a robot grabbing an object is a planning input.

Meanwhile, learning involves taking different inputs and reacting according to data added in a dynamic environment. The learning process takes place through physical demonstrations in which movements are trained, through simulation in 3D artificial environments, and through feeding in video and data of a person or another robot performing the task the robot hopes to master. The simulation provides training data: a set of labelled or annotated datasets that an AI algorithm can use to recognise and learn from.

Educating and training with accurate data

The process of educating a robot needs accuracy and abundance. Inaccurate or corrupt data brings nothing but chaos, leading the robot to draw the wrong conclusions. For example, if the database is focused on green apples and you input a picture of a blueberry muffin, it would still return a green apple. This kind of data disruption is a major threat. Insufficient training data, meanwhile, will bar the robot from reaching the full potential it is designed for.

Reaping the maximum of physical help

Machine learning will push robots to do physical work at its best. Recently, these kinds of robots are used in industries for various purposes. For example, unmanned vehicles are stealing the spotlight at construction sites.

It is not just the construction sector that is reaping a handful of help from machine learning. The medical industry makes use of it by employing computer vision models to recognise tumours in MRIs and CT scans. With further training, an AI robot will be capable of performing life-saving surgeries and other medical procedures through its machine learning input.

With the emergence of robots in society, training data, machine learning and artificial intelligence (AI) are playing a critical role in bringing them into service. Tech companies involved in creating and training robots should spend some time sensitising people to how robots help humanity. If things go well and the AI field comes up with advanced robots that are well trained, built and purposed, Riesman's dream of humans having leisure time could come true.

The rest is here:
Machine Learning takes Robotics to the Next Level of Development - Analytics Insight

Demonstration Of What-If Tool For Machine Learning Model Investigation – Analytics India Magazine

The machine learning era has reached the stage of interpretability, where developing models and making predictions is simply not enough any more. To make a powerful impact and get good results, it is important to investigate and probe the dataset and the models. A good model investigation involves digging deep into the model to find insights and inconsistencies, a task that usually involves writing a lot of custom functions. Tools like the What-If Tool make this probing easy and save programmers time and effort.

In this article, we will learn about the What-If Tool, its features, and how to use it to investigate a machine learning model.

The What-If Tool (WIT) is a visualization tool designed to interactively probe machine learning models. It allows users to understand models such as classifiers, regressors and deep neural networks by providing methods to evaluate, analyse and compare them. It is user-friendly and can easily be used not only by developers but also by researchers and non-programmers.

WIT was developed by Google under the People+AI research (PAIR) program. It is open-source and brings together researchers across Google to study and redesign the ways people interact with AI systems.

This tool provides multiple features and advantages for users investigating a model. WIT can be used within a Google Colab or Jupyter notebook, and it can also be used with TensorBoard.

Let us take a sample dataset to understand the different features of WIT. I will use the forest fires dataset, available for download on Kaggle. The goal is to predict the area affected by forest fires, given the temperature, month, amount of rain, etc.

I will implement this tool on Google Colaboratory. Before we load the dataset and perform the processing, we first install WIT:

!pip install witwidget

Once we have split the data, we can convert the columns month and day to categorical values using a label encoder.

Now we can build our model. I will use sklearn's ensemble module and implement a gradient boosting regression model.

Now that we have the model trained, we will write a prediction function, since the widget needs one.

Next, we will write the code to call the widget.
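
A sketch of these steps, assuming the Kaggle CSV is saved as forestfires.csv; the exact WIT configuration calls may vary by version, so treat this as illustrative.

    import pandas as pd
    from sklearn.preprocessing import LabelEncoder
    from sklearn.ensemble import GradientBoostingRegressor
    from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

    df = pd.read_csv('forestfires.csv')

    # Convert the categorical columns to numeric codes
    for col in ['month', 'day']:
        df[col] = LabelEncoder().fit_transform(df[col])

    X, y = df.drop(columns=['area']), df['area']
    model = GradientBoostingRegressor().fit(X, y)

    def predict_fn(examples):
        # WIT hands us a list of examples (including the target column);
        # return one prediction per example
        ex = pd.DataFrame(examples, columns=df.columns.tolist())
        return model.predict(ex.drop(columns=['area']))

    config = (WitConfigBuilder(df.values.tolist(), df.columns.tolist())
              .set_custom_predict_fn(predict_fn)
              .set_target_feature('area')
              .set_model_type('regression'))
    WitWidget(config, height=600)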

This opens an interactive widget with two panels.

To the left is a panel for selecting the techniques to apply to the data; to the right are the data points.

As you can see, the right panel has options for selecting features of the dataset along the X-axis and Y-axis. I will set these values and check the graphs.

Here I have set FFMC along the X-axis and area as the target. Keep in mind that these points are displayed after the regression is performed.

Let us now explore each of the options provided to us.

You can select a random data point and highlight the point selected. You can also change the value of the datapoint and observe how the predictions change dynamically and immediately.

As you can see, changing the values changes the predicted outcomes. You can change multiple values and experiment with the model behaviour.

Another way to understand the behaviour of a model is to use counterfactuals. Counterfactuals are slight changes made that can cause a model to flip its decision.

By clicking on the slide button shown below we can identify the counterfactual which gets highlighted in green.

This plot shows the effects that the features have on the trained machine learning model.

As shown below, we can see the inference of all the features with the target value.

This tab lets us look at overall model performance. You can evaluate performance with respect to one feature or several, and there are multiple options available for analysing it.

I have selected two features FFMC and temp against the area to understand performance using mean error.

If multiple trained models are used, their performance can be compared here.

The features tab is used to get the statistics of each feature in the dataset. It displays the data in the form of histograms or quantile charts.

The tab also enables us to look into the distribution of values for each feature in the dataset.

It also highlights the features that are most non-uniform in comparison to the other features in the dataset.

Identifying non-uniformity is a good way to reduce bias in the model.

WIT is a very useful tool for analysing model performance. The ability to inspect models in a simple, no-code environment will be of great help, especially from a business perspective.

It also gives insight into factors beyond training, such as why and how a model was created and how the dataset fits it.

More:
Demonstration Of What-If Tool For Machine Learning Model Investigation - Analytics India Magazine

This 17-year-old boy created a machine learning model to suggest potential drugs for Covid-19 – India Today

In keeping with its tradition of high excellence and achievements, Greenwood High International School's student Tarun Paparaju of Class 12 has achieved the 'Grand Master' level in kernels, the highest accreditation in Kaggle, holding a rank of 13 out of 118,024 Kagglers worldwide. Kaggle is the world's largest online community for Data Science and Artificial Intelligence.

There are only 20 Kernel Grandmasters among the three million users on Kaggle worldwide, and Tarun, aged 17, is honored to now be one of them. Users of Kaggle are placed at different levels based on the quality and accuracy of their solutions to real-world artificial intelligence problems. The five levels, in ascending order, are Novice, Contributor, Expert, Master, and Grandmaster.

Kaggle hosts several data science competitions and contestants are challenged to find solutions to these real-world challenges. Kernels are a medium through which Kagglers share their code and insights on how to solve the problem.

These kernels include in-depth data analysis, visualisation, and machine learning, usually written in the Python or R programming language. Other Kagglers can upvote a kernel if they believe it provides useful insights or solves the problem. The 'Kernels Grandmaster' title at Kaggle requires 15 kernels awarded gold medals.

Tarun's passion for calculus, mathematical modeling, and data science from a very young age got him interested in participating in and contributing to the Kaggle community.

He loves solving real-world data science problems, especially in areas based on deep learning: natural language processing and signal processing. Tarun is an open-source contributor to Keras, a deep learning framework.

He has proposed and added Capsule NN layer support to the Keras framework. He writes blogs about his adventures and learnings in data science.

Now, he works closely with the Kaggle community and aspires to be a scholar in the area of natural language processing. Additionally, he loves playing cricket and football; sports are a large part of his life outside data science and academics.

Visit link:
This 17-year-old boy created a machine learning model to suggest potential drugs for Covid-19 - India Today

Hey software developers, youre approaching machine learning the wrong way – The Next Web

I remember the first time I ever tried to learn to code. I was in middle school, and my dad, a programmer himself, pulled open a text editor and typed this on the screen:
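
(Something like the classic Java boilerplate, judging by the keywords he asks about below:)

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello World");
        }
    }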

"Excuse me?" I said.

"It prints 'Hello World,'" he replied.

"What's public? What's class? What's static? What's..."

"Ignore that for now. It's just boilerplate."

But I was pretty freaked out by all that so-called boilerplate I didn't understand, and so I set out to learn what each of those keywords meant. That turned out to be complicated and boring, and pretty much put the kibosh on my young coder aspirations.

It's immensely easier to learn software development today than it was when I was in high school, thanks to sites like codecademy.com, the ease of setting up basic development environments, and a general sway towards teaching high-level, interpreted languages like Python and JavaScript. You can go from knowing nothing about coding to writing your first conditional statements in a browser in just a few minutes. No messy environment setup, installations, compilers, or boilerplate to deal with; you can head straight to the juicy bits.

This is exactly how humans learn best. First, we're taught core concepts at a high level, and only then can we appreciate and understand under-the-hood details and why they matter. We learn Python, then C, then assembly, not the other way around.

Unfortunately, lots of folks who set out to learn machine learning today have the same experience I had when I was first introduced to Java. They're given all the low-level details up front (layer architecture, back-propagation, dropout, etc.), come to think ML is really complicated, decide that maybe they should take a linear algebra class first, and give up.

That's a shame, because in the very near future, most software developers effectively using machine learning aren't going to have to think or know about any of that low-level stuff. Just as we (usually) don't write assembly or implement our own TCP stacks or encryption libraries, we'll come to use ML as a tool and leave the implementation details to a small set of experts. At that point, after machine learning is democratized, developers will need to understand not implementation details but instead best practices in deploying these smart algorithms in the world.

Today, if you want to build a neural network that recognizes your cat's face in photos or predicts whether your next tweet will go viral, you'd probably set off to learn either TensorFlow or PyTorch. These Python-based deep learning libraries are the most popular tools for designing neural networks today, and they're both under five years old.

In its short lifespan, TensorFlow has already become way, way more user-friendly than it was five years ago. In its early days, you had to understand not only machine learning but also distributed computing and deferred graph architectures to be an effective TensorFlow programmer. Even writing a simple print statement was a challenge.

Just earlier this fall, TensorFlow 2.0 officially launched, making the framework significantly more developer-friendly. Here's what a Hello-World-style model looks like in TensorFlow 2.0:
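
(A sketch along the lines of the standard TensorFlow 2.0 beginner example; the specific layer sizes here are assumptions.)

    import tensorflow as tf

    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),   # unroll each image into a vector
        tf.keras.layers.Dense(128, activation='relu'),   # a dense, fully connected layer
        tf.keras.layers.Dropout(0.2),                    # randomly drop units to curb overfitting
        tf.keras.layers.Dense(10, activation='softmax')  # one output per class
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])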

If you've designed neural networks before, the code above is straightforward and readable. But if you haven't, or you're just learning, you've probably got some questions. Like, what is Dropout? What are these dense layers, how many do you need, and where do you put them? What's sparse_categorical_crossentropy? TensorFlow 2.0 removes some friction from building models, but it doesn't abstract away designing the actual architecture of those models.

So what will the future of easy-to-use ML tools look like? It's a question that everyone from Google to Amazon to Microsoft and Apple is spending clock cycles trying to answer. Also, disclaimer: it is what I spend all my time thinking about as an engineer at Google.

For one, we'll start to see many more developers using pre-trained models for common tasks; i.e., rather than collecting our own data and training our own neural networks, we'll just use Google's/Amazon's/Microsoft's models. Many cloud providers already do something like this. For example, by hitting a Google Cloud REST endpoint, you can use pre-trained neural networks to label images, analyze the sentiment of text, transcribe audio, and more.
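
A sketch of what hitting one such endpoint might look like, here the Cloud Vision API via the requests library; the API key and image URI are placeholders.

    import requests

    API_KEY = 'YOUR_API_KEY'  # placeholder credential
    body = {'requests': [{
        'image': {'source': {'imageUri': 'gs://my-bucket/cat.jpg'}},  # placeholder image
        'features': [{'type': 'LABEL_DETECTION', 'maxResults': 5}],
    }]}
    resp = requests.post('https://vision.googleapis.com/v1/images:annotate',
                         params={'key': API_KEY}, json=body)
    print(resp.json())  # labels the pre-trained model sees in the image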

You can also run pre-trained models on-device, in mobile apps, using tools like Google's ML Kit or Apple's Core ML.

The advantage of using pre-trained models over a model you build yourself in TensorFlow (besides ease of use) is that, frankly, you probably cannot personally build a model more accurate than one that Google researchers, training neural networks on a whole internet of data and tons of GPUs and TPUs, could build.

The disadvantage to using pre-trained models is that they solve generic problems, like identifying cats and dogs in images, rather than domain-specific problems, like identifying a defect in a part on an assembly line.

But even when it comes to training custom models for domain-specific tasks, our tools are becoming much more user-friendly.

Screenshot of Teachable Machine, a tool for building vision, gesture, and speech models in the browser.

Google's free Teachable Machine site lets users collect data and train models in the browser using a drag-and-drop interface. Earlier this year, MIT released a similar code-free interface for building custom models that runs on touchscreen devices, designed for non-coders like doctors. Microsoft and startups like lobe.ai offer similar solutions. Meanwhile, Google Cloud AutoML is an automated model-training framework for enterprise-scale workloads.

As ML tools become easier to use, the skills needed by developers hoping to use this technology (without becoming specialists) will change. So if you're trying to predict where, Wayne Gretzky-style, the puck is going, what should you study now?

What makes machine learning algorithms distinct from standard software is that they're probabilistic. Even a highly accurate model will be wrong some of the time, which means it's not the right solution for lots of problems, especially on its own. Take ML-powered speech-to-text algorithms: it might be okay if occasionally, when you ask Alexa to "turn off the music," she instead sets your alarm for 4 AM. It's not okay if a medical version of Alexa thinks your doctor prescribed you Enulose instead of Adderall.

Understanding when and how models should be used in production is, and will always be, a nuanced problem. It's especially tricky in cases where the stakes are high and a model outperforms, but cannot fully replace, human judgment.

Take medical imaging. We're globally short on doctors, and ML models are often more accurate than trained physicians at diagnosing disease. But would you want an algorithm to have the last say on whether or not you have cancer? The same goes for models that help judges decide jail sentences. Models can be biased, but so are people.

Understanding when ML makes sense to use, as well as how to deploy it properly, isn't an easy problem to solve, but it's one that's not going away anytime soon.

Machine learning models are notoriously opaque; that's why they're sometimes called black boxes. It's unlikely you'll be able to convince your VP to make a major business decision with "my neural network told me so" as your only proof. Plus, if you don't understand why your model is making the predictions it is, you might not realize it's making biased decisions (i.e., denying loans to people from a specific age group or zip code).

It's for this reason that so many players in the ML space are focusing on building "Explainable AI" features: tools that let users more closely examine what features models are using to make predictions. We still haven't entirely cracked this problem as an industry, but we're making progress. In November, for example, Google launched a suite of explainability tools as well as something called Model Cards, a sort of visual guide for helping users understand the limitations of ML models.

Google's Facial Recognition Model Card shows the limitations of this particular model.

There are a handful of developers good at machine learning, a handful of researchers good at neuroscience, and very few folks who fall in that intersection. This is true of almost any sufficiently complex field. The biggest advances we'll see from ML in the coming years likely won't come from improved mathematical methods but from people with different areas of expertise learning at least enough machine learning to apply it to their domains. This is mostly the case in medical imaging, for example, where the most exciting breakthroughs (being able to spot pernicious diseases in scans) are powered not by new neural network architectures but by fairly standard models applied to a novel problem. So if you're a software developer lucky enough to possess additional expertise, you're already ahead of the curve.

This, at least, is what I would focus on today if I were starting my AI education from scratch. Meanwhile, I find myself spending less and less time building custom models from scratch in TensorFlow, and more and more time using high-level tools like AutoML and AI APIs and focusing on application development.

This article was written by Dale Markowitz, an Applied AI Engineer at Google based in Austin, Texas, where she works on applying machine learning to new fields and industries. She also likes solving her own life problems with AI, and talks about it on YouTube.

Read more from the original source:
Hey software developers, youre approaching machine learning the wrong way - The Next Web

Machine Learning as a Service (MLaaS) Market Size: Opportunities, Current Trends And Industry Analysis by 2028 | Microsoft, IBM Corporation,…

Market Scenario of the Machine Learning as a Service (MLaaS) Market:

The most recent Machine Learning as a Service (MLaaS) market research study covers the current size of the worldwide MLaaS market. It presents a detailed analysis based on exhaustive research of market elements such as market size, growth scenario, potential opportunities, operating landscape and trend analysis. The report centers on the MLaaS business status, presenting volume and value, key markets, product types, consumers, regions, and key players.

Sample Copy of This Report @ https://www.quincemarketinsights.com/request-sample-50032?utm_source=TDC/komal

The prominent players covered in this report: Microsoft, IBM Corporation, International Business Machine, Amazon Web Services, Google, Bigml, Fico, Hewlett-Packard Enterprise Development, At&T, Fuzzy.ai, Yottamine Analytics, Ersatz Labs, Inc., and Sift Science Inc.

The market is segmented by type (Special Services and Management Services), by organization size (SMEs and Large Enterprises), by application (Marketing & Advertising, Fraud Detection & Risk Analytics, Predictive Maintenance, Augmented Reality, Network Analytics, and Automated Traffic Management), and by end user (BFSI, IT & Telecom, Automobile, Healthcare, Defense, Retail, Media & Entertainment, and Communication).

Geographical segments are North America, Europe, Asia Pacific, Middle East & Africa, and South America.

Quince Market Insights presents a 360-degree outline of the competitive scenario of the global Machine Learning as a Service (MLaaS) market, with extensive data on recent product and technological developments in the market.

It offers a wide-ranging analysis of the impact of these advancements on the market's future growth. The research report studies the market in detail, explaining the key facets that are expected to have a measurable influence on its development over the forecast period.

Get ToC for the overview of the premium report @ https://www.quincemarketinsights.com/request-toc-50032?utm_source=TDC/komal

This is anticipated to drive the Global Machine Learning as a Service (MLaaS) Market over the forecast period. This research report covers the market landscape and its progress prospects in the near future. After studying key companies, the report focuses on the new entrants contributing to the growth of the market. Most companies in the Global Machine Learning as a Service (MLaaS) Market are currently adopting new technological trends in the market.

Finally, the researchers shed light on ways to identify the strengths, weaknesses, opportunities, and threats affecting the growth of the global Machine Learning as a Service (MLaaS) market. The feasibility of new projects is also assessed in the report.

Make an Enquiry for purchasing this Report @ https://www.quincemarketinsights.com/enquiry-before-buying/enquiry-before-buying-50032?utm_source=TDC/komal

Read more here:
Machine Learning as a Service (MLaaS) Market Size: Opportunities, Current Trends And Industry Analysis by 2028 | Microsoft, IBM Corporation,...