CORRECTING and REPLACING Anyscale Hosts Inaugural Ray Summit on Scalable Python and Scalable Machine Learning – Yahoo Finance

Creators of Ray Open Source Project Gather Industry Experts for Two-Day Event on Building Distributed Applications at Scale

Please replace the release with the following corrected version due to multiple revisions.

The updated release reads:

ANYSCALE HOSTS INAUGURAL RAY SUMMIT ON SCALABLE PYTHON AND SCALABLE MACHINE LEARNING

Creators of Ray Open Source Project Gather Industry Experts for Two-Day Event on Building Distributed Applications at Scale

Anyscale, the distributed programming platform company, is proud to announce Ray Summit, an industry conference dedicated to the use of the Ray open source framework for overcoming challenges in distributed computing at scale. The two-day virtual event is scheduled for Sept. 30-Oct. 1, 2020.

With the power of Ray, developers can build applications and easily scale them from a laptop to a cluster, eliminating the need for in-house distributed computing expertise. Ray Summit brings together a leading community of architects, machine learning engineers, researchers, and developers building the next generation of scalable, distributed, high-performance Python and machine learning applications. Experts from organizations including Google, Amazon, Microsoft, Morgan Stanley, and more will showcase Ray best practices, real-world case studies, and the latest research in AI and other scalable systems built on Ray.
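
For readers unfamiliar with Ray's programming model, here is a minimal sketch of how an ordinary Python function is scaled out using Ray's public API (ray.init, @ray.remote, ray.get); the local-cluster setup is an assumption for illustration.

```python
# Minimal sketch (assumes `pip install ray`): an ordinary Python function
# becomes a parallel remote task via Ray's public API.
import ray

ray.init()  # starts a local Ray runtime; pass an address to join a cluster

@ray.remote
def square(x):
    # Runs as a task on whatever worker the scheduler picks.
    return x * x

# .remote() returns futures immediately; ray.get() collects the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The same decorated function scales from a laptop to a cluster without code changes, which is the property the release highlights.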

"Ray Summit gives individuals and organizations the opportunity to share expertise and learn from the brightest minds in the industry about leveraging Ray to simplify distributed computing," said Robert Nishihara, Ray co-creator and Anyscale co-founder and CEO. "Its also the perfect opportunity to build on Rays established popularity in the open source community and celebrate achievements in innovation with Ray."

Anyscale will announce the v1.0 release of the Ray open source framework at the Summit and unveil new additions to a growing list of popular third-party machine learning libraries and frameworks on top of Ray.

The Summit will feature keynote presentations, general sessions, and tutorials suited to attendees of varying experience and skill levels with Ray. Attendees will learn the basics of using Ray to scale Python and machine learning applications from machine learning visionaries and experts including:

"It is essential to provide our customers with an enterprise grade platform as they build out intelligent autonomous systems applications," said Mark Hammond, GM Autonomous Systems, Microsoft. "Microsoft Project Bonsai leverages Ray and Azure to provide transparent scaling for both reinforcement learning training and professional simulation workloads, so our customers can focus on the machine teaching needed to build their sophisticated, real world applications. Im happy we will be able to share more on this at the inaugural Anyscale Ray Summit."

To view the full event schedule, please visit: https://events.linuxfoundation.org/ray-summit/program/schedule/

For complimentary registration to Ray Summit, please visit: https://events.linuxfoundation.org/ray-summit/register/

About Anyscale

Anyscale is the future of distributed computing. Founded by the creators of Ray, an open source project from the UC Berkeley RISELab, Anyscale enables developers of all skill levels to easily build applications that run at any scale, from a laptop to a data center. Anyscale empowers organizations to bring AI applications to production faster, reduce development costs, and eliminate the need for in-house expertise to build, deploy and manage these applications. Backed by Andreessen Horowitz, Anyscale is based in Berkeley, CA. http://www.anyscale.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200812005122/en/

Contacts

Media Contact: Allison Stokes, fama PR for Anyscale, anyscale@famapr.com, 617-986-5010

Link:
CORRECTING and REPLACING Anyscale Hosts Inaugural Ray Summit on Scalable Python and Scalable Machine Learning - Yahoo Finance

Machine learning is pivotal to every line of business, every organisation must have an ML strategy – BusinessLine

Swami Sivasubramanian, Vice-President, Amazon Machine Learning, AWS (Amazon Web Services), who leads a global AI/ML team, has built more than 30 AWS services, authored around 40 refereed scientific papers and been awarded over 200 patents. He was also one of the primary authors of a paper titled "Dynamo: Amazon's Highly Available Key-value Store," along with AWS CTO and VP Werner Vogels, which received the ACM Hall of Fame award. In a conversation with BusinessLine from Seattle, Swami said people always assume AI and ML are futuristic technologies, but the fact is AI and ML are already here and happening all around us. Excerpts:

Bengaluru, August 12

The popular use cases for AI/ML are predominantly in logistics, customer experience and e-commerce. What AI/ML use cases are likely to emerge in the post-Covid-19 environment?

We don't have to wait for post-Covid-19; we're seeing this right now. Artificial Intelligence (AI) and Machine Learning (ML) are playing a key role in better understanding and addressing the Covid-19 crisis. In the fight against Covid-19, organisations have been quick to apply their machine learning expertise in several areas, including scaling customer communications, understanding how Covid-19 spreads, and speeding up research and treatment. We're seeing adoption of AI/ML across all industries, verticals and sizes of business. We expect this to not only continue, but accelerate in the future.

Of AWS's 175+ services portfolio, how many are AI/ML services?

We don't break out that number, but what I can tell you is AWS offers the broadest and deepest set of machine learning services and supporting cloud infrastructure, putting machine learning in the hands of every developer, data scientist and expert practitioner.

Then why has AWS not featured in Gartner's Data Science and ML Platforms Magic Quadrant?

Gartner's inclusion criteria explicitly excluded providers who focus primarily on developers. However, the Cloud AI Developer Services Magic Quadrant does cite us as a leader. Also, the recently released Gartner Solution Scorecard, which evaluated our capabilities in the Data Science and Machine Learning space, scored Amazon SageMaker higher than offerings from the other major providers.

Where is India positioned on the AI/ML adoption curve compared to developed economies?

I think India is in a really good place. I remember visiting some of our customers and start-ups in India; there is so much innovation happening there. I happen to believe that transformation comes because, at a ground level, developers start adopting technologies, and India, especially its start-up ecosystem, has been jumping in to adopt machine learning technology in a big way.

For example, machine learning is embedded in every aspect of what Freshworks, a B2B unicorn in India, is doing. In fact, they have built something like 33,000 models, and they are iterating and building ML models using some of our technologies like Amazon SageMaker. They've cut that down from eight weeks to less than one week. redBus, which I'm a big fan of as I travel back and forth between Chennai and Bengaluru, is also using some of our ML technologies, and their productivity has increased. One of the key things we need to be cognizant of is that machine learning technology is not going to get mainstream adoption if people are just using it for extremely leading-edge use cases. It should be used in everyday use cases. I think even in India now, it is starting to get into mainstream use cases in a big and meaningful way. For instance, Dish TV uses AWS Elemental, our video processing service, to process video content and then feeds it into Amazon Rekognition to flag inappropriate content. There are start-ups like CreditVidya, who are building an ML platform on AWS to analyze behavioural data of customers and make better recommendations.

The greater the adoption of AI/ML, the more job losses are likely as organisations fire people to induct skilled talent. Please comment.

One thing is for sure: there is change coming, and technology is driving it. I'm very optimistic about the future. I remember the days when there used to be manual switching of telephones, but then we moved to automated switching. It's not like those jobs went away; all those people re-educated themselves and are actually doing more interesting, more challenging jobs. Lifelong education is going to be critical. At Amazon, my team, for instance, runs Machine Learning University. We train our own engineers and Amazon Associates on various opportunities and expose them to leading-edge technology such as machine learning. Now, we are actually making this available for free as part of the AWS Training and Certification programs. In November 2018 we made it free, and within the first 48 hours more than one lakh people registered to learn. So, there is a huge appetite for it. In 2012, we decided every organisation within Amazon had to have a machine learning strategy, even when machine learning was not actually considered cool. Jeff and the leadership team said machine learning is going to be such a pivotal thing for every line of business, irrespective of whether they run cloud computing or supply chain or financial technology, and we required every business group, in their yearly planning, to include how they were going to leverage machine learning in their business. And "no, we do not plan to" was not considered an acceptable answer.

What AI/ML tools does AWS offer, and for whom?

The vast majority of ML being done in the cloud today is on AWS. With an extensive portfolio of services at all three layers of the technology stack, more customers reference using AWS for machine learning than any other provider. AWS released more than 250 machine learning features and capabilities in 2019, with tens of thousands of customers using the services, spurred by the broad adoption of Amazon SageMaker since AWS re:Invent 2017. Our customers include the American Heart Association, Cathay Pacific, Dow Jones, Expedia.com, Formula 1, GE Healthcare, the UK's National Health Service, NASA JPL, Slack, Tinder, Twilio, the United Nations, the World Bank, Ryanair, and Samsung, among others.

Our AI/ML services are meant for several audiences. For advanced developers and scientists who are comfortable building, tuning, training, deploying, and managing models themselves, AWS offers P2 and P3 instances at the bottom of the stack, which provide up to six times better performance than any other GPU instances available in the cloud today, together with AWS's deep learning AMI (Amazon Machine Image) that embeds all the major frameworks. And, unlike other providers who try to funnel everybody into using only one framework, AWS supports all the major frameworks, because different frameworks are better for different types of workloads.

At the middle layer of the stack, organisations that want to use machine learning in an expansive way can leverage Amazon SageMaker, a fully managed service that removes the heavy lifting, complexity, and guesswork from each step of the ML process, empowering everyday developers and scientists to successfully use ML. SageMaker is a sea change for everyday developers being able to access and build machine learning models. It's kind of incredible, in just a few months, how many thousands of developers started building machine learning models on top of AWS with SageMaker.

At the top layer of the stack, AWS provides solutions such as Amazon Rekognition for deep-learning-based video and image analysis, Amazon Polly for converting text to speech, Amazon Lex for building conversations, Amazon Transcribe for converting speech to text, Amazon Translate for translating text between languages, and Amazon Comprehend for understanding relationships and finding insights within text. Along with this broad range of services and devices, customers are working alongside Amazon's expert data scientists in the Amazon Machine Learning Solutions Lab to implement real-life use cases. We have a pretty giant investment in all layers of the machine learning stack, and we believe that most companies, over time, will use multiple layers of that stack and have applications that are infused with ML.
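
As a concrete illustration of those top-layer services, here is a hedged sketch using boto3; the calls shown (detect_sentiment, translate_text) are the standard boto3 APIs for Comprehend and Translate, while the region, credentials, and sample text are assumptions.

```python
# Hedged sketch (assumes configured AWS credentials and `pip install boto3`):
# calling two of the top-layer AI services with their standard boto3 APIs.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
translate = boto3.client("translate", region_name="us-east-1")

text = "Machine learning is becoming mainstream."

# Amazon Comprehend: sentiment of a piece of text.
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"])

# Amazon Translate: translate the same text to Hindi.
result = translate.translate_text(
    Text=text, SourceLanguageCode="en", TargetLanguageCode="hi"
)
print(result["TranslatedText"])
```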

Why would customers opt for AWS's AI/ML services versus competitor offerings from Microsoft and Google?

At Amazon, we always approach everything we do by focusing on our customers. We have thousands of engineers at Amazon committed to ML and deep learning, and it's a big part of our heritage. Within AWS, we've been focused on bringing that knowledge and capability to our customers by putting ML into the hands of every developer and data scientist. But we do take a different approach to ML than others may; we know that the only constant within the history of ML is change. That's why we will always provide a great solution for all the frameworks and choices that people want to make, by providing all of the major solutions so that developers have the right tool for the right job. And our customers are responding! Today, the vast majority of ML and deep learning in the cloud is running on AWS, with meaningfully more customer references for machine learning than any other provider. In fact, 85 per cent of TensorFlow run in the cloud is run on AWS.

See original here:
Machine learning is pivotal to every line of business, every organisation must have an ML strategy - BusinessLine

Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts. – DocWire…

This article was originally published here

Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts.

PLoS Med. 2020 Jun;17(6):e1003149

Authors: Atabaki-Pasdar N, Ohlsson M, Viñuela A, Frau F, Pomares-Millan H, Haid M, Jones AG, Thomas EL, Koivula RW, Kurbasic A, Mutie PM, Fitipaldi H, Fernandez J, Dawed AY, Giordano GN, Forgie IM, McDonald TJ, Rutters F, Cederberg H, Chabanova E, Dale M, De Masi F, Thomas CE, Allin KH, Hansen TH, Heggie A, Hong MG, Elders PJM, Kennedy G, Kokkola T, Pedersen HK, Mahajan A, McEvoy D, Pattou F, Raverdy V, Häussler RS, Sharma S, Thomsen HS, Vangipurapu J, Vestergaard H, 't Hart LM, Adamski J, Musholt PB, Brage S, Brunak S, Dermitzakis E, Frost G, Hansen T, Laakso M, Pedersen O, Ridderstråle M, Ruetten H, Hattersley AT, Walker M, Beulens JWJ, Mari A, Schwenk JM, Gupta R, McCarthy MI, Pearson ER, Bell JD, Pavo I, Franks PW

Abstract

BACKGROUND: Non-alcoholic fatty liver disease (NAFLD) is highly prevalent and causes serious health complications in individuals with and without type 2 diabetes (T2D). Early diagnosis of NAFLD is important, as this can help prevent irreversible damage to the liver and, ultimately, hepatocellular carcinomas. We sought to expand etiological understanding and develop a diagnostic tool for NAFLD using machine learning.

METHODS AND FINDINGS: We utilized the baseline data from IMI DIRECT, a multicenter prospective cohort study of 3,029 European-ancestry adults recently diagnosed with T2D (n = 795) or at high risk of developing the disease (n = 2,234). Multi-omics (genetic, transcriptomic, proteomic, and metabolomic) and clinical (liver enzymes and other serological biomarkers, anthropometry, measures of beta-cell function, insulin sensitivity, and lifestyle) data comprised the key input variables. The models were trained on MRI-image-derived liver fat content (<5% or ≥5%) available for 1,514 participants. We applied LASSO (least absolute shrinkage and selection operator) to select features from the different layers of omics data and random forest analysis to develop the models. The prediction models included clinical and omics variables separately or in combination. A model including all omics and clinical variables yielded a cross-validated receiver operating characteristic area under the curve (ROCAUC) of 0.84 (95% CI 0.82, 0.86; p < 0.001), which compared with a ROCAUC of 0.82 (95% CI 0.81, 0.83; p < 0.001) for a model including 9 clinically accessible variables. The IMI DIRECT prediction models outperformed existing noninvasive NAFLD prediction tools. One limitation is that these analyses were performed in adults of European ancestry residing in northern Europe, and it is unknown how well these findings will translate to people of other ancestries and exposed to environmental risk factors that differ from those of the present cohort. Another key limitation of this study is that the prediction was done on a binary outcome of liver fat quantity (<5% or ≥5%) rather than a continuous one.

CONCLUSIONS: In this study, we developed several models with different combinations of clinical and omics data and identified biological features that appear to be associated with liver fat accumulation. In general, the clinical variables showed better prediction ability than the complex omics variables. However, the combination of omics and clinical variables yielded the highest accuracy. We have incorporated the developed clinical models into a web interface (see: https://www.predictliverfat.org/) and made it available to the community.

TRIAL REGISTRATION: ClinicalTrials.gov NCT03814915.

PMID: 32559194 [PubMed as supplied by publisher]
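
As a rough illustration of the pipeline the abstract describes (LASSO-style feature selection followed by a random forest, scored by cross-validated ROC AUC), here is a hedged sklearn sketch on synthetic data; it is not the authors' code, and the generated dataset merely stands in for the multi-omics inputs.

```python
# Illustrative sketch, not the authors' code: L1 (LASSO-style) feature
# selection feeding a random forest, scored by cross-validated ROC AUC.
# Synthetic data stands in for the IMI DIRECT multi-omics inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# y mimics the binary liver-fat label (<5% vs >=5%).
X, y = make_classification(n_samples=1500, n_features=200,
                           n_informative=15, random_state=0)

model = Pipeline([
    ("select", SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
    ("forest", RandomForestClassifier(n_estimators=500, random_state=0)),
])

aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated ROC AUC: {aucs.mean():.2f}")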

Read more from the original source:
Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts. - DocWire...

What a machine learning tool that turns Obama white can (and can't) tell us about AI bias – The Verge

It's a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man.

It's not just Obama, either. Get the same algorithm to generate high-resolution images of actress Lucy Liu or congresswoman Alexandria Ocasio-Cortez from low-resolution inputs, and the resulting faces look distinctly white. As one popular tweet quoting the Obama example put it: "This image speaks volumes about the dangers of bias in AI."

But what's causing these outputs, and what do they really tell us about AI bias?

First, we need to know a little bit about the technology being used here. The program generating these images is an algorithm called PULSE, which uses a technique known as upscaling to process visual data. Upscaling is like the "zoom and enhance" tropes you see in TV and film but, unlike in Hollywood, real software can't just generate new data from nothing. In order to turn a low-resolution image into a high-resolution one, the software has to fill in the blanks using machine learning.

In the case of PULSE, the algorithm doing this work is StyleGAN, which was created by researchers from NVIDIA. Although you might not have heard of StyleGAN before, you're probably familiar with its work. It's the algorithm responsible for making those eerily realistic human faces that you can see on websites like ThisPersonDoesNotExist.com; faces so realistic they're often used to generate fake social media profiles.

What PULSE does is use StyleGAN to imagine the high-res version of pixelated inputs. It does this not by enhancing the original low-res image, but by generating a completely new high-res face that, when pixelated, looks the same as the one inputted by the user.

This means each depixelated image can be upscaled in a variety of ways, the same way a single set of ingredients makes different dishes. It's also why you can use PULSE to see what Doom guy, or the hero of Wolfenstein 3D, or even the crying emoji look like at high resolution. It's not that the algorithm is finding new detail in the image as in the "zoom and enhance" trope; it's instead inventing new faces that revert to the input data.
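
A conceptual sketch may help here: the core idea is to optimize a latent code so that the generator's output, once downscaled, matches the low-res input. In the snippet below, `generator` is a placeholder for a pretrained StyleGAN, and the loss and optimizer choices are illustrative assumptions, not PULSE's exact procedure.

```python
# Conceptual sketch of search-based upscaling, not PULSE's actual code.
import torch
import torch.nn.functional as F

def downscale(img, size=32):
    # Stand-in for the fixed downscaling operator applied to candidates.
    return F.interpolate(img, size=(size, size), mode="bilinear",
                         align_corners=False)

def latent_search(generator, low_res, steps=500, lr=0.1):
    z = torch.randn(1, 512, requires_grad=True)  # latent code to optimize
    opt = torch.optim.Adam([z], lr=lr)           # only z is updated
    for _ in range(steps):
        opt.zero_grad()
        candidate = generator(z)                 # brand-new high-res face
        loss = F.mse_loss(downscale(candidate), low_res)
        loss.backward()
        opt.step()
    return generator(z).detach()
```

Because many latent codes downscale to the same pixel pattern, the search can land on very different faces, which is exactly the degree of freedom the article goes on to discuss.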

This sort of work has been theoretically possible for a few years now but, as is often the case in the AI world, it reached a larger audience when an easy-to-run version of the code was shared online this weekend. That's when the racial disparities started to leap out.

PULSE's creators say the trend is clear: when using the algorithm to scale up pixelated images, the algorithm more often generates faces with Caucasian features.

"It does appear that PULSE is producing white faces much more frequently than faces of people of color," wrote the algorithm's creators on Github. "This bias is likely inherited from the dataset StyleGAN was trained on [...] though there could be other factors that we are unaware of."

In other words, because of the data StyleGAN was trained on, when it's trying to come up with a face that looks like the pixelated input image, it defaults to white features.

This problem is extremely common in machine learning, and it's one of the reasons facial recognition algorithms perform worse on non-white and female faces. Data used to train AI is often skewed toward a single demographic, white men, and when a program sees data not in that demographic it performs poorly. Not coincidentally, it's white men who dominate AI research.

But exactly what the Obama example reveals about bias, and how the problems it represents might be fixed, are complicated questions. Indeed, they're so complicated that this single image has sparked heated disagreement among AI academics, engineers, and researchers.

On a technical level, some experts aren't sure this is even an example of dataset bias. The AI artist Mario Klingemann suggests that the PULSE selection algorithm itself, rather than the data, is to blame. Klingemann notes that he was able to use StyleGAN to generate more non-white outputs from the same pixelated Obama image, as shown below:

"These faces were generated using the same concept and the same StyleGAN model but different search methods to Pulse," says Klingemann, who says we can't really judge an algorithm from just a few samples. "There are probably millions of possible faces that will all reduce to the same pixel pattern and all of them are equally correct," he told The Verge.

(Incidentally, this is also the reason why tools like this are unlikely to be of use for surveillance purposes. The faces created by these processes are imaginary and, as the above examples show, have little relation to the ground truth of the input. However, it's not like huge technical flaws have stopped police from adopting technology in the past.)

But regardless of the cause, the outputs of the algorithm seem biased, something that the researchers didn't notice before the tool became widely accessible. This speaks to a different and more pervasive sort of bias: one that operates on a social level.

Deborah Raji, a researcher in AI accountability, tells The Verge that this sort of bias is all too typical in the AI world. "Given the basic existence of people of color, the negligence of not testing for this situation is astounding, and likely reflects the lack of diversity we continue to see with respect to who gets to build such systems," says Raji. "People of color are not outliers. We're not edge cases authors can just forget."

The fact that some researchers seem keen to only address the data side of the bias problem is what sparked larger arguments about the Obama image. Facebook's chief AI scientist Yann LeCun became a flashpoint for these conversations after tweeting a response to the image saying that "ML systems are biased when data is biased," and adding that this sort of bias is a far more serious problem "in a deployed product than in an academic paper." The implication being: let's not worry too much about this particular example.

Many researchers, Raji among them, took issue with LeCun's framing, pointing out that bias in AI is affected by wider social injustices and prejudices, and that simply using "correct" data does not deal with the larger injustices.

Others noted that even from the point of view of a purely technical fix, "fair" datasets can often be anything but. For example, a dataset of faces that accurately reflected the demographics of the UK would be predominantly white because the UK is predominantly white. An algorithm trained on this data would perform better on white faces than non-white faces. In other words, "fair" datasets can still create biased systems. (In a later thread on Twitter, LeCun acknowledged there were multiple causes for AI bias.)

Raji tells The Verge she was also surprised by LeCun's suggestion that researchers should worry about bias less than engineers producing commercial systems, and that this reflected a lack of awareness at the very highest levels of the industry.

"Yann LeCun leads an industry lab known for working on many applied research problems that they regularly seek to productize," says Raji. "I literally cannot understand how someone in that position doesn't acknowledge the role that research has in setting up norms for engineering deployments."

When contacted by The Verge about these comments, LeCun noted that he'd helped set up a number of groups, inside and outside of Facebook, that focus on AI fairness and safety, including the Partnership on AI. "I absolutely never, ever said or even hinted at the fact that research does not play a role in setting up norms," he told The Verge.

Many commercial AI systems, though, are built directly from research data and algorithms without any adjustment for racial or gender disparities. Failing to address the problem of bias at the research stage just perpetuates existing problems.

In this sense, then, the value of the Obama image isn't that it exposes a single flaw in a single algorithm; it's that it communicates, at an intuitive level, the pervasive nature of AI bias. What it hides, however, is that the problem of bias goes far deeper than any dataset or algorithm. It's a pervasive issue that requires much more than technical fixes.

As one researcher, Vidushi Marda, responded on Twitter to the white faces produced by the algorithm: "In case it needed to be said explicitly - This isn't a call for diversity in datasets or improved accuracy in performance - it's a call for a fundamental reconsideration of the institutions and individuals that design, develop, deploy this tech in the first place."

Update, Wednesday, June 24: This piece has been updated to include additional comment from Yann LeCun.

Follow this link:
What a machine learning tool that turns Obama white can (and can't) tell us about AI bias - The Verge

Researchers use machine learning to build COVID-19 predictions – Binghamton University

By Chris Kocher

June 16, 2020

As parts of the U.S. tentatively reopen amid the COVID-19 pandemic, the nation's long-term health continues to depend on tracking the virus and predicting where it might surge next.

Finding the right computer models can be tricky, but two researchers at Binghamton University's Thomas J. Watson School of Engineering and Applied Science believe they have an innovative way to solve those problems, and they are sharing their work online.

Using data collected from around the world by Johns Hopkins University, Arti Ramesh and Anand Seetharam, both assistant professors in the Department of Computer Science, have built several prediction models that take advantage of artificial intelligence. Assisting the research is PhD student Raushan Raj.

Machine learning allows the algorithms to learn and improve without being explicitly programmed. The models examine trends and patterns from the 50 countries where coronavirus infection rates are highest, including the U.S., and can often predict within a 10% margin of error what will happen for the next three days based on the data for the past 14 days.
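
To make that windowed setup concrete, here is an illustrative sketch: the features are the previous 14 days of counts and the targets are the next 3 days. The synthetic series and plain linear model are stand-ins for illustration; the paper itself uses ensemble regression models.

```python
# Illustrative sketch of 14-day-lookback, 3-day-ahead forecasting.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import MultiOutputRegressor

cases = np.cumsum(np.random.poisson(200, 120)).astype(float)  # fake cumulative counts

LOOKBACK, HORIZON = 14, 3
n = len(cases) - LOOKBACK - HORIZON
X = np.array([cases[i:i + LOOKBACK] for i in range(n)])
y = np.array([cases[i + LOOKBACK:i + LOOKBACK + HORIZON] for i in range(n)])

model = MultiOutputRegressor(LinearRegression()).fit(X[:-1], y[:-1])
print(model.predict(X[-1:]))  # 3-day-ahead forecast from the latest window
```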

"We believe that the past data encodes all of the necessary information," Seetharam said. "These infections have spread because of measures that have been implemented or not implemented, and also because of how some people have been adhering to restrictions or not. Different countries around the world have different levels of restrictions and socio-economic status."

For their initial study, Ramesh and Seetharam inputted global infection numbers through April 30, which allowed them to see how their predictions played out through May.

Certain anomalies can lead to difficulties. For instance, data from China was not included because of concerns about government transparency regarding COVID-19. Also, with health resources often taxed to the limit, tracking the virus spread sometimes wasn't the priority.

"We have seen in many countries that they have counted the infections but not attributed them to the day they were identified," Ramesh said. "They will add them all on one day, and suddenly there's a shift in the data that our model is not able to predict."

Although infection rates are declining in many parts of the U.S., they are rising in other countries, and U.S. health officials fear a second wave of COVID-19 when people tired of the lockdown fail to follow safety guidelines such as wearing face masks.

"The main utility of this study is to prepare hospitals and healthcare workers with proper equipment," Seetharam said. "If they know that the next three days are going to see a surge and the beds at their hospitals are all filled up, they'll need to construct temporary beds and things like that."

As the coronavirus sweeps around the world, Ramesh and Seetharam continue to gather data so that their models can become more accurate. Other researchers or healthcare officials who want to utilize their models can find them posted online.

"Each data point is a day, and if it stretches longer, it will produce more interesting patterns in the data," Ramesh said. "Then we will use more complex models, because they need more complex data patterns. Right now, those don't exist, so we're using simpler models, which are also easier to run and understand."

Ramesh and Seetharam's paper is called "Ensemble Regression Models for Short-term Prediction of Confirmed COVID-19 Cases."

Earlier this year, they launched a different tracking project, gathering data from Twitter to determine how Americans dealt with the early days of the COVID-19 pandemic.

Read more:
Researchers use machine learning to build COVID-19 predictions - Binghamton University

Machine Learning As A Service In Manufacturing Market Impact Of Covid-19 And Benchmarking – Cole of Duty

Market Overview

Machine learning has become a disruptive trend in the technology industry with computers learning to accomplish tasks without being explicitly programmed. The manufacturing industry is relatively new to the concept of machine learning. Machine learning is well aligned to deal with the complexities of the manufacturing industry.

Request For Report Sample: https://www.trendsmarketresearch.com/report/sample/9906

Manufacturers can improve their product quality, ensure supply chain efficiency, reduce time to market, fulfil reliability standards, and thus, enhance their customer base through the application of machine learning. Machine learning algorithms offer predictive insights at every stage of the production, which can ensure efficiency and accuracy. Problems that earlier took months to be addressed are now being resolved quickly.

The predictive failure of equipment is the biggest use case of machine learning in manufacturing. The predictions can be used to schedule predictive maintenance by service technicians. Certain algorithms can even predict the type of failure that may occur, so that the technician can bring the correct replacement parts and tools for the job. A hedged sketch of this pattern follows.
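
The sketch below is illustrative only (invented feature names and data, not any vendor's product): a multi-class model maps sensor readings to a predicted failure type.

```python
# Hedged sketch of failure-type prediction from sensor features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))     # e.g. vibration, temperature, current, pressure
y = rng.integers(0, 3, size=2000)  # 0 = healthy, 1 = bearing wear, 2 = overheating

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
# In production, predicted classes 1 or 2 would trigger a maintenance ticket
# listing the replacement parts and tools for that failure type.
```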

Market Analysis

According to Infoholic Research, the Machine Learning as a Service (MLaaS) Market will witness a CAGR of 49% during the forecast period 2017-2023. The market is propelled by growth drivers such as the increased application of advanced analytics in manufacturing, the high volume of structured and unstructured data, the integration of machine learning with big data and other technologies, and the rising importance of predictive and preventive maintenance. Market growth is curbed to a certain extent by restraining factors such as implementation challenges, the dearth of skilled data scientists, and data inaccessibility and security concerns, to name a few.

Segmentation by Components

The market has been analyzed and segmented by the following components: Software Tools, Cloud and Web-based Application Programming Interfaces (APIs), and Others.

Get Complete TOC with Tables and Figures: https://www.trendsmarketresearch.com/report/discount/9906

Segmentation by End-users

The market has been analyzed and segmented by the following end-users: process industries and discrete industries. The application of machine learning is much higher in discrete industries than in process industries.

Segmentation by Deployment Mode

The market has been analyzed and segmented by the following deployment modes: public and private.

Regional Analysis

The market has been analyzed across the following regions: Americas, Europe, APAC, and MEA. The Americas holds the largest market share, followed by Europe and APAC. The Americas is experiencing a high adoption rate of machine learning in manufacturing processes, and demand for enterprise mobility and cloud-based solutions is high there. The manufacturing sector is a major contributor to the GDP of the European countries and is witnessing AI-driven transformation. China's dominant manufacturing industry is extensively applying machine learning techniques, and China, India, Japan, and South Korea are investing significantly in AI and machine learning. MEA is also following a high growth trajectory.

Vendor Analysis

Some of the key players in the market are Microsoft, Amazon Web Services, Google, Inc., and IBM Corporation. The report also includes watchlist companies such as BigML Inc., Sight Machine, Eigen Innovations Inc., Seldon Technologies Ltd., and Citrine Informatics Inc.

Get COVID-19 Report Analysis: https://www.trendsmarketresearch.com/report/covid-19-analysis/9906

Benefits

The study covers and analyzes the global MLaaS market in the manufacturing context. Bringing out the complete key insights of the industry, the report aims to provide an opportunity for players to understand the latest trends, current market scenario, government initiatives, and technologies related to the market. In addition, it helps venture capitalists understand the companies better and make informed decisions.

Read the rest here:
Machine Learning As A Service In Manufacturing Market Impact Of Covid-19 And Benchmarking - Cole of Duty

Cloud Machine Learning Market 2019 Break Down by Top Companies, Countries, Applications, Challenges, Opportunities and Forecast 2026 – Cole of Duty

A new market report by Market Research Intellect on the Cloud Machine Learning Market has been released with reliable information and accurate forecasts for a better understanding of the current and future market scenarios. The report offers an in-depth analysis of the global market, including qualitative and quantitative insights, historical data, and estimated projections about the market size and share in the forecast period. The forecasts mentioned in the report have been acquired by using proven research assumptions and methodologies. Hence, this research study serves as an important repository of information for every market landscape. The report is segmented on the basis of types, end-users, applications, and regional markets.

The research study includes the latest updates about the COVID-19 impact on the Cloud Machine Learning sector. The outbreak has broadly influenced the global economic landscape. The report contains a complete breakdown of the current situation in the ever-evolving business sector and estimates the aftereffects of the outbreak on the overall economy.

Get Sample Copy with TOC of the Report to understand the structure of the complete report @ https://www.marketresearchintellect.com/download-sample/?rid=194333&utm_source=COD&utm_medium=888

The report also emphasizes the initiatives undertaken by the companies operating in the market including product innovation, product launches, and technological development to help their organization offer more effective products in the market. It also studies notable business events, including corporate deals, mergers and acquisitions, joint ventures, partnerships, product launches, and brand promotions.

Leading Cloud Machine Learning manufacturers/companies operating at both regional and global levels:

Sales and revenue broken down by Product:

Sales and revenue broken down by Applications:

The report also inspects the financial standing of the leading companies, which includes gross profit, revenue generation, sales volume, sales revenue, manufacturing cost, individual growth rate, and other financial ratios.

The report also focuses on the global industry trends, development patterns of industries, governing factors, growth rate, and competitive analysis of the market, growth opportunities, challenges, investment strategies, and forecasts till 2026. The Cloud Machine Learning Market was estimated at USD XX Million/Billion in 2016 and is estimated to reach USD XX Million/Billion by 2026, expanding at a rate of XX% over the forecast period. To calculate the market size, the report provides a thorough analysis of the market by accumulating, studying, and synthesizing primary and secondary data from multiple sources.

To get Incredible Discounts on this Premium Report, Click Here @ https://www.marketresearchintellect.com/ask-for-discount/?rid=194333&utm_source=COD&utm_medium=888

The market is predicted to witness significant growth over the forecast period, owing to the growing consumer awareness about the benefits of Cloud Machine Learning. The increase in disposable income across the key geographies has also impacted the market positively. Moreover, factors like urbanization, high population growth, and a growing middle-class population with higher disposable income are also forecasted to drive market growth.

According to the research report, one of the key challenges that might hinder market growth is the presence of counterfeit products. The market is witnessing the entry of a surging number of alternative products that use inferior ingredients.

Key factors influencing market growth:

Reasons for purchasing this Report from Market Research Intellect

Customized Research Report Using Corporate Email Id @ https://www.marketresearchintellect.com/need-customization/?rid=194333&utm_source=COD&utm_medium=888

Customization of the Report:

Market Research Intellect also provides customization options to tailor the reports as per client requirements. This report can be personalized to cater to your research needs. Feel free to get in touch with our sales team, who will ensure that you get a report as per your needs.

Thank you for reading this article. You can also get chapter-wise sections or region-wise report coverage for North America, Europe, Asia Pacific, Latin America, and Middle East & Africa.

To summarize, the Cloud Machine Learning market report studies the contemporary market to forecast the growth prospects, challenges, opportunities, risks, threats, and the trends observed in the market that can either propel or curtail the growth rate of the industry. The market factors impacting the global sector also include provincial trade policies, international trade disputes, entry barriers, and other regulatory restrictions.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage and more. These reports deliver an in-depth study of the market with industry analysis, market value for regions and countries and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes

Market Research Intellect

New Jersey ( USA )

Tel: +1-650-781-4080

View original post here:
Cloud Machine Learning Market 2019 Break Down by Top Companies, Countries, Applications, Challenges, Opportunities and Forecast 2026 - Cole of Duty

COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market Research Methodology: Business Plans, Inventive Technology, Growth…

The global COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market report is based on comprehensive analysis conducted by experienced and professional experts. The report mentions factors that are influencing growth, such as drivers and restraints of the market. The report offers an in-depth analysis of trends and opportunities in the COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market, and it offers estimations and predictions for upcoming years on the basis of recent developments and historic data. For gathering information and estimating revenue for all segments, researchers have used a top-down and bottom-up approach. On the basis of data collected from primary and secondary research and trusted data sources, the report offers future predictions of revenue and market share.

The Leading Market Players Covered in this Report are: AIBrain, Amazon, Anki, CloudMinds, Deepmind, Google, Facebook, IBM, Iris AI, Apple, Luminoso, Qualcomm.

For Better Understanding, Download FREE Sample Copy of COVID-19 Impact on Global Artificial Intelligence and Machine Learning Report in Just One Single Step @ https://www.researchmoz.us/enquiry.php?type=S&repid2691666

Key Questions Answered in This Report:

Impact of Covid-19 on the COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market: The utility-owned segment is mainly being driven by increasing financial incentives and regulatory support from governments globally. The current utility-owned projects are affected primarily by the COVID-19 pandemic. Most of the projects in China, the US, Germany, and South Korea are delayed, and the companies are facing short-term operational issues due to supply chain constraints and lack of site access amid the COVID-19 outbreak. Asia-Pacific is anticipated to be highly affected by the spread of COVID-19 due to the effect of the pandemic in China, Japan, and India. China is the epicenter of this lethal disease and a major country in terms of the chemical industry.

Key Business Segmentation of the COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market: On the basis of end users/applications, this report focuses on the status and outlook for major applications/end users, sales volume, market share, and growth rate for each application, including-

On the basis of product, this report displays the sales volume, revenue (Million USD), product price, market share, and growth rate of each type, primarily split into-

COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market Regional Analysis Includes:
Asia-Pacific (Vietnam, China, Malaysia, Japan, Philippines, Korea, Thailand, India, Indonesia, and Australia)
Europe (Turkey, Germany, Russia, UK, Italy, France, etc.)
North America (the United States, Mexico, and Canada)
South America (Brazil, etc.)
The Middle East and Africa (GCC Countries and Egypt)

Key Insights that the Study is going to provide:
The 360-degree COVID-19 Impact on Global Artificial Intelligence and Machine Learning market overview based on a global and regional level
Market Share & Sales Revenue by Key Players & Emerging Regional Players
Competitors: in this section, various industry-leading players are studied with respect to their company profile, product portfolio, capacity, price, cost, and revenue
A separate chapter on market Entropy to gain insights on leaders' aggressiveness towards the market [Merger & Acquisition / Recent Investment and Key Developments]
Patent Analysis: number of patents/trademarks filed in recent years

Grab Maximum Discount on COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market Research Report [Single User | Multi User | Corporate Users] @ https://www.researchmoz.us/enquiry.php?type=E&repid2691666

Table of Contents: Global COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market Size, Status and Forecast 2026
1. Report Overview
2. Market Analysis by Types
3. Product Application Market
4. Manufacturers Profiles/Analysis
5. Market Performance for Manufacturers
6. Regions Market Performance for Manufacturers
7. Global COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market Performance (Sales Point)
8. Development Trend for Regions (Sales Point)
9. Upstream Source, Technology and Cost
10. Channel Analysis
11. Consumer Analysis
12. Market Forecast 2020-2026
13. Conclusion

For More Information Kindly Contact:
ResearchMoz
Mr. Rohit Bhisey
90 State Street, Albany NY, United States 12207
Tel: +1-518-621-2074
USA-Canada Toll Free: 866-997-4948
Email: [emailprotected]
Media Release: https://www.researchmoz.us/pressrelease
Follow me on Blogger: https://trendingrelease.blogspot.com/

Original post:
COVID-19 Impact on Global Artificial Intelligence and Machine Learning Market Research Methodology: Business Plans, Inventive Technology, Growth...

Neuromorphic Computing Drives The Landscape Of Emerging Memories For Artificial Intelligence SoCs – SemiEngineering

New techniques based on intensive computing and massive amounts of distributed memory.

The pace of deep machine learning and artificial intelligence (AI) is changing the world of computing at all levels of hardware architecture, software, chip manufacturing, and system packaging. Two major developments have opened the doors to implementing new techniques in machine learning. First, vast amounts of data, i.e., Big Data, are available for systems to process. Second, advanced GPU architectures now support distributed computing parallelization. With these two developments, designers can take advantage of new techniques that rely on intensive computing and massive amounts of distributed memory to offer new, powerful compute capabilities.

Neuromorphic computing-based machine learning utilizes techniques such as Spiking Neural Networks (SNN), Deep Neural Networks (DNN) and Restricted Boltzmann Machines (RBM). Combined with Big Data, Big Compute is utilizing statistically based High-Dimensional Computing (HDC), which operates on patterns and supports reasoning built on associative memory and continuous learning, mimicking human memory learning and retention sequences.
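
To give a flavor of HDC's pattern-based, associative style, here is a toy sketch (a textbook illustration, not any of the hardware implementations discussed here): items are random bipolar hypervectors, pairs are bound by elementwise multiplication, bundled by addition, and recalled by similarity search.

```python
# Toy sketch of high-dimensional computing (HDC) with an associative memory.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)
item = {name: rng.choice([-1, 1], D) for name in ["red", "green", "apple", "leaf"]}

# Store two bound attribute pairs in one memory vector.
memory = item["red"] * item["apple"] + item["green"] * item["leaf"]

# Query: what was bound to "red"? Unbind, then find the most similar item.
query = memory * item["red"]
best = max(item, key=lambda name: np.dot(query, item[name]) / D)
print(best)  # -> "apple"
```

Because hypervectors are nearly orthogonal in high dimensions, the unbinding step recovers the stored association despite the noise from the other bundled pair, which is the robustness property that makes HDC attractive for in-memory hardware.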

Emerging memories include compute-in-memory SRAMs (CIM), STT-MRAMs, SOT-MRAMs, ReRAMs, CB-RAMs, and PCMs. The development of each type is simultaneously trying to enable a transformation in computation for AI. Together, they are advancing the scale of computational capabilities, energy efficiency, density, and cost.

To read more, click here.

See the original post:
Neuromorphic Computing Drives The Landscape Of Emerging Memories For Artificial Intelligence SoCs - SemiEngineering

InterDigital, Blacknut, and Nvidia unveil world's first Cloud gaming solution with AI-enabled user interface – TelecomTV

WILMINGTON, Del., June 03, 2020 (GLOBE NEWSWIRE) -- InterDigital, Inc. (NASDAQ:IDCC), a mobile and video technology research and development company, today introduced the world's first cloud gaming solution with an AI and machine learning-enabled user interface, presented in collaborative partnership with cloud gaming trailblazer Blacknut and in cooperation with GPU pioneer Nvidia. The tripartite collaboration represents the first time that an AI and machine learning-driven user interface is utilized, wearable-free, with a live cloud gaming solution. The technology demonstrates the incredible potential of integrating localized and far-Edge enabled AI capabilities into home gaming experiences.

The AI and machine learning-enabled user interface is connected to a cloud gaming solution that operates without joysticks or wearable accessories. The demonstration leverages unique technologies, including real-time video analysis on home and local edge devices, dynamic adaptation to available compute resources, and shared AI models managed through an in-home AI hub, to implement a cutting-edge gaming experience.

In the demonstration, users play a first-person view snowboarding game streamed by Blacknut and displayed on a commercial television. Users do not require a joystick or handheld controller to play the game; instead, their movements and interactions are tracked by AI processing of the live video capture of the user's movements. The user's presence is detected using an AI model, and his or her body movements are matched with the snowboarder in the game, in real time, using InterDigital's low latency Edge AI running on a local AI accelerator. The groundbreaking demo addresses the challenges of ensuring the lowest possible end-to-end latency from gesture capture to game action, while accelerating inference of concurrent AI models serving multiple applications to deliver an interactive and more seamless gaming experience. This demonstration enables AI and machine learning tasks to be completed locally, revolutionizing our current implementation of cloud gaming solutions.

"We are so proud of the work of this demonstration, as it displays the real potential of AI and edge computing, highlights the power of industry collaboration, and helps blaze a trail for new cloud gaming capabilities. Of course, such a success would not have been possible without the utmost commitment of all the teams from InterDigital, Blacknut, and Nvidia, and I would like to take the opportunity to credit and thank their outstanding work," said Laurent Depersin, Director of the Home Experience Lab at InterDigital.

The far-Edge AI and machine learning technologies put forth by InterDigital bring a plethora of new capabilities to the cloud gaming experience. Far-Edge AI enables low-latency analysis to deliver an interactive and entertaining experience, reduces cloud computing costs by leveraging available computing resources, and saves significant bandwidth by prioritizing up-linking. In addition, far-Edge AI in edge cloud architecture offers an important solution for privacy concerns by localizing computing and supports a variety of new and emerging vertical applications beyond gaming, including smart home and security, remote healthcare, and robotics.

Cloud gaming with far-Edge AI leverages artificial intelligence and localized Edge computing to showcase the ways an interactive television or gaming experience can be enhanced by the localized AI analysis of a cameras video stream. Ongoing research in the real-time processing of user generated data will drive new innovations and vertical applications in the home, from cloud gaming to remote medical care, and those innovations will be enhanced by the ability to execute artificial intelligence models under low latency conditions.

"Blacknut's mission is to bring to our customers unlimited hours of gaming fun in the simplest manner," said Pascal Manchon, CTO at Blacknut. "Our unique cloud gaming solution frees games from dedicated consoles or hardware. Using AI and machine learning to transform the human body itself into a full-fledged game controller was challenging, but Blacknut's close collaboration with InterDigital and Nvidia led to outstanding performance. And yes, it is addictive and fun to play this way!"

Cloud gaming is an exciting industry use case that leverages innovations in network architecture, video streaming and content delivery to shape the future of interactive gaming and entertainment. This world's first cloud gaming solution, and the broader exploration of AI-enabled cloud solutions, would not be possible without a commitment to collaboration with industry leaders and partners.

See original here:
InterDigital, Blacknut, and Nvidia unveil worlds first Cloud gaming solution with AI-enabled user interface - TelecomTV

Machine Learning Takes UWB Localization to the Next Level – Eetasia.com

Article By : Nitin Dahad

Imec uses machine learning algorithms in chip design to achieve cm accuracy and low-power ultra-wideband (UWB) localization...

Imec this week said it has developed next-generation ultra-wideband (UWB) technology that uses digital RF and machine learning to achieve a ranging accuracy of less than 10cm in challenging environments while consuming 10 times less power than today's implementations.

The research and innovation hub announced two new innovations from its secure proximity research program for secure and very high accuracy ranging technology. One is hardware-based: a digital-style RF circuit design, such as its all-digital phase locked loop (PLL), that achieves a low power consumption of less than 4mW/20mW (Tx/Rx), which it claims is up to 10 times better than today's implementations. The second is software-based: enhancements that utilize machine learning based error correction algorithms to allow less than 10cm ranging accuracy in challenging environments.

Explaining the context, imec said ultra-wideband technology is currently well suited to support a variety of high accuracy and secure wireless ranging use-cases, such as the smart lock solutions commonly being applied in automotive; these automatically unlock a car's doors as the owner approaches, while locking the car when the owner moves away.

However, despite benefits such as being inherently more difficult to compromise than some alternatives, its potential has largely remained untapped because of its higher power consumption and larger footprint. Hence, imec said, the hardware and software innovations it has introduced mark an important step toward unlocking the technology's full potential, opening up the opportunity for micro-localization services beyond the secure keyless access for which it has been widely promoted so far, to AR/VR gaming, asset tracking and robotics.

Christian Bachmann, the program manager at imec, said: "UWB's power consumption, chip size and associated cost have been prohibitive factors to the technology's adoption, especially when it comes to the deployment of wireless ranging applications. Imec's brand-new UWB chip developments result in a significant reduction of the technology's footprint based on digital-style RF concepts: we have been able to integrate an entire transceiver, including three receivers for angle-of-arrival measurements, on an area of less than 1mm²."

He added that this is when implemented on advanced semiconductor process nodes applicable to IoT sensor node devices. The new chip is also compliant with the new IEEE 802.15.4z standard, which is supported by high-impact industry consortia such as the Car Connectivity Consortium (CCC) and FiRa (Fine Ranging).

Complementing the hardware developments, researchers from IDLab (an imec research group at Ghent University) have come up with software-based enhancements that significantly improve UWB's wireless ranging performance in challenging environments, particularly in factories or warehouses where people and machines constantly move around and metallic obstacles cause massive reflections, all of which impact the quality of UWB's localization and distance measurements.

Using machine learning, IDLab has created smart anchor selection algorithms that detect the (non-)line-of-sight between UWB anchors and the mobile devices being tracked. Building on that knowledge, the ranging quality is estimated and ranging errors are corrected. The approach also comes with machine learning features that enable adaptive tuning of the network's physical layer parameters, which allows appropriate steps to be initiated to mitigate those ranging errors, for instance by tuning the anchors' radios.
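
A simplified sketch of this kind of ML-based ranging correction follows; the features, data, and models are invented for illustration (the article does not publish IDLab's actual algorithms). The idea: classify each measurement as line-of-sight (LOS) or not from channel features, estimate the ranging error, and correct it.

```python
# Illustrative sketch of LOS/NLOS detection and ranging-error correction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical channel features: first-path power, total power, delay spread.
features = rng.normal(size=(5000, 3))
is_los = rng.integers(0, 2, size=5000)
range_error_m = np.where(is_los == 1,
                         rng.normal(0.0, 0.05, 5000),  # LOS: small error
                         rng.normal(0.6, 0.30, 5000))  # NLOS: biased error

los_clf = RandomForestClassifier(random_state=0).fit(features, is_los)
err_reg = RandomForestRegressor(random_state=0).fit(features, range_error_m)

measurement, raw_range_m = features[:1], 7.80
if los_clf.predict(measurement)[0] == 0:            # flagged as NLOS
    raw_range_m -= err_reg.predict(measurement)[0]  # subtract learned bias
print(round(raw_range_m, 2))
```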

Professor Eli De Poorter from IDLab said: "We have already demonstrated a UWB ranging accuracy of better than 10cm in such very challenging industrial environments, which is a factor of two improvement compared to existing approaches. Additionally, while UWB localization use-cases are typically custom-built and often depend on manual configuration, our smart anchor selection software works in any scenario as it runs in the application layer."

Through these adaptive configurations, the next-generation low power and high-accuracy UWB chips can be utilized in a wide range of other applications such as improved contact tracing during epidemics using small and privacy-aware devices.

In fact, imec has already licensed the technology to its spin-off Lopos, which has released a wearable that enables enforcement of Covid-19 social distancing by warning employees through an audible or haptic alarm when they violate safe-distance guidelines while approaching each other.

Choosing UWB instead of Bluetooth, Lopos' SafeDistance wearable operates as a standalone solution that weighs 75g and has a battery life of 2-5 days. The UWB-based device enables safe, highly accurate (< 15cm error margin) distance measurement. When two wearables approach each other, the exact distance between the devices (which is adjustable) is measured and an alarm is activated when a minimum safety distance is not respected.

Since it is standalone, no personal data is logged and there is no gateway, server or other infrastructure required. Lopos has already ramped up production to meet market demand, with multiple large-scale orders received over the last few weeks from companies active in a wide range of different sectors.


Butterfly landmines mapped by drones and machine learning – The Engineer

27th May 2020, 9:41 am

IEDs and so-called butterfly landmines could be detected over wide areas using drones and advanced machine learning, according to research from Binghamton University, State University of New York.

The team had previously developed a method that allowed for the accurate detection of butterfly landmines using low-cost commercial drones equipped with infrared cameras.


Their new research focuses on automated detection of landmines using convolutional neural networks (CNNs), which they say is the standard machine learning method for object detection and classification in the field of remote sensing. "This method is a game-changer in the field," said Alek Nikulin, assistant professor of energy geophysics at Binghamton University.

"All our previous efforts relied on human-eye scanning of the dataset," Nikulin said in a statement. "Rapid drone-assisted mapping and automated detection of scatterable mine fields would assist in addressing the deadly legacy of widespread use of small scatterable landmines in recent armed conflicts and allow us to develop a functional framework to effectively address their possible future use."

There are at least 100 million military munitions and explosives of concern in the world, of various sizes, shapes and compositions. Furthermore, an estimated twenty landmines are placed for every landmine removed in conflict regions.

Millions of these are surface plastic landmines with low-pressure triggers, such as the mass-produced Soviet PFM-1 butterfly landmine. Nicknamed for their small size and butterfly-like shape, these mines are extremely difficult to locate and clear due to their low trigger mass and a design that mostly excludes metal components, making them virtually invisible to metal detectors.

The design of the mine combined with its low triggering weight has earned it notoriety as "the toy mine" due to a high casualty rate among small children, who find these devices while playing and are the primary victims of the PFM-1 in post-conflict nations like Afghanistan.

The researchers believe that these detection and mapping techniques are generalisable and transferable to other munitions and explosives. They could be adapted to detect and map disturbed soil for improvised explosive devices (IEDs).

"The use of Convolutional Neural Network-based approaches to automate the detection and mapping of landmines is important for several reasons," the researchers said in a paper published in Remote Sensing. "One, it is much faster than manually counting landmines from an orthoimage (i.e. an aerial image that has been geometrically corrected). Two, it is quantitative and reproducible, unlike subjective, human-error-prone ocular detection. And three, CNN-based methods are easily generalisable to detect and map any objects with distinct sizes and shapes from any remotely sensed raster images."
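As a rough illustration of the approach, a small binary CNN for scoring orthoimage tiles might look like the sketch below. The architecture, tile size, and layer widths are assumptions for illustration, not the Binghamton team's actual model.

```python
# Hedged sketch: a tiny CNN that scores 64x64 orthoimage tiles for the
# presence of a mine-like object. Architecture and tile size are assumed.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(mine present in tile)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(tiles, labels) would be trained on labelled drone imagery, and
# the trained model slid across a full orthoimage to produce a detection map.
```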


Northern Trust rolls out machine learning tech for FX management solutions – The TRADE News

Northern Trust has deployed machine learning models within its FX currency management solutions business, designed to enable greater oversight of thousands of daily data points.

The solution has been developed in partnership with Lumint, an outsourced FX execution services provider, and will help buy-side firms reduce risk throughout the currency management lifecycle.

The technology utilised by the Robotic Oversight System (ROSY) for Northern Trust systematically scans newly arriving, anonymised data to identify anomalies across multi-dimensional data sets. It is also built on machine learning models developed by Lumint using a cloud platform that allows for highly efficient data processing.
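Implementation details have not been disclosed, but the anomaly-scanning pattern described is straightforward to sketch. The feature set and model choice below are assumptions, not Lumint's published design.

```python
# Hedged sketch of a ROSY-style scan: flag outliers in newly arrived,
# anonymised multi-dimensional records for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
records = rng.normal(size=(5000, 3))   # e.g. hedge ratio, notional, slippage

detector = IsolationForest(contamination=0.01, random_state=1).fit(records)
anomalies = records[detector.predict(records) == -1]   # -1 marks an outlier
# 'anomalies' would then be surfaced to the team for around-the-clock review.
```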

"In a data-intensive business, ROSY acts like an additional member of the team working around the clock to find and flag anomalies. The use of machine learning to detect data outliers enables us to provide increasingly robust and intuitive solutions to enhance our oversight and risk management, which can be particularly important in volatile markets," said Andy Lemon, head of currency management, Northern Trust.

Northern Trust announced a strategic partnership with Lumint in 2018 to deliver currency management services with portfolio, share class and look-through hedging solutions alongside transparency and analytics tools.

"Northern Trust's deployment of ROSY amplifies the scalability of its already highly automated currency hedging operation, especially for the more sophisticated products, such as look-through hedging, offered to its global clients," added Alex Dunegan, CEO, Lumint.

The solution is the latest rollout of machine learning technology by Northern Trust as the bank continues to leverage new technologies across its businesses. In August last year, Northern Trust developed a new pricing engine within its securities lending business utilising machine learning and advanced statistical technology.


Cloud Storage Market to Reach USD 297.54 Billion by 2027; Higher Adoption of Machine Learning to Boost Growth, Says Fortune Business Insights -…

Key Companies Covered in the Cloud Storage Market Research Report Are Amazon Web Services, Inc., Dell Technologies Inc., Dropbox, Inc., Fujitsu Ltd., Google, Inc., Hewlett Packard Enterprise Development LP, IBM Corporation, Microsoft Corporation, Oracle, pCloud AG, Rackspace, Inc., and VMware, Inc.

PUNE, India, May 18, 2020 /PRNewswire/ -- The global cloud storage market is set to gain traction from the rising adoption of autonomous systems and machine learning. Besides, the introduction of unique video systems, internet of things (IoT), and remote sensing technologies is driving market growth. This information is provided by Fortune Business Insights in a recent study, titled, "Cloud Storage Market Size, Share & Industry Analysis, By Component (Storage Model, and Services), By Deployment (Private, Public, and Hybrid), By Enterprise Size (SMEs, and Large Enterprises), By Vertical (BFSI, IT and Telecommunication, Government and Public Sector, Manufacturing, Healthcare and Life Sciences, Retail and Consumer Goods, Media and Entertainment, and Others), and Regional Forecast, 2020-2027." The study further mentions that the cloud storage market size was USD 49.13 billion in 2019 and is projected to reach USD 297.54 billion by 2027, exhibiting a CAGR of 25.3% during the forecast period.
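The headline figures are internally consistent, as a quick compound-growth check shows:

```python
# Sanity check: USD 49.13bn (2019) compounding at a 25.3% CAGR over the
# 8 years to 2027.
base_usd_bn, cagr, years = 49.13, 0.253, 2027 - 2019
print(round(base_usd_bn * (1 + cagr) ** years, 2))  # ~298.5, close to the stated 297.54
```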

Highlights of the Report

Get Sample PDF Brochure:https://www.fortunebusinessinsights.com/enquiry/request-sample-pdf/cloud-storage-market-102773

An Overview of the Impact of COVID-19 on this Market:

The emergence of COVID-19 has brought the world to a standstill. We understand that this health crisis has had an unprecedented impact on businesses across industries. However, this too shall pass. Rising support from governments and several companies can help in the fight against this highly contagious disease. Some industries are struggling while others are thriving; overall, almost every sector is anticipated to be impacted by the pandemic.

We are making continuous efforts to help your business sustain and grow during the COVID-19 pandemic. Based on our experience and expertise, we will offer you an impact analysis of the coronavirus outbreak across industries to help you prepare for the future.

Click here to get the short-term and long-term impact of COVID-19 on this market. Please visit: https://www.fortunebusinessinsights.com/cloud-storage-market-102773

Drivers & Restraints-

Covid-19 Pandemic to Boost Growth Backed by Rising Usage of Cloud Storage Solutions

Cloud storage solutions are gaining popularity at present as workforces shift towards a distributed work environment. These solutions aid workforces in collaborating and staying connected. The outbreak of the Covid-19 pandemic is pushing several organizations to support remote working as well as manage vast amounts of data smoothly. Microsoft, for instance, has extended the benefits of Windows and Azure cloud credits to non-profit and critical care organizations, such as food & nutrition, public safety, and health support. In addition, the utilization of analytics-driven platforms is helping companies generate large amounts of data; they are therefore preferring hybrid cloud storage solutions over conventional ones. However, the occurrence of data breaches may hamper cloud storage market growth in the coming years.

Segment-

BFSI Segment to Grow Steadily Fueled by Need for Improving Consumer Experience

Based on vertical, the banking, financial services and insurance (BFSI) segment held a 22.4% share of the cloud storage market in 2019. The industry deals with large volumes of customer data on a regular basis and must deliver efficient services to customers. To serve them better, firms require cloud storage technology, which acts as a transformative digital solution providing a high level of scalability, agility, and data security. Cloud storage systems not only improve consumer experience and revenues but also enhance operational efficiency. These factors are set to drive the growth of the BFSI segment in the near future.

Speak to Analyst:https://www.fortunebusinessinsights.com/enquiry/speak-to-analyst/cloud-storage-market-102773

Regional Analysis-

North America to Remain Dominant Owing to Rising Adoption of Various Digital Services

Regionally, the market is divided into Latin America, Europe, Asia Pacific, the Middle East and Africa, and North America. Amongst these, North America procured USD 19.85 billion in revenue in 2019 and is set to dominate the market. This growth is attributable to the rising adoption of several digital services, such as electronic signatures and e-commerce, in the U.S. The increasing rate of cybercrime would also contribute to growth. However, the Covid-19 pandemic is expected to obstruct growth by affecting the technological investments of industry giants. Asia Pacific, on the other hand, is projected to exhibit astonishing growth during the forecast period, backed by the increasing usage of smartphones.

Competitive Landscape-

Key Companies Focus on Expanding Product Offerings to Surge Revenue

Microsoft, IBM, and Amazon are some of the top companies operating in the global market. They are striving to widen their product offerings by keeping up with the latest trends, which will also help them grow their revenue. Below are two of the latest industry developments:

Fortune Business Insights presents a list of all the companies operating in the global Cloud Storage Market. They are as follows:

Quick Buy Cloud Storage Market Research Report:https://www.fortunebusinessinsights.com/checkout-page/102773


Get your Customized Research Report:https://www.fortunebusinessinsights.com/enquiry/customization/cloud-storage-market-102773

Have a Look at Related Research Insights:

Cloud Analytics Market Size, Share & Industry Analysis, By Deployment Type (Public Cloud, Private Cloud, and Hybrid Cloud), By Organization Size (Small and Medium-Sized Enterprises (SMEs) and Large Enterprises), By End-User (BFSI, IT and Telecommunications, Retail and Consumer Goods, Healthcare and Life Sciences, Manufacturing, Education, and Others) and Regional Forecast, 2019-2026

Cloud Computing Market Size, Share & Industry Analysis, By Type (Public Cloud, Private Cloud, Hybrid Cloud), By Service (Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS)), By Industry (Banking, Financial Services, and Insurance (BFSI), IT and Telecommunications, Government, Consumer Goods and Retail, Healthcare, Manufacturing, Others (Energy and Utilities, Education)), and Regional Forecast, 2020-2027

Cloud Gaming Market Size, Share & Industry Analysis, By Device (Smartphone, Laptop/Tablets, Personal Computer (PC), Smart TV and Consoles), By Streaming Type (File Streaming and Video Streaming), By End-Users (Business to Business (B2B) and Business to Consumers (B2C)), and Regional Forecast, 2020-2027

Cloud Security Market Size, Share & Industry Analysis, By Component (Solutions, Services), By Security Type (Application Security, Database Security, Endpoint Security, Network Security, Web and Email Security), By Deployment (Private, Public, Hybrid), By End-User (Large Scale Enterprises, Small & Medium Enterprises), By Industry Vertical (Healthcare, BFSI, IT & Telecom, Government Agencies, Others) and Regional Forecast, 2019-2026

Retail Cloud Market Size, Share & Industry Analysis, By Model Type (Infrastructure as a Service, Platform as a Service and Software as a Service), By Deployment (Public, Private and Hybrid Cloud), By Solution (Supply Chain Management, Workforce Management, Customer Management, Reporting & Analytics, Data Security, Omni-Channel), By Enterprise Size (Small & Medium and Large Enterprises) and Regional Forecast, 2019-2026

Location Analytics Market Size, Share & Industry Analysis, By Component (Solution, Services), By Location Type (Indoor, Outdoor), By Deployment Type (Cloud, On-Premises), By End-User (Retail, Government, Energy and Utilities, Healthcare, Travel and Transportation, Telecommunications, and Others) and Regional Forecast, 2019-2026

Security Analytics Market Size, Share & Industry Analysis, By Component (Solutions, and Services), By Application (Network Security Analytics, Web Security Analytics, Endpoint Security Analytics, and Application Security Analytics), By Vertical (BFSI, Government and Defense, IT and Telecommunication, Manufacturing, Healthcare, Energy and Utilities, and Others), and Regional Forecast, 2020-2027

Retail Analytics Market Size, Share and Industry Analysis, By Type (Software, Services), By Deployment (On-Premises, Cloud), By Organization Size (SMEs, Large Enterprises), By Function (Customer Management, Supply Chain, Merchandising, In-Store Operations, and Strategy & Planning) and Regional Forecast, 2019-2026

About Us:

Fortune Business Insights offers expert corporate analysis and accurate data, helping organizations of all sizes make timely decisions. We tailor innovative solutions for our clients, assisting them in addressing challenges distinct to their businesses. Our goal is to empower our clients with holistic market intelligence, giving a granular overview of the market they are operating in.

Our reports contain a unique mix of tangible insights and qualitative analysis to help companies achieve sustainable growth. Our team of experienced analysts and consultants use industry-leading research tools and techniques to compile comprehensive market studies, interspersed with relevant data.

At Fortune Business Insights, we aim at highlighting the most lucrative growth opportunities for our clients. We therefore offer recommendations, making it easier for them to navigate through technological and market-related changes. Our consulting services are designed to help organizations identify hidden opportunities and understand prevailing competitive challenges.

Contact Us:
Fortune Business Insights Pvt. Ltd.
308, Supreme Headquarters,
Survey No. 36, Baner,
Pune-Bangalore Highway,
Pune - 411045, Maharashtra, India.
Phone: US: +1-424-253-0390 | UK: +44-2071-939123 | APAC: +91-744-740-1245
Email: [emailprotected]
Fortune Business Insights: LinkedIn | Twitter | Blogs

Read Press Release:https://www.fortunebusinessinsights.com/press-release/cloud-storage-market-9909

Logo - https://mma.prnewswire.com/media/881202/Fortune_Business_Insights_Logo.jpg Photo - https://mma.prnewswire.com/media/1169294/Cloud_Storage_Market.jpg

SOURCE Fortune Business Insights


New Research Claims to Have Found a Solution to Machine Learning Attacks – Analytics Insight

AI has been making major strides in the computing world in recent years, but that also makes AI systems increasingly vulnerable to security concerns. Just by examining a system's power usage patterns or signatures during operation, one may be able to gain access to the sensitive information it houses. Machine learning algorithms are especially prone to such attacks. The same algorithms are employed in smart home devices and cars to identify different forms of images and sounds, and are embedded in specialized computing chips.

These chips rely on neural networks running locally, instead of on a cloud computing server located in a data center miles away. Thanks to this physical proximity, the neural networks can perform computations faster, with minimal delay. It also makes it simple for hackers to reverse-engineer a chip's inner workings using a method known as differential power analysis (DPA). DPA is therefore a looming threat to Internet of Things/edge devices, which leak power signatures and electromagnetic radiation. If leaked, the neural model, including its weights, biases, and hyper-parameters, can violate data privacy and intellectual property rights.

Recently a team of researchers from North Carolina State University presented a preprint paper at the 2020 IEEE International Symposium on Hardware Oriented Security and Trust in San Jose, California. The paper applies the DPA framework to neural-network classifiers. First, it shows DPA attacks during inference to extract secret model parameters, such as the weights and biases of a neural network. Second, it proposes the first countermeasures against these attacks, built on masking. The resulting design uses novel masked components, such as masked adder trees for fully connected layers and masked Rectifier Linear Units for activation functions. The team is led by Aydin Aysu, an assistant professor of electrical and computer engineering at North Carolina State University in Raleigh.

While DPA attacks have been successful against targets like the cryptographic algorithms that safeguard digital information and the smart chips found in ATM and credit cards, the team identifies neural networks as possible targets with perhaps even more profitable payoffs for hackers or rival competitors, who could further unleash adversarial machine learning attacks that confuse the existing neural network.

The team focused on common and simple binarized neural networks (BNNs, efficient networks for IoT/edge devices with binary weights and activation values) that are adept at doing computations with fewer computing resources. They began by demonstrating how power consumption measurements can be exploited to reveal the secret weights and values that determine a neural network's computations. Feeding random known inputs many times, the adversary measures the corresponding power activity and correlates it with intermediate estimates of the power patterns linked to the secret weight values of the BNN in a highly parallelized hardware implementation.

The team then designed a countermeasure to secure the neural network against such an attack via masking (an algorithm-level defense that can produce resilient designs independent of the implementation technology). This is done by splitting intermediate computations into two randomized shares that are different each time the neural network runs the same intermediate computation, preventing an attacker from using a single intermediate computation to analyze different power consumption patterns. While the process requires tuning to protect specific machine learning models, it can be executed on any form of computer chip that runs a neural network, such as Field Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs). Under this defense, a binarized neural network requires the hypothetical adversary to perform 100,000 sets of power consumption measurements instead of just 200.
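Conceptually, masking splits a secret value into randomized shares so that no single intermediate computation (or its power trace) depends on the secret alone. The toy sketch below illustrates the two-share idea; the paper's hardware-level masked adder trees are far more involved.

```python
# Toy illustration of two-share Boolean masking. Real DPA countermeasures
# operate on hardware datapaths, not Python integers.
import secrets

def mask(value: int, bits: int = 8):
    r = secrets.randbits(bits)      # fresh randomness on every execution
    return r, value ^ r             # shares: (r, value XOR r)

secret_weight = 0b10110101
s1, s2 = mask(secret_weight)
assert s1 ^ s2 == secret_weight     # recombining the shares recovers the value
# Because the shares are re-randomized on every run, repeated power
# measurements of either share reveal no stable pattern tied to the secret.
```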

However, the masking technique comes with some important caveats. First, with initial masking the neural network's performance dropped by 50 percent, and the design needed nearly double the computing area on the FPGA chip. Second, the team noted that attackers could sidestep the basic masking defense by analyzing multiple intermediate computations instead of a single one, leading to a computational arms race in which computations are split into ever more shares. Adding more security in this way can be time-consuming.

Despite this, the researchers argue, active countermeasures against DPA attacks are still needed: machine learning is a critical new target, with several motivating scenarios for keeping the internal ML model secret. Aysu explains that the research is far from done; it is supported by both the U.S. National Science Foundation and the Semiconductor Research Corporation's Global Research Collaboration. He anticipates receiving funding to continue this work for another five years and hopes to enlist more Ph.D. students interested in the effort.

"Interest in hardware security is increasing because, at the end of the day, the hardware is the root of trust," Aysu says. "And if the root of trust is gone, then all the security defenses at other abstraction levels will fail."


Genomics and Machine Learning for In Vitro Sensitization Testing of Challenging Chemicals, Upcoming Webinar Hosted by Xtalks – PR Web


TORONTO (PRWEB) May 11, 2020

Predictive toxicology is a discipline that aims to proactively identify adverse human health and environmental effects in response to chemical exposure. GARD (Genomic Allergen Rapid Detection) is a next-generation, animal-free testing strategy framework for the assessment and characterization of chemical sensitizers. The GARD platform integrates state-of-the-art technological components, including cell cultures of human immunological cells, omics-based evaluation of transcriptional patterns of endpoint-specific genomic biomarker signatures, and machine learning-assisted classification models.

To this end, the GARD platform provides accurate, cost-effective and efficient assessment of the skin and respiratory sensitizing capabilities of neat chemicals, complex formulations, mixtures and solid materials. GARD assays are successfully applied throughout the value chain of the chemical and life science industries, including safety-based screening of candidates during preclinical research and development, monitoring of protocol changes and batch variations, monitoring of occupational health, and registration and regulatory approval.

This webinar will introduce the developmental phases of the GARD assays and discuss the technological origins of the observed high predictive performance, how the assays help industries overcome their specific challenges in safety testing in a broad applicability domain, and illustrate how GARD assays facilitate efficient decision-making in compliance with the principles of the 3Rs.

Join Andy Forreryd, PhD, SenzaGen AB and Henrik Johansson, PhD, Chief Scientist, SenzaGen AB in a live webinar on Wednesday, May 26, 2020 at 10am EDT (3pm BST/UK).

For more information or to register for this event, visit Genomics and Machine Learning for In Vitro Sensitization Testing of Challenging Chemicals.

ABOUT XTALKS

Xtalks, powered by Honeycomb Worldwide Inc., is a leading provider of educational webinars to the global life science, food and medical device community. Every year thousands of industry practitioners (from life science, food and medical device companies, private & academic research institutions, healthcare centers, etc.) turn to Xtalks for access to quality content. Xtalks helps Life Science professionals stay current with industry developments, trends and regulations. Xtalks webinars also provide perspectives on key issues from top industry thought leaders and service providers.

To learn more about Xtalks visit http://xtalks.com. For information about hosting a webinar visit http://xtalks.com/why-host-a-webinar/


Machine Learning Just Classified Over Half a Million Galaxies – Universe Today

Humanity is still a long way away from a fully general artificial intelligence system. For now at least, AI is particularly good at certain specialized tasks, such as classifying cats in videos. Now it has a new skill set: identifying spiral patterns in galaxies.

As with all AI skills, this one started out with categorized data. In this case, that data consisted of images of galaxies taken by the Subaru Telescope on Mauna Kea, Hawaii. The telescope is run by the National Astronomical Observatory of Japan (NAOJ) and has identified upwards of 560,000 galaxies in its images.

Only a small subset of that half a million was manually categorized by scientists at NAOJ. The scientists then trained a deep-learning algorithm to identify galaxies that contained a spiral pattern, similar to the Milky Way. When applied to a further subset of the half a million galaxies (known as a test set), the algorithm accurately classified 97.5% of the galaxies surveyed as either spiral or non-spiral.

The research team then applied the algorithm to the full 560,000 galaxies identified in the data so far. It classified about 80,000 of them as spiral, leaving about 480,000 as non-spiral galaxies. Admittedly, some galaxies that are actually spirals may not have been identified as such by the algorithm, as they might only be visible edge-on from Earth's vantage point. In that case, even human classifiers would have a hard time correctly identifying a galaxy as a spiral.
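That cataloguing step amounts to batch inference plus counting. A hedged sketch, with all names hypothetical since the NAOJ pipeline is not described in this article:

```python
# Hypothetical batch-inference pass over the full catalogue.
import numpy as np

def count_spirals(model, images, batch_size=512, threshold=0.5):
    """Return a boolean mask of images classified as spiral."""
    probs = [model.predict(images[i:i + batch_size]).ravel()
             for i in range(0, len(images), batch_size)]
    return np.concatenate(probs) >= threshold

# is_spiral = count_spirals(trained_cnn, all_images)
# is_spiral.sum() would be ~80,000 of the 560,000 images.
```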

The next step for the researchers is to train the deep learning algorithm to identify even more types and sub-types of galaxies. But to do that, they will need even more well-categorized data. To help with that process, they have launched GALAXY CRUISE, a citizen science project where volunteers help to identify galaxies that are merging or colliding. They are following in the footsteps of another effort by scientists at the Sloan Digital Sky Survey, which used Galaxy Zoo, a collection of citizen science projects, to train an AI algorithm to identify spiral vs. non-spiral galaxies as well. After the manual classification is done, the team hopes to upgrade the AI algorithm and analyze all half a million galaxies again to see how many of them might be colliding. Who knows, a few of those colliding galaxies might even look like cats.

Learn More:
EurekaAlert: Classifying galaxies with artificial intelligence
Physics Letters B: Classifying galaxies with AI and people power
Universe Today: Try your hand at identifying galaxies
Unite.ai: Astronomers Apply AI to Discover and Classify Galaxies


Mphasis Partners With Ashoka University to Create ‘Mphasis Laboratory for Machine Learning and Computational Thinking’ – AiThority

Mphasis, an Information Technology solutions provider specialising in cloud and cognitive services, is coming together with Ashoka University to set up a laboratory for machine learning and computational thinking, through a grant of INR 10 crore from Mphasis F1 Foundation, the CSR arm of the company. The Mphasis Laboratory for Machine Learning and Computational Thinking will apply ML and design thinking to produce world-class papers and compelling proof-of-concepts of systems/prototypes with a potential for large societal impact.

The laboratory will be the setting for cutting-edge research and a novel educational initiative focused on bringing thoroughly researched, pedagogy-based learning modules to Indian students. Through this laboratory, Mphasis and Ashoka University will work to translate research activity into educational modules focusing on the construction of entire systems that allow students to understand and experientially recreate the project. This approach to education is aimed at creating a more engaging and widely accessible mode of learning.


"Mphasis believes that in order to fully embrace the digital learning paradigm, one needs to champion accessibility and invest in quality education in mainstream academic spaces. Through this partnership, we hope to encourage students across disciplines and socio-economic backgrounds to learn and flourish. As Ashoka University also has a strong focus on diverse liberal arts disciplines, we hope to find avenues to expand some of Mphasis' efforts towards Design (CX Design and Design Thinking) through this collaboration and eventually tap into the talent pool from Ashoka," said Nitin Rakesh, Chief Executive Officer, Mphasis.

Being ready to welcome students into the world of virtual learning is not enough: Mphasis and Ashoka seek to enable an innovative pedagogy based on a problem-solving approach to learning about AI, ML, Design Thinking and System Design. Through this grant, Mphasis and Ashoka will establish avenues for knowledge exchange in the areas of core machine learning, information curation, accessibility for persons with disabilities, and health & medicine. They seek to encourage a hands-on learning approach in areas such as core machine learning and information curation, which form the foundation of solution-driven design. They also seek to address the accessibility barrier through public-domain placement of all intellectual property produced in the laboratory, which will benefit millions of students across the country.


"We stand at the threshold of a discontinuity brought about by an increased ability to sense and produce enormous amounts of data and to create extremely large clusters driven by parallel runtimes. These developments have enabled ML and other data-driven approaches to become the paradigm of choice for complex problem solving. There is now a considerable opportunity to improve life at large based on these capabilities," said Prof. Ravi Kothari, HOD, Computer Science at Ashoka University.

"With that as our over-arching goal, we proposed the creation of a Laboratory for Machine Learning and Computational Thinking and found heartening support in Mphasis," said Ashish Dhawan, Founder & Chairman, Board of Trustees, Ashoka University.

While universities the world over have taken great strides to bring quality education to digital platforms, higher educational institutions in India have begun to address questions surrounding accessibility in a post-COVID setting. The collaboration between Mphasis and Ashoka is pioneering in its effort to establish a centre of excellence for collaborative and human-centred design that aims to fuel data-driven solutions for real-life challenges and address key areas of reform at the larger community level.



NeuralCam Launches NeuralCam Live App Using Machine Learning to Turn iPhones into Smart Webcams – MarkTechPost

In an era of virtual learning, interviews, classes and more are being conducted from home through laptops and the internet. Camera clarity for video calls, whether for work or class, is a primary need of the hour, yet laptop webcams still offer only 720p or 1080p resolution with low color accuracy and poor light performance. Understanding the vast market for this, NeuralCam has introduced an app that converts an Apple iPhone into a smart webcam. The best part of the deal: it's free.

The NeuralCam Live platform uses machine learning to generate a high-quality video stream for a computer using the iPhone's front camera. The prerequisites are installing the iOS app and a Mac driver. The iPhone then sends a live stream to your computer, with features such as video enhancement. Video processing is handled on the device rather than on the computer. The company is also building an iOS SDK to let third-party video calling and streaming apps control the enhancement features.

The main attractions of NeuralCam Live are:

A few shortcomings at present are:

NeuralCam has planned a roadmap to overcome these drawbacks. It also plans to release Windows support soon and to serve industries like education, health care, and entertainment.



State of the Art in Automated Machine Learning – InfoQ.com

Key Takeaways

In recent years, machine learning has been very successful in solving a wide range of problems.

In particular, neural networks have reached human, and sometimes super-human, levels of ability in tasks such as language translation, object recognition, game playing, and even driving cars.

With this growth in capability has come a growth in complexity. Data scientists and machine learning engineers must perform feature engineering, design model architectures, and optimize hyperparameters.

Since the purpose of the machine learning is to automate a task normally done by humans, naturally the next step is to automate the tasks of data scientists and engineers.

This area of research is called automated machine learning, or AutoML.

There have been many exciting developments in AutoML recently, and it's important to take a look at the current state of the art and learn about what's happening now and what's coming up in the future.

InfoQ reached out to the following subject matter experts in the industry to discuss the current state and future trends in AutoML space.

InfoQ: What is AutoML and why is it important?

Francesca Lazzeri: AutoML is the process of automating the time-consuming, iterative tasks of machine learning model development, including model selection and hyperparameter tuning. When automated systems are used, the high costs of running a single experiment (e.g. training a deep neural network) and the high sample complexity (i.e. large number of experiments required) can be decreased. AutoML is important because data scientists, analysts, and developers across industries can leverage it to:

Matthew Tovbin: Similarly to how we use software to automate repetitive or complex processes, automated machine learning is a set of techniques we apply to efficiently build predictive models without manual effort. Such techniques include methods for data processing, feature engineering, model evaluation, and model serving. With AutoML, we can focus on higher-level objectives such as answering questions and delivering business values faster while avoiding mundane tasks, e.g., data wrangling, by standardizing the methods we apply.

Adrian de Wynter: AutoML is the idea that the machine learning process, from data selection to modeling, can be automated by a series of algorithms and heuristics. In its most extreme version, AutoML is a fully automated system: you give it data, and it returns a model (or models) that generalizes to unseen data. The common hurdles that modelers face, such as tuning hyperparameters, feature selection--even architecture selection--are handled by a series of algorithms and heuristics.

I think its importance stems from the fact that a computer does precisely what you want it to do, and it is fantastic at repetition. The large majority of the hurdles I mentioned above are precisely that: repetition. Finding a hyperparameter set that works for a problem is arduous. Finding a hyperparameter set and an architecture that works for a problem is even harder. Add to the mix data preprocessing, the time spent on debugging code, and trying to get the right environment to work, and you start wondering whether computers are actually helping you solve said problem, or just getting in the way. Then, you have a new problem, and you have to start all over again.

The key insight of AutoML is that you might be able to get away by using some things you tried out before (i.e., your prior knowledge) to speed up your modeling process. It turns out that said process is effectively an algorithm, and thus it can be written into a computer program for automation.

Leah McGuire: AutoML is machine learning experts automating themselves. Creating quality models is a complex, time-consuming process. It requires understanding the dataset and question to be answered. This understanding is then used to collect and join the needed data, select features to use, clean the data and features, transform the features into values that can be used by a model, select an appropriate model type for the question, and tune feature-engineering and model parameters. AutoML uses algorithms based on machine learning best practices to build high-quality models without time-intensive work from an expert.

AutoML is important because it makes it possible to create high quality models with less time and expertise. Companies, non-profits, and government agencies all collect vast amounts of data; in order for this data to be utilized, it needs to be synthesized to answer pertinent questions. Machine learning is an effective way of synthesizing data to answer relevant questions, particularly if you do not have the resources to employ analysts to spend huge amounts of time looking at the data. However, machine learning requires both expertise and time to implement. AutoML seeks to decrease these barriers. This means that more data can be analyzed and used to make decisions.

Marios Michailidis: Broadly speaking, I would call it the process of automatically deriving or extracting useful information from data via harnessing the power of machines. Digital data is being produced at an incredible pace. Now that companies have found ways to harness it to extract value, it has become imperative to invest in data science and machine learning. However, the supply of data scientists is not enough to meet current needs, hence making existing data scientists more productive is of the essence. This is where the notion of automated machine learning can provide the most value, via equipping existing data scientists with tools and processes that can make their work easier, quicker, and generally more efficient.

InfoQ: What parts of the ML process can be automated and what are some parts unlikely to be automated?

Lazzeri: With automated ML, the following tasks can be automated:

However, there are a few important tasks that cannot be automated during the model development cycle, such as developing industry-specific knowledge and data acumen, which are hard to automate, making it impossible not to keep humans in the loop. Another important aspect to consider concerns operationalizing machine learning models: AutoML is very useful for the machine learning model development cycle; however, for automating the deployment step, other tools need to be used, such as MLOps, which enables data science and IT teams to collaborate and increase the pace of model development and deployment via monitoring, validation, and governance of machine learning models.

Tovbin: Through the years of development of the machine learning domain, we have seen that a large number of tasks around data manipulation, feature engineering, feature selection, model evaluation, and hyperparameter tuning can be defined as an optimization problem and, with enough computing power, efficiently automated. We can see numerous proofs of that not only in research but also in the software industry, as platform offerings or open-source libraries. All these tools use predefined methods for data processing, model training, and evaluation.
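As a concrete instance of tuning-as-optimization, here is a minimal randomized hyperparameter search in scikit-learn; the dataset, model, and search space are illustrative choices, not anything prescribed by the interviewees.

```python
# Hyperparameter tuning framed as an optimization problem: sample model
# configurations and keep the best cross-validated one.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200, 400],
        "max_depth": [None, 4, 8, 16],
        "max_features": ["sqrt", "log2"],
    },
    n_iter=20, cv=5, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```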

The creative approach to framing problems and applying new techniques to existing problems is the one that is not likely to be replicated by machine automation, due to the large number of possible permutations, complex context, and expertise the machine lacks. As an example, look at the design of neural net architectures and their applications, a problem where the search space is so ample that progress is still mostly human-driven.

de Wynter: In theory, the entire ML process is computationally hard. From fitting data to, say, a neural network, to hyperparameter selection, to neural architecture search (NAS), these are all hard problems in the general case. However, all of these components have been automated with varying degrees of success for specific problems thanks to a combination of algorithmic advances, computational power, and patience.

I would like to think that the data preprocessing and feature selection processes are the hardest to automate, given that a machine learning model will only learn what it has seen, and its performance (and hence the solution provided by the system) is dependent on its input. That said, there is a growing body of research on that aspect, too, and I hope that it will not remain hard for many natural problems.

McGuire: I would break the process of creating a machine learning model into four main components: data ETL and cleaning, feature engineering, model selection and tuning, and model explanation and evaluation.

Data cleaning can be relatively straightforward or incredibly challenging, depending on your data set. One of the most important factors is history; if you have information about your data at every point in time, data cleaning can be automated quite well. If you have only a static representation of the current state, cleaning becomes much more challenging. Older data systems designed before relatively cheap storage tend to keep only the current state of information. This means that many important datasets do not have a history of actions taken on the data. Cleaning this type of history-less data has been a challenge for AutoML to provide good quality models for our customers.

Feature engineering is - again - a combination of easy and extremely difficult to automate steps. Some types of feature engineering are easy to automate given sufficient metadata about particular features. For example, parsing a phone number to validate it and extract the location from the area code is straightforward as long as you know that a particular string is a phone number. However, feature engineering that requires intimate, domain-specific knowledge of how a business works is unlikely to be automated. For example, if profits from a sale need to account for local taxes before being analyzed for cost-to-serve, some human input is likely required to establish this relationship (unless you have a massive amount of data to learn from). One reason deep learning has overtaken feature engineering in fields like vision and speech is the massive amount of high quality training data. Tabular data is often quite source-specific, making it difficult to generalize, and feature engineering remains a challenge. In addition, defining the correct way to combine sources of data is often incredibly complex and labor intensive. Once you have the relationship defined, the combination can be automated, but establishing this relationship takes a fair amount of manual work and is unlikely to be automated any time soon.

Model selection and tuning is the easiest component to automate and many libraries already do this; there are even AutoML algorithms to find entirely new deep learning architectures. However, model selection and tuning libraries assume that the data you are using for modeling is clean and that you have a good way of evaluating the efficacy of your model. Massive data sets also help. Establishing clean datasets and evaluation frameworks still remain the biggest challenges.

Model explanations have been an important area of research for machine learning in general. While it is not strictly speaking part of AutoML, the growth of AutoML makes it even more important. It is also the case that the way in which you implement automation has implications for explainability. Specifically tracking metadata about what was tried and selected determines how deep explanations can go. Building explanations into AutoML requires a conscious effort and is very important. At some point the automation has to stop and someone will look at and use the result. The more information the model provides about how it works the more useful it is to the end consumer.

Michailidis: I would divide the application of automation into the following main areas:

Regarding problems that are hard to automate, the first thing that comes to mind is anything related to translating a business problem into a machine learning problem. For AutoML to succeed, it requires mapping the business problem onto a type of solvable machine learning problem. It also needs to be supported by the right data quality and relevancy. The testing of the model and the success criteria need to be defined carefully by the data scientist.

Another area where AutoML will find it hard to succeed is where ethical dilemmas may arise from the use of machine learning. For example, if there is an accident caused by an algorithmic error, who will be responsible? I feel this kind of situation can be a challenge for AutoML.

InfoQ: What types of problems or use cases are better candidates for AutoML?

Lazzeri: Classification, regression, and time series forecasting are the best candidates for AutoML. Azure Machine Learning offers featurizations specifically for these tasks, such as deep neural network text featurizers for classification.

Common classification examples include fraud detection, handwriting recognition, and object detection. Unlike classification, where predicted output values are categorical, regression models predict numerical output values based on independent predictors; for example, automobile price based on features like gas mileage, safety rating, etc.

Finally, building forecasts is an integral part of any business, whether it's revenue, inventory, sales, or customer demand. Data scientists can use automated ML to combine techniques and approaches and get a recommended, high-quality time series forecast.

Tovbin: Classification or regression problems relying on structured or semi-structured data, where one can define an evaluation metric, can usually be automated; for example, predicting user churn, real estate price prediction, or autocomplete.

de Wynter: It depends. Let us assume that you want the standard goal of machine learning: you need to learn an unseen probability distribution from samples. You also know that there is some AutoML system that does an excellent job for various, somewhat related tasks. There's absolutely no reason why you shouldn't automate it, especially if you don't have the time to try out possible solutions by yourself.

I do need to point out, however, that in theory a model that performs well for a specific problem does not have any guarantees around other problems; in fact, it is well known that there exists at least one task where it will fail. Still, this statement is quite general and can be worked around in practice.

On the other hand, from an efficiency point of view, a problem that has been studied for years by many researchers might not be a great candidate, unless you are particularly interested in marginal improvements. This follows immediately from the fact that most AutoML results, and more concretely, NAS results, for well-known problems usually are equivalent within a small delta to the human-designed solutions. However, making the problem "interesting" (e.g., by including newer constraints such as parameter size) makes it effectively a new problem, and again perfect for AutoML.

McGuire: If you have a clean dataset with a very well defined evaluation method, it is a good candidate for AutoML. Early advances in AutoML have focused on areas such as hyperparameter tuning. This is a well defined but time-consuming problem. These AutoML solutions are essentially taking advantage of increases in computational power, combined with models of the problem space, to arrive at solutions that are often better than an expert could achieve with less human time input. The key here is the clean dataset with a direct and easily measurable effect on the well defined evaluation set. AutoML will maximize your evaluation criteria very well. However, if there is any mismatch between those criteria and what you are trying to do, or any confounding factors in the data, AutoML will not see that in the way a human expert (hopefully) would.

Michailidis: Well-defined problems are good use cases for AutoML. In these problems, the preparatory work has already been done: there are clear inputs and outputs and well-defined success criteria. Under these constraints, AutoML can produce the best results.

InfoQ: What are some important research problems in AutoML?

Lazzeri: An interesting open research question in AutoML is the problem of feature selection in supervised learning tasks. This is also called the differentiable feature selection problem: a gradient-based search algorithm for feature selection. Feature selection remains a crucial step in machine learning pipelines and continues to see active research; a few researchers from Microsoft Research are developing a feature selection method that is both statistically and computationally efficient.

Tovbin: The two significant ones that come to my mind are the transparency and bias of trained models.

Both experts and users often disagree or do not understand why ML systems, especially automated ones, make specific predictions. It is crucial to provide deeper insights into model predictions to allow users to gain confidence in such predictive systems. For example, when providing recommendations of products to consumers, a system can additionally highlight the contributing factors that influenced particular recommendations. In order to provide such functionality, in addition to the trained model, one would need to maintain additional metadata and expose it together with provided recommendations, which often cannot be easily achieved due to the size of the data or privacy concerns.

The same concerns apply to model bias, but that problem has different roots, e.g., incorrect data collection resulting in skewed datasets. This problem is more challenging to address because we often need to modify business processes and costly software. With applied automation, one can detect invalid datasets, and sometimes even data collection practices, early, and so remove bias from model predictions.

de Wynter: I think first and foremost, provably efficient and correct algorithms for hyperparameter optimization (HPO) and NAS. The issue with AutoML is that you are solving the problem of, well, problem solving (or rather, approximation), which is notoriously hard in the computational sense. We as researchers often focus on testing a few open benchmarks and call it a day, but, more often than not, such algorithms fail to generalize and, as was pointed out last year, tend not to outperform a simple random search.

There is also the issue that from a computational point of view, a fully automated AutoML system will face problems that are not necessarily similar to the ones that it has seen before; or worse, they might have a similar input but completely different solutions. Normally, this is related to the field of "learning to learn", which often involves some type of reinforcement learning (or neural network) to learn how previous ML systems solved a problem, and approximately solve a new one.

McGuire: I think there is a lot of interesting work to do on automating feature engineering and data cleaning. This is where most of the time is spent in machine learning, and domain expertise can be hugely important. Add to that the fact that most real world data is extremely messy and complex, and you see that the biggest gains from automation come from automating as much data processing and transformation as possible.

Automating the data preparation work that currently takes a huge amount of human expertise and time is not a simple task. Techniques that have removed the need for custom feature engineering in fields like vision and language do not currently generalize to small messy datasets. You can use deep learning to identify pictures of cats because a cat is a cat and all you need to do is get enough labeled data to let a complex model fill in the features for you. A table tracking customer information for a bank is very different from a table tracking customer information for a clothing store. Using these datasets to build models for your business is a small data problem. Such problems cannot be solved simply by throwing enough data at a model that can capture the complexities on its own. Hand cleaning and feature engineering can use many different approaches and determining the best is currently something of an art form. Turning these steps into algorithms that can be applied across a wide range of data is a challenging but important area of research.

Being able to automatically create and more importantly explain models of such real world data is invaluable. Storage is cheap but experts are not. There is a huge amount of data being collected in the world today. Automating the cleaning and featurization of such data provides the opportunity to use it to answer important real world questions.

Michailidis: I personally find the area of (automation-aided) explainable AI and machine learning interpretability very interesting and very important for bridging the gap between black-box modelling and a model that stakeholders can comfortably trust.

Another area I am interested in is "model compression". I think it can be a huge game-changer if we can automatically go from a powerful, complicated solution down to a much simpler one that can produce basically the same or similar performance, but much faster and utilizing fewer resources.

InfoQ: What are some AutoML techniques and open-source tools practitioners can use now?

Lazzeri: AutoML democratizes the machine learning model development process and empowers its users, no matter their data science expertise, to identify an end-to-end machine learning pipeline for any problem. There are several AutoML techniques that practitioners can use now; my favorite ones are:

Tovbin: In recent years we have seen an explosion of tooling for machine learning practitioners, from cloud platforms (Google Cloud AutoML, Salesforce Einstein, AWS SageMaker Autopilot, H2O AutoML) to open-source software (TPOT, AutoSklearn, TransmogrifAI). Here one can find more information on these and other solutions:

de Wynter: Disclaimer: I work for Amazon. This is an active area of research, and there are quite a few well-known algorithms (with more appearing every day) focusing on different parts of the pipeline, with well-known successes on various problems. It's hard to name them all, but some of the best-known examples are grid search, Bayesian, and gradient-based methods for HPO; and search strategies (e.g., hill climbing) and population/RL-based methods (e.g., ENAS, DARTS for one-shot NAS, and the algorithm used for AmoebaNet) for NAS. On the other hand, full end-to-end systems have achieved good results for a variety of problems.
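As one concrete instance of the Bayesian-style HPO methods de Wynter names, here is a minimal sketch using Hyperopt's TPE algorithm (a sequential model-based optimizer); the model and search space are illustrative.

```python
# Minimal sketch: sequential model-based HPO with Hyperopt's TPE algorithm,
# a common stand-in for the Bayesian-style methods mentioned above.
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def objective(params):
    # Hyperopt minimizes, so return negative cross-validated accuracy.
    score = cross_val_score(SVC(**params), X, y, cv=3).mean()
    return {"loss": -score, "status": STATUS_OK}

space = {
    "C": hp.loguniform("C", -3, 3),        # ~0.05 to ~20
    "gamma": hp.loguniform("gamma", -6, 0),  # ~0.0025 to 1
}

best = fmin(objective, space, algo=tpe.suggest, max_evals=30, trials=Trials())
print(best)
```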

McGuire: Well of course I need to mention our own open-source AutoML library, TransmogrifAI. We focus mainly on automating data cleaning and feature engineering, with some model selection, and the library is built on top of Spark.

There are also a large number of interesting AutoML libraries coming out in Python, including Hyperopt, scikit-optimize, and TPOT.
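For a sense of what using one of these libraries looks like in practice, here is a minimal sketch with TPOT, which searches over scikit-learn pipelines with a genetic algorithm; the dataset and budget settings are illustrative.

```python
# Minimal sketch: TPOT searches over scikit-learn pipelines and can export
# the best pipeline it finds as plain Python source.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tpot = TPOTClassifier(generations=5, population_size=20,
                      verbosity=2, random_state=0)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")  # emits the winning pipeline as code
```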

Michailidis: In the open-source space, H2O.ai has a tool called AutoML that incorporates many of the elements I mentioned in the previous questions. It is also very scalable and can be used on any OS. Other tools are auto-sklearn and Auto-WEKA.
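For a sense of the H2O AutoML workflow Michailidis refers to, a minimal sketch follows: train a leaderboard of models under a fixed budget. The CSV path and target column are hypothetical placeholders, and a local H2O instance (which requires Java) is assumed.

```python
# Minimal sketch of H2O AutoML: train a leaderboard of models under a
# fixed budget. The data file and target column are hypothetical.
import h2o
from h2o.automl import H2OAutoML

h2o.init()                              # starts a local H2O cluster
train = h2o.import_file("train.csv")    # hypothetical dataset path
target = "label"                        # hypothetical target column
features = [c for c in train.columns if c != target]

aml = H2OAutoML(max_models=10, seed=1)
aml.train(x=features, y=target, training_frame=train)
print(aml.leaderboard.head())           # models ranked by default metric
```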

InfoQ: What are the limitations of AutoML?

Lazzeri: AutoML raises a few challenges, such as model parallelization, result collection, resource optimization, and iteration. Searching for the best model and hyperparameters is an iterative process constrained by many limitations, such as compute, money, and time. Machine learning pipelines provide a solution to these AutoML challenges with a clear definition of the process and automation features. An Azure Machine Learning pipeline is an independently executable workflow of a complete machine learning task. Pipelines should focus on machine learning tasks such as:

Tovbin: One problem that AutoML does not handle well is complex data types. The majority of automated methods expect certain data types, e.g., numerical, categorical, text, geo coordinates, and, therefore, specific distributions. Such methods are a poor fit for more complicated scenarios, such as behavioral data, e.g., online store visit sessions.

Another problem is feature engineering that needs to consider domain-specific properties of the data. For example, suppose we would like to build a system to automate email classification for an insurance sales team. Input from the sales team members defining which parts of an email are and are not relevant would usually be more valuable than a metric. When building such systems, it is essential to reinforce them with domain-expert feedback to achieve more reliable results.

de Wynter: There is the practical limitation of the sheer amount of computational resources you have to throw at a problem to get it solved. It is not a true obstacle insofar as you can always use more machines, but, environmentally speaking, there are consequences associated with such a brute-force approach. Now, not all of AutoML is brute force (as I mentioned earlier, this is a computationally hard problem, so brute-forcing will only get you so far), and it relies heavily on heuristics, but you still need sizable compute to solve a given AutoML problem, since you have to try out multiple solutions end-to-end. There's a push in the science community to obtain better, "greener" algorithms, and I think it's fantastic and the way to go.

From a theoretical point of view, the hardness of AutoML is quite interesting: ultimately, it is a statement on how intrinsically difficult the problem is, regardless of what type or number of computers you use. Add to that what I mentioned earlier, that there is (theoretically) no such thing as "one model to rule them all," and AutoML becomes a very complex computational problem.

Lastly, current AutoML systems have a well-defined model search space (e.g., neural network layers, or a mix of classifiers) that is expected to work for every input problem, which is not the case. However, search spaces that provably generalize well to all possible problems are somewhat hard to implement in practice, so there is still an open question of how to bridge that gap.

McGuire: I don't think AutoML is ready to replace having a human in the loop. AutoML can build a model, but as we automate more and more of modeling, developing tools that provide transparency into what the model is doing becomes more and more important. Models are only as good as the data used to build them. As we move away from having a human spend time cleaning and deeply understanding relationships in the data, we need to provide new tools that allow users of the model to understand what the models are doing. You need a human to take a critical look at the models and the elements of the data they use and ask: is this the right thing to predict, and is this data OK to use? Without tools to answer these questions for AutoML models, we run the risk of unintentionally shooting ourselves in the foot. We need the ability to ensure we are not using inappropriate models or perpetuating and reinforcing issues and biases in society without realizing it.

Michailidis: This was mostly covered in previous sections. Another thing I would like to mention is that performance is greatly affected by the resources allocated. More powerful machines will be able to cover a search space of potential algorithms, features, and techniques much faster.

These tools (unless they are built to support very specific applications) do not have domain knowledge but are made to solve generic problems. For example, they would not know out of the box that if one field in the data is called "distance travelled" and another is called "duration in time", the two can be combined to compute "speed", which may be an important feature for a given task. They may have a chance of generating that feature by stochastically trying different transformations of the data, but a domain expert would figure it out much quicker, so these tools produce better results in the hands of an experienced data practitioner. Hence, these tools will be more successful if they offer the option to incorporate domain knowledge from the expert.
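Michailidis's distance/duration example takes only a couple of lines of pandas to make concrete; the column names and values below are purely illustrative.

```python
# Minimal sketch of the domain-knowledge feature described above: an expert
# knows immediately that distance and duration combine into speed, while a
# generic search would have to stumble onto this transformation.
import pandas as pd

df = pd.DataFrame({
    "distance_travelled_km": [12.0, 3.5, 40.0],  # illustrative values
    "duration_hours": [0.5, 0.25, 1.0],
})

# The hand-crafted feature a domain expert would add directly.
df["speed_kmh"] = df["distance_travelled_km"] / df["duration_hours"]
print(df)
```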

The panelists agreed that AutoML is important because it saves time and resources, removing much of the manual work and allowing data scientists to deliver business value faster and more efficiently. The panelists predict, however, that AutoML is not likely to remove the need for a "human in the loop," particularly for industry-specific knowledge and the ability to translate business problems into machine-learning problems. Important research areas in AutoML include feature engineering and model explanation.

The panelists highlighted several existing commercial and open-source AutoML tools and described the different parts of the machine-learning process that can be automated. Several panelists noted that one limitation of AutoML is the amount of computational resources required, while others pointed out the need for domain knowledge and model transparency.

Francesca Lazzeri, PhD, is an experienced scientist and machine learning practitioner with over 12 years of both academic and industry experience. She is the author of a number of publications, including technology journals, conferences, and books. She currently leads an international team of cloud advocates and AI developers at Microsoft. Before joining Microsoft, she was a research fellow at Harvard University in the Technology and Operations Management Unit. Find her on Twitter: @frlazzeri and Medium: @francescalazzeri

Matthew Tovbin is a Co-Founder of Faros AI, a software automation platform for DevOps. Before founding Faros AI, he was a Software Engineering Architect at Salesforce, developing the Salesforce Einstein AI platform, which powers the world's smartest CRM. In addition, Matthew is a creator of TransmogrifAI, co-organizer of the Scala Bay meetup, a presenter, and an active member of numerous functional programming groups. Matthew lives in the San Francisco Bay Area with his wife and kid, and enjoys photography, hiking, good whisky, and computer gaming.

Adrian de Wynter is an Applied Scientist in Alexa AI's Secure AI Foundations organization. His work can be categorized into three broad, sometimes overlapping, areas: language modeling, neural architecture search, and privacy-preserving machine learning. His research interests involve meta-learning and natural language understanding, with a special emphasis on the computational foundations of these topics.

Leah McGuire is a Machine Learning Architect at Salesforce, working on automating as many of the steps involved in machine learning as possible. This automation has been instrumental in developing and shipping a number of customer-facing machine learning offerings at Salesforce, with the goal of bringing intelligence to each customer's unique data and business goals. Before focusing on developing machine learning products, she completed a PhD and a Postdoctoral Fellowship in Computational Neuroscience at the University of California, San Francisco, and at the University of California, Berkeley, where she studied the neural encoding and integration of sensory signals.

Marios Michailidis is a competitive data scientist at H2O.ai, developing the next generation of machine learning products in the AutoML space. He holds a BSc in Accounting and Finance from the University of Macedonia in Greece, an MSc in Risk Management from the University of Southampton, and a PhD in machine learning from University College London (UCL), with a focus on ensemble modelling. He is the creator of KazAnova, a freeware GUI for credit scoring and data mining made entirely in Java, as well as the creator of the StackNet Meta-Modelling Framework. In his spare time he loves competing in data science challenges, and he was ranked 1st out of 500,000 members on the popular Kaggle.com data science platform.
