Machine learning helps Invisalign patients find their perfect smile – CIO

The mobile computing trend requires enterprises to meet consumers' expectations for accessing information and completing tasks from a smartphone. But there's a converse to that arrangement: Mobile has also become the go-to digital platform companies use to market their goods and services.

Align Technology, which offers the Invisalign orthodontic device to straighten teeth, is embracing the trend with a mobile platform that both helps patients coordinate care with their doctors and entices new customers. The My Invisalign app includes detailed content on how the Invisalign system works, as well as machine learning (ML) technology to simulate what wearers' smiles will look like after using the medical device.

"It's a natural extension to help doctors and patients stay in touch," says Align Technology Chief Digital Officer Sreelakshmi Kolli, who joined the company as a software engineer in 2003 and has spent the past few years digitizing the customer experience and business operations. The development of My Invisalign also served as a pivot point for Kolli to migrate the company to agile and DevSecOps practices.

My Invisalign is a digital on-ramp for a company that has relied on pitches from enthusiastic dentists and pleased patients to help Invisalign find a home in the mouths of more than 8 million customers. An alternative to clunky metal braces, Invisalign comprises sheer plastic aligners that straighten patients' teeth gradually over several months. Invisalign patients swear by the device, but many consumers remain on the fence about a device with a $3,000 to $5,000 price range that is rarely covered completely by insurance.

Visit link:
Machine learning helps Invisalign patients find their perfect smile - CIO

Thanks To Renewables And Machine Learning, Google Now Forecasts The Wind – Forbes


Wind farms have traditionally made less money for the electricity they produce because they have been unable to predict how windy it will be tomorrow.

"The way a lot of power markets work is you have to schedule your assets a day ahead," said Michael Terrell, the head of energy market strategy at Google. "And you tend to get compensated higher when you do that than if you sell into the market real-time."

"Well, how do variable assets like wind schedule a day ahead when you don't know the wind is going to blow?" Terrell asked. "And how can you actually reserve your place in line?"

"We're not getting the full benefit and the full value of that power."

Here's how: Google and the Google-owned artificial intelligence firm DeepMind combined weather data with power data from 700 megawatts of wind energy that Google sources in the central United States. Using machine learning, they have been able to better predict wind production, better predict electricity supply and demand, and, as a result, reduce operating costs.

"What we've been doing is working in partnership with the DeepMind team to use machine learning to take the weather data that's available publicly, actually forecast what we think the wind production will be the next day, and bid that wind into the day-ahead markets," Terrell said in a recent seminar hosted by the Stanford Precourt Institute for Energy. Stanford University posted video of the seminar last week.

The result has been a 20 percent increase in revenue for wind farms, Terrell said.

The Department of Energy listed improved wind forecasting as a first priority in its 2015 Wind Vision report, largely to improve reliability: "Improve Wind Resource Characterization," the report said at the top of its list of goals. "Collect data and develop models to improve wind forecasting at multiple temporal scales (e.g., minutes, hours, days, months, years)."

Google's goal has been more sweeping: to scrub carbon entirely from its energy portfolio, which consumes as much power as two San Franciscos.

Google achieved an initial milestone by matching its annual energy use with its annual renewable-energy procurement, Terrell said. But the company has not been carbon-free in every location at every hour, which is now its new goal: what Terrell calls its 24x7 carbon-free goal.

"We're really starting to turn our efforts in this direction, and we're finding that it's not something that's easy to do. It's arguably a moon shot, especially in places where the renewable resources of today are not as cost effective as they are in other places."

The scientists at London-based DeepMind have demonstrated that artificial intelligence can help by increasing the market viability of renewables at Google and beyond.

"Our hope is that this kind of machine learning approach can strengthen the business case for wind power and drive further adoption of carbon-free energy on electric grids worldwide," said DeepMind program manager Sims Witherspoon and Google software engineer Carl Elkin. In a DeepMind blog post, they outline how they boosted profits for Google's wind farms in the Southwest Power Pool, an energy market that stretches across the plains from the Canadian border to north Texas:

"Using a neural network trained on widely available weather forecasts and historical turbine data, we configured the DeepMind system to predict wind-power output 36 hours ahead of actual generation. Based on these predictions, our model recommends how to make optimal hourly delivery commitments to the power grid a full day in advance."
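The blog post does not share code, but the overall shape of the approach is straightforward to sketch. Below is a minimal illustration using synthetic weather features and turbine output, with a gradient-boosted regressor standing in for DeepMind's neural network; the feature names, units and 700 MW cap are illustrative assumptions, not details from the post.

```python
# Hedged sketch: predict wind generation ~36 hours ahead from weather-forecast features,
# so the estimate can be committed into the day-ahead market.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_hours = 5000
# Assumed forecast features: wind speed, wind direction (as sin/cos), temperature, pressure.
X = np.column_stack([
    rng.gamma(2.0, 3.0, n_hours),                 # forecast wind speed (m/s)
    np.sin(rng.uniform(0, 2 * np.pi, n_hours)),
    np.cos(rng.uniform(0, 2 * np.pi, n_hours)),
    rng.normal(15, 8, n_hours),                   # temperature (C)
    rng.normal(1013, 10, n_hours),                # pressure (hPa)
])
# Synthetic historical turbine output (MW), roughly cubic in wind speed, capped at 700 MW.
y = np.clip(0.8 * X[:, 0] ** 3, 0, 700) + rng.normal(0, 20, n_hours)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# At bid time, feed in the weather forecast valid ~36 hours ahead to get a committable estimate.
tomorrow_forecast = X_test[:24]
day_ahead_commitment_mw = model.predict(tomorrow_forecast)
print(day_ahead_commitment_mw.round(1))
```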

The DeepMind system predicts wind-power output 36 hours in advance, allowing power producers to make more lucrative advance bids to supply power to the grid.

See the rest here:
Thanks To Renewables And Machine Learning, Google Now Forecasts The Wind - Forbes

Associations with No Place to Meet Are Turning to JUNO, A Live and On-Demand Digital Platform – AiThority

JUNO is pleased to announce the only all-in-one, live, and on-demand learning platform utilizing four human motivators to engage users and maximize the value of their experience. Gone are the days of multiple platforms, contracts, and vendors to secure ongoing engagement and learning with members. JUNO was built for a post-COVID-19 reality. Greater strain and tighter budgets require a flexible solution to handle the New Normal and beyond.


JUNO facilitates full user engagement by offering tools and features that meet emerging users' expectations.

Connection: From hybrid to completely digital events, virtual meetings are the wave of the future. In fact, Microsoft Teams alone has seen 2.7 billion meeting minutes in one day, a 200 percent increase. JUNO onboards users around interests, strengths, and desired improvement areas and allows machine learning triggers to recommend peer connections, mainstage, and breakout learning opportunities.

Gamification: 60% of all start-ups gamify their user experience because gamification works! By triggering real and powerful human emotions, users generate higher levels of happiness, intrigue, and excitement, resulting in desires to engage further and stay involved longer. From profile building to polls, quizzes, and continued learning, JUNO ensures that every user action has value.


Business growth: So how will JUNO help your business grow? JUNO supports users and partners by facilitating business connections through live exhibit experiences, digital think-tank sessions, suggested collaboration partnerships, and skills-based visibility tools.

Ongoing learning: Live events must move past the transactional into the transformational. JUNO creates EQ and IQ learning pathways to engage users on all levels. From certification and badging to goal setting and performance commitments, JUNO offers a diverse set of actions for users to personally develop.

"In a time in which what got you here won't get you there, JUNO delivers the 'get you there' solution," said former PCMA CEO Deborah Sexton.



See original here:
Associations with No Place to Meet Are Turning to JUNO, A Live and On-Demand Digital Platform - AiThority

Machine Learning Market 2020 | Analyzing The COVID-19 Impact Followed By Restraints, Opportunities And Projected Developments – 3rd Watch News

Trusted Business Insights examines the scenarios for growth and recovery in the Machine Learning market, and whether there will be any lasting structural impact on the market from the unfolding crisis.

Trusted Business Insights presents an updated and latest study on the Machine Learning Market 2019-2026. The report contains market predictions related to market size, revenue, production, CAGR, consumption, gross margin, price, and other substantial factors. While emphasizing the key driving and restraining forces for this market, the report also offers a complete study of the future trends and developments of the market. The report further elaborates on the micro and macroeconomic aspects, including the socio-political landscape, that are anticipated to shape the demand of the Machine Learning market during the forecast period (2019-2029). It also examines the role of the leading market players involved in the industry, including their corporate overview, financial summary, and SWOT analysis.


Global Machine Learning Market Insights, Ongoing Trends, End-use Applications, Market Size, Growth, and Forecast to 2029 is a research report on the target market, and is in the process of completion at Trusted Business Insights. The report contains information, data, and inputs that have been verified and validated by experts in the target industry. The report presents a thorough study of annual revenues, historical data and information, and key developments and strategies by major players that offer applications in the market. Besides critical data and information, the report includes key and ongoing trends, factors driving market growth, factors that are potential restraints to market growth, opportunities that can be leveraged for potential revenue generation in untapped regions and countries, and threats or challenges. The global machine learning market is segmented on the basis of application, end user, and region. Regions are further branched into key countries, and revenue shares and growth rates for each segment, region, and key country have been provided in the final report.


Machine Learning: Overview

Machine Learning (ML) is a sub-segment of the Artificial Intelligence (AI) platform. The field studies computational learning, statistics, and the algorithmic models computers use to perform specific tasks without explicit instructions, as well as pattern recognition in AI. Essentially, it explores and analyzes the construction of statistical models and algorithms and produces forecasts from analyzed data. Applications of ML include Optical Character Recognition (OCR), e-mail filtering, detection of network intruders, learning to rank, and computer vision.

Machine learning has paved its way across several applications. In the advertising sector, ML is implemented to analyze customers' behavior, which can help in improving advertising strategies. AI-driven marketing and advertising is based on the usage of various models to automate and optimize, and to translate data into appropriate actions. In the case of banking, financial services, and insurance (BFSI), machine learning is used to manage processes such as asset management and loan approval, among others. Security, and the management and publishing of documents, are among the other applications of machine learning.

In the recent past, the scope of applications of machine learning technology has widened into certain new areas. For instance, the US Department of Defense plans to implement machine learning in combat vehicles for predictive maintenance, to determine when and where repair and maintenance are required. In the stock market, the technology is being used to make estimations and projections about the market with an accuracy level of approximately 60%.

Dynamics: Global Machine Learning Market

The machine learning market in North America is expected to account for a dominant share and is projected to continue its dominance over the 10-year forecast period. This can be attributed to increasing investments and higher adoption of machine learning technology by numerous organizations in the BFSI sector in the region. In 2019, for instance, the New York-based financial company JPMorgan Chase & Co. invested in Limeglass Ltd., a startup that provides artificial intelligence, machine learning, and Natural Language Processing (NLP) services to analyze organizational research. Limeglass Ltd. assists companies in developing the technologically advanced products required for banking and finance.

The Asia Pacific machine learning market is projected to register the highest growth rate over the 10-year forecast period. This is attributable to increasing adoption of advanced technologies, including machine learning, along with a huge talent base in countries such as China and India. In addition, emerging markets are projected to offer revenue opportunities by allowing entrance into untapped markets and access to a large consumer base that is willing to opt for AI-enabled products and services, which is further projected to drive Asia Pacific market growth. In 2018, for instance, NITI Aayog, a policy think-tank of the Government of India, announced a collaboration with the multinational technology company Google LLC to train and incubate AI-based firms and start-ups in India.

Global Machine Learning Market Segmentation:

Segmentation by Component:

Hardware, Software, Services

Segmentation by Enterprise Size:

Small and Medium Enterprises (SMEs), Large Enterprises

Segmentation by End-use Industry:

Healthcare, BFSI, Law, Retail, Advertising & Media, Automotive & Transportation, Agriculture, Manufacturing, Others



See the original post:
Machine Learning Market 2020 | Analyzing The COVID-19 Impact Followed By Restraints, Opportunities And Projected Developments - 3rd Watch News

How to overcome AI and machine learning adoption barriers – Gigabit Magazine – Technology News, Magazine and Website

Matt Newton, Senior Portfolio Marketing Manager at AVEVA, on how to overcome adoption barriers for AI and machine learning in the manufacturing industry

There has been a considerable amount of hype around Artificial Intelligence (AI) and Machine Learning (ML) technologies in the last five or so years.

So much so that AI has become somewhat of a buzzword full of ideas and promise, but something that is quite tricky to execute in practice.

At present, this means that the challenge we run into with AI and ML is a healthy dose of scepticism.

For example, we've seen several large companies adopt these capabilities, often announcing they intend to revolutionize operations and output with such technologies but then failing to deliver.

In turn, the ongoing evolution and adoption of these technologies is consequently knocked back. With so many potential applications for AI and ML it can be daunting to identify opportunities for technology adoption that can demonstrate real and quantifiable return on investment.

Many industries have effectively reached a sticking point in their adoption of AI and ML technologies.

Typically, this has been driven by unproven start-up companies delivering some type of open source technology and placing a flashy exterior around it, and then relying on a customer to act as a development partner for it.

However, this is the primary problem: customers are not looking for prototype, unproven software to run their industrial operations.

Instead of offering a revolutionary digital experience, many companies are continuing to fuel their initial scepticism of AI and ML by providing poorly planned pilot projects that often land the company in a stalled position of pilot purgatory, continuous feature creep and a regular rollout of new beta versions of software.

This practice of the never-ending pilot project is driving a reluctance among customers to engage further with innovative companies who are truly driving digital transformation in their sector with proven AI and ML technology.

A way to overcome these challenges is to demonstrate proof points to the customer. This means showing how AI and ML technologies are real and are exactly like wed imagine them to be.

Naturally, some companies have better adopted AI and ML than others, but since much of this technology is so new, many are still struggling to identify when and where to apply it.

For example, many are keen to use AI to track customer interests and needs.

In fact, even greater value can be discovered when applying AI in the form of predictive asset analytics on pieces of industrial process control and manufacturing equipment.

AI and ML can provide detailed, real-time insights on machinery operations, exposing new insights that humans cannot necessarily spot, insights that can have a huge impact on a business's bottom line.

AI and ML are becoming incredibly popular in manufacturing industries, with advanced operations analysis often being driven by AI. Many are taking these technologies and applying them to their operating experience to see where economic savings can be made.

All organisations want to save money where they can, and AI is making this possible.

These same organisations are usually keen to invest in further digital technologies. Successfully implementing an AI or ML technology can significantly reduce OPEX and further fuel the digital transformation of an overall enterprise.

Understandably, we are seeing the value of AI and ML best demonstrated in the manufacturing sector in both process and batch automation.

For example, using AI to figure out how to optimize the process to achieve higher production yields and improve production quality. In the food and beverage sectors, AI is being used to monitor production line oven temperatures, flagging anomalies - including moisture, stack height and color - in a continually optimised process to reach the coveted golden batch.

The other side of this is to use predictive maintenance to monitor the behaviour of equipment and improve operational safety and asset reliability.

A combination of AI and ML is used to create predictive and prescriptive maintenance, where AI spots anomalies in the behaviour of assets and a recommended solution is prescribed to remediate potential equipment failure.
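As a rough illustration of the anomaly-spotting half of that workflow, the sketch below fits an Isolation Forest to synthetic "healthy" sensor readings and flags a reading that deviates from them. The sensor names, values and alerting step are assumptions for illustration only, not a description of AVEVA's products.

```python
# Hedged sketch of anomaly detection for predictive maintenance on industrial sensor data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Historical readings from a healthy asset: vibration (mm/s), bearing temperature (C), current (A).
normal = np.column_stack([
    rng.normal(2.0, 0.3, 5000),
    rng.normal(65, 3, 5000),
    rng.normal(40, 2, 5000),
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

latest = np.array([[4.8, 81, 47]])                 # a suspicious new reading
if detector.predict(latest)[0] == -1:              # -1 means "anomaly"
    print("Anomaly detected: schedule inspection and prescribe a maintenance action")
```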

Predictive and Prescriptive maintenance assist with reducing pressure on O&M costs, improving safety, and reducing unplanned shutdowns.

AI, machine learning and predictive maintenance technologies are together enabling new connections to be made within the production line, offering new insights and suggestions for future operations.

Now is the time for organisations to realise that this adoption and innovation is offering new clarity on the relationship between different elements of the production cycle - paving the way for new methods to create better products at both faster speeds and lower costs.

See the article here:
How to overcome AI and machine learning adoption barriers - Gigabit Magazine - Technology News, Magazine and Website

Twitter adds former Google VP and A.I. guru Fei-Fei Li to board as it seeks to play catch up with Google and Facebook – CNBC

Twitter has appointed Stanford professor and former Google vice president Fei-Fei Li to its board as an independent director.

The social media platform said that Li's expertise in artificial intelligence (AI) will bring relevant perspectives to the board. Li's appointment may also help Twitter to attract top AI talent from other companies in Silicon Valley.

Li left her role as chief scientist of AI/ML (artificial intelligence/machine learning) at Google Cloud in October 2018 after being criticized for comments she made in relation to the controversial Project Maven initiative with the Pentagon, which saw Google AI used to identify drone targets from blurry drone video footage.

When details of the project emerged, Google employees objected, saying that they didn't want their AI technology used in military drones. Some quit in protest and around 4,000 staff signed a petition that called for "a clear policy stating that neither Google nor its contractors will ever build warfare technology."

While Li wasn't directly involved in the project, a leaked email suggested she was more concerned about what the public would make of Google's involvement in the project as opposed to the ethics of the project itself.

"This is red meat to the media to find all ways to damage Google," she wrote, according to a copy of the emailobtained by the Intercept. "You probably heardElon Muskand his comment about AI causing WW3."

"I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry. Google Cloud has been building our theme on Democratizing AI in 2017, and Diane (Greene, head of Google Cloud) and I have been talking about Humanistic AI for enterprise. I'd be super careful to protect these very positive images."

Up until that point, Li was seen very much as a rising star at Google. In the one year and 10 months she was there, she oversaw basic science AI research, all of Google Cloud's AI/ML products and engineering efforts, and a new Google AI lab in China.

While at Google she maintained strong links to Stanford and in March 2019 she launched the Stanford University Human-Centered AI Institute (HAI), which aims to advance AI research, education, policy and practice to benefit humanity.

"With unparalleled expertise in engineering, computer science and AI, Fei-Fei brings relevant perspectives to the board as Twitter continues to utilize technology to improve our service and achieve our long-term objectives," said Omid Kordestani, executive chairman of Twitter.

Twitter has been relatively slow off the mark in the AI race. It acquired British start-up Magic Pony Technologies in 2016 for up to $150 million as part of an effort to beef up its AI credentials, but its AI efforts remain fairly small compared to other firms. It doesn't have the same reputation as companies like Google and Facebook when it comes to AI and machine-learning breakthroughs.

Today the company uses an AI technique called deep learning to recommend tweets to its users and it also uses AI to identify racist content and hate speech, or content from extremist groups.

Competition for AI talent is fierce in Silicon Valley and Twitter will no doubt be hoping that Li can bring in some big names in the AI world given she is one of the most respected AI leaders in the industry.

"Twitter is an incredible example of how technology can connect the world in powerful ways and I am honored to join the board at such an important time in the company's history," said Li.

"AI and machine learning can have an enormous impact on technology and the people who use it. I look forward to leveraging my experience for Twitter as it harnesses this technology to benefit everyone who uses the service."

See original here:
Twitter adds former Google VP and A.I. guru Fei-Fei Li to board as it seeks to play catch up with Google and Facebook - CNBC

Could Machine Learning Replace the Entire Weather Forecast System? – HPCwire

Just a few months ago, a series of major new weather and climate supercomputing investments were announced, including a £1.2 billion order for the world's most powerful weather and climate supercomputer and a tripling of the U.S. operational supercomputing capacity for weather forecasting. Weather and climate modeling are among the most power-hungry use cases for supercomputers, and research and forecasting agencies often struggle to keep up with the computing needs of models that are, in many cases, simulating the atmosphere of the entire planet as granularly and as regularly as possible.

What if that all changed?

In a virtual keynote for the HPC-AI Advisory Council's 2020 Stanford Conference, Peter Dueben outlined how machine learning might (or might not) begin to augment and even, eventually, compete with heavy-duty, supercomputer-powered climate models. Dueben is the coordinator for machine learning and AI activities at the European Centre for Medium-Range Weather Forecasts (ECMWF), a UK-based intergovernmental organization that houses two supercomputers and provides 24/7 operational weather services at several timescales. ECMWF is also the home of the Integrated Forecast System (IFS), which Dueben says is "probably one of the best forecast models in the world."

Why machine learning at all?

The Earth, Dueben explained, is big. So big, in fact, that apart from being laborious, developing a representational model of the Earth's weather and climate systems brick-by-brick isn't achieving the accuracy that you might imagine. Despite the computing firepower behind weather forecasting, most models remain at a 10-kilometer resolution that doesn't represent clouds, and the chaotic atmospheric dynamics and occasionally opaque interactions further complicate model outputs.

"However, on the other side, we have a huge number of observations," Dueben said. "Just to give you an impression, ECMWF is getting hundreds of millions of observations onto the site every day." Some observations come from satellites, planes, ships, ground measurements, balloons and more. This data, collected over the last several decades, constituted hundreds of petabytes if simulations and climate modeling results were included.

"If you combine those two points, we have a very complex nonlinear system and we also have a lot of data," he said. "There's obviously lots of potential applications for machine learning in weather modeling."

Potential applications of machine learning

Machine learning applications are "really spread all over the entire workflow of weather prediction," Dueben said, breaking that workflow down into observations, data assimilation, numerical weather forecasting, and post-processing and dissemination. Across those areas, he explained, machine learning could be used for anything from weather data monitoring to learning the underlying equations of atmospheric motions.

By way of example, Dueben highlighted a handful of current, real-world applications. In one case, researchers had applied machine learning to detecting wildfires caused by lightning. Using observations for 15 variables (such as temperature, soil moisture and vegetation cover), the researchers constructed a machine learning-based decision tree to assess whether or not satellite observations included wildfires. The team achieved an accuracy of 77 percent which, Dueben said, "doesn't sound too great in principle," but was actually quite good.
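The study's variables and model details aren't reproduced in the talk summary, but a minimal sketch of a decision-tree wildfire classifier might look like the following, assuming synthetic data and placeholder features standing in for the 15 real observation variables.

```python
# Hedged sketch: a decision tree answering "does this observation contain a wildfire?"
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 15          # e.g. temperature, soil moisture, vegetation cover, ...
X = rng.normal(size=(n_samples, n_features))
# Synthetic labels: fire occurrence loosely tied to the first two variables plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_samples) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=6, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))  # the real study reported ~77%
```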

Elsewhere, another team explored the use of machine learning to correct persistent biases in forecast model results. Dueben explained that researchers were examining the use of a weak constraint machine learning algorithm (in this case, 4D-Var), which is a kind of algorithm that would be able to learn this kind of forecast error and correct it in the data assimilation process.

"We learn, basically, the bias," he said, "and then once we have learned the bias, we can correct the bias of the forecast model by just adding forcing terms to the system." Once 4D-Var was implemented on a sample of forecast model results, the biases were ameliorated. Though Dueben cautioned that the process is still fairly simplistic, a new collaboration with Nvidia is looking into more sophisticated ways of correcting those forecast errors with machine learning.
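A stripped-down version of that bias-correction idea can be sketched as follows: learn the systematic forecast error from archived forecast/analysis pairs, then subtract the predicted error from new forecasts. The linear model, array shapes and synthetic data here are assumptions for illustration, not ECMWF's implementation.

```python
# Hedged sketch: learn the forecast error (forecast minus verifying analysis) from history,
# then apply the learned correction to a new forecast as a post-hoc "forcing term".
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_cases, n_state = 2000, 50                       # archived forecasts, size of a flattened model state
forecasts = rng.normal(size=(n_cases, n_state))
true_bias = 0.3 * forecasts[:, :1] + 0.1          # synthetic, state-dependent bias
analyses = forecasts - true_bias + rng.normal(scale=0.05, size=(n_cases, n_state))

error_model = Ridge(alpha=1.0).fit(forecasts, forecasts - analyses)  # predict the error field

new_forecast = rng.normal(size=(1, n_state))
corrected = new_forecast - error_model.predict(new_forecast)
```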

Dueben also outlined applications in post-processing. Much of modern weather forecasting focuses on ensemble methods, where a model is run many times to obtain a spread of possible scenarios and, as a result, probabilities of various outcomes. "We investigate whether we can correct the ensemble spread calculated from a small number of ensemble members via deep learning," Dueben said. Once again, machine learning, when applied to a ten-member ensemble looking at temperatures in Europe, improved the results, reducing error in temperature spreads.

Can machine learning replace core functionality or even the entire forecast system?

"One of the things that we're looking into is the emulation of different parameterization schemes," Dueben said. Chief among those, at least initially, has been the radiation component of forecast models, which accounts for the fluxes of solar radiation between the ground, the clouds and the upper atmosphere. As a trial run, Dueben and his colleagues are using extensive radiation output data from a forecast model to train a neural network. "First of all, it's very, very light," Dueben said. "Second of all, it's also going to be much more portable. Once we represent radiation with a deep neural network, you can basically port it to whatever hardware you want."
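A minimal sketch of such an emulator, assuming a small fully connected network, synthetic training pairs and illustrative input/output sizes (not ECMWF's actual scheme), could look like this, including the ONNX export step that makes the result portable across hardware:

```python
# Hedged sketch: train a tiny emulator of a physics-based radiation scheme, then export to ONNX.
import torch
import torch.nn as nn

n_in, n_out = 137, 138        # assumed per-column profile inputs -> flux outputs
emulator = nn.Sequential(
    nn.Linear(n_in, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_out),
)

# Synthetic stand-in for "extensive radiation output data from a forecast model".
X = torch.randn(10_000, n_in)
Y = torch.tanh(X @ torch.randn(n_in, n_out) * 0.05)

opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                      # a real emulator would train far longer
    opt.zero_grad()
    loss = loss_fn(emulator(X), Y)
    loss.backward()
    opt.step()

# Export so the emulator can run on whatever hardware and runtime are available.
torch.onnx.export(emulator, torch.randn(1, n_in), "radiation_emulator.onnx",
                  input_names=["column"], output_names=["fluxes"])
```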

Showing a pair of output images, one from the machine learning model and one from the forecast model, Dueben pointed out that it was hard to notice significant differences and even refused to tell the audience which was which. Furthermore, he said, the model had achieved around a tenfold speedup. ("I'm quite confident that it will actually be much better than a factor of ten," Dueben said.)

Dueben and his colleagues have also scaled their tests up to more ambitious realms. They pulled hourly data on geopotential height (Z500), which is related to air pressure, and trained a deep learning model to predict future changes in Z500 across the globe using only that historical data. "For this, no physical understanding is really required," Dueben said, "and it turns out that it's actually working quite well."
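A toy version of that kind of purely data-driven forecast might look like the sketch below: a small convolutional network mapping the current Z500 field to a later one, trained only on gridded historical data. The grid size, lead time and synthetic data are assumptions for illustration.

```python
# Hedged sketch: learn to map the current geopotential-height field to a future field.
import torch
import torch.nn as nn

lat, lon = 32, 64                      # coarse global grid (assumed)
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

z500_now = torch.randn(256, 1, lat, lon)        # stand-in for decades of hourly reanalysis fields
z500_future = z500_now.roll(shifts=3, dims=3)   # synthetic "future" target for the sketch

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(z500_now), z500_future)
    loss.backward()
    opt.step()
```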

Still, Dueben forced himself to face the crucial question.

"Is this the future?" he asked. "I have to say it's probably not."

There were several reasons for this. First, Dueben said, the simulations were unstable, eventually blowing up if they were stretched too far. "Second of all," he said, "it's also unknown how to increase complexity at this stage. We only have one field here." Finally, he explained, there were only forty years of sufficiently detailed data with which to work.

Still, it wasn't all pessimism. "It's kind of unlikely that it's going to fly and basically feed operational forecasting at one point," he said. "However, having said this, there are now a number of papers coming out where people are looking into this in a much, much more complicated way than we have done, with really sophisticated convolutional networks, and they get, actually, quite good results. So who knows!"

The path forward

"The main challenge for machine learning in the community that we're facing at the moment," Dueben said, "is basically that we need to prove now that machine learning solutions can really be better than conventional tools, and we need to do this in the next couple of years."

There are, of course, many roadblocks to that goal. Forecasting models are extraordinarily complicated; iterations on deep learning models require significant HPC resources to test and validate; and metrics of comparison among models are unclear. Dueben also outlined a series of major unknowns in machine learning for weather forecasting: could our explicit knowledge of atmospheric mechanisms be used to improve a machine learning forecast? Could researchers guarantee reproducibility? Could the tools be scaled effectively to HPC? The list went on.

"Many scientists are working on these dilemmas as we speak," Dueben said, "and I'm sure we will have an enormous amount of progress in the next couple of years." Outlining a path forward, Dueben emphasized a mixture of a top-down and a bottom-up approach to link machine learning with weather and climate models. Per his diagram, this would combine neural networks based on human knowledge of earth systems with reliable benchmarks, scalability and better uncertainty quantification.

As far as where he sees machine learning for weather prediction in ten years?

"It could be that machine learning will have no long-term effect whatsoever, that it's just a wave going through," Dueben mused. "But on the other hand, it could well be that machine learning tools will actually replace almost all conventional models that we're working with."

Read the rest here:
Could Machine Learning Replace the Entire Weather Forecast System? - HPCwire

Major Companies in Machine Learning as a Service Market Struggle to Fulfil the Extraordinary Demand Intensified by COVID-19 – Jewish Life News

The latest report on the Machine Learning as a Service market provides an out-and-out analysis of the various factors that are projected to define the course of the Machine Learning as a Service market during the forecast period. The current trends that are expected to influence the future prospects of the Machine Learning as a Service market are analyzed in the report. Further, a quantitative and qualitative assessment of the various segments of the Machine Learning as a Service market is included in the report, along with relevant tables, figures, and graphs. The report also encompasses valuable insights pertaining to the impact of the COVID-19 pandemic on the global Machine Learning as a Service market.

The report reveals that the Machine Learning as a Service market is expected to witness a CAGR growth of ~XX% over the forecast period (2019-2029) and reach a value of ~US$ XX towards the end of 2019. The regulatory framework, R&D activities, and technological advancements relevant to the Machine Learning as a Service market are enclosed in the report.


The market is segregated into different segments to provide a granular analysis of the Machine Learning as a Service market. The market is segmented on the basis of application, end-user, region, and more.

The market share, size, and forecasted CAGR growth of each Machine Learning as a Service market segment and sub-segment are included in the report.

The report also covers the competition landscape, which includes a competition matrix, market share analysis of major players in the global machine learning as a service market based on their 2016 revenues, and profiles of major players. The competition matrix benchmarks leading players on the basis of their capabilities and potential to grow. Factors including market position, offerings and R&D focus are attributed to a company's capabilities. Factors including top-line growth, market share, segment growth, infrastructure facilities and future outlook are attributed to a company's potential to grow. This section also identifies and includes various recent developments carried out by the leading players.

Company profiling includes a company overview, major business strategies adopted, SWOT analysis and market revenues for the years 2014 to 2016. The key players profiled in the global machine learning as a service market include IBM Corporation, Google Inc., Amazon Web Services, Microsoft Corporation, BigML Inc., FICO, Yottamine Analytics, Ersatz Labs Inc., Predictron Labs Ltd and H2O.ai. Other players include ForecastThis Inc., Hewlett Packard Enterprise, Datoin, Fuzzy.ai, and Sift Science Inc., among others.

The global machine learning as a service market is segmented as below:

By Deployment Type

By End-use Application

By Geography


Important Doubts Related to the Machine Learning as a Service Market Addressed in the Report:

Knowledgeable Insights Enclosed in the Report


View post:
Major Companies in Machine Learning as a Service Market Struggle to Fulfil the Extraordinary Demand Intensified by COVID-19 - Jewish Life News

Developers: This new tool spots critical security bugs 97% of the time – TechRepublic

Microsoft claims a machine learning model it built for software developers can distinguish between security and non-security bugs 99% of the time.

Microsoft plans to open-source the methodology behind a machine learning algorithm that it claims can distinguish between security bugs and non-security bugs with 99% accuracy.

The company developed a machine learning model to help software developers more easily spot security issues and identify which ones need to be prioritized.

By pairing the system with human security experts, Microsoft said it was able to develop an algorithm that was not only able to correctly identify security bugs with nearly 100% accuracy, but also correctly flag critical, high priority bugs 97% of the time.


The company plans to open-source its methodology on GitHub "in the coming months".

According to Microsoft, its team of 47,000 developers generates some 30,000 bugs every month across its Azure DevOps and GitHub silos, causing headaches for security teams whose job it is to ensure critical security vulnerabilities aren't missed.

While tools that automatically flag and triage bugs are available, sometimes false positives are tagged or bugs are classified as low-impact issues when they are in fact more severe.

To remedy this, Microsoft set to work building a machine learning model capable of both classifying bugs as security or non-security issues, as well as identifying critical and non-critical bugs "with a level of accuracy that is as close as possible to that of a security expert."

This first involved feeding the model training data that had been approved by security experts, based on statistical sampling of security and non-security bugs. Once the production model had been approved, Microsoft set about programming a two-step learning model that would enable the algorithm to learn how to distinguish between security bugs and non-security bugs, and then assign labels to bugs indicating whether they were low-impact, important or critical.
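Microsoft has not yet published its methodology, but the two-step structure described above can be sketched with off-the-shelf tools: one classifier decides security versus non-security from the bug title, and a second assigns a severity label to the security bugs. The TF-IDF and logistic-regression choices, and the tiny hand-made training set, are illustrative assumptions only.

```python
# Hedged sketch of a two-step bug classifier in the spirit of the approach described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-made training set; real training data would be security-expert-approved samples.
titles = ["buffer overflow in parser", "typo in settings dialog",
          "SQL injection via search box", "button misaligned on mobile"]
is_security = [1, 0, 1, 0]
severity = ["critical", "important"]               # severity labels only for the security bugs

# Step 1: security vs. non-security.
step1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
step1.fit(titles, is_security)

# Step 2: severity, trained only on the security bugs.
security_titles = [t for t, s in zip(titles, is_security) if s]
step2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
step2.fit(security_titles, severity)

new_bug = ["heap corruption when opening crafted file"]
if step1.predict(new_bug)[0]:
    print("security bug, severity:", step2.predict(new_bug)[0])
```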

Crucially, security experts were involved with the production model through every stage of the journey, reviewing and approving data to confirm labels were correct; selecting, training and evaluating modelling techniques; and manually reviewing random samples of bugs to assess the algorithm's accuracy.

Scott Christiansen, Senior Security Program Manager at Microsoft, and Mayana Pereira, Microsoft Data and Applied Scientist, explained that the model was automatically retrained with new data so that it kept pace with Microsoft's internal production cycle.

"The data is still approved by a security expert before the model is retrained, and we continuously monitor the number of bugs generated in production," they said.

"By applying machine learning to our data, we accurately classify which work items are security bugs 99 percent of the time. The model is also 97 percent accurate at labeling critical and non-critical security bugs.

"This level of accuracy gives us confidence that we are catching more security vulnerabilities before they are exploited."


Read the rest here:
Developers: This new tool spots critical security bugs 97% of the time - TechRepublic

Microsoft Office 365: How these Azure machine-learning services will make you more productive and efficient – TechRepublic

Office can now suggest better phrases in Word or entire replies in Outlook, design your PowerPoint slides, and coach you on presenting them. Microsoft built those features with Azure Machine Learning and big models - while keeping your Office 365 data private.

The Microsoft Office clients have been getting smarter for several years: the first version of Editor arrived in Word in 2016, based on Bing's machine learning, and it's now been extended to include the promised Ideas feature with extra capabilities. More and more of the new Office features in the various Microsoft 365 subscriptions are underpinned by machine learning.

You get the basic spelling and grammar checking in any version of Word. But if you have a subscription, Word, Outlook and a new Microsoft Editor browser extension will be able to warn you if you're phrasing something badly, using gendered idioms so common that you may not notice who they exclude, hewing so closely to the way your research sources phrased something that you need to either write it in your own words or enter a citation, or just not sticking to your chosen punctuation rules.


Word can use the real-world number comparisons that Bing has had for a while to make large numbers more comprehensible. It can also translate the acronyms you use inside your organization -- and distinguish them from what someone in another industry would mean by them. It can even recognise that those few words in bold are a heading and ask if you want to switch to a heading style so they show up in the table of contents.

Outlook on iOS uses machine learning to turn the timestamp on an email to a friendlier 'half an hour ago' when you have it read out your messages. Mobile and web Outlook use machine learning and natural-language processing to suggest three quick replies for some messages, which might include scheduling a meeting.

Excel has the same natural-language queries for spreadsheets as Power BI, letting you ask questions about your data. PowerPoint Designer can automatically crop pictures, put them in the right place on the slide and suggest a layout and design; it uses machine learning for text and slide structure analysis, image categorisation, recommending content to include and ranking the layout suggestions it makes. The Presenter Coach tells you if you're slouching, talking in a monotone or staring down at your screen all the time while you're talking, using machine learning to analyse your voice and posture from your webcam.

How PowerPoint Designer uses AML (Azure Machine Learning).

Image: Microsoft

Many of these features are built using the Azure Machine Learning service, Erez Barak, partner group program manager for AI Platform Management, told TechRepublic. At the other extreme, some call the pre-built Azure Cognitive Services APIs for things like speech recognition in the presentation coach, as well as captioning PowerPoint presentations in real-time and live translation into 60-plus languages (and those APIs are themselves built using AML).

Other features are based on customising pre-trained models like Turing Neural Language Generation, a seventeen-billion parameter deep-learning language model that can answer questions, complete sentences and summarize text -- useful for suggesting alternative phrases in Editor or email replies in Outlook. "We use those models in Office after applying some transfer learning to customise them," Barak explained. "We leverage a lot of data, not directly but by the transfer learning we do; that's based on big data to give us a strong natural-language understanding base. For everything we do in Office requires that context; we try to leverage the data we have from big models -- from the Turing model especially given its size and its leadership position in the market -- in order to solve for specific Office problems."
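The Turing models themselves are not publicly available, but the transfer-learning pattern Barak describes (start from a large pre-trained language model and fine-tune it on task-specific text) can be illustrated with a small public checkpoint. Everything in the sketch below, including the GPT-2 stand-in, the toy rewrite examples and the training loop, is an assumption about the general technique, not Microsoft's pipeline.

```python
# Hedged sketch: fine-tune a small public causal language model on in-domain text,
# standing in for transfer learning on a much larger pre-trained model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A couple of toy sentence-rewrite examples stand in for task-specific training data.
texts = [
    "Rewrite: The meeting, it was moved. -> The meeting was moved.",
    "Rewrite: Please action this soonest. -> Please do this as soon as possible.",
]
batch = tokenizer(texts, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100        # ignore padding positions in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):                             # a real fine-tune would use far more data
    outputs = model(**batch, labels=labels)        # causal-LM loss on the in-domain examples
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```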

AML is a machine-learning platform for both Microsoft product teams and customers to build intelligent features that can plug into business processes. It provides automated pipelines that take large amounts of data stored in Azure Data Lake, merge and pre-process the raw data, and feed them into distributed training running in parallel across multiple VMs and GPUs. The machine-learning version of the automated deployment common in DevOps is known as MLOps. Office machine-learning models are often built using frameworks like PyTorch or TensorFlow; the PowerPoint team uses a lot of Python and Jupyter notebooks.
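For a flavour of what building against AML looks like from the outside, here is a minimal job-submission sketch using the Azure Machine Learning Python SDK (v1 style). The workspace config, compute-cluster name, experiment name and training script are assumptions for illustration; Microsoft's internal Office pipelines are not public.

```python
# Hedged sketch: submit a training script to an Azure Machine Learning compute cluster.
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig

ws = Workspace.from_config()                      # reads a config.json downloaded from the portal
env = Environment.from_conda_specification(
    name="pytorch-train", file_path="environment.yml")

run_config = ScriptRunConfig(
    source_directory="./src",
    script="train.py",                            # hypothetical training script
    compute_target="gpu-cluster",                 # assumed name of an AML compute cluster
    environment=env,
)

run = Experiment(ws, "designer-ranking").submit(run_config)
run.wait_for_completion(show_output=True)
```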

The Office data scientists experiment with multiple different models and variations; the best model then gets stored back into Azure Data Lake and downloaded into AML using the ONNX runtime (open-sourced by Microsoft and Facebook) to run in production without having to be rebuilt. "Packaging the models in the ONNX runtime, especially for PowerPoint Designer, helps us to normalise the models, which is great for MLOps; as you tie these into pipelines, the more normalised assets you have, the easier, simpler and more productive that process becomes," said Barak.
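Scoring an exported model with the ONNX Runtime follows the same pattern regardless of which framework produced it. A minimal sketch, assuming a hypothetical model.onnx file with a single input tensor of 128 features:

```python
# Hedged sketch: load an ONNX model and score it without the original training framework.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
features = np.random.rand(1, 128).astype(np.float32)   # shape must match the exported model
outputs = session.run(None, {input_name: features})    # None -> return all model outputs
print(outputs[0].shape)
```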

ONNX also helps with performance when it comes to running the models in Office, especially for Designer. "If you think about the number of inference calls or scoring calls happening, performance is key: every small percentage and sub-percentage point matters," Barak pointed out.

A tool like Designer that's suggesting background images and videos to use as content needs a lot of compute and GPU to be fast enough. Some of the Turing models are so large that they run on the FPGA-powered Brainwave hardware inside Azure because otherwise they'd be too slow for workloads like answering questions in Bing searches. Office uses the AML compute layer for training and production which, Barak said, "provides normalised access to different types of compute, different types of machines, and also provides a normalised view into the performance of those machines".

"Office's training needs are pretty much bleeding edge: think long-running, GPU-powered, high-bandwidth training jobs that could run for days, sometimes for weeks, across multiple cores, and require a high level of visibility into the end process as well as a high level of reliability," Barak explained. "We leverage a lot of high-performing GPUs for both training the base models and transfer learning." Although the size of training data varies between the scenarios, Barak estimates that fine-tuning the Turing base model with six months of data would use 30-50TB of data (on top of the data used to train the original model).

Acronyms accesses your Office 365 data, because it needs to know which acronyms your organisation uses.

Image: Mary Branscombe/TechRepublic

The data used to train Editor's rewrite suggestions includes documents written by people with dyslexia, and many of the Office AI features use anonymised usage data from Office 365 usage. Acronyms is one of the few features that specifically uses your own Office 365 data, because it needs to find out which acronyms your organisation uses, but that isn't shared with any other Office users. Microsoft also uses public data for many features rather than trying to mine that from private Office documents. The similarity checker uses Bing data, and Editor's sentence rewrite uses public data like Wikipedia as well as public news data to train on.

As the home of so many documents, Office 365 has a wealth of data, but it also has strong compliance policies and processes that Microsoft's data scientists must follow. Those policies change over time as laws change or Office gets accredited to new standards -- "think of it as a moving target of policies and commitments Office has made in the past and will continue to make," Barak suggested. "In order for us to leverage a subset of the Office data in machine learning, naturally, we adhere to all those compliance promises."


But models like those used in Presentation Designer need frequent retraining (at least every month) to deal with new data, such as which of the millions of slide designs it suggests get accepted and are retained in presentations. That data is anonymised before it's used for training, and the training is automated with AML pipelines. But it's important to score retrained models consistently with existing models so you can tell when there's an improvement, or if an experiment didn't pan out, so data scientists need repeated access to data.

"People continuously use that, so we continuously have new data around people's preferences and choices, and we want to continuously retrain. We can't have a system that needs to be adjusted over and over again, especially in the world of compliance. We need to have a system that's automatable. That's reproducible -- and frankly, easy enough for those users to use," Barak said.

"They're using AML Data Sets, which allow them to access this data while using the right policies and guard rails, so they're not creating copies of the data -- which is a key piece of keeping the compliance and trust promise we make to customers. Think of them as pointers and views into subsets of the data that data scientists want to use for machine learning."It's not just about access; it's about repeatable access, when the data scientists say 'let's bring in that bigger model, let's do some transfer learning using the data'. It's very dynamic: there's new data because there's more activity or more people [using it]. Then the big models get refreshed on a regular basis. We don't just have one version of the Turing model and then we're done with it; we have continuous versions of that model which we want to put in the hands of data scientists with an end-to-end lifecycle."

Those data sets can be shared without the risk of losing track of the data, which means other data scientists can run experiments on the same data sets. This makes it easier for them to get started developing a new machine-learning model.

Getting AML right for Microsoft product teams also helps enterprises who want to use AML for their own systems. "If we nail the likes and complexities of Office, we enable them to use machine learning in multiple business processes," Barak said. "And at the same time we learn a lot about automation and requirements around compliance that also very much applies to a lot of our third-party customers."


Read more:
Microsoft Office 365: How these Azure machine-learning services will make you more productive and efficient - TechRepublic

Machine Learning as a Service Market Overview, Top Companies, Region, Application and Global Forecast by 2026 – Latest Herald

Xeround

Global Machine Learning as a Service Market Segmentation

This market was divided into types, applications and regions. The growth of each segment provides an accurate calculation and forecast of sales by type and application in terms of volume and value for the period between 2020 and 2026. This analysis can help you develop your business by targeting niche markets. Market share data are available at global and regional levels. The regions covered by the report are North America, Europe, the Asia-Pacific region, the Middle East, and Africa and Latin America. Research analysts understand the competitive forces and provide competitive analysis for each competitor separately.


Machine Learning as a Service Market Region Coverage (Regional Production, Demand & Forecast by Countries etc.):

North America (U.S., Canada, Mexico)

Europe (Germany, U.K., France, Italy, Russia, Spain etc.)

Asia-Pacific (China, India, Japan, Southeast Asia etc.)

South America (Brazil, Argentina etc.)

Middle East & Africa (Saudi Arabia, South Africa etc.)

Some Notable Report Offerings:

-> We will give you an assessment of the extent to which the market acquires commercial characteristics, along with examples or instances of information that help your assessment.

-> We will also help you identify standard/customary terms and conditions, such as discounts, warranties, inspection, buyer financing, and acceptance, for the Machine Learning as a Service industry.

-> We will further help you in finding any price ranges, pricing issues, and determination of price fluctuation of products in Machine Learning as a Service industry.

-> Furthermore, we will help you to identify any crucial trends to predict Machine Learning as a Service market growth rate up to 2026.

-> Lastly, the analyzed report will predict the general tendency for supply and demand in the Machine Learning as a Service market.


Table of Contents:

Study Coverage: It includes study objectives, years considered for the research study, growth rate and Machine Learning as a Service market size of type and application segments, key manufacturers covered, product scope, and highlights of segmental analysis.

Executive Summary: In this section, the report focuses on analysis of macroscopic indicators, market issues, drivers, and trends, competitive landscape, CAGR of the global Machine Learning as a Service market, and global production. Under the global production chapter, the authors of the report have included market pricing and trends, global capacity, global production, and global revenue forecasts.

Machine Learning as a Service Market Size by Manufacturer: Here, the report concentrates on revenue and production shares of manufacturers for all the years of the forecast period. It also focuses on price by manufacturer and expansion plans and mergers and acquisitions of companies.

Production by Region: It shows how the revenue and production in the global market are distributed among different regions. Each regional market is extensively studied here on the basis of import and export, key players, revenue, and production.




Read the rest here:
Machine Learning as a Service Market Overview, Top Companies, Region, Application and Global Forecast by 2026 - Latest Herald

Who knows the secret of the black magic box? Boffins seek the secrets of AI learning by mapping digital neurons – The Register

Roundup OpenAI Microscope: Neural networks, often described as black boxes, are complicated; it's difficult to understand how all the neurons in the different layers interact with one another. As a result, machine learning engineers have a hard time trying to interpret their models.

OpenAI Microscope, a new project launched this week, shows that it is possible to see which groups of neurons are activated in a model when it processes an image. In other words, it's possible to see what features the neurons in the different layers are learning. For example, the tools show what parts of a neural network are looking at the wheels or the windows in an image of a car.
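OpenAI's own tooling isn't released, but the underlying idea (recording which units fire when a model processes an image) can be sketched with forward hooks on a public pretrained network. The ResNet-50 choice, the layer inspected and the input file name below are assumptions for illustration, not how Microscope is built.

```python
# Hedged sketch: record per-layer activations for one image and list the strongest channels.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.layer3.register_forward_hook(save_activation("layer3"))

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("car.jpg")).unsqueeze(0)   # hypothetical input image

with torch.no_grad():
    model(image)

acts = activations["layer3"]                              # shape [1, channels, H, W]
strongest = acts.mean(dim=(0, 2, 3)).topk(5).indices      # channels responding most strongly
print("most active layer3 channels:", strongest.tolist())
```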

There are eight different visualisations that take you through eight popular models - you can explore them all here.

At the moment, it's more of an educational resource. The Microscope tools won't help you interpret your own models because they can't be applied to custom neural networks.

"Generating the millions of images and underlying data for a Microscope visualization requires running lots of distributed jobs," OpenAI explained. "At present, our tooling for doing this isn't usable by anyone other than us and is entangled with other infrastructure."

The researchers hope that their visualisation tools might inspire people to study the connections between neurons. "We're excited to see how the community will use Microscope, and we encourage you to reuse these assets. In particular, we think it has a lot of potential in supporting the Circuits collaboration, a project to reverse engineer neural networks by analyzing individual neurons and their connections, or similar work," it concluded.

Don't stand so close to me: Current social distancing guidelines require people to stay at least six feet away from each other to prevent the spread of the novel coronavirus.

But how do you enforce this rule? Well, you can't really, but you can try. Landing AI, a Silicon Valley startup led by Andrew Ng, has built what it calls an AI-enabled social distancing detection tool.

Here's how it works: Machine learning software analyses camera footage of people walking around and translates the frames into a bird's-eye view, where each person is represented as a green dot. A calibration tool estimates how far apart these people or dots are from one another by counting the pixels between them in the images. If they're less than six feet apart, the dots turn red.

Landing AI said it built the tool to help the manufacturing and pharmaceutical industries. "For example, at a factory that produces protective equipment, technicians could integrate this software into their security camera systems to monitor the working environment with easy calibration steps," it said.

The detector could highlight people whose distance is below the minimum acceptable distance in red, and draw a line between them to emphasize this. The system will also be able to issue an alert to remind people to keep a safe distance if the protocol is violated.
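
The article includes no code, but the pixel-distance check it describes is straightforward to sketch. The snippet below is a minimal illustration rather than Landing AI's implementation; the homography file, the pixels-per-foot factor, and the list of detected foot points are all assumed placeholders from the calibration step.

```python
import itertools
import numpy as np
import cv2

# Assumed calibration artifacts: a homography mapping image coordinates to a
# bird's-eye view, and a scale relating bird's-eye pixels to real-world feet.
H = np.load("birdseye_homography.npy")   # hypothetical 3x3 matrix from calibration
PIXELS_PER_FOOT = 40.0                   # hypothetical calibration constant
MIN_DISTANCE_FT = 6.0

def flag_close_pairs(foot_points):
    """foot_points: list of (x, y) image coordinates, one per detected person."""
    pts = np.asarray(foot_points, dtype=np.float32).reshape(-1, 1, 2)
    birdseye = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    violations = []
    for i, j in itertools.combinations(range(len(birdseye)), 2):
        dist_ft = np.linalg.norm(birdseye[i] - birdseye[j]) / PIXELS_PER_FOOT
        if dist_ft < MIN_DISTANCE_FT:
            violations.append((i, j, dist_ft))   # these are the dots to turn red
    return violations
```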

"Landing AI built this prototype at the request of customers whose businesses are deemed essential during this time," a spokesperson told The Register.

"The productionization of this system is still early and we are exploring a few ways to notify people when the social distancing protocol is not followed. The methods being explored include issuing an audible alert if people pass too closely to each other on the factory floor, and a nightly report that can help managers get additional insights into their team so that they can make decisions like rearranging the workspace if needed."

You can read more about the prototype here.

Amazon improves Alexa's reading voice: Amazon has added a new speaking style for its digital assistant Alexa.

The long-form speaking style will supposedly make Alexa sound more natural when it's reading webpages or articles aloud. The feature, built from a text-to-speech AI model, introduces more natural pauses as it recites paragraphs of text or switches from one character to another in dialogues.

Unfortunately, this function is only available for customers in the US at the moment. To learn how to implement the long-form speaking style, follow the rules here.
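
For skill developers, the style is applied through SSML markup. The sketch below shows roughly what that looks like; the exact domain tag name is an assumption based on Amazon's SSML documentation, so verify it against the current docs before using it.

```python
def long_form_ssml(article_text: str) -> str:
    # Wrap the text in the long-form speaking-style domain tag
    # (tag name assumed from Amazon's SSML docs; verify before shipping).
    return (
        "<speak>"
        '<amazon:domain name="long-form">'
        f"{article_text}"
        "</amazon:domain>"
        "</speak>"
    )

print(long_form_ssml("Here is the first paragraph of the article. And here is the second."))
```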

Zoox settles with Tesla over IP use: Self-driving car startup Zoox announced it had settled its lawsuit with Tesla and agreed to pay Musk's auto biz an undisclosed amount in damages.

"Zoox acknowledges that certain of its new hires from Tesla were in possession of Tesla documents pertaining to shipping, receiving, and warehouse procedures when they joined Zoox's logistics team, and Zoox regrets the actions of those employees," according to a statement. "As part of the settlement, Zoox will also conduct enhanced confidentiality training to ensure that all Zoox employees are aware of and respect their confidentiality obligations."

The case [PDF], initially filed by Tesla's lawyers last year, accused the startup and four of its employees of stealing proprietary documents describing its warehouses and operations, and attempting to get more of its employees to join Zoox.

NeurIPS deadline extended: Here's a bit of good news for AI researchers amid all the doom and gloom of the current coronavirus pandemic: the deadline for submitting research papers to the annual NeurIPS AI conference has been extended.

Now, academics have until 27 May to submit their abstracts and 3 June to submit their finished papers. It can be hard to work during current lockdown situations as people juggle looking after children and their jobs.

"Due to continued COVID-19 disruption, we have decided to extend the NeurIPS submission deadline by just over three weeks," the program chairs announced this week.

Excerpt from:
Who knows the secret of the black magic box? Boffins seek the secrets of AI learning by mapping digital neurons - The Register

Artificial Intelligence That Can Evolve on Its Own Is Being Tested by Google Scientists – Newsweek

Computer scientists working for a high-tech division of Google are testing how machine learning algorithms can be created from scratch, then evolve naturally, based on simple math.

Experts behind Google's AutoML suite of artificial intelligence tools have now showcased fresh research which suggests the existing software could potentially be updated to "automatically discover" completely unknown algorithms while also reducing human bias during the data input process.

According to ScienceMag, the software, known as AutoML-Zero, resembles the process of evolution, with code improving every generation with little human interaction.

Machine learning tools are "trained" to find patterns in vast amounts of data while automating such processes and constantly being refined based on past experience.

But researchers say this comes with drawbacks that AutoML-Zero aims to fix. Namely, the introduction of bias.

"Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML," their team's paper states. "Innovation is also limited by having fewer options: you cannot discover what you cannot search for."

The analysis, which was published last month on arXiv, is titled "Evolving Machine Learning Algorithms From Scratch" and is credited to a team working for the Google Brain division.

"The nice thing about this kind of AI is that it can be left to its own devices without any pre-defined parameters, and is able to plug away 24/7 working on developing new algorithms," Ray Walsh, a computer expert and digital researcher at ProPrivacy, told Newsweek.

As noted by ScienceMag, AutoML-Zero is designed to create a population of 100 "candidate algorithms" by combining basic random math, then testing the results on simple tasks such as image differentiation. The best performing algorithms then "evolve" by randomly changing their code.

The results, which will be variants of the most successful algorithms, then get added to the general population, as older and less successful algorithms get left behind, and the process continues to repeat. The network grows significantly, in turn giving the system more natural algorithms to work with.
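
The keep-the-best, mutate, and replace loop described above is, at its core, a simple evolutionary search. A toy sketch of that loop follows; the instruction set, mutation operator, and fitness function are deliberately trivial placeholders and are not taken from the actual AutoML-Zero code.

```python
import random

MATH_OPS = ["add", "sub", "mul", "max", "min"]   # toy stand-ins for basic math operations

def random_algorithm(length=5):
    return [random.choice(MATH_OPS) for _ in range(length)]

def mutate(algorithm):
    # Randomly change one instruction, mirroring the random code mutations described.
    child = list(algorithm)
    child[random.randrange(len(child))] = random.choice(MATH_OPS)
    return child

def evaluate(algorithm):
    # Placeholder fitness: a real system would run the candidate on a task such as
    # telling two image classes apart and return its accuracy.
    return random.random()

population = [random_algorithm() for _ in range(100)]
for generation in range(1000):
    ranked = sorted(population, key=evaluate, reverse=True)
    survivors = ranked[:20]                                   # best performers are kept
    children = [mutate(random.choice(survivors)) for _ in range(80)]
    population = survivors + children                         # weaker algorithms fall away
```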

Haran Jackson, the chief technology officer (CTO) at Techspert, who has a PhD in Computing from the University of Cambridge, told Newsweek that AutoML tools are typically used to "identify and extract" the most useful features from datasets, and this approach is a welcome development.

"As exciting as AutoML is, it is restricted to finding top-performing algorithms out of the, admittedly large, assortment of algorithms that we already know of," he said.

"There is a sense amongst many members of the community that the most impressive feats of artificial intelligence will only be achieved with the invention of new algorithms that are fundamentally different to those that we as a species have so far devised.

"This is what makes the aforementioned paper so interesting. It presents a method by which we can automatically construct and test completely novel machine learning algorithms."

Jackson, too, said the approach taken was similar to the theory of evolution first proposed by Charles Darwin, noting how the Google team was able to induce "mutations" into the set of algorithms.

"The mutated algorithms that did a better job of solving real-world problems were kept alive, with the poorly-performing ones being discarded," he elaborated.

"This was done repeatedly, until a set of high-performing algorithms was found. One intriguing aspect of the study is that this process 'rediscovered' some of the neural network algorithms that we already know and use. It's extremely exciting to see if it can turn up any algorithms that we haven't even thought of yet, the impact of which to our daily lives may be enormous." Google has been contacted for comment.

The development of AutoML was previously praised by Alphabet's CEO Sundar Pichai, who said it had been used to improve an algorithm that could detect the spread of breast cancer to adjacent lymph nodes. "It's inspiring to see how AI is starting to bear fruit," he wrote in a 2018 blog post.

The Google Brain team members who collaborated on the paper said the concepts in the most recent research were a solid starting point, but stressed that the project is far from over.

"Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent... multiplicative interactions. These results are promising, but there is still much work to be done," the scientists' preprint paper noted.

Walsh told Newsweek: "The developers of AutoML-Zero believe they have produced a system that has the ability to output algorithms human developers may never have thought of.

"According to the developers, due to its lack of human intervention AutoML-Zero has the potential to produce algorithms that are more free from human biases. This theoretically could result in cutting-edge algorithms that businesses could rely on to improve their efficiency.

"However, it is worth bearing in mind that for the time being the AI is still proof of concept and it will be some time before it is able to output the complex kinds of algorithms currently in use. On the other hand, the research [demonstrates how] the future of AI may be algorithms produced by other machines."

View post:
Artificial Intelligence That Can Evolve on Its Own Is Being Tested by Google Scientists - Newsweek

OnDemand Webinar | Embracing Machine Learning & Intelligence to Improve Threat Hunting & Detection – BankInfoSecurity.com

Read the rest here:
OnDemand Webinar | Embracing Machine Learning & Intelligence to Improve Threat Hunting & Detection - BankInfoSecurity.com

Tesla's acquisition of DeepScale starts to pay off with new IP in machine learning – Electrek

Tesla's acquisition of machine-learning startup DeepScale is starting to pay off, with the team hired through the acquisition starting to deliver new IP for the automaker.

Late last year, it was revealed that Tesla acquired DeepScale, a Bay Area-based startup that focuses on Deep Neural Network (DNN) for self-driving vehicles, for an undisclosed amount.

They specialized in computing power-efficient deep learning systems, which is also an area of focus for Tesla, which decided to design its own computer chip to power its self-driving software.

There was speculation that Tesla acquired the small startup team in order to accelerate its machine learning development.

Now we are seeing some of that team's work, thanks to a new patent application.

Just days after Tesla acquired the startup in October 2019, the automaker applied for a new patent with three members of DeepScale listed as inventors: Matthew Cooper, Paras Jain, and Harsimran Singh Sidhu.

The patent application, called "Systems and Methods for Training Machine Models with Augmented Data," was published yesterday.

Tesla writes about it in the application:

Systems and methods for training machine models with augmented data. An example method includes identifying a set of images captured by a set of cameras while affixed to one or more image collection systems. For each image in the set of images, a training output for the image is identified. For one or more images in the set of images, an augmented image for a set of augmented images is generated. Generating an augmented image includes modifying the image with an image manipulation function that maintains camera properties of the image. The augmented training image is associated with the training output of the image. A set of parameters of the predictive computer model are trained to predict the training output based on an image training set including the images and the set of augmented images.

The system that the DeepScale team, now working under Tesla, is trying to patent here is related to training a neural net using data from several different sensors observing scenes, like the eight cameras in Tesla's Autopilot sensor array.

They write about the difficulties of such a situation in the patent application:

In typical machine learning applications, data may be augmented in various ways to avoid overfitting the model to the characteristics of the capture equipment used to obtain the training data. For example, in typical sets of images used for training computer models, the images may represent objects captured with many different capture environments having varying sensor characteristics with respect to the objects being captured. For example, such images may be captured by various sensor characteristics, such as various scales (e.g., significantly different distances within the image), with various focal lengths, by various lens types, with various pre- or post-processing, different software environments, sensor array hardware, and so forth. These sensors may also differ with respect to different extrinsic parameters, such as the position and orientation of the imaging sensors with respect to the environment as the image is captured. All of these different types of sensor characteristics can cause the captured images to present differently and variously throughout the different images in the image set and make it more difficult to properly train a computer model.

Here they summarize their solution to the problem:

One embodiment is a method for training a set of parameters of a predictive computer model. This embodiment may include: identifying a set of images captured by a set of cameras while affixed to one or more image collection systems; for each image in the set of images, identifying a training output for the image; for one or more images in the set of images, generating an augmented image for a set of augmented images by: generating an augmented image for a set of augmented images by modifying the image with an image manipulation function that maintains camera properties of the image, and associating the augmented training image with the training output of the image; training the set of parameters of the predictive computer model to predict the training output based on an image training set including the images and the set of augmented images.

An additional embodiment may include a system having one or more processors and non-transitory computer storage media storing instructions that when executed by the one or more processors, cause the processors to perform operations comprising: identifying a set of images captured by a set of cameras while affixed to one or more image collection systems; for each image in the set of images, identifying a training output for the image; for one or more images in the set of images, generating an augmented image for a set of augmented images by: generating an augmented image for a set of augmented images by modifying the image with an image manipulation function that maintains camera properties of the image, and associating the augmented training image with the training output of the image; training the set of parameters of the predictive computer model to predict the training output based on an image training set including the images and the set of augmented images.

Another embodiment may include a non-transitory computer-readable medium having instructions for execution by a processor, the instructions when executed by the processor causing the processor to: identify a set of images captured by a set of cameras while affixed to one or more image collection systems; for each image in the set of images, identify a training output for the image; for one or more images in the set of images, generate an augmented image for a set of augmented images by: generate an augmented image for a set of augmented images by modifying the image with an image manipulation function that maintains camera properties of the image, and associate the augmented training image with the training output of the image; train the computer model to learn to predict the training output based on an image training set including the images and the set of augmented images.
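
Stripped of the claim language, the pipeline boils down to: take each camera image, apply a manipulation that preserves the camera's properties, keep the original label for the augmented copy, and train on the union of real and augmented images. A rough sketch of that idea follows; the specific augmentations (brightness jitter, sensor-style noise) are illustrative choices, not ones named in the patent.

```python
import random
import numpy as np

def augment_preserving_camera(image: np.ndarray) -> np.ndarray:
    """Photometric tweaks that leave geometry untouched, so camera properties
    such as focal length and lens distortion are not disturbed."""
    out = image.astype(np.float32)
    out *= random.uniform(0.8, 1.2)               # brightness jitter
    out += np.random.normal(0.0, 2.0, out.shape)  # mild sensor-style noise
    return np.clip(out, 0, 255).astype(np.uint8)

def build_training_set(images, labels, copies_per_image=2):
    """Return real plus augmented images, with each augmented copy keeping the
    training output (label) of the image it was generated from."""
    all_images, all_labels = list(images), list(labels)
    for img, lbl in zip(images, labels):
        for _ in range(copies_per_image):
            all_images.append(augment_preserving_camera(img))
            all_labels.append(lbl)
    return all_images, all_labels
```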

As we previously reported, Tesla is going through a significant foundational rewrite in the Tesla Autopilot. As part of the rewrite, CEO Elon Musk says that the neural net is absorbing more and more of the problem.

It will also include a more in-depth labeling system.

Musk described 3D labeling as a game-changer:

"It's where the car goes into a scene with eight cameras, and kind of paint a path, and then you can label that path in 3D."

This new way to train machine learning systems with multiple cameras, like Tesla's Autopilot, with augmented data could be part of this new Autopilot update.

View original post here:
Tesla's acquisition of DeepScale starts to pay off with new IP in machine learning - Electrek

Google Engineers ‘Mutate’ AI to Make It Evolve Systems Faster Than We Can Code Them – ScienceAlert

Much of the work undertaken by artificial intelligence involves a training process known as machine learning, where AI gets better at a task such as recognising a cat or mapping a route the more it does it. Now that same technique is being used to create new AI systems, without any human intervention.

For years, engineers at Google have been working on a freakishly smart machine learning system known as the AutoML system (or automatic machine learning system), which is already capable of creating AI that outperforms anything we've made.

Now, researchers have tweaked it to incorporate concepts of Darwinian evolution and shown it can build AI programs that continue to improve upon themselves faster than they would if humans were doing the coding.

The new system is called AutoML-Zero, and although it may sound a little alarming, it could lead to the rapid development of smarter systems - for example, neural networks designed to more accurately mimic the human brain with multiple layers and weightings, something human coders have struggled with.

"It is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks," write the researchers in their pre-print paper. "We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space."

The original AutoML system is intended to make it easier for apps to leverage machine learning, and already includes plenty of automated features itself, but AutoML-Zero takes the required amount of human input way down.

Using a simple three-step process - setup, predict and learn - it can be thought of as machine learning from scratch.
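
In the paper, each candidate algorithm is literally just three component functions - Setup, Predict and Learn - whose bodies are sequences of basic math instructions operating on a small shared memory. The toy representation below is only meant to make that structure concrete; the instruction set and memory conventions are simplified stand-ins, not the paper's actual ones.

```python
import numpy as np

class Candidate:
    """A candidate 'algorithm' as three editable instruction lists acting on a
    small shared memory, in the spirit of AutoML-Zero's setup/predict/learn split."""
    def __init__(self):
        self.memory = np.zeros(8)   # toy shared memory of scalars
        self.setup = []             # e.g. [("const", 2, 0.1)] -> mem[2] = 0.1
        self.predict = []           # e.g. [("mul", 1, 0, 2)]  -> mem[1] = mem[0] * mem[2]
        self.learn = []             # instructions that nudge memory after seeing an error

    def run(self, instructions):
        for op, dst, *args in instructions:
            if op == "const":
                self.memory[dst] = args[0]
            elif op == "add":
                self.memory[dst] = self.memory[args[0]] + self.memory[args[1]]
            elif op == "mul":
                self.memory[dst] = self.memory[args[0]] * self.memory[args[1]]

    def predict_one(self, x):
        self.memory[0] = x          # convention: input lives in slot 0
        self.run(self.predict)
        return self.memory[1]       # convention: prediction lives in slot 1
```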

The system starts off with a selection of 100 algorithms made by randomly combining simple mathematical operations. A sophisticated trial-and-error process then identifies the best performers, which are retained - with some tweaks - for another round of trials. In other words, the neural network is mutating as it goes.

When new code is produced, it's tested on AI tasks - like spotting the difference between a picture of a truck and a picture of a dog - and the best-performing algorithms are then kept for future iteration. Like survival of the fittest.

And it's fast too: the researchers reckon up to 10,000 possible algorithms can be searched through per second per processor (the more computer processors available for the task, the quicker it can work).

Eventually, this should see artificial intelligence systems become more widely used, and easier to access for programmers with no AI expertise. It might even help us eradicate human bias from AI, because humans are barely involved.

Work to improve AutoML-Zero continues, with the hope that it'll eventually be able to spit out algorithms that mere human programmers would never have thought of. Right now it's only capable of producing simple AI systems, but the researchers think the complexity can be scaled up rather rapidly.

"While most people were taking baby steps, [the researchers] took a giant leap into the unknown," computer scientist Risto Miikkulainen from the University of Texas, Austin, who was not involved in the work, told Edd Gent at Science. "This is one of those papers that could launch a lot of future research."

The research paper has yet to be published in a peer-reviewed journal, but can be viewed online at arXiv.org.

The rest is here:
Google Engineers 'Mutate' AI to Make It Evolve Systems Faster Than We Can Code Them - ScienceAlert

PhD Research Fellowship in Machine Learning for Cognitive Power Management job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY – NTNU | 219138 -…

About the position

This is a researcher training position aimed at providing promising researcher recruits the opportunity of academic development in the form of a doctoral degree.

Bringing intelligence into Internet-of-Things systems is mostly constrained by the availability of energy. Devices need to be wireless and small in size to be economically feasible and need to replenish their energy buffers using energy harvesting. In addition, devices need to work autonomously, because it is unfeasible to operate them manually or change batteries; there are simply too many of them. To make the best of the energy available, IoT devices should plan wisely how they spend their energy, that is, which tasks they should perform and when. This requires the development of policies. Due to the different situations the various devices may find themselves in, it will also vary from device to device which policies are best, which suggests the use of machine learning for the autonomous, individual development of energy policies for IoT devices.

One special focus in this project is the modeling of the power supply of the IoT devices, that is, the submodule that combines energy harvesting and energy buffering. Both are processes that are highly stochastic and probabilistic and vary over time and with the age of the device, yet have a major impact on a device's ability to perform well. In addition, due to the constraints, the approach itself must be computationally feasible and not itself consume too much energy. Combining machine learning for power supplies with the application goals of the IoT device is therefore a research challenge.
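
As a rough illustration of the kind of policy learning described here - an IoT node deciding, from its current energy buffer, whether to perform a task or sleep - the following is a tiny tabular Q-learning sketch. The states, actions, energy costs and reward are invented for the example and are not part of the project description.

```python
import random

# Toy state: discretized battery level (0-9). Actions: 0 = sleep, 1 = sense & transmit.
Q = [[0.0, 0.0] for _ in range(10)]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def step(battery, action):
    harvest = random.choice([0, 1, 2])            # stochastic energy harvesting
    cost = 3 if action == 1 else 0                # the task consumes energy
    reward = 1.0 if (action == 1 and battery >= cost) else 0.0
    next_battery = max(0, min(9, battery - cost + harvest))
    return next_battery, reward

battery = 5
for _ in range(100_000):
    if random.random() < EPSILON:
        action = random.randrange(2)
    else:
        action = 0 if Q[battery][0] >= Q[battery][1] else 1
    nxt, reward = step(battery, action)
    Q[battery][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[battery][action])
    battery = nxt
```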

You will report to the Head of Department.

Duties of the position

Within this project, we will design and validate machine-learning approaches to model power supplies to know more about their current and future state, and energy budget policies that allow IoT devices to perform better and autonomously. The project is cross-disciplinary, involving electronic design, software and statistical techniques and machine learning. Depending on the skills of the candidate, different aspects may be emphasized, for instance focusing on statistical modelling of relevant effects, transfer learning and model identification, and explainability of machine learning models. Experience with electronics may be beneficial but is not strictly required.

The research will be carried out in an interdisciplinary environment of several research groups, and under guidance of three supervisors,

The research environments include

Required selection criteria

The PhD position's main objective is to qualify for work in research positions. The qualification requirement is that you have completed a master's degree or second degree (equivalent to 120 credits) with a strong academic background in computer science, statistical machine learning, applied mathematics, communication and information technology, electrical engineering, electronic engineering, or an equivalent education, with a grade of B or better in terms of NTNU's grading scale. If you do not have letter grades from previous studies, you must have an equally good academic foundation. If you are unable to meet these criteria you may be considered only if you can document that you are particularly suitable for education leading to a PhD degree.

The appointment is to be made in accordance with the regulations in force concerning State Employees and Civil Servants and national guidelines for appointment as PhD, post doctor and research assistant.

Preferred selection criteria

Personal characteristics

In the evaluation of which candidate is best qualified, emphasis will be placed on education, experience and personal suitability, in terms of the qualification requirements specified in the advertisement.

We offer

Salary and condition

PhD candidate:

PhD candidates are remunerated in code 1017, normally at a gross salary from NOK 479,600 per annum, depending on qualifications and seniority. From the salary, 2% is deducted as a contribution to the Norwegian Public Service Pension Fund.

The period of employment is 4 years including 25% of teaching assistance. Students at NTNU can also apply for this position as part of an integrated PhD program (https://www.ntnu.edu/iik/integrated-phd).

Appointment to a PhD position requires that you are admitted to the PhD programme in Information Security and Communication Technology within three months of employment, and that you participate in an organized PhD programme during the employment period.

The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who by assessment of the application and attachment are seen to conflict with the criteria in the latter law will be prohibited from recruitment to NTNU. After the appointment you must assume that there may be changes in the area of work.

It is a prerequisite that you can be present at and accessible to the institution on a daily basis.

About the application

The application and supporting documentation to be used as the basis for the assessment must be in English.

Publications and other scientific work must follow the application. Please note that applications are only evaluated based on the information available on the application deadline. You should ensure that your application shows clearly how your skills and experience meet the criteria which are set out above.

The application must contain:

Joint works will be considered. If it is difficult to identify your contribution to joint works, you must attach a brief description of your participation.

NTNU is committed to following evaluation criteria for research quality according to The San Francisco Declaration on Research Assessment - DORA.

General information

Working at NTNU

A good work environment is characterized by diversity. We encourage qualified candidates to apply, regardless of their gender, functional capacity or cultural background.

The city of Trondheim is a modern European city with a rich cultural scene. Trondheim is the innovation capital of Norway with a population of 200,000. The Norwegian welfare state, including healthcare, schools, kindergartens and overall equality, is probably the best of its kind in the world. Professional subsidized day-care for children is easily available. Furthermore, Trondheim offers great opportunities for education (including international schools) and possibilities to enjoy nature, culture and family life and has low crime rates and clean air quality.

As an employee at NTNU, you must at all times adhere to the changes that the development in the subject entails and the organizational changes that are adopted.

In accordance with the Freedom of Information Act (Offentleglova), your name, age, position and municipality may be made public even if you have requested not to have your name entered on the list of applicants.

Questions about the position can be directed to Frank Alexander Kraemer, via kraemer@ntnu.no

Please submit your application electronically via jobbnorge.no with your CV, diplomas and certificates. Applications submitted elsewhere will not be considered. A Diploma Supplement must be attached for European master's diplomas issued outside Norway.

Chinese applicants are required to provide confirmation of their master's diploma from China Credentials Verification (CHSI).

Pakistani applicants are required to provide confirmation of their master's diploma from the Higher Education Commission (HEC): https://hec.gov.pk/english/pages/home.aspx

Applicants with degrees from Cameroon, Canada, Ethiopia, Eritrea, Ghana, Nigeria, Philippines, Sudan, Uganda and USA have to send their education documents as paper copy directly from the university college/university, in addition to enclose a copy with the application.

Application deadline: 13.09.2020

NTNU - knowledge for a better world

The Norwegian University of Science and Technology (NTNU) creates knowledge for a better world and solutions that can change everyday life.

Department of Information Security and Communication Technology

Research is vital to the security of our society. We teach and conduct research in cyber security, information security, communications networks and networked services. Our areas of expertise include biometrics, cyber defence, cryptography, digital forensics, security in e-health and welfare technology, intelligent transportation systems and malware. The Department of Information Security and Communication Technology is one of seven departments in the Faculty of Information Technology and Electrical Engineering.

Deadline: 13th September 2020. Employer: NTNU - Norwegian University of Science and Technology. Municipality: Trondheim. Scope: Fulltime. Duration: Temporary. Place of service: NTNU Campus Gløshaugen.

Originally posted here:
PhD Research Fellowship in Machine Learning for Cognitive Power Management job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY - NTNU | 219138 -...

2 books to strengthen your command of python machine learning – TechTalks

This post is part of AI education, a series of posts that review and explore educational content on data science and machine learning. (In partnership with Paperspace)

Mastering machine learning is not easy, even if you're a crack programmer. I've seen many people come from a solid background of writing software in different domains (gaming, web, multimedia, etc.) thinking that adding machine learning to their roster of skills is another walk in the park. It's not. And every single one of them has been dismayed.

I see two reasons for why the challenges of machine learning are misunderstood. First, as the name suggests, machine learning is software that learns by itself as opposed to being instructed on every single rule by a developer. This is an oversimplification that many media outlets with little or no knowledge of the actual challenges of writing machine learning algorithms often use when speaking of the ML trade.

The second reason, in my opinion, is the many books and courses that promise to teach you the ins and outs of machine learning in a few hundred pages (and the ads on YouTube that promise to net you a machine learning job if you pass an online course). Now, I don't want to vilify any of those books and courses. I've reviewed several of them (and will review some more in the coming weeks), and I think they're invaluable sources for becoming a good machine learning developer.

But they're not enough. Machine learning requires both good coding and math skills and a deep understanding of various types of algorithms. If you're doing Python machine learning, you have to have in-depth knowledge of many libraries and also master the many programming and memory-management techniques of the language. And, contrary to what some people say, you can't escape the math.

And all of that can't be summed up in a few hundred pages. Rather than a single volume, the complete guide to machine learning would probably look like Donald Knuth's famous The Art of Computer Programming series.

So, what is all this tirade for? In my exploration of data science and machine learning, I'm always on the lookout for books that take a deep dive into topics that are skimmed over by the more general, all-encompassing books.

In this post, I'll look at Python for Data Analysis and Practical Statistics for Data Scientists, two books that will help deepen your command of the coding and math skills required to master Python machine learning and data science.

Python for Data Analysis, 2nd Edition, is written by Wes McKinney, the creator of pandas, one of the key libraries used in Python machine learning. Doing machine learning in Python involves loading and preprocessing data in pandas before feeding it to your models.

Most books and courses on machine learning provide an introduction to the main pandas components such as DataFrames and Series and some of the key functions such as loading data from CSV files and cleaning rows with missing data. But the power of pandas is much broader and deeper than what you see in a chapter's worth of code samples in most books.

In Python for Data Analysis, McKinney takes you through the entire functionality of pandas and manages to do so without making it read like a reference manual. There are lots of interesting examples that build on top of each other and help you understand how the different functions of pandas tie in with each other. You'll go in-depth on things such as cleaning, joining, and visualizing data sets, topics that are usually only discussed briefly in most machine learning books.
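
For readers who haven't gone beyond the basics, the kind of cleaning, joining and aggregating the book covers looks roughly like the snippet below; the file names and columns are made up purely for illustration.

```python
import pandas as pd

# Hypothetical input files, used only for this example.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
customers = pd.read_csv("customers.csv")

orders = orders.dropna(subset=["customer_id"])     # drop rows missing a key field
orders["amount"] = orders["amount"].fillna(0.0)    # or fill missing values instead

# Join the two data sets and aggregate revenue by month.
merged = orders.merge(customers, on="customer_id", how="left")
monthly = merged.groupby(merged["order_date"].dt.to_period("M"))["amount"].sum()
print(monthly.head())
```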

You'll also get to explore some very important challenges, such as memory management and code optimization, which can become a big deal when you're handling very large data sets in machine learning (which you often do).

What I also like about the book is the finesse that has gone into choosing subjects to fit in the 500 pages. While most of the book is about pandas, McKinney has taken great care to complement it with material about other important Python libraries and topics. You'll get a good overview of array-oriented programming with numpy, another important Python library often used in machine learning in concert with pandas, and some important techniques in using Jupyter Notebooks, the tool of choice for many data scientists.

All this said, don't expect Python for Data Analysis to be a very fun book. It can get boring because it just discusses working with data (which happens to be the most boring part of machine learning). There won't be any end-to-end examples where you'll get to see the result of training and using a machine learning algorithm or integrating your models in real applications.

My recommendation: You should probably pick up Python for Data Analysis after going through one of the introductory or advanced books on data science or machine learning. Having that introductory background on working with Python machine learning libraries will help you better grasp the techniques introduced in the book.

While Python for Data Analysis improves your data-processing and -manipulation coding skills, the second book well look at, Practical Statistics for Data Scientists, 2nd Edition, will be the perfect resource to deepen your understanding of the core mathematical logic behind many key algorithms and concepts that you often deal with when doing data science and machine learning.

The book starts with simple concepts such as different types of data, means and medians, standard deviations, and percentiles. Then it gradually takes you through more advanced concepts such as different types of distributions, sampling strategies, and significance testing. These are all concepts you have probably learned in math class or read about in data science and machine learning books.

But again, the key here is specialization.

On the one hand, the depth that Practical Statistics for Data Scientists brings to each of these topics is greater than you'll find in machine learning books. On the other hand, every topic is introduced along with coding examples in Python and R, which makes it more suitable than classic statistics textbooks. Moreover, the authors have done a great job of disambiguating the way different terms are used in data science and other fields. Each topic is accompanied by a box that provides all the different synonyms for popular terms.

As you go deeper into the book, you'll dive into the mathematics of machine learning algorithms such as linear and logistic regression, K-nearest neighbors, trees and forests, and K-means clustering. In each case, like the rest of the book, there's more focus on what's happening under the algorithm's hood rather than using it for applications. But the authors have again made sure the chapters don't read like classic math textbooks and the formulas and equations are accompanied by nice coding examples.
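
To give a flavor of that statistics-with-code style, here is a two-sample significance test and a small linear regression in Python; the data is synthetic and exists only for the sake of the example.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Two-sample t-test: does group B have a different mean from group A?
a = rng.normal(loc=10.0, scale=2.0, size=200)
b = rng.normal(loc=10.5, scale=2.0, size=200)
t_stat, p_value = stats.ttest_ind(a, b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Simple linear regression on synthetic data with a known slope of 3.
x = rng.uniform(0, 10, size=(300, 1))
y = 3.0 * x[:, 0] + rng.normal(scale=1.5, size=300)
model = LinearRegression().fit(x, y)
print("estimated slope:", model.coef_[0])
```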

Like Python for Data Analysis, Practical Statistics for Data Scientists can get a bit boring if you read it end to end. There are no exciting applications or a continuous process where you build your code through the chapters. But on the other hand, the book has been structured in a way that you can read any of the sections independently without the need to go through previous chapters.

My recommendation: Read Practical Statistics for Data Scientists after going through an introductory book on data science and machine learning. I definitely recommend reading the entire book once, though to make it more enjoyable, go topic by topic in between your exploration of other machine learning courses. Also keep it handy. You'll probably revisit some of the chapters from time to time.

I would definitely count Python for Data Analysis and Practical Statistics for Data Scientists as two must-reads for anyone who is on the path of learning data science and machine learning. Although they might not be as exciting as some of the more practical books, you'll appreciate the depth they add to your coding and math skills.

View post:
2 books to strengthen your command of python machine learning - TechTalks

Deep learning’s role in the evolution of machine learning – TechTarget

Machine learning had a rich history long before deep learning reached fever pitch. Researchers and vendors were using machine learning algorithms to develop a variety of models for improving statistics, recognizing speech, predicting risk and other applications.

While many of the machine learning algorithms developed over the decades are still in use today, deep learning -- a form of machine learning based on multilayered neural networks -- catalyzed a renewed interest in AI and inspired the development of better tools, processes and infrastructure for all types of machine learning.

Here, we trace the significance of deep learning in the evolution of machine learning, as interpreted by people active in the field today.

The story of machine learning starts in 1943 when neurophysiologist Warren McCulloch and mathematician Walter Pitts introduced a mathematical model of a neural network. The field gathered steam in 1956 at a summer conference on the campus of Dartmouth College. There, 10 researchers came together for six weeks to lay the ground for a new field that involved neural networks, automata theory and symbolic reasoning.

The distinguished group, many of whom would go on to make seminal contributions to this new field, gave it the name artificial intelligence to distinguish it from cybernetics, a competing area of research focused on control systems. In some ways these two fields are now starting to converge with the growth of IoT, but that is a topic for another day.

Early neural networks were not particularly useful -- nor deep. Perceptrons, the single-layered neural networks in use then, could only learn linearly separable patterns. Interest in them waned after Marvin Minsky and Seymour Papert published the book Perceptrons in 1969, highlighting the limitations of existing neural network algorithms and causing the emphasis in AI research to shift.

"There was a massive focus on symbolic systems through the '70s, perhaps because of the idea that perceptrons were limited in what they could learn," said Sanmay Das, associate professor of computer science and engineering at Washington University in St. Louis and chair of the Association for Computing Machinery's special interest group on AI.

The 1973 publication of Pattern Classification and Scene Analysis by Richard Duda and Peter Hart introduced other types of machine learning algorithms, reinforcing the shift away from neural nets. A decade later, Machine Learning: An Artificial Intelligence Approach by Ryszard S. Michalski, Jaime G. Carbonell and Tom M. Mitchell further defined machine learning as a domain driven largely by the symbolic approach.

"That catalyzed a whole field of more symbolic approaches to [machine learning] that helped frame the field. This led to many Ph.D. theses, new journals in machine learning, a new academic conference, and even helped to create new laboratories like the NASA Ames AI Research branch, where I was deputy chief in the 1990s," said Monte Zweben, CEO of Splice Machine, a scale-out SQL platform.

In the 1990s, the evolution of machine learning made a turn. Driven by the rise of the internet and increase in the availability of usable data, the field began to shift from a knowledge-driven approach to a data-driven approach, paving the way for the machine learning models that we see today.

The turn toward data-driven machine learning in the 1990s was built on research done by Geoffrey Hinton at the University of Toronto in the mid-1980s. Hinton and his team demonstrated the ability to use backpropagation to build deeper neural networks.

"This was a major breakthrough enabling new kinds of pattern recognition that were previously not feasible with neural nets," Zweben said. This added new layers to the networks and a way to strengthen or weaken connections back across many layers in the network, leading to the term deep learning.

Although possible in a lab setting, deep learning did not immediately find its way into practical applications, and progress stalled.

"Through the '90s and '00s, a joke used to be that 'neural networks are the second-best learning algorithm for any problem,'" Washington University's Das said.

Meanwhile, commercial interest in AI was starting to wane because the hype around developing an AI on par with human intelligence had gotten ahead of results, leading to an AI winter, which lasted through the 1980s. What did gain momentum was a type of machine learning using kernel methods and decision trees that enabled practical commercial applications.

Still, the field of deep learning was not completely in retreat. In addition to the ascendancy of the internet and increase in available data, another factor proved to be an accelerant for neural nets, according to Zweben: namely, distributed computing.

Machine learning requires a lot of compute. In the early days, researchers had to keep their problems small or gain access to expensive supercomputers, Zweben said. The democratization of distributed computing in the early 2000s enabled researchers to run calculations across clusters of relatively low-cost commodity computers.

"Now, it is relatively cheap and easy to experiment with hundreds of models to find the best combination of data features, parameters and algorithms," Zweben said. The industry is pushing this democratization even further with practices and associated tools for machine learning operations that bring DevOps principles to machine learning deployment, he added.

Machine learning is also only as good as the data it is trained on, and if data sets are small, it is harder for the models to infer patterns. As the data created by mobile, social media, IoT and digital customer interactions grew, it provided the training material deep learning techniques needed to mature.

By 2012, deep learning attained star status after Hinton's team won ImageNet, a popular data science challenge, for their work on classifying images using neural networks. Things really accelerated after Google subsequently demonstrated an approach to scaling up deep learning across clusters of distributed computers.

"The last decade has been the decade of neural networks, largely because of the confluence of the data and computational power necessary for good training and the adaptation of algorithms and architectures necessary to make things work," Das said.

Even when deep neural networks are not used directly, they indirectly drove -- and continue to drive -- fundamental changes in the field of machine learning, including the following:

Deep learning's predictive power has inspired data scientists to think about different ways of framing problems that come up in other types of machine learning.

"There are many problems that we didn't think of as prediction problems that people have reformulated as prediction problems -- language, vision, etc. -- and many of the gains in those tasks have been possible because of this reformulation," said Nicholas Mattei, assistant professor of computer science at Tulane University and vice chair of the Association for Computing Machinery's special interest group on AI.

In language processing, for example, a lot of the focus has moved toward predicting what comes next in the text. In computer vision as well, many problems have been reformulated so that, instead of trying to understand geometry, the algorithms are predicting labels of different parts of an image.

The power of big data and deep learning is changing how models are built. Human analysis and insights are being replaced by raw compute power.

"Now, it seems that a lot of the time we have substituted big databases, lots of GPUs, and lots and lots of machine time to replace the deep problem introspection needed to craft features for more classic machine learning methods, such as SVM [support vector machine] and Bayes," Mattei said, referring to the Bayesian networks used for modeling the probabilities between observations and outcomes.

The art of crafting a machine learning problem has been taken over by advanced algorithms and the millions of hours of CPU time baked into pretrained models so data scientists can focus on other projects or spend more time on customizing models.

Deep learning is also helping data scientists solve problems with smaller data sets and to solve problems in cases where the data has not been labeled.

"One of the most relevant developments in recent times has been the improved use of data, whether in the form of self-supervised learning, improved data augmentation, generalization of pretraining tasks or contrastive learning," said Juan José López Murphy, AI and big data tech director lead at Globant, an IT consultancy.

These techniques reduce the need for manually tagged and processed data. This is enabling researchers to build large models that can capture complex relationships representing the nature of the data and not just the relationships representing the task at hand. López Murphy is starting to see transfer learning being adopted as a baseline approach, where researchers can start with a pretrained model that only requires a small amount of customization to provide good performance on many common tasks.
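
A minimal sketch of what "start from a pretrained model and customize a little" usually looks like in practice is shown below, using a torchvision ResNet; the number of target classes is an arbitrary placeholder for whatever the downstream task needs.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # placeholder for the downstream task

# Load a network pretrained on ImageNet and freeze its feature extractor.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classification layer; this small head is all that
# gets trained on the new task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
```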

There are specific fields where deep learning provides a lot of value, in image, speech and natural language processing, for example, as well as time series forecasting.

"The broader field of machine learning is enhanced by deep learning and its ability to bring context to intelligence. Deep learning also improves [machine learning's] ability to learn nonlinear relationships and manage dimensionality with systems like autoencoders," said Luke Taylor, founder and COO at TrafficGuard, an ad fraud protection service.

For example, deep learning can find more efficient ways to auto encode the raw text of characters and words into vectors representing the similarity and differences of words, which can improve the efficiency of the machine learning algorithms used to process it. Deep learning algorithms that can recognize people in pictures make it easier to use other algorithms that find associations between people.

More recently, there have been significant jumps using deep learning to improve the use of image, text and speech processing through common interfaces. People are accustomed to speaking to virtual assistants on their smartphones and using facial recognition to unlock devices and identify friends in social media.

"This broader adoption creates more data, enables more machine learning refinement and increases the utility of machine learning even further, pushing even further adoption of this tech into people's lives," Taylor said.

Early machine learning research required expensive software licenses. But deep learning pioneers began open sourcing some of the most powerful tools, which has set a precedent for all types of machine learning.

"Earlier, machine learning algorithms were bundled and sold under a licensed tool. But, nowadays, open source libraries are available for any type of AI applications, which makes the learning curve easy," said Sachin Vyas, vice president of data, AI and automation products at LTI, an IT consultancy.

Another factor in democratizing access to machine learning tools has been the rise of Python.

"The wave of open source frameworks for deep learning cemented the prevalence of Python and its data ecosystem for research, development and even production," Globant's López Murphy said.

Many of the different commercial and free options got replaced, integrated or connected to a Python layer for widespread use. As a result, Python has become the de facto lingua franca for machine learning development.

Deep learning has also inspired the open source community to automate and simplify other aspects of the machine learning development lifecycle. "Thanks to things like graphical user interfaces and [automated machine learning], creating working machine learning models is no longer limited to Ph.D. data scientists," Carmen Fontana, IEEE member and cloud and emerging tech practice lead at Centric Consulting, said.

For machine learning to keep evolving, enterprises will need to find a balance between developing better applications and respecting privacy.

Data scientists will need to be more proactive in understanding where their data comes from and the biases that may inadvertently be baked into it, as well as develop algorithms that are transparent and interpretable. They also need to keep pace with new machine learning protocols and the different ways these can be woven together with various data sources to improve applications and decisions.

"Machine learning provides more innovative applications for end users, but unless we're choosing the right data sets and advancing deep learning protocols, machine learning will never make the transition from computing a few results to providing actual intelligence," said Justin Richie, director of data science at Nerdery, an IT consultancy.

"It will be interesting to see how this plays out in different industries and if this progress will continue even as data privacy becomes more stringent," Richie said.

Originally posted here:
Deep learning's role in the evolution of machine learning - TechTarget

Fake data is great data when it comes to machine learning – Stacey on IoT

It's been a few years since I last wrote about the idea of using synthetic data to train machine learning models. After having three recent discussions on the topic, I figured it's time to revisit the technology, especially as it seems to be gaining ground in mainstream adoption.

Back in 2018, at Microsoft Build, I saw a demonstration of a drone flying over a pipeline as it inspected it for leaks or other damage. Notably, the drone's visual inspection model was trained using both actual data and simulated data. Use of the synthetic data helped teach the machine learning model about outliers and novel conditions it wasn't able to encounter using traditional training. It also allowed Microsoft researchers to train the model more quickly and without the need to embark on as many expensive, data-gathering flights as it would have had to otherwise.

The technology is finally starting to gain ground. In April, a startup called Anyverse raised €3 million ($3.37 million) for its synthetic sensor data, while another startup, AI.Reverie, published a paper about how it used simulated data to train a model to identify planes on airport runways.

After writing that initial story, I heard very little about synthetic data until my conversation earlier this month with Dan Jeavons, chief data scientist at Shell. When I asked him about Shell's machine learning projects, using simulated data was one that he was incredibly excited about because it helps build models that can detect problems that occur only rarely.

"I think it's a really interesting way to get info on the edge cases that we're trying to solve," he said. "Even though we have a lot of data, the big problem that we have is that, actually, we often only had a very few examples of what we're looking for."

In the oil business, corrosion in factories and pipelines is a big challenge, and one that can lead to catastrophic failures. That's why companies are careful about not letting anything corrode to the point where it poses a risk. But that also means the machine learning models can't be trained on real-world examples of corrosion. So Shell uses synthetic data to help.

As Jeavons explained, Shell is also using synthetic data to try and solve the problem of people smoking at gas stations. Shell doesn't have a lot of examples because the cameras don't always catch the smokers; in other cases, they're too far away or aren't facing the camera. So the company is working hard on combining simulated synthetic data with real data to build computer vision models.

"Almost always the things we're interested in are the edge cases rather than the general norm," said Jeavons. "And it's quite easy to detect the edge [deviating] from the standard pattern, but it's quite hard to detect the specific thing that you want."

In the meantime, startup AI.Reverie endeavored to learn more about the accuracy of synthetic data. The paper it published, "RarePlanes: Synthetic Data Takes Flight," lays out how its researchers combined satellite imagery of planes parked at airports that was annotated and validated by humans with synthetic data created by machine.

When using just synthetic data, the model was only about 55 percent accurate, whereas when it only used real-world data that number jumped to 73%. But by making real-world data 10% of the training sample and using synthetic data for the rest, the model's accuracy came in at 69%.
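
That 10-percent-real blend is easy to picture as a data-loading step. The sketch below is a rough illustration of building such a mixed training set; the sample lists and sizes are placeholders, not AI.Reverie's actual pipeline.

```python
import random

def blend_training_set(real_samples, synthetic_samples, real_fraction=0.10, total=10_000):
    """Build a training set where roughly `real_fraction` of the examples are real
    annotated data and the rest are synthetic, as in the RarePlanes experiment."""
    n_real = int(total * real_fraction)          # assumes enough real samples exist
    n_synth = total - n_real
    blended = random.sample(real_samples, n_real) + random.choices(synthetic_samples, k=n_synth)
    random.shuffle(blended)
    return blended
```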

Paul Walborsky, the CEO of AI.Reverie (and the former CEO at GigaOM; in other words, my former boss), says that synthetic data is going to be a big business. Companies using such data need to account for ways that their fake data can skew the model, but if they can do that, they can achieve robust models faster and at a lower cost than if they relied on real-world data.

So even though IoT sensors are throwing off petabytes of data, it would be impossible to annotate all of it and use it for training models. And as Jeavons points out, those petabytes of data may not have the situation you actually want the computer to look for. In other words, expect the wave of synthetic and simulated data to keep on coming.

"We're convinced that, actually, this is going to be the future in terms of making things work well," said Jeavons, "both in the cloud and at the edge for some of these complex use cases."

Read the rest here:
Fake data is great data when it comes to machine learning - Stacey on IoT