How to overcome AI and machine learning adoption barriers – Gigabit Magazine

Matt Newton, Senior Portfolio Marketing Manager at AVEVA, on how to overcome adoption barriers for AI and machine learning in the manufacturing industry

There has been a considerable amount of hype around Artificial Intelligence (AI) and Machine Learning (ML) technologies in the last five or so years.

So much so that AI has become something of a buzzword, full of ideas and promise, but quite tricky to execute in practice.

As a result, the main challenge AI and ML now run into is a healthy dose of scepticism.

For example, we've seen several large companies adopt these capabilities, often announcing they intend to revolutionize operations and output with such technologies, but then failing to deliver.

Each failure knocks back the ongoing evolution and adoption of these technologies. With so many potential applications for AI and ML, it can be daunting to identify opportunities for technology adoption that can demonstrate a real, quantifiable return on investment.

Many industries have effectively reached a sticking point in their adoption of AI and ML technologies.

Typically, this has been driven by unproven start-ups taking some type of open-source technology, placing a flashy exterior around it, and then relying on a customer to act as a development partner.

Herein lies the primary problem: customers are not looking for prototype, unproven software to run their industrial operations.

Instead of offering a revolutionary digital experience, many companies continue to fuel that initial scepticism of AI and ML with poorly planned pilot projects that often leave the company stalled in "pilot purgatory": continuous feature creep and a regular rollout of new beta versions of software.

This practice of the never-ending pilot project makes customers reluctant to engage further with the innovative companies that are truly driving digital transformation in their sector with proven AI and ML technology.

A way to overcome these challenges is to demonstrate proof points to the customer: showing that AI and ML technologies are real and are exactly like we'd imagine them to be.

Naturally, some companies have better adopted AI and ML than others, but since much of this technology is so new, many are still struggling to identify when and where to apply it.

For example, many are keen to use AI to track customer interests and needs.

In fact, even greater value can be discovered when applying AI in the form of predictive asset analytics to industrial process control and manufacturing equipment.

AI and ML can provide detailed, real-time insights on machinery operations, exposing new insights that humans cannot necessarily spot, insights that can have a huge impact on a business's bottom line.

AI and ML are becoming incredibly popular in manufacturing industries, with advanced operations analysis often being driven by AI. Many are taking these technologies and applying them to their operating experiences to see where economic savings can be made.

All organisations want to save money where they can, and AI is making this possible.

These same organisations are usually keen to invest in further digital technologies. Successfully implementing an AI or ML technology can significantly reduce OPEX and further fuel the digital transformation of an overall enterprise.

Understandably, we are seeing the value of AI and ML best demonstrated in the manufacturing sector in both process and batch automation.

For example, manufacturers are using AI to work out how to optimize processes to achieve higher production yields and improve production quality. In the food and beverage sectors, AI is being used to monitor production-line oven temperatures, flagging anomalies (including moisture, stack height and color) in a continually optimised process to reach the coveted "golden batch".

The other side of this is to use predictive maintenance to monitor the behaviour of equipment and improve operational safety and asset reliability.

AI and ML are fused together to create predictive and prescriptive maintenance, where AI is used to spot anomalies in the behavior of assets and a recommended solution is prescribed to remediate potential equipment failure.

Predictive and prescriptive maintenance help reduce pressure on O&M costs, improve safety, and reduce unplanned shutdowns.
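As a minimal sketch of the anomaly-spotting half of this pattern, the Python below fits an isolation forest to hypothetical pump sensor readings; the file name, column names and threshold are invented for illustration, not taken from any vendor's product.

```python
# Minimal sketch: unsupervised anomaly detection on hypothetical pump
# sensor data. File name, columns and threshold are invented.
import pandas as pd
from sklearn.ensemble import IsolationForest

readings = pd.read_csv("pump_sensors.csv")  # hypothetical sensor log
features = readings[["vibration_mm_s", "bearing_temp_c"]]

# Fit a model of "normal" behaviour; ~1% of points assumed anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(features)
readings["anomaly"] = model.predict(features) == -1  # -1 marks outliers

# A prescriptive layer would map each flagged anomaly to a recommended
# action, e.g. "schedule bearing inspection" for high vibration.
print(readings[readings["anomaly"]].head())
```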

Together, AI, machine learning and predictive maintenance technologies are enabling new connections to be made within the production line, offering new insights and suggestions for future operations.

Now is the time for organisations to realise that this adoption and innovation is offering new clarity on the relationship between different elements of the production cycle - paving the way for new methods to create better products at both faster speeds and lower costs.


Millions of historic newspaper images get the machine learning treatment at the Library of Congress – TechCrunch

Historians interested in the way events and people were chronicled in the old days once had to sort through card catalogs for old papers, then microfiche scans, then digital listings; modern advances can index them down to each individual word and photo. A new effort from the Library of Congress has digitized and organized photos and illustrations from centuries of news using state-of-the-art machine learning.

Led by Ben Lee, a researcher from the University of Washington occupying the Library's Innovator in Residence position, the Newspaper Navigator collects and surfaces data from images from some 16 million pages of newspapers throughout American history.

Lee and his colleagues were inspired by work already being done in Chronicling America, an ongoing digitization effort for old newspapers and other such print materials. While that work used optical character recognition to scan the contents of all the papers, there was also a crowdsourced project in which people identified and outlined images for further analysis. Volunteers drew boxes around images relating to World War I, then transcribed the captions and categorized the picture.

This limited effort set the team thinking.

"I loved it because it emphasized the visual nature of the pages. Seeing the visual diversity of the content coming out of the project, I just thought it was so cool, and I wondered what it would be like to chronicle content like this from all over America," Lee told TechCrunch.

He also realized that what the volunteers had created was in fact an ideal set of training data for a machine learning system. "The question was, could we use this stuff to create an object detection model to go through every newspaper, to throw open the treasure chest?"

The answer, happily, was yes. Using the initial human-powered work of outlining images and captions as training data, they built an AI agent that could do so on its own. After the usual tweaking and optimizing, they set it loose on the full Chronicling America database of newspaper scans.
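The article doesn't include code, but a toy sketch of the general approach, fine-tuning an off-the-shelf detector on crowd-drawn boxes and captions, might look like the following; the class list and the `train_loader` of annotated pages are assumptions.

```python
# Toy sketch: fine-tune a pre-trained Faster R-CNN detector on
# crowd-annotated newspaper pages. Class list and loader are assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 8  # background + e.g. photo, illustration, map, cartoon, ...
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
for images, targets in train_loader:    # hypothetical annotated-page loader
    loss_dict = model(images, targets)  # detector returns a dict of losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```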

"It ran for 19 days nonstop, definitely the largest computing job I've ever run," said Lee. But the results are remarkable: millions of images spanning three centuries (from 1789 to 1963) and organized with metadata pulled from their own captions. The team describes their work in a paper you can read here.

Assuming the captions are at all accurate, these images, until recently only accessible by trudging through the archives date by date and document by document, can be searched by their contents, like any other corpus.

Looking for pictures of the president in 1870? No need to browse dozens of papers looking for potential hits and double-checking the contents in the caption; just search Newspaper Navigator for "president 1870". Or if you want editorial cartoons from the World War II era, you can just get all illustrations from a date range. (The team has already zipped up the photos into yearly packages and plans other collections.)
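In practice, searching such a corpus reduces to filtering the generated metadata. A hypothetical sketch, with invented file and column names:

```python
# Hypothetical sketch: one metadata row per extracted image, with the
# machine-generated caption and publication date. Names are invented.
import pandas as pd

images = pd.read_csv("navigator_metadata.csv", parse_dates=["pub_date"])
hits = images[
    images["caption"].str.contains("president", case=False, na=False)
    & (images["pub_date"].dt.year == 1870)
]
print(hits[["page_url", "caption"]].head())
```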

Here are a few examples of newspaper pages with the machine learning system's determinations overlaid on them (warning: plenty of hat ads and racism):

That's fun for a few minutes for casual browsers, but the key thing is what it opens up for researchers and other sets of documents. The team is throwing a data jam today to celebrate the release of the data set and tools, during which they hope to both discover and enable new applications.

"Hopefully it will be a great way to get people together to think of creative ways the data set can be used," said Lee. "The idea I'm really excited by from a machine learning perspective is trying to build out a user interface where people can build their own data set. Political cartoons or fashion ads, just let users define what they're interested in and train a classifier based on that."

A sample of what you might get if you asked for maps from the Civil War era.

In other words, Newspaper Navigator's AI agent could be the parent for a whole brood of more specific ones that could be used to scan and digitize other collections. That's actually the plan within the Library of Congress, where the digital collections team has been delighted by the possibilities brought up by Newspaper Navigator, and machine learning in general.

"One of the things we're interested in is how computation can expand the way we're enabling search and discovery," said Kate Zwaard. "Because we have OCR, you can find things it would have taken months or weeks to find. The Library's book collection has all these beautiful plates and illustrations. But if you want to know, like, what pictures there are of the Madonna and child, some are categorized, but others are inside books that aren't catalogued."

That could change in a hurry with an image-and-caption AI systematically poring over them.

Newspaper Navigator, the code behind it, and all the images and results from it are completely public domain, free to use or modify for any purpose. You can dive into the code at the project's GitHub.


Udacity partners with AWS to offer scholarships on machine learning for working professionals – Business Insider India

All applicants will be able to join the AWS Machine Learning Foundations Course. Applications are currently open, and enrollment for the course begins on May 19.

This course will provide an understanding of software engineering and AWS machine learning concepts, including production-level coding and practice in object-oriented programming. Students will also learn about deep learning techniques and their applications using AWS DeepComposer.

"A major reason behind the increasing uptake of such niche courses among modern-age learners has to do with the growing relevance of technology across all spheres the world over. In its wake, many high-value job roles are coming up that require a person to possess immense technical proficiency and knowledge in order to assume them. And machine learning is one of the key components of the ongoing AI revolution driving digital transformation worldwide," said Gabriel Dalporto, CEO of Udacity.

The top 325 performers in the foundations course will be awarded a scholarship to join Udacity's Machine Learning Engineer Nanodegree program. In this advanced course, students will work on ML tools from AWS, including real-time projects focused on specific machine learning skills.


The Nanodegree program scholarship will begin on August 19.



Understanding The Recognition Pattern Of AI – Forbes

Image and object recognition

Of the seven patterns of AI that represent the ways in which AI is being implemented, one of the most common is the recognition pattern. The main idea of the recognition pattern of AI is that we're using machine learning and cognitive technology to help identify and categorize unstructured data into specific classifications. This unstructured data could be images, video, text, or even quantitative data. The power of this pattern is that we're enabling machines to do the thing that our brains seem to do so easily: identify what we're perceiving in the real world around us.

The recognition pattern is notable in that it was primarily the attempts to solve image recognition challenges that brought about heightened interest in deep learning approaches to AI, and helped to kick off this latest wave of AI investment and interest. The recognition pattern, however, is broader than just image recognition. In fact, we can use machine learning to recognize and understand images, sound, handwriting, items, faces, and gestures. The objective of this pattern is to have machines recognize and understand unstructured data. This pattern of AI is such a huge component of AI solutions because of its wide variety of applications.

The difference between structured and unstructured data is that structured data is already labelled and easy to interpret, whereas unstructured data is where most entities struggle. Up to 90% of an organization's data is unstructured. It becomes necessary for businesses to be able to understand and interpret this data, and that's where AI steps in. Whereas we can use existing query technology and informatics systems to gather analytic value from structured data, it is almost impossible to use those approaches with unstructured data. This is what makes machine learning such a potent tool when applied to these classes of problems.

Machine learning has a potent ability to recognize or match patterns that are seen in data. Specifically, we use supervised machine learning approaches for this pattern. With supervised learning, we use clean, well-labeled training data to teach a computer to categorize inputs into a set number of identified classes. The algorithm is shown many data points and uses that labeled data to train a neural network to classify the data into those categories. The system is repeatedly shown images, making connections between them, with the goal of eventually getting the computer to recognize what is in an image based on its training. Of course, these recognition systems are highly dependent on having good-quality, well-labeled data that is representative of the sort of data the resulting model will be exposed to in the real world. Garbage in is garbage out with these sorts of systems.
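As a minimal illustration of that supervised loop (using scikit-learn's bundled handwritten-digit images rather than any dataset discussed here), the pattern boils down to fitting a classifier on labeled examples and checking it on held-out data:

```python
# Minimal illustration of the supervised recognition loop, using
# scikit-learn's bundled 8x8 handwritten-digit images (10 classes).
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # labeled training data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                    # "shown many data points"
print(accuracy_score(y_test, clf.predict(X_test)))  # held-out accuracy
```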

The many applications of the recognition pattern

The recognition pattern allows a machine learning system to be able to essentially look at unstructured data, categorize it, classify it, and make sense of what otherwise would just be a blob of untapped value. Applications of this pattern can be seen across a broad array of applications from medical imaging to autonomous vehicles, handwriting recognition to facial recognition, voice and speech recognition, or identifying even the most detailed things in videos and data of all types. Machine-learning enabled recognition has added significant power to security and surveillance systems, with the power to observe multiple simultaneous video streams in real time and recognize things such as delivery trucks or even people who are in a place they ought not be at a certain time of day.

The business applications of the recognition pattern are also plentiful. For example, in online retail and ecommerce industries, there is a need to identify and tag pictures for products that will be sold online. Previously, humans would have to laboriously catalog each individual image according to all its attributes, tags, and categories. Nowadays, machine learning-based recognition systems are able to quickly identify products that are not already in the catalog and apply the full range of data and metadata necessary to sell those products online without any human interaction. This is a great place for AI to step in, doing the task much faster and more efficiently than a human worker who is going to get tired or bored. These systems can also avoid human error and free workers to do things of more value.

Not only is this recognition pattern being used with images, it's also used to identify sound in speech. There are lots of apps that exist that can tell you what song is playing or even recognize the voice of somebody speaking. Another application of this recognition pattern is recognizing animal sounds. The use of automatic sound recognition is proving to be valuable in the world of conservation and wildlife study. Using machines that can recognize different animal sounds and calls can be a great way to track populations and habits and get a better all-around understanding of different species. There could even be the potential to use this in areas such as vehicle repair where the machine can listen to different sounds being made by an engine and tell the operator of the vehicle what is wrong and what needs to be fixed and how soon.

One of the most widely adopted applications of the recognition pattern of artificial intelligence is the recognition of handwriting and text. While we've had optical character recognition (OCR) technology that can map printed characters to text for decades, traditional OCR has been limited in its ability to handle arbitrary fonts and handwriting. Machine learning-enabled handwriting and text recognition is significantly better at this job: it can not only recognize text in a wide range of printed or handwritten modes, but it can also recognize the type of data being recorded. For example, if text is formatted into columns or a tabular format, the system can identify the columns or tables and translate them into the right data format for machine consumption. Likewise, these systems can identify patterns in the data, such as Social Security numbers or credit card numbers. One application of this type of technology is automatic check deposits at ATMs: customers insert their handwritten checks into the machine, which creates a deposit without a real person having to handle them.
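As a toy illustration of that last point, pattern spotting over recognized text can start with simple regular expressions; the patterns below are illustrative only, and real systems add validation such as the Luhn checksum for card numbers:

```python
# Illustrative only: naive patterns over OCR output. Real systems add
# validation (e.g. the Luhn checksum for card numbers).
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

text = "Deposit from 123-45-6789, card 4111 1111 1111 1111"
print(SSN.findall(text))   # ['123-45-6789']
print(CARD.findall(text))  # ['4111 1111 1111 1111']
```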

The recognition pattern of AI is also applied to human gestures. This is something already heavily in use by the video game industry. Players can make certain gestures or moves that then become in-game commands to move characters or perform a task. Another major application is allowing customers to virtually try on various articles of clothing and accessories. It's even being applied in the medical field by surgeons to help them perform tasks and even to train people on how to perform certain tasks before they have to perform them on a real person. Through the use of the recognition pattern, machines can even understand sign language and translate and interpret gestures as needed without human intervention.

In the medical industry, AI is being used to recognize patterns in various radiology imaging. For example, these systems are being used to recognize fractures, blockages, aneurysms, potentially cancerous formations, and even being used to help diagnose potential cases of tuberculosis or coronavirus infections. Analyst firm Cognilytica is predicting that within just a few years, machines will perform the first analysis of most radiology images with instant identification of anomalies or patterns before they go to a human radiologist for further evaluation.

The recognition pattern is also being applied to identify counterfeit products. Machine-learning based recognition systems are looking at everything from counterfeit products such as purses or sunglasses to counterfeit drugs.

The use of this pattern of AI is impacting every industry, from using images to get insurance quotes to analyzing satellite images after natural disasters to assess damage. Given the strength of machine learning in identifying patterns and applying that to recognition, it should come as little surprise that this pattern of AI will continue to see widespread adoption. In fact, in just a few years we might come to take the recognition pattern of AI for granted and not even consider it to be AI. That just goes to show the potency of this pattern of AI.


Major Companies in Machine Learning as a Service Market Struggle to Fulfil the Extraordinary Demand Intensified by COVID-19 – Jewish Life News

The latest report on the Machine Learning as a Service market provides an out-and-out analysis of the various factors that are projected to define the course of the market during the forecast period. The current trends that are expected to influence the future prospects of the market are analyzed in the report. Further, a quantitative and qualitative assessment of the various segments of the market is included in the report, along with relevant tables, figures, and graphs. The report also encompasses valuable insights pertaining to the impact of the COVID-19 pandemic on the global Machine Learning as a Service market.

The report reveals that the Machine Learning as a Service market is expected to witness a CAGR of ~XX% over the forecast period (2019-2029) and reach a value of ~US$ XX towards the end of 2029. The regulatory framework, R&D activities, and technological advancements relevant to the Machine Learning as a Service market are enclosed in the report.


The market is segregated into different segments to provide a granular analysis of the Machine Learning as a Service market. The market is segmented on the basis of application, end-user, region, and more.

The market share, size, and forecasted CAGR growth of each Machine Learning as a Service market segment and sub-segment are included in the report.

The report also covers the competition landscape, which includes a competition matrix, market share analysis of major players in the global machine learning as a service market based on their 2016 revenues, and profiles of major players. The competition matrix benchmarks leading players on the basis of their capabilities and potential to grow. Factors including market position, offerings and R&D focus are attributed to a company's capabilities. Factors including top-line growth, market share, segment growth, infrastructure facilities and future outlook are attributed to a company's potential to grow. This section also identifies and includes various recent developments carried out by the leading players.

Company profiling includes a company overview, major business strategies adopted, SWOT analysis and market revenues for the years 2014 to 2016. The key players profiled in the global machine learning as a service market include IBM Corporation, Google Inc., Amazon Web Services, Microsoft Corporation, BigML Inc., FICO, Yottamine Analytics, Ersatz Labs Inc., Predictron Labs Ltd and H2O.ai. Other players include ForecastThis Inc., Hewlett Packard Enterprise, Datoin, Fuzzy.ai, and Sift Science Inc., among others.

The global machine learning as a service market is segmented as below:

By Deployment Type

By End-use Application

By Geography


Important Doubts Related to the Machine Learning as a Service Market Addressed in the Report:

Knowledgeable Insights Enclosed in the Report



Four projects receive funding from University of Alabama CyberSeed program – Alabama NewsCenter

Four promising research projects received funding from the University of Alabama CyberSeed program, part of the UA Office for Research and Economic Development.

The pilot seed-funding program promotes research across disciplines on campus while ensuring a stimulating and well-managed environment for high-quality research.

The funded projects come from four major thrusts of the UA Cyber Initiative that include cybersecurity, critical infrastructure protection, applied machine learning and artificial intelligence, and cyberinfrastructure.

"These projects are innovative in their approach to using cutting-edge solutions to tackle critical challenges," said Dr. Jeffrey Carver, professor of computer science and chair of the UA Cyber Initiative.

One project will study cybersecurity of drones and develop strategies to mitigate potential attacks. Led by Dr. Mithat Kisacikoglu, assistant professor of electrical and computer engineering, and Dr. Travis Atkison, assistant professor of computer science, the research will produce a plan for the secure design of the power electronics in drones with potential for other applications.

Another project will use machine learning to probe the nature of dark matter using existing data from NASA. The work should position the research team, led by Dr. Sergei Gleyzer, assistant professor of physics and astronomy, and Dr. Brendan Ames, assistant professor of mathematics, to analyze images expected later this year from the Vera Rubin Observatory, the world's largest digital camera.

The CyberSeed program is also funding work planning to use machine learning to accelerate discovery of candidates within a new class of alloys that can be used in real-world experiments. These new alloys, called high-entropy alloys or multi-principal component alloys, are thought to enhance mechanical performance. This project involves Drs. Lin Li and Feng Yan, assistant professors of metallurgical and materials engineering, and Dr. Jiaqi Gong, who begins as associate professor of computer science this month.

A team of researchers is involved in a project to use state-of-the-art cyberinfrastructure technology and hardware to collect, visualize, analyze and disseminate hydrological information. The research aims to produce a proof-of-concept system. The team includes Dr. Sagy Cohen, associate professor of geography; Dr. Brad Peter, a postdoctoral researcher of geography; Dr. Hamid Moradkhani, director of the UA Center for Complex Hydrosystems; Dr. Zhe Jiang, assistant professor of computer science; Dr. D. Jay Cervino, executive director of the UA Office of Information Technology; and Dr. Andrew Molthan with NASA.

The CyberSeed program came from a process that began in April 2019 with the first internal UA cybersummit to meet and define future opportunities. In July, ORED led an internal search for the chair of the Cyber Initiative, announcing Carver in August. In October, Carver led the second internal cybersummit, at which it was agreed the Cyber Initiative would define major thrusts and develop the CyberSeed program.

"While concentrating in these areas specifically, the Cyber Initiative will continue to interact with other researchers across campus to identify other promising cyber-related research areas to grow the portfolio," Carver said.

This story originally appeared on the University of Alabama's website.


Tackling climate change with machine learning: Covid-19 and the energy transition – pv magazine International

The effect the coronavirus pandemic is having on energy systems and environmental policy in Europe was discussed at a recent machine learning and climate change workshop, along with the help artificial intelligence can offer to those planning electricity access in Africa.


This year's International Conference on Learning Representations event included a workshop held by the Climate Change AI group of academics and artificial intelligence industry representatives, which considered how machine learning can help tackle climate change.

Bjarne Steffen, senior researcher at the energy politics group at ETH Zürich, shared his insights at the workshop on how Covid-19 and the accompanying economic crisis are affecting recently introduced green policies. "The crisis hit at a time when energy policies were experiencing increasing momentum towards climate action, especially in Europe," said Steffen, who added that the coronavirus pandemic has cast into doubt the implementation of such progressive policies.

The academic said there was a risk of overreacting to the public health crisis, as far as progress towards climate change goals was concerned.

Lobbying

"Many interest groups from carbon-intensive industries are pushing to remove the emissions trading system and other green policies," said Steffen. "In cases where those policies are having a serious impact on carbon-emitting industries, governments should offer temporary waivers during this temporary crisis, instead of overhauling the regulatory structure."

However, the ETH Zürich researcher said any temptation to impose environmental conditions on bail-outs for carbon-intensive industries should be resisted. "While it is tempting to push a green agenda in the relief packages, tying short-term environmental conditions to bail-outs is impractical, given the uncertainty in how long this crisis will last," he said. "It is better to include provisions that will give more control over future decisions to decarbonize industries, such as the government taking equity shares in companies."

Steffen shared with pv magazine readers an article published in Joule which can be accessed here, and which articulates his arguments about how Covid-19 could affect the energy transition.

Covid-19 in the U.K.

The electricity system in the U.K. is also being affected by Covid-19, according to Jack Kelly, founder of London-based, not-for-profit, greenhouse gas emission reduction research laboratory Open Climate Fix.

"The crisis has reduced overall electricity use in the U.K.," said Kelly. "Residential use has increased but this has not offset reductions in commercial and industrial loads."

Steve Wallace, a power system manager at British electricity system operator National Grid ESO, recently told U.K. broadcaster the BBC that electricity demand has fallen 15-20% across the U.K. The National Grid ESO blog has stated the fall-off makes managing grid functions such as voltage regulation more challenging.

Open Climate Fix's Kelly noted even events such as a nationally coordinated round of applause for key workers were followed by a dramatic surge in demand, stating: "On April 16, the National Grid saw a nearly 1 GW spike in electricity demand over 10 minutes after everyone finished clapping for healthcare workers and went about the rest of their evenings."
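Spotting such swings programmatically is straightforward once minute-level demand data is available. A hypothetical sketch, with invented file and column names, flags deviations from a rolling baseline:

```python
# Hypothetical sketch: flag sudden demand swings against a rolling
# baseline. File and column names are invented; units are megawatts.
import pandas as pd

load = pd.read_csv("gb_demand.csv", parse_dates=["time"], index_col="time")["mw"]
baseline = load.rolling("60min").median()      # recent "normal" level
spikes = load[(load - baseline).abs() > 1000]  # deviations beyond ~1 GW
print(spikes)
```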


Climate Change AI workshop panelists also discussed the impact machine learning could have on improving electricity planning in Africa. The Electricity Growth and Use in Developing Economies (e-Guide) initiative, funded by fossil fuel philanthropic organization the Rockefeller Foundation, aims to use data to improve the planning and operation of electricity systems in developing countries.

E-Guide members Nathan Williams, an assistant professor at the Rochester Institute of Technology (RIT) in New York state, and Simone Fobi, a PhD student at Columbia University in NYC, spoke about their work at the Climate Change AI workshop, which closed on Thursday. Williams emphasized the importance of demand prediction, saying: "Uncertainty around current and future electricity consumption leads to inefficient planning. The weak link for energy planning tools is the poor quality of demand data."

Fobi said: "We are trying to use machine learning to make use of lower-quality data and still be able to make strong predictions."

The market maturity of individual solar home systems and PV mini-grids in Africa means more complex electrification plan modeling is required.

Modeling

"When we are doing [electricity] access planning, we are trying to figure out where the demand will be and how much demand will exist so we can propose the right technology," added Fobi. "This makes demand estimation crucial to efficient planning."

Unlike many traditional modeling approaches, machine learning is scalable and transferable. RIT's Williams has been using data from nations such as Kenya, which are more advanced in their electrification efforts, to train machine learning models that make predictions to guide electrification efforts in countries which are not as far down the track.
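The article doesn't describe the models themselves; a hedged sketch of the transfer idea, with invented features and file names, would fit a demand model where electrification data is rich and apply it where it is scarce:

```python
# Hedged sketch of the transfer idea: fit a site-level demand model where
# data is rich, predict where it is scarce. All names are invented.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

ke = pd.read_csv("kenya_sites.csv")          # includes observed demand_kwh
new = pd.read_csv("new_country_sites.csv")   # no demand data yet

features = ["population", "nightlights", "road_density"]
model = RandomForestRegressor(random_state=0).fit(ke[features], ke["demand_kwh"])
new["predicted_demand_kwh"] = model.predict(new[features])
```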

Williams also discussed work being undertaken by e-Guide members at the Colorado School of Mines, which uses nighttime satellite imagery and machine learning to assess the reliability of grid infrastructure in India.

Rural power

Another e-Guide project, led by Jay Taneja at the University of Massachusetts, Amherst, and co-funded by the Energy and Economic Growth program administered by Oxford Policy Management, uses satellite imagery to identify productive uses of electricity in rural areas by detecting pollution signals from diesel irrigation pumps.

Though good quality data is often not readily available for Africa, Williams added, it does exist.

"We have spent years developing trusting relationships with utilities," said the RIT academic. "Once our partners realize the value proposition we can offer, they are enthusiastic about sharing their data. We can't do machine learning without high-quality data and this requires that organizations can effectively collect, organize, store and work with data. Data can transform the electricity sector but capacity building is crucial."

By Dustin Zubke

This article was amended on 06/05/20 to indicate the Energy and Economic Growth program is administered by Oxford Policy Management, rather than U.S. university Berkeley, as previously stated.


How machine learning can help media companies thrive in the 21st century – Straight.com

Many veteran journalists around the world have received a rude awakening in recent years, courtesy of Internet metrics.

Thanks to Google Analytics, Chartbeat, and other measuring tools, they've learned which articles and subjects resonate with online readers and which pieces fall flat.

As much as I might be interested in a particular topic, the numbers will quickly tell me if the public doesn't care.

On the positive side, Internet metrics reveal areas where there is tremendous public interest but which are largely unexplored by the media.

Quoting international-affairs expert Gwynne Dyer, I once wrote an article about how terrorism is overblown in the media. Much to my surprise, that article went viral.

The same thing happened after I interviewed Delhi-based writer Arundhati Roy about the 2014 Indian election.

More recently, an article quoting Stanford University epidemiologist John Ioannidis on the COVID-19 pandemic hit a nerve.

Many people are under the impression that media outlets go after clickbait like the Kardashians or the Royal Family to attract eyeballs to their websites.

While there's some truth to that, there's also another reality. Serious articles offering alternative views can yield a tremendous amount of Internet traffic.

Last month, I was astonished to learn that my articles on Straight.com generated more than 1.1 million page views, according to Chartbeat. Not a single one dealt with Meghan or Harry.

This number was far higher than the norm, so I publicly thanked Anton Tikhomirov, the brilliant senior vice president of technology and architecture of Media Central Corporation.

At the end of February, Media Central closed a deal with the McLeod family to buy the Georgia Straight. The Ontario-based company also owns NOW Magazine in Toronto and the CannCentral.com online publication about cannabis and psychedelics.

This morning, thanks to a Media Central news release, I learned more about the role that Anton is playing in making the Georgia Straight and NOW more resilient in the Internet age.

Using artificial intelligence, Anton and his team have expanded the digital advertising inventory across all of the company's properties "to monetize its growing audience of 6.5 million influential consumers through technology".

Here at the Straight, ad impressions have risen 25 percent over the past two months.

Ad impressions are up a stunning 405 percent in that same period at NOW. Keep in mind that all of this has occurred during a pandemic.

As a result, Media Central's overall programmatic ad revenue jumped by 389 percent in April.

"Our digital advertising revenues are projected to dramatically surpass our legacy ad model as we move forward with our tech-heavy strategy," Anton says in the news release. "We are leveraging the latest technology to optimize bottom line growth, while ensuring our readers have the best possible experience.

"Programmatic ads are successful because they use machine learning to ensure consumer demand ad placements, driven by data, in real time."

Yes folks, computers are purchasing advertising from other computers.

When I started working at the Georgia Straight in the 1990s, nobody ever used terms like "machine learning" and "artificial intelligence".

Only in recent years has it dawned on me that machine learning could be a salvation for media companies in a world increasingly dominated by Facebook, Google, Apple, Amazon, and Alibaba.

We're still not at a point where the robots can do my job, and for that, I'm grateful. But technology has gotten very good at letting me know when I'm striking out or whacking the ball over the fence. It's also a revenue generator.

Long gone are those days when media outlets simply operated on hunches to survive.


Machine Learning Engineer: Challenges and Changes Facing the Profession – Dice Insights

Last year, the fastest-growing job title in the world was that of the machine learning (ML) engineer, and this looks set to continue for the foreseeable future. According to Indeed, the average base salary of an ML engineer in the US is $146,085, and the number of machine learning engineer openings grew by 344% between 2015 and 2018. Machine learning engineers dominate job postings around artificial intelligence (AI), with 94% of job advertisements that contain AI or ML terminology targeting machine learning engineers specifically.

This demonstrates that organizations understand how profound an effect machine learning promises to have on businesses and society. AI and ML are predicted to drive a Fourth Industrial Revolution that will see vast improvements in global productivity and open up new avenues for innovation; by 2030, it's predicted that the global economy will be $15.7 trillion richer solely because of developments from these technologies.

The scale of demand for machine learning engineers is also unsurprising given how complex the role is. The goal of machine learning engineers is to deploy and manage machine learning models, which process and learn from the patterns and structures in vast quantities of data, into applications running in production, to unlock real business value while ensuring compliance with corporate governance standards.

To do this, machine learning engineers have to sit at the intersection of three complex disciplines. The first discipline is data science, which is where the theoretical models that inform machine learning are created; the second discipline is DevOps, which focuses on the infrastructure and processes for scaling the operationalization of applications; and the third is software engineering, which is needed to make scalable and reliable code to run machine learning programs.

It's the fact that machine learning engineers have to be at ease in the language of data science, software engineering, and DevOps that makes them so scarce, and their value to organizations so great. A machine learning engineer has to have a deep skill-set: they must know multiple programming languages, have a very strong grasp of mathematics, and be able to understand and apply theoretical topics in computer science and statistics. They have to be comfortable with taking state-of-the-art models, which may only work in a specialized environment, and converting them into robust and scalable systems that are fit for a business environment.

As a burgeoning occupation, the role of a machine learning engineer is constantly evolving. The tools and capabilities that these engineers have in 2020 are radically different from those they had available in 2015, and this is set to continue to evolve as the specialism matures. One of the best ways to understand what the role of a machine learning engineer means to an organization is to look at the challenges they face in practice, and how those challenges evolve over time.

Four major challenges that every machine learning engineer has to deal with are data provenance, good data, reproducibility, and model monitoring.

Across a model's development and deployment lifecycle, there's interaction between a variety of systems and teams. This results in a highly complex chain of data from a variety of sources. At the same time, there is greater demand than ever for data to be audited and for a clear lineage of its organizational uses. This is increasingly a priority for regulators, with financial regulators now demanding that all machine learning data be stored for seven years for auditing purposes.

This not only makes the data and metadata used in models more complex; it also makes the interactions between the constituent pieces of data far more complex. Machine learning engineers therefore need to put the right infrastructure in place to ensure the right data and metadata are accessible, all while making sure they are properly organized.
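What that infrastructure looks like varies by organization; one minimal, assumed pattern is an append-only lineage log that records the source, transformation and timestamp of every dataset a model touches, so auditors can later trace which data fed which model version:

```python
# Assumed pattern: an append-only lineage log recording where each dataset
# came from, how it was transformed, and when, for later auditing.
import datetime
import json
import uuid

def record_lineage(source: str, transform: str, output_path: str) -> dict:
    entry = {
        "id": str(uuid.uuid4()),
        "source": source,
        "transform": transform,
        "output": output_path,
        "recorded_at": datetime.datetime.utcnow().isoformat(),
    }
    with open("lineage_log.jsonl", "a") as log:  # append-only audit trail
        log.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical paths, for illustration only.
record_lineage("s3://raw/trades.csv", "dedupe+normalize", "s3://clean/trades.parquet")
```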


In 2016, it was estimated that the US alone lost $3.1 trillion to bad data: data that's improperly formatted, duplicated, or incomplete. People and businesses across all sectors lose time and money because of this, but in a job that requires building and running accurate models reliant on input data, these issues can seriously jeopardize projects.

IBM estimates that around 80 percent of a data scientist's time is spent finding, cleaning up, and organizing the data they put into their models. Over time, however, increasingly sophisticated error and anomaly detection programs will likely be used to comb through datasets and screen out information that is incomplete or inaccurate.

This means that, as machine learning capabilities continue to develop, we'll see machine learning engineers gain more tools to clean up the information their programs use, and thus be able to spend more time on putting together the ML programs themselves.
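A hedged sketch of that kind of screening, with invented file and column names: drop duplicates, reject incomplete rows, and set aside extreme outliers for review.

```python
# Hedged sketch of automated data screening. Names are invented.
import pandas as pd

df = pd.read_csv("sensor_readings.csv")
df = df.drop_duplicates()                      # duplicated records
df = df.dropna(subset=["timestamp", "value"])  # incomplete rows

lo, hi = df["value"].quantile(0.01), df["value"].quantile(0.99)
ok = df["value"].between(lo, hi)
clean, suspect = df[ok], df[~ok]               # review the suspects
```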

Reproducibility is often defined as the ability to keep a snapshot of the state of a specific machine learning model and to reproduce the same experiment with the exact same results regardless of time and location. This involves a great level of complexity, given that machine learning requires reproducibility of three components: 1) code, 2) artifacts, and 3) data. If one of these changes, then the result will change.

To add to this complexity, it's also necessary to keep reproducibility of entire pipelines that may consist of two or more of these atomic steps, which introduces an exponential level of complexity. For machine learning, reproducibility is important because it lets engineers and data scientists know that the results of a model can be relied upon when it is deployed live: a model run today will give the same results as the same model run in two years.
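One minimal, assumed approach to snapshotting those three components is to pin the code version, hash the training data, and fix the random seeds, so a run can be replayed later with identical results:

```python
# Assumed approach: pin the code version, hash the data, fix the seeds.
import hashlib
import json
import random

import numpy as np

def snapshot(code_version: str, data_path: str, seed: int) -> dict:
    with open(data_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()  # identifies the data
    random.seed(seed)                                  # fix nondeterminism
    np.random.seed(seed)
    return {"code": code_version, "data_sha256": digest, "seed": seed}

# Hypothetical values, for illustration only.
print(json.dumps(snapshot("git:abc1234", "train.csv", seed=42)))
```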

Designing infrastructure for machine learning that is reproducible is a huge challenge. It will continue to be a thorn in the side of machine learning engineers for many years to come. One thing that may make this easier in coming years is the rise of universally accepted frameworks for machine learning test environments, which will provide a consistent barometer for engineers to measure their efforts against.

It's easy to forget that the lifecycle of a machine learning model only begins when it's deployed to production. Consequently, a machine learning engineer not only needs to do the work of coding, testing, and deploying a model; they'll also have to develop the right tools to monitor it.

The production environment of a model can often throw up scenarios the machine learning engineer didn't anticipate when creating it. Without monitoring and intervention after deployment, it's likely that a model can end up being rendered dysfunctional or producing skewed results by unexpected data. Without accurate monitoring, results can slowly drift away from what is expected as input data becomes misaligned with the data the model was trained with, producing less and less effective or logical results.
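A common, simple way to watch for this kind of input drift (not necessarily what any particular team uses) is to compare the distribution of live inputs against the training data, for example with a two-sample Kolmogorov-Smirnov test:

```python
# Simple drift check: compare live input distribution to training data
# with a two-sample Kolmogorov-Smirnov test. Synthetic data stands in
# for real features here.
import numpy as np
from scipy.stats import ks_2samp

train_feature = np.random.normal(0.0, 1.0, 10_000)  # training distribution
live_feature = np.random.normal(0.4, 1.0, 1_000)    # shifted production data

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"possible input drift (KS={stat:.3f}): investigate or retrain")
```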

Adversarial attacks on models, often far more sophisticated than tweets aimed at a chatbot, are of increasing concern, and it is clear that monitoring by machine learning engineers is needed to stop a model from being rendered counterproductive by unexpected data. As more machine learning models are deployed, and as more economic output becomes dependent upon them, this challenge is only going to grow in prominence for machine learning engineers going forward.

One of the most exciting things about the role of the machine learning engineer is that it's a job that's still being defined and still faces so many open problems. That means machine learning engineers get the thrill of working in a constantly changing field that deals with cutting-edge problems.

Challenges such as data quality may be problems we can make major progress on in the coming years. Other challenges, such as monitoring, look set to become more pressing in the more immediate future. Given the constant flux of machine learning engineering as an occupation, it's of little wonder that curiosity and an innovative mindset are essential qualities for this relatively new profession.

Alex Housley is CEO of Seldon.


Applica Named a Cool Vendor by Gartner in the 2020 Cool Vendors – AiThority

Applica, a leading provider of AI-based Robotic Text Automation (RTA) solutions for enterprises, announced that it is one of five Cool Vendors named in the April 2020 Gartner report titled "Cool Vendors in Natural Language Technology."

The report states that while language-processing capabilities have been possible for several decades, a new generation of capabilities has emerged. These capabilities use methods that are informed by deep neural networks and machine learning, in addition to previous methods.

"At Applica we believe in a future where humans are liberated from repeatable tasks and moved to higher-level work. To us, this recognition by Gartner validates our commitment and passion for leveraging advances in machine learning, natural language processing, and data science to help our customers realize tangible business value from AI," said Piotr Surma, Co-founder and CEO of Applica.


Gartner notes, "Enterprises have huge volumes of structured and unstructured textual data sources, and access to many additional textual feeds and sources online. Much of this data is not used at all to enhance their position and services."


"Applica is committed to helping organizations realize there are more effective ways to manage unstructured and semi-structured data, much of which is not automated with existing tools. We look forward to onboarding more companies to Applica RTA, an unrivaled AI platform that boosts efficiencies, is easy to use, and fast to deploy," adds Surma.


