{"id":48490,"date":"2021-12-19T02:08:39","date_gmt":"2021-12-19T07:08:39","guid":{"rendered":"https:\/\/euvolution.com\/open-source-convergence\/uncategorized\/high-level-machine-learning-what-will-it-take-industryweek.php"},"modified":"2021-12-19T02:08:39","modified_gmt":"2021-12-19T07:08:39","slug":"high-level-machine-learning-what-will-it-take-industryweek","status":"publish","type":"post","link":"https:\/\/euvolution.com\/open-source-convergence\/machine-learning\/high-level-machine-learning-what-will-it-take-industryweek.php","title":{"rendered":"High-Level Machine Learning: What Will It Take? &#8211; IndustryWeek"},"content":{"rendered":"<p>Machine learning is being utilized in service businesses to run the standard, routine, repeatable parts of processes. During the recent OPEX Summer virtual conference, the daily sessions were filled with service companies presenting their approaches to using machines to run the core business processes that are executed a dozen to a hundred times a day. <\/p>\n<p>Manufacturing organizations can take a lesson from this approach. As we discussed in our earlier article, by leveraging a mixed-initiative approach that combines the best of Black Belt process expertise and machine learning systems, we can operationalize machine learning in a meaningful way and drive digital transformation into the manufacturing operation.<\/p>\n<p>Machine algorithms are good at running repeatable processes: those that do not require human judgment to accomplish. However, experts are still required to handle the edge cases: non-standard situations that require some human intelligence to interpret and resolve. Edge cases in manufacturing involve non-routine events that happen infrequently and, on the surface, do not appear to be repeatable. 
<\/p>\n<p>Some of these are extremely rare changes, such as starting new production lines, qualifying next-generation equipment, replacing outdated machinery, or recovering from catastrophic equipment failure. Other edge cases arise more frequently, such as when producing new products, on restoration from failure and maintenance activities, or when new operators are onboarded. In either case, edge cases require some human intervention to resolve, re-optimize the process and bring it back to a stable state.<\/p>\n<p>Getting machine-learning-based systems to handle edge cases is complex for several reasons:<\/p>\n<p>Providing enough data to train a machine-learning-based approach requires experts to manually capture all the actions used to manage the edge-case event and, furthermore, to link those actions to outcomes. This is problematic in manufacturing environments, where people are busy. Their value is not usually measured in data-entry tasks, but in units of output produced. Asking a person to manually input responses about an event they have been busy recovering from is not likely to produce a quality dataset.<\/p>\n<p>To overcome these challenges, we require non-intrusive but continuous capture of the actions and outcomes associated with an edge-case event. There are several intelligent products with the potential to bridge the gap, including wearable technologies as well as passive and intelligent interfaces. Google Glass is an example of the class of intelligent wearables that could be employed. However, in this case, rather than providing real-time assistance to the wearer to handle the edge case, we instead use the device to capture data, actions, and outcomes about edge cases. Similarly, we could use an interactive, passive interface similar to the contact-tracing approach adopted by Apple and Google. 
This approach enables a Bluetooth mesh network to exchange data about COVID-positive interactions without sharing private information, and it can be repurposed for the factory floor to trace and record data tags while an edge-case response is in progress.<\/p>\n<p>In addition to the non-intrusive capture of data, actions and outcomes, we also need advances in machine learning to leverage this data to train models that can start to handle edge cases. An interesting area of research in machine learning is apprenticeship learning. The idea is that the ML agent behaves like an apprentice: observing the actions taken by the expert, and learning to mimic them to accomplish the appropriate task. These ideas have primarily been explored in robotics, where human experts teach a robot agent how to take certain physical actions. <\/p>\n<p>The underlying learning algorithms use inverse reinforcement learning, in which the model estimates the objective an expert is trying to achieve by observing their actions, and then optimizes that objective when attempting a similar task. Recent applications of this approach have been shown to work in gaming environments (e.g., Atari gameplay) as well as in real-world settings such as helicopter control and animation. Adapting these approaches to the manufacturing environment would allow the ML agent to learn, by observation, the actions needed to handle edge cases. <\/p>\n<p>The current labor pinch will not abate for the remainder of this decade, or into the next. Asking workers, of whom there is an ever-dwindling pool, to take time away from recovering from an event as fast as possible in order to enter data is a losing proposition. 
As the Great Resignation continues, the pressure on manufacturers will increase, as will turnover and demands for training as people filter through organizations in search of their ideal work situation. <\/p>\n<p>As the available workforce dwindles, the machine needs to absorb more and more of the edge-case content into the machine paradigm. Through a wearable monitoring product, passive tracking and inverse-reinforcement-based learning approaches, the person can teach the machine about edge cases, which the machine can use to expand its understanding of the routine elements of edge-case response, picking out the elements that are repeatable even though edge cases don't happen every day.<\/p>\n<p>As we march into the future, there will be population shrinkage; it is already happening in many countries. The portion of that future population willing to work in manufacturing will be a subset of a subset of a dwindling population, yet our demand for products seems to be increasing. Technology tools need to be assembled in a way that bridges the gap.<\/p>\n<p>The current state of manufacturing presents several challenges to achieving the vision of machine-directed operations with the digital-aide concept at work. The economics of making the technology leap will change as the availability of cheap labor tightens. Many organizations have struggled for years to staff their operations, causing production outages and idle time, which is costly because the investment is underutilized. Additional challenges surround leaders' comfort level with technology, their ability to understand the potential for technology to solve their particular problems, and their patience as the technology approaches are assembled into a seamless integration. <\/p>\n<p>Manual data entry is a non-starter on the journey to enhancing the machine's ability to learn the edge cases. 
Active monitoring tools that provide the data without the human having to stop their work on the edge case are the solution for achieving a learning machine. The imperative for the next decade is to set up the machine to learn from humans and absorb more of the edge cases, by revealing the underlying routines and adding those routines to the library of Golden Runs.<\/p>\n<p>Deepak Turaga is senior vice president of data science at Oden Technologies, an industrial IoT company focused on using AI to monitor, optimize and control manufacturing processes. He has a background in academic and industry research specializing in using machine-learning-based tools to extract insights from streaming and real-time data. He is also an adjunct professor at Columbia University, and teaches a course on this topic every spring.<\/p>\n<p>James Wells is principal consultant at Quality in Practice, a consulting and training practice specializing in continuous improvement programs, and specializes in quality fundamentals, including the application of digital solutions to common manufacturing challenges. He has led quality and continuous improvement organizations for over 20 years at various manufacturing companies. Wells is a certified master Black Belt and certified lean specialist.<\/p>\n<p>View original post here:<br \/>\n<a target=\"_blank\" href=\"https:\/\/www.industryweek.com\/technology-and-iiot\/automation\/article\/21183619\/highlevel-machine-learning-what-will-it-take\" title=\"High-Level Machine Learning: What Will It Take? - IndustryWeek\" rel=\"noopener\">High-Level Machine Learning: What Will It Take? - IndustryWeek<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Machine learning is being utilized in service businesses to run standard, routine, repeatable parts of processes. 
During the recent OPEX Summer virtual conference, the daily sessions were filled with service companies presenting their approach to using machines to run the core business processes that are executed a dozen to a hundred times a day. Manufacturing organizations can take a lesson from this approach. <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[27373],"tags":[],"class_list":["post-48490","post","type-post","status-publish","format-standard","hentry","category-machine-learning"],"_links":{"self":[{"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/posts\/48490"}],"collection":[{"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/comments?post=48490"}],"version-history":[{"count":0,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/posts\/48490\/revisions"}],"wp:attachment":[{"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/media?parent=48490"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/categories?post=48490"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/tags?post=48490"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}