MSPs are Bolstering Security Programs with Machine Learning and Automation – Channel Futures

Overcome the skills shortage and alert fatigue with advanced machine learning and automation technology.

Advanced threats, a shortage of security experts and the rise in work-from-home together form a catalyst for MSPs to enhance cybersecurity effectiveness for their customers. As MSPs seek ways to increase efficiency and do more with less, they're turning to advanced analytical capabilities like machine learning, security analytics and automation. All of these have moved past their initial hype cycle and are now adopted and delivering enhanced ROI and outcomes in IT and cybersecurity.

The future of your business is Big Data and Machine Learning, tied to the business opportunities and customer challenges before you.

Eric Schmidt, then executive chairman of Alphabet, speaking at the Google Cloud Next conference in 2017

Machine learning and automation are more than popular buzzwords in the cybersecurity industry. These analytic capabilities make sense of large volumes of raw data to create context, surface unknown attacks and speed up decision making. When combined with cybersecurity experts, they hold real promise for their ability to transform IT and security operations for organizations of all sizes. While not a magic potion that instantly perfects data security, these advanced tools offer MSPs a way to augment limited staff in the ongoing battle against cyber criminals.

The Value of Machine Learning and Automation in Cybersecurity

With digital transformation serving as a catalyst for larger volumes of data and technology, use cases for ML and automation in IT and security operations are growing. While not exhaustive, key use cases include:

Analyzing vast reams of data for suspicious activity: It's challenging to process billions of logs with an all-manual approach. Machine learning does the initial correlation work to process incoming log streams, reduce false positives and alert security operations center (SOC) analysts who perform a second level of triage and potential threat hunting.

Improving SOC efficiency and effectiveness: Machine learning and automation manage repetitive and potentially error-prone tasks that can overwhelm security teams. The result is higher job satisfaction and retention of hard-to-find cybersecurity professionals.

Increasing speed, accuracy and scale of threat detection: Automated incident response can launch a set of corrective actions, open a ticket for SOC triage and even block suspicious processes. Faster detection and remediation reduce the potential damage of attackers.

Detecting anomalous behavior by users and supply chain partners: Detect insider threats and advanced attacks with machine learning to understand and predict normal baseline system activity and identify exceptions that signal a cybersecurity risk. A SIEM (security information and event management) solution provides user and entity behavior analysis (UEBA) to detect insider threats, lateral movement and advanced attacks. (A minimal sketch of this kind of anomaly detection follows this list.)
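To make the idea concrete, here is a hedged sketch of unsupervised anomaly detection over log-derived features. The feature set (events per hour, distinct hosts touched, failed logins), the simulated numbers and the scikit-learn IsolationForest choice are illustrative assumptions, not any vendor's SIEM or UEBA logic.

```python
# Illustrative sketch only: unsupervised anomaly detection over log-derived
# features (events per hour, distinct hosts touched, failed logins). The
# features, numbers and thresholds are assumptions, not any vendor's SIEM logic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" user-hours: [events, hosts_touched, failed_logins]
normal = rng.normal(loc=[200, 3, 1], scale=[40, 1, 1], size=(1000, 3)).clip(min=0)

# A few suspicious user-hours: bursts of events across many hosts
suspicious = np.array([[1500.0, 40.0, 25.0], [900.0, 25.0, 60.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Scores below zero get flagged for second-level SOC triage
for row, score in zip(suspicious, model.decision_function(suspicious)):
    flag = "ALERT" if score < 0 else "ok"
    print(f"{flag}: events={row[0]:.0f} hosts={row[1]:.0f} failed_logins={row[2]:.0f}")
```

In practice the model would be trained per user or per entity and the flagged cases handed to a SOC analyst for triage, which is the division of labor the use cases above describe.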

Through advancements and adoption of machine learning and security automation, MSPs are harnessing the vast reams of device and client data to foster better cyber decision making.

Cyber Criminals Also Embrace Advanced Tools

Defenders aren't the only ones looking at emerging technologies. Global cybercrime damages are predicted to reach $6 trillion annually by 2021, according to the 2019 Annual Cybercrime Report by Cybersecurity Ventures. Cybercriminals are upping their game to use the latest tools and technology to improve outcomes for their exploits. Hackers are using...

Go here to read the rest:
MSPs are Bolstering Security Programs with Machine Learning and Automation - Channel Futures

Going Deeper with Data Science and Machine Learning – Database Trends and Applications

Surviving and thriving with data science and machine learning means not only having the right platforms, tools and skills, but identifying use cases and implementing processes that can deliver repeatable, scalable business value.

However, the challenges are numerous, from selecting data sets and data platforms, to architecting and optimizing data pipelines, and model training and deployment.

In response, new solutions have emerged to deliver key capabilities in areas including visualization, self-service, and real-time analytics. Along with the rise of DataOps, greater collaboration and automation have been identified as key success factors.

DBTA recently hosted a special roundtable webinar featuring Alyssa Simpson Rochwerger, VP of AI and data, Appen; Doug Freud, VP of data science, SAP Platform and Technology Global Center of Excellence; and Robert Stanley, senior director, special projects, Melissa Informatics, who discussed new technologies and strategies for expanding data science and machine learning capabilities.

According to a Gartner 2020 CIO survey, only 20% of AI projects deploy, Rochwerger said. The top challenges are skills of staff, understanding the benefits and uses of AI, and the data scope and quality.

She said businesses need to start out by clarifying a goal so they can then know where the data is coming from. Once organizations know where the data is coming from, they can find and fill in the gaps. Having a diverse team of humans can make it easier to sift and combine data.

According to the Data 2020: State of Big Data study (Regina Corso Consulting, 2017), 86% of companies aren't getting the most out of their data, limited as they are by data complexity and sprawl, Freud explained.

SAP Data Intelligence can meet companies in the middle, Freud said. The platform boasts that its enterprise AI meets intelligent information management.

Freud also walked through the platform's key benefits.

Stanley took another approach by introducing the concept of data quality (DQ) fundamentals with AI. AI can be useful for DQ, particularly with unstructured or more complex data, bringing competitive advantage.

Using AI (MR and ML), more efficient methods for identification, extraction and normalization have been developed. AI on clean data enables pattern recognition, discovery and intelligent action.

Machine reasoning (MR) relies on knowledge captured and applied within ontologies using graph database technologies - most formally, using SDBs, he explained.

Machine reasoning can make sense out of incomplete or noisy data, making it possible to answer difficult questions. MR delivers highly confident decision-making by applying existing knowledge and ontology-enabled logic to data, Stanley noted.

An archived on-demand replay of this webinar is available here.

More:
Going Deeper with Data Science and Machine Learning - Database Trends and Applications

An automated health care system that understands when to step in – MIT News

In recent years, entire industries have popped up that rely on the delicate interplay between human workers and automated software. Companies like Facebook work to keep hateful and violent content off their platforms using a combination of automated filtering and human moderators. In the medical field, researchers at MIT and elsewhere have used machine learning to help radiologists better detect different forms of cancer.

What can be tricky about these hybrid approaches is understanding when to rely on the expertise of people versus programs. This isn't always merely a question of who does a task better; indeed, if a person has limited bandwidth, the system may have to be trained to minimize how often it asks for help.

To tackle this complex issue, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a machine learning system that can either make a prediction about a task, or defer the decision to an expert. Most importantly, it can adapt when and how often it defers to its human collaborator, based on factors such as its teammate's availability and level of experience.

The team trained the system on multiple tasks, including looking at chest X-rays to diagnose specific conditions such as atelectasis (lung collapse) and cardiomegaly (an enlarged heart). In the case of cardiomegaly, they found that their human-AI hybrid model performed 8 percent better than either could on their own (based on AU-ROC scores).

In medical environments where doctors don't have many extra cycles, it's not the best use of their time to have them look at every single data point from a given patient's file, says PhD student Hussein Mozannar, lead author with David Sontag, the Von Helmholtz Associate Professor of Medical Engineering in the Department of Electrical Engineering and Computer Science, of a new paper about the system that was recently presented at the International Conference on Machine Learning. In that sort of scenario, it's important for the system to be especially sensitive to their time and only ask for their help when absolutely necessary.

The system has two parts: a classifier that can predict a certain subset of tasks, and a rejector that decides whether a given task should be handled by either its own classifier or the human expert.
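As an illustration of the classifier-plus-rejector split, here is a minimal sketch in which the rejector is simply a confidence threshold tied to a hypothetical expert "budget." This is not the training objective from the MIT paper; the dataset, the expert accuracy and the budget are all invented for the example.

```python
# Minimal illustration of the classifier-plus-rejector idea, not the paper's
# actual method: defer the least-confident cases to a simulated expert,
# subject to a hypothetical budget (fraction of cases the expert can review).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)     # the classifier
conf = clf.predict_proba(X_te).max(axis=1)                  # confidence per case

expert_accuracy, expert_budget = 0.95, 0.20                 # assumed expert profile

# The "rejector": defer the least-confident fraction of cases, up to the budget
threshold = np.quantile(conf, expert_budget)
defer = conf <= threshold

rng = np.random.default_rng(0)
correct = rng.random(defer.sum()) < expert_accuracy          # simulate expert answers
expert_pred = np.where(correct, y_te[defer], 1 - y_te[defer])

final = clf.predict(X_te)
final[defer] = expert_pred
print(f"deferred {defer.mean():.0%} of cases; team accuracy {(final == y_te).mean():.1%}")
```

Raising or lowering the budget (or the assumed expert accuracy) changes how often the system asks for help, which is the trade-off the researchers describe adapting to.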

Through experiments on tasks in medical diagnosis and text/image classification, the team showed that their approach not only achieves better accuracy than baselines, but does so with a lower computational cost and with far fewer training data samples.

Our algorithms allow you to optimize for whatever choice you want, whether that's the specific prediction accuracy or the cost of the expert's time and effort, says Sontag, who is also a member of MIT's Institute for Medical Engineering and Science. Moreover, by interpreting the learned rejector, the system provides insights into how experts make decisions, and in which settings AI may be more appropriate, or vice-versa.

The system's particular ability to help detect offensive text and images could also have interesting implications for content moderation. Mozannar suggests that it could be used at companies like Facebook in conjunction with a team of human moderators. (He is hopeful that such systems could minimize the amount of hateful or traumatic posts that human moderators have to review every day.)

Sontag clarified that the team has not yet tested the system with human experts, but instead developed a series of synthetic experts so that they could tweak parameters such as experience and availability. In order to work with a new expert it's never seen before, the system would need some minimal onboarding to get trained on the person's particular strengths and weaknesses.

In future work, the team plans to test their approach with real human experts, such as radiologists for X-ray diagnosis. They will also explore how to develop systems that can learn from biased expert data, as well as systems that can work with and defer to several experts at once. For example, Sontag imagines a hospital scenario where the system could collaborate with different radiologists who are more experienced with different patient populations.

There are many obstacles that understandably prohibit full automation in clinical settings, including issues of trust and accountability, says Sontag. We hope that our method will inspire machine learning practitioners to get more creative in integrating real-time human expertise into their algorithms.

Mozannar is affiliated with both CSAIL and the MIT Institute for Data, Systems and Society (IDSS). The team's work was supported, in part, by the National Science Foundation.

Excerpt from:
An automated health care system that understands when to step in - MIT News

Deepfakes Are Becoming the Hot New Corporate Training Tool – Machine Learning Times – machine learning & data science news – The Predictive…

Originally published in Wired.com, July 7, 2020

Coronavirus restrictions make it harder and more expensive to shoot videos. So some companies are turning to synthetic media instead.

This month, advertising giant WPP will send unusual corporate training videos to tens of thousands of employees worldwide. A presenter will speak in the recipient's language and address them by name, while explaining some basic concepts in artificial intelligence. The videos themselves will be powerful demonstrations of what AI can do: The face, and the words it speaks, will be synthesized by software.

WPP doesn't bill them as such, but its synthetic training videos might be called deepfakes, a loose term applied to images or videos generated using AI that look real. Although best known as tools of harassment, porn, or duplicity, image-generating AI is now being used by major corporations for such anodyne purposes as corporate training.

WPP's unreal training videos, made with technology from London startup Synthesia, aren't perfect. WPP chief technology officer Stephan Pretorius says the prosody of the presenter's delivery can be off, the most jarring flaw in an early cut shown to WIRED that was otherwise visually smooth. But the ability to personalize and localize video to many individuals makes for more compelling footage than the usual corporate fare, he says. The technology is getting very good very quickly, Pretorius says.


Go here to read the rest:
Deepfakes Are Becoming the Hot New Corporate Training Tool - Machine Learning Times - machine learning & data science news - The Predictive...

Applications Of Machine Learning In Audio – Sonic State

AES Virtual Symposium is set for September 28-29 (31/07/20)

Continuing its growing series of virtual events, the Audio Engineering Society has set dates for a two-day virtual symposium, "Applications in Machine Learning in Audio," being presented online, September 28-29. The event's technical program, featuring a keynote address by electronic/A.I. composer Holly Herndon, will explore topics including automatic mixing, audio source separation, audio visualization and effect control, audio capture and recording, and sourcing audio data, as well as legal issues created by this new form of science and art.

The AES 2020 Virtual Symposium on Machine Learning in Audio is led by a group of organizers with a range of experience in the world of both machine learning and audio engineering applications, including committee chair and AES President-Elect Jonathan Wyner (iZotope) and program chairs Andy Sarroff (iZotope), Christian Uhle (Fraunhofer IIS) and Gordon Wichern (Mitsubishi Electric Research Laboratories). The program, to be offered across two four-hour days to maximally accommodate participants in different geographical regions, will consist of pre-recorded presentations with live Q&A alongside parallel sessions in dedicated breakout rooms. Each day will conclude with an online social hour.

Symposium Chair Jonathan Wyner told us, "AI and the influence of data are all around us. Exploring the potentially disruptive and beneficial impact of these emerging technologies will be the focus of our event. Attendees will learn about how it is already present in products and workflows and where it may appear next."

Keynote presenter Herndon personifies the depth and diversity of the Symposium's presenter pool. She studied composition at Stanford University and completed her Ph.D. at Stanford University's Center for Computer Research in Music and Acoustics and continues her artistic career while currently based in Berlin, Germany. The composer, on her latest full-length album, PROTO, fronts and conducts an electronic pop choir consisting of both human and A.I. voices over a musical palette that encompasses everything from synths to Sacred Harp stylings.

In addition to the scheduled technical program sessions, the event committee is currently accepting proposals for parallel breakout sessions on related topics. Accepted presenters will submit a pre-recorded video between five and 10 minutes long, which attendees of the symposium will have the opportunity to view beforehand. During the session, presenters will be online in an interactive video channel, where they may present further materials and answer questions from attendees.

Pricing and Availability:

Registration is $25 for AES members and $150 for non-members; the non-member fee includes one year of complimentary AES membership and access to full member benefits and resources.


More:
Applications Of Machine Learning In Audio - Sonic State

AI Is All the Rage. So Why Aren't More Businesses Using It? – WIRED

The Census report found AI to be less widespread than some earlier estimates. The consulting firm McKinsey, for instance, reported in November 2018 that 30 percent of surveyed executives said their firms were piloting some form of AI. Another study, by PwC at the end of 2018, found that 20 percent of executives surveyed planned to roll out AI in 2019.

One reason for the difference is that those surveys focused on big companies, which are more likely to adopt new technology. Fortune 500 firms have the money to invest in expertise and resources, and often have more data to feed to AI algorithms.

For a lot of smaller companies, AI isn't part of the picture, not yet at least. Big companies are adopting, says Brynjolfsson, but most companies in America (Joe's pizzeria, the dry cleaner, the little manufacturing company) are just not there yet.

Another reason for the discrepancy is that those who responded to the Census survey might not realize that their company is using some form of AI. Companies could use software that relies on some form of machine learning for tasks such as managing employees or customers without advertising the fact.

Even if AI isnt yet widespread, the fact that it is more common at larger companies is important, because those companies tend to drive an even greater proportion of economic activity than their size suggests, notes Pascual Restrepo, an assistant professor at Boston University who researches technology and the economy. He adds that job ads for AI experts increased significantly in 2019.

LinkedIn says that postings for AI-related roles grew 14 percent year over year for the 10 weeks before the Covid outbreak slowed hiring in early March. There has been a very rapid uptake in terms of hiring of people with skills related to AI, Restrepo says.

Another data point that suggests rapid growth in use of AI comes from Google. Kemal El Moujahid, director of product management for TensorFlow, Google's software framework for creating AI programs, says interest in the product has skyrocketed recently. The framework has been downloaded 100 million times since it was released five years ago, including 10 million times in May 2020 alone.

The economic crisis triggered by the pandemic may do little to dim companies' interest in automating decisions and processes with AI. What can be accomplished is expanding really rapidly, and we're still very much in the discovery phase, says David Autor, an economist at MIT. I can't see any reason why, in the midst of this, people would say, Oh no, we need less AI.

But the benefits may not flow equally to all companies. One worrying aspect that this survey reveals, the report concludes, is that the latest technology adoption is mostly being done by the largest and older firms, potentially leading to increased separation between the typical firm and superstar firms.

As a general principle, says Restrepo of Boston University, when technology adoption concentrates amongst a handful of firms, the gains will not be fully passed to consumers.

Nicholas Bloom, a professor of economics at Stanford, isn't so sure. While the average small firm lags the average large firm, there are some elite adopters in small firms, Bloom says. These are the rapid innovators, who are creative and ambitious, often becoming the larger firms of the future.


See the rest here:
AI Is All the Rage. So Why Aren't More Businesses Using It? - WIRED

Decoding Practical Problems and Business Implications of Machine Learning – Analytics Insight

Machine learning typically is used to solve a host of diverse problems within an organization, extracting predictive knowledge from both structured and unstructured data and using it to deliver value. The technology has already made its way into different aspects of a business, from finding data patterns and detecting anomalies to making recommendations. Machine learning helps organizations gain a competitive edge by processing voluminous amounts of data and applying complex computations.

With machine learning, companies can develop better applications according to their business requirements. This technology is mainly designed to make everything programmatic. Applications of machine learning have the potential to drive business outcomes that can extensively affect a company's bottom line. The rapid evolution of new techniques in recent years has further expanded machine learning's applications to nearly boundless possibilities.

Industries relying on massive volumes of data are significantly leveraging machine learning techniques to process their data and to build models, strategize, and plan.

While implementing the effective application of machine learning enables businesses to grow, gain competitive advantage and prepare for the future, there are some key practical issues in machine learning and their business implications organizations must consider.

As machine learning relies heavily on data, the presence of noisy data can considerably impact any prediction. Data in a dataset often carries extraneous and meaningless information that can significantly affect data analysis, clustering and association analysis. A lack of quality data can also constrain the ability to build ML models. To cope with noise and ensure data quality, businesses need to apply effective machine learning strategies through data cleansing and overall processing of data.
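As a small, hedged illustration of the kind of cleansing step this implies, the snippet below strips a sentinel value, drops rows with missing features and removes duplicates with pandas; the column names, the sentinel and the data are invented for the example.

```python
# A hedged sketch of basic cleansing before modeling: strip a sentinel value,
# drop rows with missing features and remove duplicates. Column names, the
# sentinel (999.0) and the data are invented for illustration.
import pandas as pd

raw = pd.DataFrame({
    "sensor_reading": [10.2, 10.4, None, 10.3, 999.0, 10.1, 10.1],
    "label":          ["ok", "ok", "ok", "fault", "ok", "ok", "ok"],
})

clean = (raw
         .replace({"sensor_reading": {999.0: float("nan")}})  # treat sentinel as missing
         .dropna(subset=["sensor_reading"])                   # drop rows lacking the feature
         .drop_duplicates())                                  # remove exact duplicate rows

print(clean)
```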

There is no doubt that the development of machine learning has made it possible to learn directly from data rather than human knowledge, with a strong emphasis on accuracy. However, the lack of the ability to explain or present data in understandable terms to a human, often called interpretability, is one of the biggest issues in machine learning. The introduction of possible biases in data has also led to ethical and legal issues with ML models. Interpretability levels in machine learning models and algorithms may vary significantly. Some methods are human-compatible as they are highly interpretable, while some are too complex to apprehend and thus require ad hoc methods to gain an interpretation.

In the context of supervised machine learning, an imbalanced dataset often involves two or more classes. There is an imbalance among labels in the training data in several real-world datasets. This imbalance in a dataset has the potential to affect the choice of learning, the process of selecting algorithms, model evaluation and verification. The models can even suffer large biases, and the learning will not be effective if the right techniques are not employed properly. ML algorithms can generate insufficient classifiers when faced with imbalanced datasets. When trying to resolve certain business challenges with imbalanced data sets, the classifiers produced by standard ML algorithms might not deliver precise outcomes.

Thus, addressing imbalanced datasets requires strategies like enhancing classification algorithms or balancing classes in the training data before providing the data as input to machine learning algorithms.
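To illustrate one of the strategies just mentioned, here is a hedged sketch comparing an unweighted classifier with a class-weighted one on a synthetic imbalanced dataset; the data, the 95/5 class ratio and the choice of logistic regression are assumptions made for the example.

```python
# Illustrative only: one common way to handle class imbalance (class weighting)
# on a synthetic 95/5 dataset. Data, ratios and the model are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

for name, model in [("unweighted", plain), ("class_weight='balanced'", weighted)]:
    score = balanced_accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: balanced accuracy {score:.3f}")
```

Resampling the training data (oversampling the minority class or undersampling the majority) is the other family of approaches the article alludes to and follows the same pattern: rebalance before the data reaches the learning algorithm.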


About the author

Vivek Kumar is the President of Consumer Revenue at UpGrad, an online education platform providing industry-oriented programs in collaboration with world-class institutes, some of which are MICA, IIIT Bangalore and BITS, and various industry leaders including MakeMyTrip, Ola and Flipkart, to name a few. He has 19 years of experience in diversified industries like consumer goods, media, technology products and education services. He has been leading businesses and multi-cultural teams with a consistent record of market-beating performance and building brand leadership. His previous engagement was with Manipal Global Education Services as Sr. General Manager, Education Services (Digital Transformation Strategy & Global Expansion).

Read the rest here:
Decoding Practical Problems and Business Implications of Machine Learning - Analytics Insight

Deep Learning is Coming to Hockey – Last Word on Hockey

Analytics have been transforming how we watch hockey. The revolution is just beginning. Statisticians and quantitative experts have led the way. Their impact has changed how we discuss and watch hockey. Analytics have been influential. Deep learning will be disruptive.

Advances in computing and understanding of complex relationships will massively alter the sporting landscape. Hockey will not be immune.

Every decision point is potentially affected. This will lead to impacts on and off the ice. Whoever gets there first will have an enormous competitive advantage. Think Moneyball, but with a team that maybe doesn't lose in the playoffs.

Our technology is getting smarter. Deep learning (a branch of machine learning) is coming to many aspects of life. The basic idea is using a computer to analyze complex interactions to come to conclusions. We have seen the concept applied to medicine with great results. The world's greatest Go player has left the game after realizing the robots can't be beaten. Team sports will be conquered next.

High-end computers can do mathematical calculations we humans can only dream of. This is the basis of how it can work.

Machine learning is an application of artificial intelligence (AI). The focus is providing data to computers, which then learn and improve with experience. These machines aren't programmed in the traditional sense; rather, they are developed by allowing computers to access data and learn from it themselves.

Like in the outside world, the impacts for sports are numerous. There are many potential applications for deep learning. A look at the call for papers for the 2020 Machine Learning and Data Mining for Sports Analytics conference shows what this world is working on and the range of topics researchers expect to tackle.

A quick glance at the topics demonstrates the field is getting into increasingly complex issues. This has the potential to reshape coaching, management, and player development.

There is good data and bad data. Like the larger debate about analytics, the availability and value of information are of concern. The sheer number of variables in the chaotic environment on the ice makes the analysis complex. Stop-and-go sports like baseball and football are easier to analyze, as the statistics tend to be more clear-cut.

All numbers aren't created equal. The issue of inconsistent stat keepers will slow progress down. A shot or a hit in one arena may not be the same in the next. Stats also become less reliable away from professional leagues, so a close look at the numbers going in is needed to produce accuracy. Quantitative analysis is wonderful, but critical analysis to ensure accuracy is needed. In science speak, you need to operationalize things properly.
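One hedged sketch of what operationalizing might look like: normalize each arena's counts against that arena's own long-run average before comparing players across rinks. The arenas, players and hit counts below are made up, and a real adjustment would need far more care (season-long baselines, scorer bias models, and so on).

```python
# Illustrative per-arena normalization of a stat that is recorded inconsistently.
# Data and the adjustment method are hypothetical, not a league-endorsed approach.
import pandas as pd

stats = pd.DataFrame({
    "arena":  ["A", "A", "B", "B", "C", "C"],
    "player": ["P1", "P2", "P3", "P4", "P5", "P6"],
    "hits":   [22, 18, 35, 31, 12, 15],   # arena B's scorer counts hits generously
})

# Express each player's hits relative to their arena's own mean, a crude correction
stats["hits_adj"] = stats["hits"] / stats.groupby("arena")["hits"].transform("mean")
print(stats)
```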

The complexity of hockey will make adopting deep learning difficult. It will be one of the last sports to truly be able to take advantage of it. There are many ways it will affect the game for fans, players, and teams. The complexity problem will be overcome.

Who's going to win? Can statistics help us understand the answer? Apparently, yes.

Predicting results has been a primary focus of deep learning applied to sports. The first tests have focused on predicting outcomes. The potential of figuring out who's going to win, and how to bet efficiently, would be lucrative for outsiders. Like in other sports, this is the first area where deep learning is likely to arrive.

It has been a long road, but expert pundits are falling. In the early days of deep learning, the experts at prediction on TV were better. This is changing. Back in 2003, early computer attempts were not able to beat expert pundits at prediction. Recently, a deep learning model (75% accuracy) was able to beat the ESPN team's 63% accuracy over the same period. This is just the first step.

Football experts were the first to fall. Machine learning will change the game well beyond that. The NFL has the ability to be an early adopter in the field. Particularly as the league has so much money, it is likely to continue to be the one to watch for the effects of deep learning.

That said, this is spreading. It has been applied to the English Premier League and many other sports. When it arrives in the hockey world, it will change how teams manage their decision making at all levels, from who to sign as a free agent, to who to trade for, and even lineup decisions night to night. The applications are limited only by the availability of the data.

While hockey is chaotic and numbers are inconsistent, this problem can be lessened. Stathletes seems likely to be the company that does it. Hockey is well aware of the name Chayka already. Meghan is the one to watch in this case. She was one of three co-founders of the company, along with brother John and Neil Lane.

What they do:

Using proprietary video tracking software, Stathletes pulls together thousands of performance metrics per game and compiles analytics related to each player and team. These analytics can provide baseline benchmarking, player comparisons, line matching, and player and team performance trends. Stathletes currently tracks data in 22 leagues worldwide and sells data to a wide variety of clients, including the National Hockey League (NHL). Via FedDev

Whether they are using machine learning is not clear. If not, it seems inevitable that they will. Meghan Chayka currently works with an expert in machine learning at the TD Management Data and Analytics Lab at Rotman, the University of Toronto's business school. It seems likely they can benefit each other, and would know this. (This may be part of the reason why Arizona seems peeved at Chayka currently. They may have just become a data have-not.)

Stathletes and other groups are gaining knowledge and information. They will improve as they go. The NHL is open to this; it's coming.

Machine learning has arrived. As the ability to obtain information improves, it will coincide with further developments and what's to come. If you are able to follow, Neil Lane (current Stathletes CEO) is to speak at the University of Waterloo on what sports managers can learn from analytics. This should be enlightening.

Embedded items will be key. Chips and sensors in various hockey items are coming. Jerseys and pucks will be transmitting the information. Learning computers will put it together.

The impacts will be numerous. Coaches, players, agents, and teams will have considerably more knowledge. This changes decision making. Training. Diet. Trades. Penalty Kill lineups. The possibilities are endless.

Deep learning will lead to hockey having more knowledge of all aspects. If people like Pierre McGuire hate analytics now, just wait for what's to come.


See original here:
Deep Learning is Coming to Hockey - Last Word on Hockey

Machine Learning And Organizational Change At Southern California Edison – Forbes

An electrical lineman for Southern California Edison works on replacing a transformer as a whole block is rewired. Long Beach, California. April 2014.

Analytics are typically viewed as an exercise in data, software and hardware. However, if the analytics are intended to influence decisions and actions, they are also an exercise in organizational change. Companies that don't view them as such are likely not to get much value from their analytics projects.

One organization that is pursuing analytics-based organizational change is Southern California Edison (SCE). One key focus of their activity is safety predictive analytics: understanding and predicting high-risk work activities by the company's field employees that might lead to a life-threatening and/or life-altering incident causing injury or death. Safety issues, as you might expect, are fraught with organizational peril: politics, lack of transparency, labor relations, and so forth. Even reporting a close call runs counter to typical organizational cultures. These organizational perils are a concern to SCE as well, but the company has created an approach to address them. SCE hasn't completely mastered safety predictive analytics and the requisite organizational changes, but it's making great progress.

A Structure for Producing Analytical Change

Key to the success of the SCE approach is the structure of the analytical team that is addressing safety analytics. It is small, experienced, and integrated. Two of the key members of the team are Jeff Moore and Rosemary Perez, and they make a dynamic combination. Moore is a data scientist who works in the IT function; Perez works in Safety, Security, and Business Resiliency, and is a Predictive Analytics Advisor. In effect, Moore handles all the analytics and modeling activities on the project, and Perez, who has many years of experience in the field at SCE, leads the change management activities.

Steps to manage organizational change started at the beginning of the project and have persisted throughout it. One of the first objectives was to explain the model and variable insights to management. Outlining the range of possible outcomes allowed Perez and Moore to gain the support needed for a company-wide deployment. Since Perez had relationships and trust in the districts, she could introduce the project concept to field management and staff without raising the concern of "Why is Corporate here?" Perez noted that it's important to be transparent when speaking with the teams. That trust has resulted in the district staffs' willingness to listen and share their ideas on how best to deploy the model, to address missing variables and data, and to drive higher levels of adoption.

The team took all the time needed to get stakeholders engaged. Moore came into the project in the summer of 2018, and he was able to get a machine learning model up and running in a month or so, but presenting it, socializing it, and gaining buy-in for it took far longer. Moore and Perez met with executives of SCE in November and December of 2018. Within days of these meetings, the safety model analytics project became a 2019 corporate goal for SCE. Safety was the company's number one priority, and it was willing to try innovative ideas to move it forward. For such a small team to have their work made into a corporate goal is unusual at SCE and elsewhere.

The Risk Model and its Findings

SCE now has an analytical risk-based framework, and risk scores for specific types of work activities and the context of the work. The model draws from a large data warehouse at SCE with work order data, structure characteristics, injury records, experience and training, and planning detail. All those factors were not previously linked, and there was, as is often the case with analytics, considerable data engineering necessary to pull together and relate the data.

The machine learning model scores activities that teams in the field perform, like setting a new pole or replacing an insulator. Each activity may be more or less dangerous depending on the time of year, day of the week, weather, crew size and composition, and so forth. Replacing a pole, for example, may be only a moderate risk task in itself, but when done on the side of a hill in the rain with a crane it becomes very high risk. Instead of generic safety messages to employees, SCE can now get much more specific by describing the risk of particular activities they perform on the job in a particular context.
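The sketch below shows, in the abstract, what context-aware risk scoring of this kind can look like. It is emphatically not SCE's model: the features, the synthetic incident labels and the gradient-boosting choice are all assumptions made for illustration.

```python
# A hedged, invented example of scoring work activities in context; this is
# NOT SCE's model. Features, incident labels and the classifier are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
jobs = pd.DataFrame({
    "activity":    rng.choice(["set_pole", "replace_insulator", "rewire_block"], n),
    "on_hillside": rng.integers(0, 2, n),
    "raining":     rng.integers(0, 2, n),
    "crew_size":   rng.integers(2, 7, n),
})

# Synthetic incident labels: riskier when wet, on a slope and short-handed
risk = 0.02 + 0.05 * jobs["raining"] + 0.05 * jobs["on_hillside"] + 0.02 * (6 - jobs["crew_size"])
incident = (rng.random(n) < risk).astype(int)

X = pd.get_dummies(jobs, columns=["activity"])
model = GradientBoostingClassifier(random_state=0).fit(X, incident)

# Score a hypothetical job: setting a pole on a hillside in the rain with a crew of 3
job = pd.DataFrame([{
    "on_hillside": 1, "raining": 1, "crew_size": 3,
    "activity_replace_insulator": 0, "activity_rewire_block": 0, "activity_set_pole": 1,
}])[X.columns]
print(f"predicted incident risk: {model.predict_proba(job)[0, 1]:.1%}")
```

The point of the example is the shape of the problem: the same activity gets a different score when the contextual features change, which is what lets messaging move from generic safety reminders to job-specific warnings.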

As the model learns it will recommend specific approaches to reduce the risk of a job, like altering the crew mix or crew size, requiring additional management presence, using specific equipment or rigging to perform the work, or creating a longer power outage in order to do the job more slowly. The latter recommendation runs counter to the culture of not inconveniencing customers, but if the model specifically recommends it, then the teams will discuss the contributing factors as well as their years of experience to mitigate the risk before executing the work.

The project has led to several more general findings, which are of greatest interest to SCE executives. For example, management has long been interested in using data to understand changing safety risk profiles of the field teams over time as a result of increasing/decreasing workloads or as weather patterns change. While the predictive model considers more than 200 variables, the findings from the model have been summarized into the top fifteen distinct drivers of serious injury and fatality. Some shifting of variables is expected over time, but there has been great interest in better understanding the initial set of risk factors.

Deploying the Model and Needed Organizational Changes

Moore and Perez are in the early stages of deploying the model; they've rolled it out to six of 35 districts thus far. Each district has a unique personality, and they don't want cookie-cutter answers on how to deploy in their district.

Moore, whose primary role was to create the model, said he has realized that safety analytics are not just about a model. I started out thinking it was about an algorithm, but I realized many other factors were involved in improving safety. Moore said that he gets some pressure to move on to analytics in other parts of the business, but in order to see your models come to life you have to go through this kind of process. And everyone at SCE believes the safety work is critical.

Perez, whose primary focus is change management, listed some of the organizational changes in deployment. There might be training issues, not only on analytics but also on communication, leadership and ownership. There might be process concerns, such as how we plan and communicate work. There may be technology concerns in using the system.

Perez also says the process of working with a district is critical. You can't just walk into a district and disrupt their workflow for no reason, she says. They want to know your purpose and your objective. We try to connect, show transparency, and build trust that we are here to help, that we are here to observe how they mitigate risk, to share our findings, and to see how the findings might be integrated into their work practices. We hope they will help us understand the complexity they face every day.

Both team members say they learn something every time they visit a district. Moore notes, You can only see the data you can see in the data warehouse: time sheets, work orders, etc. But when you talk to the people who do the work, you learn a lot about how the data is created and applied. With each visit I understand the drivers better and the complexity of the work. I can also speak the language better with each district visit, and I understand the process and the equipment better as well.

With the findings from the model, Moore and Perez are beginning to work with another partner at SCE: the HR organization. It is responsible for defining work practices, training needs, standard operating procedures, and job aids. Each of these is potentially influenced by findings about safety risks, so the goal is to incorporate analytical results into the practices and procedures.

The team is already working to modify the model to incorporate new factors, one of which, not surprisingly given the situation in California, involves the risk of wildfires. Moore and Perez are also trying to create more integration of the risk scores with the work order system. They also plan to try to incorporate the risk model into other SCE business functions like Engineering, which might be able to lower the risk in the planning and construction of the electric grid. All in all, using data and analytics to improve safety is a time-consuming and multifaceted process, but what could be more important than reducing injury and fatality among SCE employees and work crews?

The rest is here:
Machine Learning And Organizational Change At Southern California Edison - Forbes

97 Things About Ethics Everyone In Data Science Should Know – Machine Learning Times – machine learning & data science news – The Predictive…

Every now and then an opportunity comes along that you just can't pass up. One such opportunity that fell into my lap was when O'Reilly Media reached out to me to see if I was interested in partnering on a collaborative book on the ethics that surround data science. Those who know me and follow my work have seen me calling for more focus on ethics for several years. I've written blogs and papers on the topic, I've given many conference presentations on the topic (including at Predictive Analytics World 2019!), and I've had countless discussions with clients...


Go here to see the original:
97 Things About Ethics Everyone In Data Science Should Know - Machine Learning Times - machine learning & data science news - The Predictive...