VelocityEHS Industrial Ergonomics Solution Harnesses AI and Machine Learning to Drive … – KULR-TV

CHICAGO, April 26, 2022 (GLOBE NEWSWIRE) -- VelocityEHS, the global leader in cloud-based environmental, health, safety (EHS) and environmental, social, and corporate governance (ESG) software, announced the latest additions to its Accelerate Platform, including a highly anticipated new feature, Active Causes & Controls, for its award-winning Industrial Ergonomics Solution. Rooted in ActiveEHS, the proprietary VelocityEHS methodology that leverages AI and machine learning to help non-experts produce expert-level results, this enhancement kicks off a new era in the prevention of musculoskeletal disorders (MSDs).

Designed, engineered, and embedded with expertise by an unmatched group of board-certified ergonomists, the ActiveEHS-powered Active Causes & Controls feature helps companies reduce training time, maintain process consistency across locations, and focus on implementing changes that maximize business results. Starting with the industry's best sensorless motion-capture technology, which performs ergonomics assessments faster, easier, and more accurately than any human could, the solution then guides users through suggested root causes and job improvement controls. Recommendations are based on AI and machine learning insights fed by data collected from hundreds of global enterprise customers and millions of MSD risk data points.

The result is an unparalleled opportunity to prevent MSD risk, reduce overall injury costs, drive productivity, and provide employees with quality-of-life changing improvements in the workplace.

"These are exciting times for anyone who cares about EHS and ESG," said John Damgaard, CEO of VelocityEHS. "While it's true the job of a C-suite executive or EHS professional has never been more challenging and complex, it's also true that leaders have never had this kind of advanced, highly usable, and easy-to-deploy technology at their fingertips. Ergonomics is just the start; ActiveEHS will transform how we think about health, safety, and sustainability going forward. It is the key to evolving from a reactive documentation and compliance mindset to a proactive continuous improvement cycle of prediction, intervention, and outcomes."

MSDs are a major burden on workers and a huge cost to employers. According to the Bureau of Labor Statistics, for employers in the U.S. private sector alone, MSDs cause more than 300,000 days away from work and, per OSHA, are responsible for $20 billion every year in workers' compensation claims.

Also Announced Today: New Training & Learning Content, Enhancements to Automated Utility Data Management, and Improved Workflows for the Control of Work Solution.

The VelocityEHS Safety Solution, which includes robust Training & Learning capabilities, is undergoing a major expansion of its online training content library. To enable companies to meet more of their training responsibilities, the training content library is growing from approximately 100 courses to over 750. They will be available in multiple languages, including 300+ courses in Spanish. The new content will feature microlearning modules, which have gained popularity in recent years as workers prefer shorter, easily digestible training sessions. This results in less time in front of the screen for workers, while employers report better engagement and overall retention of the material.

The VelocityEHS Climate Solution continues to capitalize on the VelocityEHS partnership with Urjanet, the engine behind the recently announced Automated Utility Data Management capabilities. Now, in addition to saving time and reducing costs related to the collection of utility data, users can automatically port their energy, gas, and water usage data into the VelocityEHS Climate Solution to perform GHG calculations and report on Scope 1, 2, and 3 emissions, without any manual effort.

The Company's Control of Work Solution boasts new, streamlined navigation and enhanced functionality that allows customers to add new, pre-approved roles for improved compliance and approval workflows.

Industrial Ergonomics, Safety, Climate, and Control of Work solutions are all part of the VelocityEHS Accelerate Platform, which delivers best-in-class performance in the areas of health, safety, risk, ESG, and operational excellence. Backed by the largest global software community of EHS experts and thought leaders, the software drives expert processes so every team member can produce outstanding results.

For more information about VelocityEHS and its complete offering of award-winning software solutions, visit http://www.EHS.com.

About VelocityEHS Trusted by more than 19,000 customers worldwide, VelocityEHS is the global leader in true SaaS enterprise EHS technology. Through the VelocityEHS Accelerate Platform, the company helps global enterprises drive operational excellence by delivering best-in-class capabilities for health, safety, environmental compliance, training, operational risk, and environmental, social, and corporate governance (ESG). The VelocityEHS team includes unparalleled industry expertise, with more certified experts in health, safety, industrial hygiene, ergonomics, sustainability, the environment, AI, and machine learning than any other EHS software provider. Recognized by the EHS industry's top independent analysts as a Leader in the Verdantix 2021 Green Quadrant Analysis, VelocityEHS is committed to industry thought leadership and to accelerating the pace of innovation through its software solutions and vision.

VelocityEHS is headquartered in Chicago, Illinois, with locations in Ann Arbor, Michigan; Tampa, Florida; Oakville, Ontario; London, England; Perth, Western Australia; and Cork, Ireland. For more information, visit http://www.EHS.com.

Media Contact Brad Harbaugh 312.881.2855 bharbaugh@ehs.com


America’s AI in Retail Industry Report to 2026 – Machine Learning Technology is Expected to Grow Signific – Benzinga

The "America's AI in the Retail Market - Growth, Trends, COVID-19 Impact, and Forecasts (2022 - 2027)" report has been added to ResearchAndMarkets.com's offering.

America's AI in the retail market is expected to register a CAGR of 30% during the forecast period, 2021 - 2026.
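As a rough illustration of what a 30% CAGR implies over a five-year forecast window, the projection below uses a hypothetical base market size of 100 index units (the report's actual base-year figure is not given here):

```python
# Illustrative sketch: compound annual growth over a forecast window.
# The base value of 100 is hypothetical, not a figure from the report.
def project(base: float, cagr: float, years: int) -> float:
    """Project a value forward assuming a constant compound annual growth rate."""
    return base * (1 + cagr) ** years

# 30% CAGR over 5 years roughly 3.7x the starting value.
print(round(project(100, 0.30, 5), 1))  # prints 371.3
```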

Companies Mentioned

Key Market Trends

Machine Learning Technology is Expected to Grow Significantly

Food and Grocery to Augment Significant Growth

Key Topics Covered:

1 INTRODUCTION

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET DYNAMICS

4.1 Market Overview

4.2 Market Drivers

4.2.1 Hardware Advancement Acting as a Key Enabler for AI in Retail

4.2.2 Disruptive Developments in Retail, including AR, VR, IOT, and New Metrics

4.2.3 Rise of AI First Organizations

4.2.4 Need for Efficiency in Supply Chain Optimization

4.3 Market Restraints

4.3.1 Lack of Professionals, as well as In-house Knowledge for Cultural Readiness

4.4 Industry Value Chain Analysis

4.5 Porter's Five Forces Analysis

4.6 Industry Policies

4.7 Assessment of Impact of COVID-19 on the Industry

5 AI Adoption in the Retail Industry

5.1 AI Penetration with Retailers (Historical, Current, and Forecast)

5.2 AI penetration by Retailer Size (Large and Medium)

5.3 AI Use Cases in Operations

5.3.1 Logistics and Distribution

5.3.2 Planning and Procurement

5.3.3 Production

5.3.4 In-store Operations

5.3.5 Sales and Marketing

5.4 AI Retail Startups (Equity Funding vs Equity Deals)

5.5 Road Ahead for AI in Retail

6 MARKET SEGMENTATION

6.1 Channel

6.2 Solution

6.3 Application

6.4 Technology

7 COMPETITIVE LANDSCAPE

7.1 Company Profiles

8 INVESTMENT ANALYSIS

9 MARKET TRENDS AND FUTURE OPPORTUNITIES

For more information about this report visit https://www.researchandmarkets.com/r/kddpm3

View source version on businesswire.com: https://www.businesswire.com/news/home/20220427005894/en/


Machine learning hiring levels in the ship industry rose in March 2022 – Ship Technology

The proportion of ship equipment supply, product and services companies hiring for machine learning related positions rose in March 2022 compared with the equivalent month last year, with 20.6% of the companies included in our analysis recruiting for at least one such position.

This latest figure was higher than the 16.2% of companies who were hiring for machine learning related jobs a year ago but a decrease compared to the figure of 22.6% in February 2022.

As a share of all newly posted job openings, machine learning-related postings also dropped in March 2022, with 0.4% of new job advertisements being linked to the topic.

This latest figure was a decrease compared to the 0.5% of newly advertised jobs that were linked to machine learning in the equivalent month a year ago.

Machine learning is one of the topics that GlobalData, from which the data for this article is taken, has identified as a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that ship equipment supply, product and services companies are currently hiring for machine learning jobs at a rate lower than the average for all companies within GlobalData's job analytics database. The average among all companies stood at 1.3% in March 2022.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.



Striveworks and Figure Eight Federal Enter into Strategic Partnership for Enhanced Annotation Capabilities within Machine Learning Operations Platform…

Together Striveworks and Figure Eight Federal Enhance the AI Capabilities for the Department of Defense and Federal Law Enforcement

AUSTIN, Texas and ARLINGTON, Va., April 27, 2022 /PRNewswire/ -- Striveworks and Figure Eight Federal are excited to announce their strategic alliance to jointly support the government's emerging capabilities in AI technologies.

David Poirier, President of Figure Eight Federal, said, "Our efforts to assist federal customers parallel those of Striveworks, and therefore we are excited to work with Striveworks to achieve our common goals."

Figure Eight Federal has more than 15 years of experience assisting its federal customers with their advanced annotation needs. Data annotation is the process of labeling data to enable a model to make decisions and take action. To take action, a model must be trained to understand specific information. With Figure Eight Federal, this is done with training data that is annotated and properly categorized, giving customers confidence for each specific use case.

Data annotated by Figure Eight can be directly integrated with Striveworks' Chariot MLOps platform for model development, training, and deployment within operational timelines. Striveworks has an extensive record of positive performance in delivering software and data science products and services within DoD operational environments. Earlier this year, Striveworks was awarded a basic ordering agreement for The Data Readiness for Artificial Intelligence Development (DRAID) by U.S. Contracting Command on behalf of the Joint Artificial Intelligence Center (JAIC).

The strategic alliance of these companies will help customers in defense and federal law enforcement adopt artificial intelligence solutions across their wide data landscapes.

Striveworks Executive Vice President Quay Barnett said, "The Striveworks and Figure Eight partnership brings our customers a scalable impact for accurate and rapid decision advantage from their data. Figure Eight's low code annotation platform integrates with our low code Chariot MLOps platform to accelerate AI solutions for our joint customers."


About Striveworks

Striveworks is a pioneer in operational data science for national security and other highly regulated spaces. Striveworks' flagship MLOps platform is Chariot, purpose-built to enable engineers and business professionals to transform their data into actionable insights. Founded in 2018, Striveworks was highlighted as an exemplar in the National Security Commission for AI 2020 Final Report.

About Figure Eight Federal

Figure Eight Federal's AI & data enrichment platform includes multiple toolsets and algorithms that have been used by some of the world's largest tech companies and government agencies. Our data scientists and AI/ML experts have deep knowledge and understanding of many types of data and their use cases, including Natural Language Processing and Computer Vision. We have the skills and technology required to make AI/ML testing and evaluation more systematic and scalable, allowing analysts to easily make comparisons and determine accuracy, bias, and vulnerability.

Contact: info@F8Federal.com Media Contact: Janet Waring

Website: F8Federal.com

Address: 1735 N Lynn St, #730, Arlington, VA 22209


View original content: https://www.prnewswire.com/news-releases/striveworks-and-figure-eight-federal-enter-into-strategic-partnership-for-enhanced-annotation-capabilities-within-machine-learning-operations-platform-301534526.html

SOURCE Figure Eight Federal


Control Risks Taps Reveal-Brainspace to Bolster its Suite of Analytics, AI and Machine Learning Capabilities – GlobeNewswire

London, Chicago, April 26, 2022 (GLOBE NEWSWIRE) -- Control Risks, the specialist risk consultancy, today announced it is expanding its technology offering with Reveal, the global provider of the leading AI-powered eDiscovery and investigations platform. Reveal uses adaptive AI, behavioral analysis, and pre-trained AI model libraries to help uncover connections and patterns buried in large volumes of unstructured data.

"Corporate legal and compliance teams, and their outside counsel, are looking to technology to better understand data, reduce risks and costs, and extract key insights faster across an ever-increasing volume and variety of data. We look forward to leveraging Reveal's data visualization, AI and machine learning functionality to drive innovation with our clients," said Brad Kolacinski, Partner, Control Risks.

Control Risks will leverage the platform globally to unlock intelligence that will help clients mitigate risks across a range of areas including litigation, investigations, compliance, ethics, fraud, human resources, privacy and security.

"We work with clients and their counsel on large, complex, cross-border forensics and investigations engagements. It is no secret that AI, ML and analytics are now required tools in matters where we need to sift through enormous quantities of data and deliver insights to clients efficiently," says Torsten Duwenhorst, Partner, Control Risks. "Offering the full range of Reveal's capabilities globally will benefit our clients enormously."

"As we continue to expand the depth and breadth of Reveal's marketplace offerings, we are excited to partner with Control Risks, a demonstrated leader in security, compliance and organizational resilience offerings that are more critical now than ever," said Wendell Jisa, Reveal's CEO. "By taking full advantage of Reveal's powerful platform, Control Risks now has access to the industry's leading SaaS-based, AI-powered technology stack, helping them and their clients solve their most complex problems with greater intelligence."

For more information about Reveal-Brainspace and its AI platform for legal, enterprise and government organizations, visit http://www.revealdata.com.

###

About Control Risks

Control Risks is a specialist global risk consultancy that helps to create secure, compliant and resilient organizations in an age of ever-changing risk. Working across disciplines, technologies and geographies, everything we do is based on our belief that taking risks is essential to our clients' success. We provide our clients with the insight to focus resources and ensure they are prepared to resolve the issues and crises that occur in any ambitious global organization. We go beyond problem-solving and provide the insights and intelligence needed to realize opportunities and grow. Control Risks will initially provide Reveal-Brainspace in the US, Europe and Asia Pacific. Visit us online at http://www.controlrisks.com.

About Reveal

Reveal, with Brainspace technology, is a global provider of the leading AI-powered eDiscovery platform. Fueled by powerful AI technology and backed by the most experienced team of data scientists in the industry, Reveal's cloud-based software offers a full suite of eDiscovery solutions, all on one seamless platform. Users of Reveal include law firms, Fortune 500 corporations, legal service providers, government agencies and financial institutions in more than 40 countries across five continents. Featuring deployment options in the cloud or on-premise, an intuitive user design and multilingual user interfaces, Reveal is modernizing the practice of law, saving users time and money and offering them a competitive advantage. For more information, visit http://www.revealdata.com.


IBM And MLCommons Show How Pervasive Machine Learning Has Become – Forbes


This week, IBM announced its latest Z-series mainframe and MLCommons released its latest benchmark series. The two announcements had something in common: Machine Learning (ML) acceleration, which is becoming pervasive everywhere, from financial fraud detection in mainframes to detecting wake words in home appliances.

While the two announcements were not directly related, they are part of a trend showing how pervasive ML has become.

MLCommons Brings Standards to ML Benchmarking

ML benchmarking is important because we often hear about ML performance in terms of TOPS (trillions of operations per second). Like MIPS (Millions of Instructions per Second, or Meaningless Indication of Processor Speed, depending on your perspective), TOPS is a theoretical number calculated from the architecture, not a measured rating based on running workloads. As such, TOPS can be a deceiving number because it does not include the impact of the software stack. Software is the most critical aspect of implementing ML, and its efficiency varies widely, which Nvidia clearly demonstrated by improving the performance of its A100 platform by 50% in MLCommons benchmarks over the years.
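To see why TOPS is purely theoretical, note that it is typically derived from datasheet parameters alone: multiply-accumulate (MAC) units times two operations per MAC times clock rate. The sketch below uses made-up hardware parameters, not any real chip's:

```python
# Illustrative sketch: deriving a peak TOPS figure from architecture
# parameters alone. All hardware values below are hypothetical.

def theoretical_tops(mac_units: int, clock_hz: float) -> float:
    """Each MAC counts as 2 ops (one multiply + one add); 1 TOPS = 1e12 ops/s."""
    return mac_units * 2 * clock_hz / 1e12

# A hypothetical accelerator: 4096 MAC units clocked at 1 GHz.
peak = theoretical_tops(mac_units=4096, clock_hz=1e9)
print(f"Peak: {peak:.2f} TOPS")  # prints "Peak: 8.19 TOPS"
```

No workload, memory bandwidth, or software stack appears anywhere in that formula, which is exactly why measured MLPerf results can differ so widely from the headline number.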

The industry organization MLCommons was created by a consortium of companies to build a standardized set of benchmarks along with a standardized test methodology that allows different machine learning systems to be compared. The MLPerf benchmark suites from MLCommons include different benchmarks that cover many popular ML workloads and scenarios. The MLPerf benchmarks address everything from the tiny microcontrollers used in consumer and IoT devices, to mobile devices like smartphones and PCs, to edge servers, to data center-class server configurations. Supporters of MLCommons include Amazon, Arm, Baidu, Dell Technologies, Facebook, Google, Harvard, Intel, Lenovo, Microsoft, Nvidia, Stanford and the University of Toronto.

MLCommons releases benchmark results in batches and has different publishing schedules for inference and for training. The latest announcement was for version 2.0 of the MLPerf Inference suite for data center and edge servers, version 2.0 for MLPerf Mobile, and version 0.7 for MLPerf Tiny for IoT devices.

To date, the company that has had the most consistent set of submissions, producing results every iteration, in every benchmark test, and by multiple partners, has been Nvidia. Nvidia and its partners appear to have invested enormous resources in running and publishing every relevant MLCommons benchmark. No other vendor can match that claim. The recent batch of inference benchmark submissions includes Nvidia Jetson Orin SoCs for edge servers and the Ampere-based A100 GPUs for data centers. Nvidia's Hopper H100 data center GPU, which was announced at the spring 2022 GTC, arrived too late to be included in the latest MLCommons announcement, but we fully expect to see Nvidia H100 results in the next round.

Recently, Qualcomm and its partners have been posting more data center MLPerf benchmarks for the company's Cloud AI 100 platform and more mobile MLPerf benchmarks for Snapdragon processors. Qualcomm's latest silicon has proved to be very power efficient in data center ML tests, which may give it an edge in power-constrained edge server applications.

Many of the submitters are system vendors using processors and accelerators from silicon vendors like AMD, Andes, Ampere, Intel, Nvidia, Qualcomm, and Samsung. But many of the AI startups have been absent. As one consulting company, Krai, put it: "Potential submitters, especially ML hardware startups, are understandably wary of committing precious engineering resources to optimizing industry benchmarks instead of actual customer workloads." But then Krai countered its own objection with "MLPerf is the Olympics of ML optimization and benchmarking." Still, many startups have not invested in producing MLCommons results for various reasons, and that is disappointing. There are also not enough FPGA vendors participating in this round.

The MLPerf Tiny benchmark is designed for very low power applications such as keyword spotting, visual wake words, image classification, and anomaly detection. In this case, we see results from a mix of small companies like Andes, Plumeria, and Syntiant, as well as established companies like Alibaba, Renesas, Silicon Labs, and STMicroelectronics.


IBM Adds AI Acceleration Into Every Transaction

While IBM didn't participate in the MLCommons benchmarks, the company takes ML seriously. With its latest Z-series mainframe computer, the z16, IBM has added accelerators for ML inference and quantum-safe secure boot and cryptography. But mainframe systems have different customer requirements. With roughly 70% of banking transactions (on a value basis) running on IBM mainframes, the company is anticipating the needs of financial institutions for extremely reliable transaction processing and protection. In addition, by adding ML acceleration into its CPU, IBM can offer per-transaction ML intelligence to help detect fraudulent transactions.

In an article I wrote in 2018, I said: "In fact, the future hybrid cloud compute model will likely include classic computing, AI processing, and quantum computing. When it comes to understanding all three of those technologies, few companies can match IBM's level of commitment and expertise." The latest developments in IBM's quantum computing roadmap and the ML acceleration in the z16 show IBM is a leader in both.

Summary

Machine Learning is important from tiny devices up to mainframe computers. Accelerating this workload can be done on CPUs, GPUs, FPGAs, ASICs, and even MCUs and is now a part of all computing going forward. These are two examples of how ML is changing and improving over time.

Tirias Research tracks and consults for companies throughout the electronics ecosystem from semiconductors to systems and sensors to the cloud. Members of the Tirias Research team have consulted for IBM, Nvidia, Qualcomm, and other companies throughout the AI ecosystems.


Amazon awards grant to UI researchers to decrease discrimination in AI algorithms – UI The Daily Iowan

A team of University of Iowa researchers received $800,000 from Amazon and the National Science Foundation to limit the discriminatory effects of machine learning algorithms.

Larry Phan

University of Iowa researcher Tianbao Yang sits at his desk, where he works on AI research, on Friday, April 8, 2022.

University of Iowa researchers are examining discriminatory qualities of artificial intelligence and machine learning models, which can be unfair with respect to one's race, gender, or other characteristics based on patterns in data.

A University of Iowa research team received an $800,000 grant funded jointly by the National Science Foundation and Amazon to decrease the possibility of discrimination through machine learning algorithms.

The three-year grant is split between the UI and Louisiana State University.

According to Microsoft, machine learning models are files trained to recognize specific types of patterns.

Qihang Lin, a UI associate professor in the department of business analytics and grant co-investigator, said his team wants to make machine learning models fairer without sacrificing an algorithm's accuracy.

RELATED: UI professor uses machine learning to indicate a body shape-income relationship

"People nowadays in [the] academic field ladder, if you want to enforce fairness in your machine learning outcome, you have to sacrifice the accuracy," Lin said. "We somehow agree with that, but we want to come up with an approach that [does] trade-off more efficiently."

Lin said discrimination created by machine learning algorithms can be seen in disproportionate predictions of rates of recidivism (a convicted criminal's tendency to re-offend) for different social groups.

"For instance, let's say we look at U.S. courts: they use a software to predict what is the chance of recidivism of a convicted criminal, and they realize that that software, that tool they use, is biased because they predicted a higher risk of recidivism of African Americans compared to their actual risk of recidivism," Lin said.
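The accuracy/fairness trade-off the researchers describe is often formalized by adding a fairness penalty to the model's training loss. The sketch below is not the UI team's actual method; it is a generic illustration on synthetic data, where a hypothetical weight `lam` controls how much a demographic-parity gap (difference in mean predicted scores between groups) is penalized:

```python
# Generic sketch of a fairness-penalized loss (NOT the UI team's method).
# All data below is synthetic; `lam` is a hypothetical trade-off parameter.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                     # sensitive attribute (0 or 1)
x = rng.normal(size=(n, 3)) + 0.5 * group[:, None]
y = (x.sum(axis=1) + rng.normal(size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def penalized_loss(w, lam):
    """Binary cross-entropy plus lam times the demographic-parity gap."""
    p = sigmoid(x @ w)
    bce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    gap = abs(p[group == 1].mean() - p[group == 0].mean())
    return bce + lam * gap    # larger lam trades accuracy for fairness

# At w = 0 the model predicts 0.5 for everyone: maximally "fair" (gap = 0)
# but uninformative (loss = log 2).
print(round(penalized_loss(np.zeros(3), lam=1.0), 4))  # prints 0.6931
```

Minimizing this objective with a larger `lam` pushes the two groups' average predicted risk closer together, typically at some cost in raw accuracy, which is exactly the trade-off the grant aims to make more efficient.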

Tianbao Yang, a UI associate professor of computer science and grant principal investigator, said the team proposed a collaboration with Netflix to encourage fairness in the process of recommending shows or films to users.

"Here we also want to be fair in terms of, for example, users' gender, users' race; we want to be fair," Yang said. "We're also collaborating with them to use our developed solutions."

Another instance of machine learning algorithm unfairness comes in determining what neighborhoods to allocate medical resources, Lin said.

RELATED: UI College of Engineering uses artificial-intelligence to solve problems across campus

In this process, Lin said the health of a neighborhood is determined by examining household spending on medical expenses. Healthy neighborhoods are allocated more resources, creating a bias against lower income neighborhoods that may spend less on medical resources, Lin said.

"There's a bad cycle that kind of reinforces the knowledge the machines mistakenly have about the relationship between the income, medical expense in the house, and the health," Lin said.

Yao Yao, a UI third-year doctoral candidate in the department of mathematics, is conducting various experiments for the research team.

She said the importance of the groups focus is that they are researching more than simply reducing errors in machine learning algorithm predictions.

"Previously, people only focus on how to minimize the error, but most time we know that the machine learning, the AI, will cause some discrimination," Yao said. "So, it's very important because we focus on fairness."


How machine learning and AI help find next-generation OLED materials – OLED-Info

In recent years, we have seen accelerated OLED materials development, aided by software tools based on machine learning and Artificial Intelligence. This is an excellent development which contributes to the continued improvement in OLED efficiency, brightness and lifetime.

Kyulux's Kyumatic AI material discovery system

The promise of these new technologies is the ability to screen millions of possible molecules and systems quickly and efficiently. Materials scientists can then take the most promising candidates and perform real synthesis and experiments to confirm the operation in actual OLED devices.

The main drive behind the use of AI systems and mass simulations is to save the time that actual synthesis and testing of a single material can take - sometimes even months to complete the whole cycle. It is simply not viable to perform these experiments on a mass scale, even for large materials developers, let alone early stage startups.

In recent years we have seen several companies announcing that they have adopted such materials screening approaches. Cynora, for example, has an AI platform it calls GEM (Generative Exploration Model) which its materials experts use to develop new materials. Another company is US-based Kebotix, which has developed an AI-based molecular screening technology to identify novel blue OLED emitters, and it is now starting to test new emitters.

The first company to apply such an AI platform successfully was, to our knowledge, Japan-based Kyulux. Shortly after its establishment in 2015, the company licensed Harvard University's machine learning "Molecular Space Shuttle" system. The system has been assisting Kyulux's researchers to dramatically speed up their materials discovery process. The company reports that its development cycle has been reduced from many months to only 2 months, with higher process efficiencies as well.

Since 2016, Kyulux has been improving its AI platform, which is now called Kyumatic. Today, Kyumatic is a fully integrated materials informatics system that consists of a cloud-based quantum chemical calculation system, an AI-based prediction system, a device simulation system, and a data management system which includes experimental measurements and intellectual properties.

Kyulux is advancing fast with its TADF/HF material systems, and in October 2021 it announced that its green emitter system is getting close to commercialization and the company is now working closely with OLED makers, preparing for early adoption.


Meet the winners of the Machine Learning Hackathon by Swiss Re & MachineHack – Analytics India Magazine

Swiss Re, in collaboration with MachineHack, successfully completed the Machine Learning Hackathon held from March 11th to 28th for data scientists and ML professionals to predict accident risk scores for unique postcodes. The end goal? To build a machine learning model to improve auto insurance pricing.

The hackathon saw over 1,100 registrations and 300+ participants. Of those, the top five were asked to participate in a solution showcase held on the 6th of April. The top five entries were judged by Amit Kalra, Managing Director, Swiss Re, and Jerry Gupta, Senior Vice President, Swiss Re, who engaged with the top participants, understood their solutions and presentations, and provided their comments and scores. From that emerged the top three winners!

Let's take a look at the winners who impressed the judges with their analytics skills and took home highly coveted cash prizes and goodies.

Pednekar brings more than 19 years of work experience in IT, project management, software development, application support, software system design, and requirements study. He is passionate about new technologies, especially data science, AI, and machine learning.

"My expertise lies in creating data visualisations to tell my data's story and using feature engineering to add new features to give a human touch in the world of machine learning algorithms," said Pednekar.

Pednekar's approach consisted of seven steps:

For EDA, Pednekar analysed the dataset to find the relationships between:


Here, Pednekar merged the Population and Road Network datasets with the training set using a left join. He created Latitude and Longitude columns by extracting data from the WKT columns in Roads_network.
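The joins and coordinate extraction described above can be sketched in pandas; the table names, columns, and values below are illustrative stand-ins, not the actual hackathon schema:

```python
import pandas as pd

# Toy stand-ins for the hackathon's train, population, and road-network tables
train = pd.DataFrame({"postcode": ["AB1", "CD2"], "accident_risk": [0.4, 0.7]})
population = pd.DataFrame({"postcode": ["AB1", "CD2"], "population": [1200, 800]})
roads_network = pd.DataFrame({
    "postcode": ["AB1", "CD2"],
    "WKT": ["POINT (-3.19 55.95)", "POINT (-2.98 56.46)"],
})

# Left-join the supplemental tables onto the training data
merged = train.merge(population, on="postcode", how="left")
merged = merged.merge(roads_network, on="postcode", how="left")

# Create Latitude and Longitude columns by parsing the WKT point strings
coords = merged["WKT"].str.extract(
    r"POINT \((?P<Longitude>-?[\d.]+) (?P<Latitude>-?[\d.]+)\)"
)
merged[["Longitude", "Latitude"]] = coords.astype(float)
```

A left join keeps every training row even when a postcode has no match in the supplemental tables, which is why it is the usual choice for feature enrichment.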

He proceeded to

And added new features:

Pednekar completed the following steps:


Pednekar thoroughly enjoyed participating in this hackathon. He said, "The MachineHack team and the platform are amazing, and I would highly recommend them to all data science practitioners. I would like to thank MachineHack for providing me with the opportunity to participate in various data science problem-solving challenges."

Check the code here.

Yadav's data science journey started a couple of years back, and since then, he has been an active participant in hackathons conducted on different platforms. "Learning from fellow competitors and absorbing their ideas is the best part of any data science competition, as it widens your scope of thinking and makes you better after each and every competition," says Yadav.

"MachineHack competitions are unique and have a different business case in each of their hackathons. They give us a field wherein we can practise and learn new skills by applying them to a particular domain case. It builds confidence as to what would work and what would not in certain cases. I appreciate the hard work the team is putting in to host such competitions," adds Yadav.

Check the code here.

Rank 03: Prudhvi Badri

Badri entered the data science field while pursuing a master's in computer science at Utah State University in 2014, where he took classes related to statistics, Python programming, and AI, and wrote a research paper on predicting malicious users in online social networks.

"After my education, I started to work as a data scientist for a fintech startup company and built models to predict loan default risk for customers. I am currently working as a senior data scientist for a website security company. In my role, I focus on building ML models to predict malicious internet traffic and block attacks on websites. I also mentor data scientists and help them build cool projects in this field," said Badri.

Badri mainly focused on feature engineering to solve this problem. He created aggregated features such as min, max, median, sum, etc., by grouping a few categorical columns such as Day_of_Week, Road_Type, etc. He built features from population data such as sex_ratio, male_ratio, female_ratio, etc.
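Group-wise aggregate features of this kind can be sketched with a pandas groupby; the column names and values below are toy stand-ins for the actual data:

```python
import pandas as pd

# Toy accident records; column names mirror the kinds mentioned above
df = pd.DataFrame({
    "Day_of_Week": ["Mon", "Mon", "Tue", "Tue"],
    "Road_Type": ["A", "B", "A", "B"],
    "casualties": [2, 1, 3, 4],
})

# Aggregate min/max/median/sum of a numeric column per group, then
# join the aggregates back onto each row as new feature columns
agg = df.groupby("Day_of_Week")["casualties"].agg(["min", "max", "median", "sum"])
agg.columns = [f"casualties_{c}_by_dow" for c in agg.columns]
df = df.merge(agg.reset_index(), on="Day_of_Week", how="left")
```

Each row now carries summary statistics of its group, which lets tree-based models compare an individual observation against its peers.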

He adds, "I did not use the roads dataset that was provided as supplemental data. I created a total of 241 features and used ten-fold cross-validation to validate the model. Finally, for modelling, I used a weighted ensemble model of LightGBM and XGBoost."
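The out-of-fold ensembling Badri describes can be sketched as follows; synthetic data replaces the hackathon set, scikit-learn's gradient boosting and random forest stand in for LightGBM and XGBoost (which may not be installed), and the 0.6/0.4 blend weights are purely illustrative:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

# Synthetic regression data standing in for the accident-risk scores
X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

# Ten-fold cross-validation producing out-of-fold predictions
kf = KFold(n_splits=10, shuffle=True, random_state=0)
oof_pred = np.zeros(len(y))
for train_idx, val_idx in kf.split(X):
    m1 = GradientBoostingRegressor(n_estimators=50, random_state=0)
    m2 = RandomForestRegressor(n_estimators=50, random_state=0)
    m1.fit(X[train_idx], y[train_idx])
    m2.fit(X[train_idx], y[train_idx])
    # Weighted blend of the two models' fold predictions
    oof_pred[val_idx] = 0.6 * m1.predict(X[val_idx]) + 0.4 * m2.predict(X[val_idx])

rmse = mean_squared_error(y, oof_pred) ** 0.5  # overall out-of-fold error
```

Because every prediction is made by models that never saw that row during training, the out-of-fold error is an honest estimate of how the blend would score on unseen data.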

Badri has been a member of MachineHack since 2020. "I am excited to participate in the competitions as they are unique and always help me learn about a new domain and let me try new approaches. I appreciate the transparency of the platform in sharing the approaches of the top participants once the hackathon is finished. I learned a lot of new techniques and approaches from other members. I look forward to participating in more hackathons on the MachineHack platform and encourage my friends and colleagues to participate too," concluded Badri.

Check the code here.

The Swiss Re Machine Learning Hackathon, in collaboration with MachineHack, ended with a bang, with participants presenting out-of-the-box solutions to solve the problem in front of them. Such a high display of skills made the hackathon intensely competitive and fun and surely made the challenge a huge success!

See original here:
Meet the winners of the Machine Learning Hackathon by Swiss Re & MachineHack - Analytics India Magazine

Machine learning in higher education – McKinsey

Many higher-education institutions are now using data and analytics as an integral part of their processes. Whether the goal is to identify and address pain points in the student journey, more efficiently allocate resources, or improve the student and faculty experience, institutions are seeing the benefits of data-backed solutions.

Those at the forefront of this trend are focusing on harnessing analytics to increase program personalization and flexibility, as well as to improve retention by identifying students at risk of dropping out and reaching out proactively with tailored interventions. Indeed, data science and machine learning may unlock significant value for universities by ensuring resources are targeted toward the highest-impact opportunities to improve access for more students, as well as student engagement and satisfaction.

For example, Western Governors University in Utah is using predictive modeling to improve retention by identifying at-risk students and developing early-intervention programs. Initial efforts raised the graduation rate for the university's four-year undergraduate program by five percentage points between 2018 and 2020.

Yet higher education is still in the early stages of data capability building. With universities facing many challenges (such as financial pressures, the demographic cliff, and an uptick in student mental-health issues) and a variety of opportunities (including reaching adult learners and scaling online learning), expanding use of advanced analytics and machine learning may prove beneficial.

Below, we share some of the most promising use cases for advanced analytics in higher education to show how universities are capitalizing on those opportunities to overcome current challenges, both enabling access for many more students and improving the student experience.


Advanced-analytics techniques may help institutions unlock significantly deeper insights into their student populations and identify more nuanced risks than they could achieve through descriptive and diagnostic analytics, which rely on linear, rule-based approaches (Exhibit 1).

Exhibit 1

Advanced analytics, which uses the power of algorithms such as gradient boosting and random forest, may also help institutions address inadvertent biases in their existing methods of identifying at-risk students and proactively design tailored interventions to mitigate the majority of identified risks.

For instance, institutions using linear, rule-based approaches look at indicators such as low grades and poor attendance to identify students at risk of dropping out; institutions then reach out to these students and launch initiatives to better support them. While such initiatives may be of use, they often are implemented too late and target only a subset of the at-risk population. The approach falls short because of two problems facing student-success leaders at universities. First, there are too many variables that could be analyzed to indicate risk of attrition (such as academic, financial, and mental-health factors, and sense of belonging on campus). Second, while it's easy to identify notable variance on any one or two variables, it is challenging to identify nominal variance across multiple variables. Linear, rule-based approaches therefore may fail to identify students who, for instance, have decent grades and above-average attendance but have been struggling to submit their assignments on time or have consistently had difficulty paying their bills (Exhibit 2).

Exhibit 2

A machine-learning model could address both of the challenges described above. Such a model looks at ten years of data to identify factors that could help a university make an early determination of a students risk of attrition. For example, did the student change payment methods on the university portal? How close to the due date does the student submit assignments? Once the institution has identified students at risk, it can proactively deploy interventions to retain them.
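A minimal sketch of such an early-warning model, assuming wholly synthetic data and two hypothetical behavioral features of the kind the questions above suggest (payment-method changes, assignment submission timing):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical behavioral signals: how often a student changed payment
# method, and how many days before the due date assignments were submitted
payment_method_changes = rng.poisson(1.0, n)
days_before_due = rng.normal(3.0, 2.0, n)

# Synthetic attrition labels loosely tied to those signals (illustrative only)
latent_risk = 0.3 * payment_method_changes - 0.4 * days_before_due
attrited = (latent_risk + rng.normal(0.0, 1.0, n) > 0).astype(int)

X = np.column_stack([payment_method_changes, days_before_due])
X_tr, X_te, y_tr, y_te = train_test_split(X, attrited, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Rank held-out students by predicted attrition risk for proactive outreach
risk_scores = model.predict_proba(X_te)[:, 1]
flagged = risk_scores > 0.5
```

In practice the institution would train on historical cohorts with known outcomes, score the current student population, and route the highest-risk students to advisers for intervention.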

Though many institutions recognize the promise of analytics for personalizing communications with students, increasing retention rates, and improving student experience and engagement, they could be using these approaches for the full range of use cases across the student journey, for prospective, current, and former students alike.

For instance, advanced analytics can help institutions identify which high schools, zip codes, and counties they should focus on to reach prospective students who are most likely to be great fits for the institution. Machine learning could also help identify interventions and support that should be made available to different archetypes of enrolled students to help measure and increase student satisfaction. These use cases could then be extended to providing students support with developing their skills beyond graduation, enabling institutions to provide continual learning opportunities and to better engage alumni. As an institution expands its application and coverage of advanced-analytics tools across the student life cycle, the model gets better at identifying patterns, and the institution can take increasingly granular interventions and actions.

Institutions will likely want to adopt a multistep model to harness machine learning to better serve students. For example, for efforts aimed at improving student completion and graduation rates, the following five-step technique could generate immense value:

Institutions could deploy this model at a regular cadence to identify students who would most benefit from additional support.

Institutions could also create similar models to address other strategic goals or challenges, including lead generation and enrollment. For example, institutions could, as a first step, analyze 100 or more attributes from years of historical data to understand the characteristics of applicants who are most likely to enroll.


The experiences of two higher education institutions that leaned on advanced analytics to improve enrollment and retention reveal the impact such efforts can have.

One private nonprofit university had recently enrolled its largest freshman class in history and was looking to increase its enrollment again. The institution wanted to both reach more prospective first-year undergraduate students who would be a great fit for the institution and improve conversion in the enrollment journey in a way that was manageable for the enrollment team without significantly increasing investment and resources. The university took three important actions:

For this institution, advanced-analytics modeling had immediate implications and impact. The initiative also suggested future opportunities for the university to serve more freshmen with greater marketing efficiency. When initially tested against leads for the subsequent fall (prior to the application deadline), the model accurately predicted 85 percent of candidates who submitted an application, and it predicted the 35 percent of applicants at that point in the cycle who were most likely to enroll, assuming no changes to admissions criteria (Exhibit 3). The enrollment management team is now able to better prioritize its resources and time on high-potential leads and applicants to yield a sizable class. These new capabilities will give the institution the flexibility to make strategic choices; rather than focus primarily on the size of the incoming class, it may ensure the desired class size while prioritizing other objectives, such as class mix, financial-aid allocation, or budget savings.

Exhibit 3

Similar to many higher-education institutions during the pandemic, one online university was facing a significant downward trend in student retention. The university explored multiple options and deployed initiatives spearheaded by both academic and administrative departments, including focus groups and nudge campaigns, but the results fell short of expectations.

The institution wanted to set a high bar for student success and achieve marked and sustainable improvements to retention. It turned to an advanced-analytics approach to pursue its bold aspirations.

To build a machine-learning model that would allow the university to identify students at risk of attrition early, it first analyzed ten years of historical data to understand the key characteristics that differentiate students who were most likely to continue, and thus graduate, from those who unenrolled. After validating that the initial model was multiple times more effective at predicting retention than the baseline, the institution refined the model and applied it to the current student population. This attrition model yielded five at-risk student archetypes, three of which ran counter to conventional wisdom about what typical at-risk student profiles look like (Exhibit 4).

Exhibit 4

Together, these three counterintuitive archetypes of at-risk students, which would have been missed by a linear analytics approach, account for about 70 percent of the students most likely to discontinue enrollment. The largest group of at-risk individuals (accounting for about 40 percent of the at-risk students identified) were distinctive academic achievers with an excellent overall track record. This means the model identified at least twice as many students at risk of attrition as models based on linear rules would have. The model outputs have allowed the university to identify students at risk of attrition more effectively and to strategically invest in the short- and medium-term initiatives most likely to drive retention improvement.

With the model and data on at-risk student profiles in hand, the online university launched a set of targeted interventions focused on providing tailored support to students in each archetype to increase retention. Actions included scheduling more touchpoints with academic and career advisers, expanding faculty mentorship, and creating alternative pathways for students to satisfy their knowledge gaps.

Advanced analytics is a powerful tool that may help higher-education institutions overcome the challenges facing them today, spur growth, and better support students. However, machine learning is complex, with considerable associated risks. While the risks vary based on the institution and the data included in the model, higher-education institutions may wish to take the following steps when using these tools:

While many higher-education institutions have started down the path to harnessing data and analytics, there is still a long way to go to realizing the full potential of these capabilities in terms of the student experience. The influx of students and institutions that have been engaged in online learning and using technology tools over the past two years means there is significantly more data to work with than ever before; higher-education institutions may want to start using it to serve students better in the years to come.

Originally posted here:
Machine learning in higher education - McKinsey