Daily Archives: September 27, 2021

Automation puts one in 10 Aussie jobs at risk, report warns – 9News

Posted: September 27, 2021 at 5:38 pm

One in 10 jobs in Australia is at risk of being automated as the economy recovers from the COVID-19 pandemic, a new report says.

In comparison, wealthier urban areas face the least risk of jobs being automated.

Jobs most at risk from automation

The OECD estimates 36 per cent of Australian jobs are at risk of being automated, compared with the OECD average of 46 per cent. About 11 per cent of jobs face a high risk of being automated. A further 25 per cent face major changes.

Plant and machinery operators and food preparation workers are among the employment sectors most at risk.

Young people, men and Indigenous people are also more likely to face declining job opportunities.

Where will it be felt hardest?

Regional towns and cities will be among the most affected by automation. Many of these have traditional industries such as coal mining.

About 40 per cent of jobs in the New South Wales Hunter Region face some disruption, while in Queensland's Mackay region the figure is about 41 per cent.

In comparison, Canberra and Sydney's eastern suburbs face the lowest risk of jobs lost through automation.

Which jobs have the best prospects?

The COVID-19 crisis accelerated the shift from traditional industries to teaching and health, the OECD says.

One in seven Australians now works in health or social services, an increase of 100,000 workers over the past 12 months.

In Australia, health or social services now account for more than 14 per cent of all jobs.

What about protecting jobs?

The OECD says some workers will be upskilled as automation takes over their routine duties, ensuring they can carry out non-routine tasks that cannot be achieved through automation.

This trend is increasingly common in the mining and resource industries.

Federal and state governments need to continue this push to encourage a shift away from sectors dominated by traditional jobs.

Read the rest here:

Automation puts one in 10 Aussie jobs at risk, report warns - 9News

Posted in Automation | Comments Off on Automation puts one in 10 Aussie jobs at risk, report warns – 9News

DHL harnesses the benefits of automation – SHD Logistics

Posted: at 5:38 pm

Over the last eighteen months the eCommerce space has seen significant growth, catalysed by the Covid-19 pandemic and global lockdowns which shifted even more consumer spend online. We have seen this across the board, with pure play etailers growing, bricks and mortar operations able to solidify their online operations, and even new market entrants able to launch with different business models such as subscription services. This acceleration in the market has been great for consumers, and a lifeline for businesses needing to find their way through challenging times, but has posed some challenges from a fulfilment perspective. A surge in the number and type of orders that need to be picked, packed and shipped can put pressure on businesses as they try to keep to their customer promises and brand ambitions, while maintaining employee satisfaction. At DHL Supply Chain we have been working closely with technology providers as part of our accelerated digitisation strategy to ensure we're able to harness the benefits of innovations in automation to help our customers, delivering efficiency and service improvements to their operations as well as improving the working environment for our colleagues.

In the fashion eCommerce space we have seen our customers experience significant growth as we stayed at home but continued to shop online. Often the challenge in both mid- and high-end fashion is getting the cost-to-serve price point right, something that can be improved by making the order picking process more efficient and where automation can really help. For example, for some of our fashion customers globally we have been able to deploy the Geek+ goods-to-person solution, which eliminates the need for our colleagues to travel around the warehouse picking manually, helping to improve picking accuracy and drive efficiencies. This technology also allows us to improve the density of product storage, reducing the warehouse footprint needed and lowering the costs of the logistics operation even further. As with all the robotics partners selected by DHL, the rich data behind the Geek+ solution can also be used by our customers to gain an in-depth understanding of their consumer and their fulfilment operation, producing sophisticated heatmaps of how their stock is performing in real-time.

Looking to the future, shared user sites are becoming increasingly common and these goods-to-person robots have a real place in these environments, as sophisticated algorithms will allow them to work accurately in multi-user sites, and even simultaneously across different businesses.

Another area we're working with customers on is the beauty and pet care space, where high throughput and a requirement for personalisation and customisation are driving the need for an increasingly light-touch, end-to-end pick process. For pure play retailers, peak volumes can typically be four times the non-peak volumes, placing strain on fulfilment operations as they adapt. At the same time, the shipping and unboxing experience is particularly important, as it may be the only real opportunity for the brand to interact with the consumer and so needs to be an opportunity to build brand loyalty. Across our global eCommerce business, working with 6 River Systems we have been able to deploy the mobile robot Chuck to help our customers manage the challenges of fluctuating demand, while maintaining high quality control and delivering a great end-consumer experience. Using Chuck means that each order can be quickly and efficiently picked and packed across a high number of SKUs, and only needs one touch before it is shipped. Reducing the number of touchpoints is really beneficial to the protection and presentation of fragile beauty products.

Aside from the efficiency gains that innovations in the order-picking space can bring, it's important for us to consider how we can improve the employee experience for our colleagues in these roles. At DHL we are focused on introducing technology that improves the working life of colleagues across the business, and in the warehouse it is no different. Adding automation into the environment means that our colleagues are able to move away from repetitive and manual tasks into a more technology-based role. In a competitive labour market we know that giving people the opportunity to work with cutting-edge innovation can be incredibly rewarding, and creates the sort of working environment where people want to stay. Likewise, while the strict requirements for social distancing may have passed, there are still ongoing challenges, and the use of technology allows us to keep staff welfare front of mind by reducing the number of pack benches we need.

Working with a business like DHL is a great way for retailers to take advantage of the opportunities presented by the technology, without taking on the investment themselves. In these challenging times they don't want to stifle the growth opportunities, so tapping into our portfolio of solutions allows them to explore the possibilities without any of the risk. It's an exciting time to be in this space, as there is no doubt that order picking and fulfilment stands to benefit greatly from innovations in automation and robotics now, and into the future. We have already seen just how transformational it can be, and we're still at the beginning of the journey.

Read more:

DHL harnesses the benefits of automation - SHD Logistics

Posted in Automation | Comments Off on DHL harnesses the benefits of automation – SHD Logistics

PMMI’s Top to Top addresses automation, workforce woes and sustainability – Packaging World

Posted: at 5:38 pm

Major consumer brands like Barilla, Kraft Heinz, PepsiCo and General Mills collaborated with PMMI member OEMs such as Spee Dee, Garvey, Delkor, and Poly Pack to prioritize challenges facing the industry today as well as in the next three to six years.

The biggest ask from CPGs is machinery that is more intuitive for easier operations, especially with continued workforce shortages. Two-thirds of the audience voted this the most challenging issue facing the industry today. One CPG called for machinery that is "Amazon-easy" to operate.

Forty-two percent of participants called for earlier collaboration between stakeholders including contract packagers/manufacturers.

Better partnerships with deeper communication are keys to success. "Don't wait to tell us about a problem or shortage," said one CPG. "Often, we can help source steel or components because of our global connections."

About a third of participants point to workforce issues as an opportunity to help boost ROI for automation projects, especially connectivity to aid in remote monitoring, diagnostics, and the like, though everyone agrees that when you talk machine connectivity, you are introducing cybersecurity issues.

The discussion turned more passionate when moderator Jorge Izquierdo, VP of Market Development for PMMI, asked about projecting problem issues three to six years in the future. Many issues garnered the same amount of interest, with no one issue emerging above others.

The number-one issue in the future will be sustainability and recycling initiatives, including education for industry and consumers. "The industry needs to address our role in waste and climate change," said one participant.

Another crisis tied to workforce is making the packaging industry attractive to young workers. Working with schools to create a curriculum that will prepare students for specific jobs was suggested. "A clear way to attract young gamers is the application of virtual and augmented reality," said one participant.

Follow this link:

PMMI's Top to Top addresses automation, workforce woes and sustainability - Packaging World

Posted in Automation | Comments Off on PMMI’s Top to Top addresses automation, workforce woes and sustainability – Packaging World

Explainable AI Is the Future of AI: Here Is Why – CMSWire

Posted: at 5:36 pm

Photo: Adobe

Artificial intelligence is going mainstream. If you're using Google Docs, Ink for All or any number of digital tools, AI is being baked in. AI is already making decisions in the workplace, around hiring, customer service and more. However, a recurring issue with AI is that it can be a bit of a "black box": a mystery as to how it arrived at its decisions. Enter explainable AI.

Explainable Artificial Intelligence, or XAI, is similar to a normal AI application except that the processes and results of an XAI algorithm can be explained in terms humans understand. The complex nature of artificial intelligence means that AI is making decisions in real time based on the insights it has discovered in the data it has been fed. When we do not fully understand how AI is making these decisions, we cannot fully optimize the AI application to be all that it is capable of. XAI enables people to understand how AI and machine learning (ML) are being used to make decisions, predictions and insights. Explainable AI allows brands to be transparent in their use of AI applications, which increases user trust and the overall acceptance of AI.
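
To make the distinction concrete, here is a minimal sketch (scikit-learn, with invented feature names and synthetic data) of the simplest kind of explainable model: a shallow decision tree whose learned rules can be printed and read directly, rather than staying hidden inside a black box.

```python
# A minimal sketch of an interpretable model: the decision process can be
# printed and read by a human. Feature names and data are invented.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["tenure", "monthly_spend", "support_tickets", "logins"]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a black-box network, the fitted rules are directly inspectable.
print(export_text(model, feature_names=feature_names))
```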

There is a valid need for XAI if AI is going to be used across industries. According to a report by FICO, 65% of surveyed employees could not explain how AI model decisions or predictions are determined. The benefits of XAI are beginning to be well recognized, and not just by scientists and data engineers. The European Union's draft AI regulations specify XAI as a prerequisite for the eventual normalization of machine learning in society. Standardization organizations including the European Telecommunications Standards Institute (ETSI) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) also recognize the importance of XAI in relation to the acceptance and trust of AI in the future.

Philip Pilgerstorfer, data scientist and XAI specialist at QuantumBlack, has outlined several benefits of XAI.

Most AI that uses ML operates in what is referred to as a "black box", an area that is unable to provide any discernible insight into how it comes to make decisions. Many AI/ML applications are moderately benign decision engines, such as those used in online retail recommender systems, so it is not absolutely necessary to ensure transparency or explainability. For other, riskier decision processes, such as medical diagnoses in healthcare, investment decisions in the financial industry, and safety-critical systems in autonomous automobiles, the stakes are much higher. As such, the AI used in those systems should be explainable, transparent, and understandable in order to be trusted, reliable, and consistent.

When brands are better able to understand potential weaknesses and failures in an application, they are better prepared to maximize performance and improve the AI app. Explainable AI enables brands to more easily detect flaws in the data model, as well as biases in the data itself. It can also be used for improving data models, verifying predictions, and gaining additional insights into what is working, and what is not.

"Explainable AI has the benefits of allowing us to understand what has gone wrong and where it has gone wrong in an AI pipeline when the whole AI system makes an erroneous classification or prediction," said Marios Savvides, Bossa Nova Robotics Professor of Artificial Intelligence, Electrical and Computer Engineering and Director of the CyLab Biometrics Center at Carnegie Mellon University. "These are the benefits of an XAI pipeline. In contrast, a conventional AI system involving a complete end-to-end black-box deep learning solution is more complex to analyze, and it is more difficult to pinpoint exactly where and why an error has occurred."

Many businesses today use AI/ML applications to automate the decision-making process, as well as to gain analytical insights. Data models can be trained so that they are able to predict sales based on variable data, while an explainable AI model would enable a brand to increase revenue by determining the true drivers of sales.
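
As a rough illustration of that sales-drivers idea, the sketch below (synthetic data, hypothetical feature names) fits a sales model and then applies permutation importance, one common model-agnostic explainability technique, to estimate which inputs actually move the prediction.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
price_discount = rng.uniform(0.0, 0.3, n)
ad_spend = rng.uniform(0.0, 100.0, n)
foot_traffic = rng.normal(500.0, 50.0, n)  # present, but irrelevant to sales here
sales = 2000 * price_discount + 5 * ad_spend + rng.normal(0.0, 10.0, n)

X = np.column_stack([price_discount, ad_spend, foot_traffic])
model = GradientBoostingRegressor(random_state=0).fit(X, sales)

# Shuffle each column in turn and measure how much the score degrades:
# large drops mark the true drivers of the prediction.
result = permutation_importance(model, X, sales, n_repeats=10, random_state=0)
for name, score in zip(["price_discount", "ad_spend", "foot_traffic"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```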

Kevin Hall, CTO and co-founder of Ripcord, an organization that provides robotics, AI and machine learning solutions, explained that although AI-enabled technologies have proliferated throughout enterprise businesses, there are still complexities that exist that are preventing widespread adoption, largely that AI is still mysterious and complicated for most people. "In the case of intelligent document processing (IDP), machine learning (ML) is an incredibly powerful technology that enables higher accuracy and increased automation for document-based business processes around the world," said Hall. "Yet the performance and continuous improvement of these models is often limited by a complexity barrier between technology platforms and critical knowledge workers or end-users. By making the results of ML models more easily understood, Explainable AI will allow for the right stakeholders to more directly interact with and improve the performance of business processes."

Related Article: What Is Explainable AI (XAI)?

It's a fact that unconscious or algorithmic biases are built into AI applications. That's because no matter how advanced or smart the AI app is, or whether it uses ML or deep learning, it was developed by human beings, each of whom has their own unconscious biases, and it may have been trained on a biased data set. "Explainable AI systems can be architected in a way to minimize bias dependencies on different types of data, which is one of the leading issues when complete black box solutions introduce biases and make errors," explained Professor Savvides.

A recent CMSWire article on unconscious biases reflected on Amazon's failed use of AI for job application vetting. Although the shopping giant did not use prejudiced algorithms on purpose, its data set looked at hiring trends over the last decade and suggested the hiring of similar job applicants for positions with the company. Unfortunately, the data showed that the majority of those who were hired were white males, a fact that itself reflects the biases within the IT industry. Eventually, Amazon gave up on the use of AI for its hiring practices and went back to relying upon human decisioning. Many other biases can sneak into AI applications, including racial bias, name bias, beauty bias, age bias, and affinity bias.
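
A simple first-pass audit for this kind of skew is to compare selection rates across groups in the historical decisions a model is trained on, in the spirit of the "four-fifths rule" used in US employment-discrimination analysis. A toy sketch with fabricated data:

```python
import pandas as pd

# Fabricated historical decisions, purely for illustration.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0, 0, 0],
})

rates = decisions.groupby("group")["hired"].mean()
ratio = rates.min() / rates.max()

print(rates)
# A ratio below 0.8 is a common red flag for disparate impact.
print(f"disparate impact ratio: {ratio:.2f}")
```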

Fortunately, XAI can be used to detect and reduce unconscious biases within AI data sets. Several AI organizations, including OpenAI and the Future of Life Institute, are working with other businesses to ensure that AI applications are ethical and equitable for all of humanity.

Being able to explain why a person was not selected for a loan or a job will go a long way toward improving public trust in AI algorithms and machine learning processes. "Whether these models are clearly detailing the reason why a loan was rejected or why an invoice was flagged for fraud review, the ability to explain the model results will greatly improve the quality and efficiency of many document processes, which will lead to cost savings and greater customer satisfaction," said Hall.

Related Article:Ethics and Transparency: How We Can Reach Trusted AI

Along with the unconscious biases we previously discussed, XAI has other challenges to conquer.

Professor Savvides said that XAI systems need architecting into different sub-task modules where sub-module performance can be analyzed. The challenge is that these different AI/ML components need compute resources and require a data pipeline, so in general they can be more costly than an end-to-end system from a computational perspective.

There is also the issue of additional errors for an XAI algorithm, but there is a tradeoff, because errors in an XAI algorithm are easier to track down. "Additionally, there may be cases where a black-box approach may give fewer performance errors than an XAI system," he said. "However, there is no insight into the failure of the traditional AI approach other than trying to collect these cases and re-train, whereas the XAI system may be able to pinpoint the root cause of the error."

As AI applications become smarter and are used in more industries to solve bigger and bigger problems, the need for a human element in AI becomes more vital. XAI can help do just that.

"The next frontier of AI is the growth and improvements that will happen in Explainable AI technologies. They will become more agile, flexible, and intelligent when deployed across a variety of new industries. XAI is becoming more human-centric in its coding and design," reflected AJ Abdallat, CEO of Beyond Limits, an enterprise AI software solutions provider. "We've moved beyond deep learning techniques to embed human knowledge and experiences into the AI algorithms, allowing for more complex decision-making to solve never-seen-before problems, those without historical data or references. Machine learning techniques equipped with encoded human knowledge allow for AI that lets users edit their knowledge base even after it's been deployed. As it learns by interacting with more problems, data, and domain experts, the systems will become significantly more flexible and intelligent. With XAI, the possibilities are truly endless."

Related Article: Make Responsible AI Part of Your Company's DNA

Artificial intelligence is being used across many industries to provide everything from personalization and automation to financial decisioning, recommendations, and healthcare. For AI to be trusted and accepted, people must be able to understand how AI works and why it comes to the decisions it makes. XAI represents the evolution of AI, and offers opportunities for industries to create AI applications that are trusted, transparent, unbiased, and justified.

Excerpt from:

Explainable AI Is the Future of AI: Here Is Why - CMSWire

Posted in Ai | Comments Off on Explainable AI Is the Future of AI: Here Is Why – CMSWire

The limitations of AI safety tools – VentureBeat

Posted: at 5:36 pm


In 2019, OpenAI released Safety Gym, a suite of tools for developing AI models that respect certain safety constraints. At the time, OpenAI claimed that Safety Gym could be used to compare the safety of algorithms and the extent to which those algorithms avoid making harmful mistakes while learning.

Since then, Safety Gym has been used in measuring the performance of proposed algorithms from OpenAI as well as researchers from the University of California, Berkeley and the University of Toronto. But some experts question whether AI safety tools are as effective as their creators purport them to be or whether they make AI systems safer in any sense.

"OpenAI's Safety Gym doesn't feel like ethics washing so much as maybe wishful thinking," Mike Cook, an AI researcher at Queen Mary University of London, told VentureBeat via email. "As [OpenAI] note[s], what they're trying to do is lay down rules for what an AI system cannot do, and then let the agent find any solution within the remaining constraints. I can see a few problems with this, the first simply being that you need a lot of rules."

Cook gives the example of telling a self-driving car to avoid collisions. This wouldn't preclude the car from driving two centimeters away from other cars at all times, he points out, or doing any number of other unsafe things in order to optimize for the constraint.

"Of course, we can add more rules and more constraints, but without knowing exactly what solution the AI is going to come up with, there will always be a chance that it will be undesirable for one reason or another," Cook continued. "Telling an AI not to do something is similar to telling a three-year-old not to do it."
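
Cook's loophole is easy to demonstrate with a toy model (the rules and numbers below are invented, not any real driving stack): with only a "no collision" rule, the plan an optimizer prefers is a two-centimeter gap at full speed; each added rule closes one loophole.

```python
# Candidate driving plans as (gap_m, speed_kmh) pairs.
candidates = [(0.02, 100), (5.0, 100), (60.0, 100)]

no_collision = lambda gap, speed: gap > 0.0
two_second_gap = lambda gap, speed: gap >= (speed / 3.6) * 2.0  # two-second rule

def allowed_plans(rules):
    return [c for c in candidates if all(rule(*c) for rule in rules)]

# "Best" here means the smallest gap the rules still permit.
print(min(allowed_plans([no_collision])))                  # (0.02, 100)
print(min(allowed_plans([no_collision, two_second_gap])))  # (60.0, 100)
```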

Via email, an OpenAI spokesperson emphasized that Safety Gym is only one project among many that its teams are developing to make AI technologies safer and more responsible.

"We open-sourced Safety Gym two years ago so that researchers working on constrained reinforcement learning can check whether new methods are improvements over old methods, and many researchers have used Safety Gym for this purpose," the spokesperson said. "[While] there is no active development of Safety Gym, since there hasn't been a sufficient need for additional development, we believe research done with Safety Gym may be useful in the future in applications where deep reinforcement learning is used and safety concerns are relevant."
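
For readers unfamiliar with the setting Safety Gym benchmarks, constrained reinforcement learning, the sketch below shows the common Lagrangian recipe in miniature: a multiplier grows whenever per-episode safety cost exceeds a budget, pulling the policy back toward safe behavior. The environment is a one-line stand-in, not Safety Gym itself; actual Safety Gym tasks report a per-step cost that an agent would accumulate the same way.

```python
# Toy constrained optimization in the Lagrangian style used in constrained
# RL. The "policy" is a single number; the reward and cost curves are
# invented stand-ins for environment rollouts.
cost_budget = 25.0                     # allowed safety cost per episode
lam, lr_lam, lr_pi = 0.0, 0.001, 0.01  # multiplier and learning rates

def rollout(policy):
    reward = 10 * policy - policy ** 2  # unconstrained optimum at policy = 5
    cost = 40 * policy                  # riskier policies incur more cost
    return reward, cost

policy = 1.0
for _ in range(5000):
    reward, cost = rollout(policy)
    # Primal step: ascend the penalized objective reward - lam * cost.
    policy += lr_pi * ((10 - 2 * policy) - lam * 40)
    # Dual step: raise lam when cost exceeds the budget, relax otherwise.
    lam = max(0.0, lam + lr_lam * (cost - cost_budget))

# Settles near the constrained optimum, where cost ~= budget (policy ~= 0.625).
print(f"policy={policy:.3f}, cost={40 * policy:.1f}, lambda={lam:.3f}")
```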

The European Commission's High-level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, have attempted to create standards for building trustworthy, safe AI. Absent safety considerations, AI systems have the potential to inflict real-world harm, for example leading lenders to turn down people of color more often than applicants who are white.

Like OpenAI, Alphabet's DeepMind has investigated a method for training machine learning systems in both a safe and constrained way. It's designed for reinforcement learning systems, or AI that's progressively taught to perform tasks via a mechanism of rewards or punishments. Reinforcement learning powers self-driving cars, dexterous robots, drug discovery systems, and more. But because they're predisposed to explore unfamiliar states, reinforcement learning systems are susceptible to what's called the safe exploration problem, where they become fixated on unsafe states (e.g., a robot driving into a ditch).

DeepMind claims its safe training method is applicable to environments (e.g., warehouses) in which systems (e.g., package-sorting robots) dont know where unsafe states might be. By encouraging systems to explore a range of behaviors through hypothetical situations, it trains the systems to predict rewards and unsafe states in new and unfamiliar environments.

"To our knowledge, [ours] is the first reward modeling algorithm that safely learns about unsafe states and scales to training neural network reward models in environments with high-dimensional, continuous states," wrote the coauthors of the study. "So far, we have only demonstrated the effectiveness of [the algorithm] in simulated domains with relatively simple dynamics. One direction for future work is to test [the algorithm] in 3D domains with more realistic physics and other agents acting in the environment."

Firms like Intel's Mobileye and Nvidia have also proposed models to guarantee safe and logical AI decision-making, specifically in the autonomous car realm.

In October 2017, Mobileye released a framework called Responsibility-Sensitive Safety (RSS), a deterministic formula with logically provable rules of the road intended to prevent self-driving vehicles from causing accidents. Mobileye claims that RSS provides a "common sense" approach to on-the-road decision-making that codifies good habits, like maintaining a safe following distance and giving other cars the right of way.
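
Mobileye has published the longitudinal rule at the heart of RSS, which makes "safe following distance" computable: assume the worst case in which the rear car keeps accelerating for its response time and then brakes gently, while the lead car brakes as hard as possible. A sketch of that formula with illustrative parameter values (not Mobileye's own):

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_max_accel=3.0,
                                   b_min_brake=4.0,
                                   b_max_brake=8.0):
    """Minimum safe gap (m) per the published RSS longitudinal rule.

    v_rear, v_front: speeds in m/s; rho: response time in s;
    accelerations in m/s^2. Parameter values are illustrative.
    """
    v_rear_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_after_rho ** 2 / (2 * b_min_brake)  # rear car brakes gently
         - v_front ** 2 / (2 * b_max_brake))          # lead car brakes hard
    return max(d, 0.0)

# Both cars at 30 m/s (~108 km/h): roughly 111 m of required gap.
print(f"{rss_safe_longitudinal_distance(30.0, 30.0):.1f} m")
```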

Nvidia's take on the concept is Safety Force Field, which monitors unsafe actions by analyzing sensor data and making predictions, with the goal of minimizing harm and potential danger. Leveraging mathematical calculations Nvidia says have been validated in real-world and synthetic highway and urban scenarios, Safety Force Field can take into account both braking and steering constraints, ostensibly enabling it to identify anomalies arising from both.

The goal of these tools, safety, might seem well and fine on its face. But as Cook points out, there are a lot of sociological questions around safety, as well as around who gets to define what's safe. Underlining the problem, 65% of employees can't explain how AI model decisions or predictions are made at their companies, according to FICO, much less whether they're safe.

"As a society, we sort of collectively agree on what levels of risk we're willing to tolerate, and sometimes we write those into law. We expect a certain number of vehicular collisions annually. But when it comes to AI, we might expect to raise those standards higher, since these are systems we have full control over, unlike people," Cook said. "[An] important question for me with safety frameworks is: at what point would people be willing to say, 'Okay, we can't make technology X safe, we shouldn't continue'? It's great to show that you're concerned for safety, but I think that concern has to come with an acceptance that some things may just not be possible to do in a way that is safe and acceptable for everyone."

For example, while today's self-driving and ADAS systems are arguably safer than human drivers, they still make mistakes, as evidenced by Tesla's recent woes. Cook believes that if AI companies were held more legally and financially responsible for their products' actions, the industry would take a different approach to evaluating their systems' safety instead of trying to bandage the issues after the fact.

"I don't think the search for AI safety is bad, but I do feel that there might be some uncomfortable truths hiding there for people who believe AI is going to take over every aspect of our world," Cook said. "We understand that people make mistakes, and we have 10,000 years of society and culture that has helped us process what to do when someone does something wrong [but] we aren't really prepared, as a society, for AI failing us in this way, or at this scale."

Nassim Parvin, an associate professor of digital media at Georgia Tech, agrees that the discourse around self-driving cars especially has been overly optimistic. She argues that enthusiasm is obscuring proponents' ability to see what's at stake, and that a genuine, caring concern for the lives lost in car accidents could serve as a starting point to rethink mobility.

"[AI system design should] transcend false binary trade-offs and recognize the systemic biases and power structures that make certain groups more vulnerable than others," she wrote. "The term 'unintended consequences' is a barrier to, rather than a facilitator of, vital discussions about [system] design ... The overemphasis on intent forecloses consideration of the complexity of social systems in such a way as to lead to quick technical fixes."

It's unlikely that a single tool will ever be able to prevent unsafe decision-making in AI systems. In its blog post introducing Safety Gym, researchers at OpenAI acknowledged that the hardest scenarios in the toolkit were likely too challenging for techniques to resolve at the time. Aside from technological innovations, it's the assertion of researchers like Manoj Saxena, who chairs the Responsible AI Institute, a consultancy firm, that product owners, risk assessors, and users must be engaged in conversations about AI's potential flaws so that processes can be created that expose, test, and mitigate the flaws.

"[Stakeholders need to] ensure that potential biases are understood and that the data being sourced to feed to these models is representative of various populations that the AI will impact," Saxena told VentureBeat in a recent interview. "[They also need to] invest more to ensure members who are designing the systems are diverse."

Read this article:

The limitations of AI safety tools - VentureBeat

Posted in Ai | Comments Off on The limitations of AI safety tools – VentureBeat

AI Adoption Skyrocketed Over the Last 18 Months – Harvard Business Review

Posted: at 5:36 pm

When it comes to digital transformation, the Covid crisis has provided important lessons for business leaders. Among the most compelling lessons is the potential that data analytics and artificial intelligence bring to the table.

During the pandemic, for example, Frito-Lay ramped up its digital and data-driven initiatives, compressing five years' worth of digital plans into six months. "Launching a direct-to-consumer business was always on our roadmap, but we certainly hadn't planned on launching it in 30 days in the middle of a pandemic," says Michael Lindsey, chief growth officer at Frito-Lay. "The pandemic inspired our teams to move faster than we would have dreamed possible."

The crisis accelerated the adoption of analytics and AI, and this momentum will continue into the 2020s, surveys show. Fifty-two percent of companies accelerated their AI adoption plans because of the Covid crisis, a study by PwC finds. Just about all, 86%, say that AI is becoming a mainstream technology at their company in 2021. Harris Poll, working with Appen, found that 55% of companies reported they accelerated their AI strategy in 2020 due to Covid, and 67% expect to further accelerate their AI strategy in 2021.

Will companies be able to keep up this heightened pace of digital and data-driven innovation as the world emerges from Covid? In the wake of the crisis, close to three-quarters of business leaders (72%) feel positive about the role that AI will play in the future, a survey by The AI Journal finds. Most executives (74%) not only anticipate that AI will deliver more efficient business processes, but also that it will help to create new business models (55%) and enable the creation of new products and services (54%).

AI and analytics became critical to enterprises as they reacted to the shifts in working arrangements and consumer purchasing brought on by the Covid crisis. And as adoption of these technologies continues apace, enterprises will be drawing on lessons learned over the past year and a half that will guide their efforts well into the decade ahead:

Business leaders understand firsthand the power and potential of analytics and AI on their businesses. "Since Covid hit, CEOs are now leaning in, asking how they can take advantage of data," says Arnab Chakraborty, global managing director at Accenture. "They want to understand how to get a better sense of their customers. They want to create more agility in their supply chains and distribution networks. They want to start creating new business models powered by data. They know they need to build a data foundation, taking all of the data sets, putting them into an insights engine using all the algorithms, and powering insights solutions that can help them optimize their businesses, create more agility in business processes, know their customers, and activate new revenue channels."

AI is instrumental in alleviating skills shortages. Industries flattened by the Covid crisis, such as travel, hospitality, and other services, need resources to gear up to meet pent-up demand. Across industries, skills shortages have arisen in many fields, from truck drivers to warehouse workers to restaurant workers. Ironically, there is an increasingly pressing need to develop AI and analytics to compensate for shortages of AI development skills. According to Cognizant's latest quarterly Jobs of the Future Index, there will be a strong recovery in the U.S. jobs market this coming year, especially in roles involving technology. AI, algorithm, and automation jobs saw a 28% gain over the previous quarter.

"AI is a critical ingredient to creating solutions to what is likely to be ongoing, ever-changing skills needs and training," agrees Rob Jekielek, managing director with Harris Poll. "AI is already beginning to help fill skills shortages of the existing workforce through career transition support tools. AI is also helping employees do their existing and evolving jobs better and faster using digital assistants and in-house AI-driven training programs."

AI will also help alleviate skills shortages by augmenting support activities. "Given how more and more products are either digital products or other kinds of technology products with user interfaces, there is a growing need for support personnel," says Dr. Rebecca Parsons, chief technology officer at Thoughtworks. "Many straightforward questions can be addressed with a suitably trained chatbot, alleviating at least some pressure. Similarly, there are natural language processing systems that can do simple document scanning, often for more canned phrases."

AI and analytics are boosting productivity. Over the years, any productivity increases associated with technology adoption have been questionable. However, AI and analytics may finally be delivering on this long-sought promise. "Driven by advances in digital technologies, such as artificial intelligence, productivity growth is now headed up," according to Erik Brynjolfsson and Georgios Petropoulos, writing in MIT Technology Review. The development of machine learning algorithms, combined with a large decline in prices for data storage and improvements in computing power, has allowed firms to address challenges from vision and speech to prediction and diagnosis. The fast-growing cloud computing market has made these innovations accessible to smaller firms.

AI and analytics are delivering new products and services. Analytics and AI have helped to step up the pace of innovation undertaken by companies such as Frito-Lay. For example, during the pandemic, the food producer delivered an e-commerce platform, Snacks.com, "our first foray into the direct-to-consumer business," in just 30 days, says Lindsey. The company is now employing analytics to leverage its shopper and outlet data to predict store openings, shifts in demand due to return to work, and changes in tastes "that are allowing us to reset the product offerings all the way down to the store level within a particular zip code," he adds.

AI accentuates corporate values. "The way we develop AI reflects our company culture. We state our approach in two words: responsible growth," says Sumeet Chabria, global chief operating officer for technology and operations at Bank of America. "We are in the trust business. We believe one of the key elements of our growth, the use of technology, data, and artificial intelligence, must be deployed responsibly. As a part of that, our strategy around AI is Responsible AI. That means being customer led: it starts with what the customer needs and the consequence of your solution to the customer. It also means being process led: How does AI fit into your business process? Did the process dictate the right solution?"

AI and analytics are addressing supply chain issues. There are lingering effects as the economy kicks back into high gear after the Covid crisis: items from semiconductors to lumber have been in short supply due to disruptions caused by the crisis. Analytics and AI help companies predict, prepare for, and see issues that may disrupt their ability to deliver products and services. These are still the early days for AI-driven supply chains: a survey released by the American Center for Productivity and Quality finds only 13% of executives foresee a major impact from AI or cognitive computing over the coming year, while another 17% predict a moderate impact. Businesses are still relying on manual methods to monitor their supply chains; those that adopt AI in the coming months and years will achieve significant competitive differentiation.

"Supply chain planning, addressing disruptions in the supply chain, can benefit in two ways," says Parsons. "The first is for the easy problems to be handled by the AI system. This frees up the human to address the more complex supply chain problems. However, the AI system can also provide support even in the more complex cases by, for example, providing possible solutions to consider or speeding up an analysis of possible solutions by completing a solution from a proposal on a specific part of the problem."

AI is fueling startups, while helping companies manage disruption. Startups are targeting established industries by employing the latest data-driven technologies to enter new markets with new solutions. "AI and analytics present a tremendous opportunity for both startups and established companies," says Chakraborty. "Startups cannot do AI standalone. They can only solve a part of the puzzle. This is where collaboration becomes very important. The bigger organizations have an opportunity to embrace those startups, and make them part of their ecosystem."

At the same time, AI is helping established companies compete with startups "through the ability to test and iterate on potential opportunities far more rapidly and at far broader scale," says Jekielek. "This enables established companies to both identify high-potential opportunity areas more quickly as well as determine if it makes most sense to compete or, especially if figured out early, acquire."

The coming boom in business growth and innovation will be a data-driven one. As the world eventually emerges from the other side of the Covid crisis, there will be opportunities for entrepreneurs, business leaders and innovators to build value and launch new ventures that can be rapidly re-configured and re-aligned as customer needs change. Next-generation technologies, artificial intelligence and analytics among them, will play a key role in boosting business innovation and advancement in this environment, as well as spur new business models.

See more here:

AI Adoption Skyrocketed Over the Last 18 Months - Harvard Business Review

Posted in Ai | Comments Off on AI Adoption Skyrocketed Over the Last 18 Months – Harvard Business Review

When Using AI in Enterprises, Balancing Innovation and Privacy Is Critical – EnterpriseAI

Posted: at 5:36 pm

While the U.S. is making strides in the advancement of AI use cases across industries, we have a long way to go before AI technologies are commonplace and truly ingrained in our daily life.

What are the missing pieces? Better data access and improved data sharing.

As our ability to address point applications and solutions with AI technology matures, we will need a greater ability to share data and insights while being able to draw conclusions across problem domains. Cooperation between individuals from government, research, higher education and the private sector to make greater data sharing feasible will drive acceleration of new use cases while balancing the need for data privacy.

This sounds simple enough in theory. Data privacy and cybersecurity are top of mind for everybody, and prioritizing them goes hand in hand with any technology innovation nowadays, including AI. The reality is that data privacy and data sharing are rightfully sensitive subjects. This, coupled with widespread government mistrust, is a legitimate hurdle that decision makers must evaluate to effectively provide access and take our AI capabilities to the next level.

In the last five to 10 years, China has made leaps and bounds forward in the AI marketplace through the establishment of its Next Generation Artificial Intelligence Development Plan. While our ecosystems differ, the progress China has made in a short time shows that access to tremendous volumes of datasets is an advantage in AI advancement. It is also triggering a domino effect.

Government action in the U.S. is rampant. Recently, in June, President Biden established the National AI Research Task Force, which follows former President Trump's 2019 executive order to fast-track the development and regulation of AI, signs that American leaders are eager to dominate the race.

While the benefits of AI are clear, we must acknowledge consumer expectations as the technology progresses. Data around new and emerging use cases shows that the more consumers are exposed to the benefits of AI in their daily lives, the more likely they are to value its advancements.

According to new data from the Deloitte AI Institute and the U.S. Chamber of Commerce's Technology Engagement Center, 65 percent of survey respondents indicated that consumers would gain confidence in AI as the pace of discovery of new medicines, materials and other technologies accelerated through the use of AI. Respondents were also positive about the impact government investment could have in accelerating AI growth. The conundrum is that the technology remains hard to understand and relate to for many consumers.

While technology literacy in general has progressed thanks to the internet and digital connectivity, general awareness around data privacy, digital security and how data is used in AI remains weak. So, as greater demands are put on the collection, integration and sharing of consumer data, better transparency, education and standards around how data is collected, shared and used must be prioritized simultaneously. With this careful balance we could accelerate innovation at a rapid pace.

The data speaks for itself. The more of it we have, the stronger the results. Just as supply chain management of raw materials is critical in manufacturing, data supply chain management is critical in AI. One area that many organizations prioritize when implementing AI technology is applying more rigorous methods around data provenance and organization. Raw collected data is often transformed, pre-processed, summarized or aggregated at multiple stages in the data pipeline, complicating efforts to track and understand the history and origin of inputs to AI training. The quality and fit of resultant models, that is, the ability of a model to make accurate decisions, is primarily a function of the corpus of data they were trained on, so it is imperative to identify what datasets were used and where they originated.
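
A minimal sketch of what such data-supply-chain bookkeeping can look like: fingerprint each dataset as it enters the pipeline and log every transformation, so the exact inputs behind a trained model can be traced later. All names and fields here are illustrative.

```python
import hashlib
import json
import time

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

provenance = []  # an append-only log of pipeline steps

def record(step: str, artifact: str, data: bytes, notes: str) -> None:
    provenance.append({
        "step": step,
        "artifact": artifact,
        "sha256": fingerprint(data),
        "notes": notes,
        "timestamp": time.time(),
    })

raw = b"order_id,amount\n1,9.99\n2,24.50\n"
weekly = b"week,total\n2021-W38,34.49\n"

record("ingest", "raw_orders.csv", raw, "exported from CRM on 2021-09-20")
record("aggregate", "weekly_orders.csv", weekly, "raw orders summed to week level")

print(json.dumps(provenance, indent=2))
```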

Datasets must be broad and contain enough examples and variations for models to be trained correctly. When they are not, the consequences can be severe. For instance, in the absence of sufficient datasets, AI-based face recognition models have reinforced racial profiling in some cases, and AI algorithms for healthcare risk predictions have left minorities with less access to critical care.

With so much on the line, diverse data with strong data supply chain management is important, but there are limits to how much data a single company can collect. Enter the challenges of data sharing, data privacy and the issue of which information individuals are willing to hand over. We are seeing this play out through medical applications of AI, i.e., radiology images and medical records, and in other aspects of day-to-day life, from self-driving cars to robotics.

For many, granting access to personal data is more appealing if the purpose is to advance potentially life-saving technology, versus use cases that may appear more leisurely. This makes it critical that leading AI advancements prioritize the use cases that consumers deem most valuable, while remaining transparent about how data is being processed and implemented.

Two recent developments, the National AI Research Task Force and the NYC Cyber Attack Defense Center, are positive steps forward. While AI organizations and leaders will continue to drive innovation, forming these groups could be the driver in bringing AI to the forefront of technology advancement in the U.S. The challenge will be whether the action that they propose is impressive enough to consumers and outweighs privacy concerns and government mistrust.

Advancements in AI are driving insights and innovation across industries. As AI leaders it is up to us to continue the momentum and collaborate to accelerate AI innovation safely. For us to succeed, industry leaders must prioritize privacy and security around data collection and custodianship, create transparency around data management practices and invest in education and training to gain public trust.

The inner workings of AI technology are not as discernible as those of most popular applications and will remain that way for some time, but how data is collected and used must not be so hard for consumers to see and understand.

About the Author

Rob Lee of Pure Storage

Rob Lee is the Chief Technology Officer at Pure Storage, where he is focused on global technology strategy, and identifying new innovation and market expansion opportunities for the company. He joined Pure in 2013 after 12 years at Oracle Corp. He serves on the board of directors for Bay Area Underwater Explorers and Cordell Marine Sanctuary Foundation. Lee earned a bachelor's degree and a master's degree in electrical engineering and computer science from the Massachusetts Institute of Technology.


Read the original here:

When Using AI in Enterprises, Balancing Innovation and Privacy Is Critical - EnterpriseAI

Posted in Ai | Comments Off on When Using AI in Enterprises, Balancing Innovation and Privacy Is Critical – EnterpriseAI

‘Pre-crime’ software and the limits of AI – Resilience

Posted: at 5:36 pm

The Michigan State Police (MSP) has acquired software that will allow the law enforcement agency to help predict violence and unrest, according to a story published by The Intercept.

I could not help but be reminded of the film Minority Report. In that film, three exceptionally talented psychics are used to predict crimes before they happen and apprehend the would-be perpetrators. These not-yet perpetrators are guilty of what is called "pre-crime," and they are sentenced to live in a very nice virtual reality where they will not be able to hurt others.

The public's acceptance of the fictional pre-crime system is based on good numbers: it has eliminated all premeditated murders for the past six years in Washington, D.C., where it has been implemented. Which goes to prove (fictionally, of course) that if you lock up enough people, even ones who have never committed a crime, crime will go down.

How does the MSP software work? Let me quote again from The Intercept:

The software, put out by a Wyoming company called ShadowDragon, allows police to suck in data from social media and other internet sources, including Amazon, dating apps, and the dark web, so they can identify persons of interest and map out their networks during investigations. By providing powerful searches of more than 120 different online platforms and a decade's worth of archives, the company claims to speed up profiling work from months to minutes.

Simply reclassify all of your online friends, connections and followers as accomplices and you'll start to get a feel for what this software and other pieces of software mentioned in the article can do.

The ShadowDragon software, in concert with other similar platforms and companion software, begins to look like what the article calls "algorithmic crime fighting." Here is the main problem with this type of thinking about crime fighting and the general hoopla over artificial intelligence (AI): both assume that human behavior and experience can be captured in lines of computer code. In fact, at their most audacious, the biggest boosters of AI claim that it can and will learn the way humans learn and exceed our capabilities.

Now, computers do already exceed humans in certain ways. They are much faster at calculations and can do very complex ones far more quickly than humans can working with pencil and paper or even a calculator. Also, computers and their machine and robotic extensions don't get tired. They can do often complex repetitive tasks with extraordinary accuracy and speed.

What they cannot do is exhibit the totality of how humans experience and interpret the world. And, this is precisely because that experience cannot be translated into lines of code. In fact, characterizing human experience is such a vast and various endeavor that it fills libraries across the world with literature, history, philosophy and the sciences (biology, chemistry and physics) using the far more subtle construct of natural languageand still we are nowhere near done describing the human experience.

It is the imprecision of natural language which makes it useful. It constantly connotes rather than merely denotes. With every word and sentence it offers many associations. The best language opens paths of discovery rather than closing them. Natural language is both a product of us humans and of our surroundings. It is a cooperative, open-ended system.

And yet, natural language and its far more limited subset, computer code, are not reality, but only a faint representation of it. As the father of general semantics, Alfred Korzybski, so aptly put it, "The map is not the territory."

Apart from the obvious dangers of the MSP's algorithmic crime fighting, such as racial and ethnic profiling and gender bias, there is the difficulty of explaining why information picked up by the algorithm is relevant to a case. If there is human intervention to determine relevance, then that moves the system away from the algorithm.

But it is the act of hoovering up so much irrelevant information that risks the possibility of creating a pattern that is compelling and seemingly real, but which may just be an artifact of having so much data. This becomes all the more troublesome when law enforcement is trying to predict unrest and crimes, something which the MSP says it doesn't do even though its systems have that capability.

The temptation will grow to use such systems to create better order in society by focusing on the troublemakers identified by these systems. Societies have always done some form of that through their institutions of policing and adjudication. Now, companies seeking to profit from their ability to find the unruly elements of society will have every incentive to write algorithms that show the troublemakers to be a larger segment of society than we ever thought before.

We are being put on the same road in our policing and courts that we've just traversed in the so-called War on Terror, which has killed a lot of innocent people and made a large number of defense and security contractors rich, but which has left us with a world that is arguably more unsafe than it was before.

To err is human. But to correct is also human, especially based on intangibles (intuitions, hunches, glimpses of perception) which give us humans a unique ability to see beyond the algorithmically defined facts and even beyond those facts presented to our senses in the conventional way. When a machine fails, not in a trivial way that merely fails to check and correct data, but in a fundamental way that misconstrues the situation, it has no unconscious or intuitive mind to sense that something is wrong. The AI specialists have a term for this. They say that the machine lacks "common sense."

The AI boosters will respond, of course, that humans can remain in the loop. But to admit this is to admit that the future of AI is much more limited than portrayed and that as with any tool, its usefulness all depends on how the tool is used and who is using it.

It is worth noting that the title of the film mentioned at the outset, Minority Report, refers to a disagreement among the psychics; that is, one of them issues a minority report which conflicts with the others. It turns out that for the characters in this movie the future isn't so clear after all, even to the sensitive minds of the psychics.

Nothing is so clear and certain in the future or even in the present that we can allay all doubts. And, when it comes to determining what is actually going on, context is everything. But no amount of data mining will provide us with the true context in which the subject of an algorithmic inquiry lives. For that we need people. And, even then the knowledge of the authorities will be partial.

If only the makers of this software would insert a disclaimer in every report saying that users should look upon the information provided with skepticism and thoroughly interrogate it. But then, how many suites of software would these software makers sell with that caveat prominently displayed on their products?

Image: "Roughed up by Robocop" (disassembled robot, 2013) by Steve Jurvetson, via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Roughed_up_by_Robocop_(9687272347).jpg

Go here to see the original:

'Pre-crime' software and the limits of AI - Resilience

Posted in Ai | Comments Off on ‘Pre-crime’ software and the limits of AI – Resilience

Amazon delivery staff ‘denied bonus’ pay by AI cameras misjudging their driving – The Register

Posted: at 5:36 pm

In brief AI cameras inside Amazon's delivery trucks are denying drivers' bonus pay for errors they shouldn't be blamed for, it's reported.

The e-commerce giant installed the equipment in its vehicles earlier this year. The devices watch the road and the driver, and send out audio alerts if they don't like the way their humans are driving.

One man in Los Angeles told Vice that when he gets cut off by other cars, the machine would sense the other vehicle suddenly right in front of him and squawk: "Maintain safe distance!" Logs of the audio alerts and camera footage are relayed back to Amazon, and it automatically decides, from their performance on the road, whether drivers deserve to get bonuses.

These workers, who are employed via contractors, claim they are unfairly denied extra pay for errors that were beyond their control, or for things that don't necessarily mean they're driving recklessly, such as tuning the radio or glancing at a side mirror.

"When I get my score each week, I ask my company to tell me what I did wrong," the unnamed driver said. "My [delivery company] will email Amazon and cc me, and say, 'Hey, we have [drivers] who'd like to see the photos flagged as events,' but they don't respond. There's no room for discussion around the possibility that maybe the camera's data isn't clean."

An Amazon spokesperson said alerts can be contested and are reviewed by staff at the internet giant to weed out incorrect judgments by the software.

Deepfakes aren't all bad. The technology is helping trans people feel comfortable communicating in gamer communities by changing the sound of their voice with AI algorithms.

It can be difficult for trans gamers to speak in group chats when the pitch of their voice doesn't match their gender identity; some may want to sound more feminine or masculine, typically.

A startup called Modulate is helping them generate new voices, or so-called "voice skins," by using machine-learning software that automatically adjusts the sound of their speech. Some trans people have started testing the algorithms but haven't yet used them in the wild, according to Wired.

"We realized many people don't feel they can participate in online communities because their voice puts them at greater risk," Mike Pappas, Modulate's CEO, said. He claimed the software has only a 15-millisecond lag when transforming someone's speech in real time to a different pitch.

Early testers said they were impressed with the software's capabilities, although Modulate declined to provide a live demo for the magazine.
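
Modulate's real-time models are proprietary, but the basic idea of re-pitching a voice can be illustrated offline with standard audio tooling. A toy sketch using librosa on a synthetic tone (real speech, and a 15 ms latency budget, would demand a streaming approach rather than this batch one):

```python
import librosa

sr = 22050
# A 220 Hz tone stands in for recorded speech.
voice = librosa.tone(220.0, sr=sr, duration=1.0)

# Shift up four semitones, toward a higher-pitched voice.
shifted = librosa.effects.pitch_shift(voice, sr=sr, n_steps=4)
print(shifted.shape)  # same length as the input, re-pitched
```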

The British government has promised to invest more in the AI industry and review semiconductor supply chains to make sure it has enough computational resources to support the growth of the technology.

"This is how we will prepare the UK for the next ten years, and is built on three assumptions about the coming decade," the report's summary began.

"1. Invest and plan for the long-term needs of the AI ecosystem to continue our leadership as a science and AI superpower;

"2. Support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK, and ensuring AI benefits all sectors and regions;

"3. Ensure the UK gets the national and international governance of AI technologies right to encourage innovation, investment, and protect the public and our fundamental values.

The first point involves funding more scholarships to help more people obtain postgraduate education in machine learning and data science. Researchers are encouraged to collaborate with others from European and US institutions.

Other parts of the plan, however, are a little more wishy-washy. There aren't strict actions or policies in some parts; for example, a lot of the inter-agency collaboration involves formulating yet more reports to understand strategic goals in supporting the AI economy or algorithmic transparency.

Aurora, the self-driving car software biz, has started testing autonomous heavy-duty Class 8 trucks, capable of hauling over 14,969 kilograms, with shipping giant FedEx.

The trucks will be monitored by a safety driver as they drive the 500-mile round trip between Dallas and Houston, Texas, along the I-45 interstate highway, Aurora announced this month. The company is aiming to operate fleets of fully autonomous trucks without the help of safety drivers by 2023.

You can read more about it here.

See more here:

Amazon delivery staff 'denied bonus' pay by AI cameras misjudging their driving - The Register

Posted in Ai | Comments Off on Amazon delivery staff ‘denied bonus’ pay by AI cameras misjudging their driving – The Register

UN calls for moratorium on AI systems that pose serious risks to right to privacy and other human rights – JD Supra

Posted: at 5:36 pm

On 15 September 2021, the UN Office of the High Commissioner for Human Rights (OHCHR) published a report, The right to privacy in the digital age, that analyses how artificial intelligence (AI), through the use of profiling, automated decision-making and machine learning technologies, affects people's fundamental rights and freedoms, such as the right to privacy, the right to health and freedom of expression.

The OHCHR urges a moratorium on the sale and use of AI systems that pose a serious risk to human rights, and of remote biometric recognition systems in public spaces, until adequate safeguards are put in place. It also recommends banning AI applications that cannot ensure compliance with international human rights law.

While the report recognised that AI is instrumental in developing innovative solutions, it stressed the effects of the ubiquity of AI on people's fundamental rights. The report looks in detail at the use of AI solutions in key public and private sectors, for example, in national security, criminal justice, employment and when managing information online.

In this respect, the OHCHR highlighted a number of risks of AI that need to be addressed by states and businesses.

The report recommends addressing these risks using a comprehensive human rights-based approach and outlines possible ways to address the fundamental problems associated with AI, including the implementation of a robust legislative and regulatory framework, which prevents and mitigates any adverse effects of AI on human rights. States should ensure that any permitted interference with the right to privacy and other human rights through the use of AI does not impair the essence of these rights and is stipulated by law, pursues a legitimate purpose, is necessary and proportionate, and requires adequate justification of AI-supported decisions. The OHCHR also recommends that public and private entities systematically conduct human rights due diligence throughout the entire life cycle of the AI systems (including a human rights impact assessment), increase transparency about the use of AI and actively combat discrimination.

The press release is available here and the report is available here.

Link:

UN calls for moratorium on AI systems that pose serious risks to right to privacy and other human rights - JD Supra

Posted in Ai | Comments Off on UN calls for moratorium on AI systems that pose serious risks to right to privacy and other human rights – JD Supra