
Category Archives: Ai

Utilizing AI To Reach Peak Performance In Health And Business: How The Sculpted Vegan Built An 8-Figure Empire – Forbes

Posted: May 11, 2021 at 10:35 pm

Kim Constable, Founder at The Sculpted Vegan

She speaks with a lilting Irish accent yet there is no mistaking the power behind the tough talk and the incredible abs that have thrust Kim Constable into the multi-million dollar bracket with her platform The Sculpted Vegan.

As a 38-year-old mother of four, living a charmed life in her native Belfast, Kim Constable took a good hard look at herself in the harsh light of day and decided the thing she was missing in her life was muscles. Growing up in a rural village outside Belfast, "my Dad subscribed to bodybuilding magazines, and I had always admired the power behind those incredible physiques," she explains.

Kim had been a vegetarian for nigh on 16 years but had decided to go full vegan about a year before deciding to train.

There was one small problem, however. Her trainer explained that he couldn't help with the nutrition side of her program because he had zero experience training vegans. Determined to find answers, she took to Google and found there was nothing out there to help shape her program.

She decided to make one, and make millions of dollars in the process.

Bodybuilders, vegan or not, track everything. "The reason we track is that you cannot know which variables to manipulate for progress or change if you haven't kept the data consistent. We track our calories, macros, sleep, recovery, cardio duration, heart rate, weight, and measurements."

When she started training as a vegan bodybuilder, Kim bought an Apple Watch.

"It was the first time I had ever tracked anything to do with health, and the data was interesting. The Apple Watch was one of the first trackers on the market, but it seems rudimentary now compared to what is available. It only really tracked my steps and workouts daily and gave me virtual high fives when I had achieved a goal."

When the Oura Ring launched, Kim switched to using this device, and says it's by far the best she's worn. "The tracking is superb, and it only needs to be charged every 5 days for about 20 minutes. I never take it off. It syncs with my Apple Watch to track my workouts, and the step tracker is far more accurate than Apple's seems to be."

According to Kim, data measurement is not only how she tracks performance and improves results, it is key to her success.

"Rest and recovery are paramount to my success. My Oura Ring reminds me that bedtime is approaching and suggests a wind-down routine. It also suggests bedtimes, which I try to adhere to. Unfortunately, I can't take it easy when it suggests, if that conflicts with my training program. But just seeing the data daily is important, as I can make subtle changes to my routine to ensure I'm getting enough rest."

The scariest part of tracking health data, says Kim, is seeing the effect that alcohol has on your body. "It keeps your resting heart rate high, and your heart rate variability low. Seeing the data in front of you makes you more mindful."

She doesn't sugarcoat her program or her success. "It's hard, and I tell people that right upfront. I tell them, don't buy my program unless you're willing to put in the work."

Scrolling through her Instagram page @thesculptedvegan, filled with incredible photos of her bodybuilding achievements, Kim tells it like it is. One image of her showing off her amazing abs features the caption: "It took four years to get these abs. Over years of building exceptional muscle with alternating periods of shredding, finally my abs started to appear."

Her advice is pretty simple; she has no time for victims. "If you make up your mind to do something, it takes work, consistency, and discipline," she says. Those are the hallmarks of her success, and she doesn't try to make it look easy.

Data helps Kim and her clients stay on track and stay accountable. One of Kim's favorite sayings is: "What gets measured, gets managed."

It's easy to ignore the possible effects of behavior when they are not in front of your face. Tracking using AI keeps the data front and center, and allows you to make better and more informed decisions.

For women bodybuilders, keeping track of data can be especially useful. At home, Kim has a Renpho Smart Scale which tracks weight, body fat, body fat mass, and other variables. Kim says this piece of equipment is not as intelligent as the Oura and doesn't make suggestions for your training or progress, but it's extremely useful for tracking the data over an extended time, especially around your monthly cycle.

For bodybuilders, tracking weight daily allows you to see the fluctuations over time, especially when all other variables are kept the same. "During the month, my body will slowly hold more and more water leading up to my period. I will also hold more water after alcohol or salty food," says Kim.
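The idea of reading a trend through daily water-weight noise is easy to see in code. The sketch below is illustrative only, with invented numbers, and is not part of any product Kim uses; it smooths daily readings with a 7-day rolling average so the underlying trend stands out from day-to-day fluctuation.

```python
# Illustrative sketch: separating a weight trend from daily water-weight
# noise with a simple 7-day rolling average. All numbers are invented.

daily_weight_kg = [62.0, 62.4, 61.8, 62.6, 62.9, 62.3, 62.1,
                   61.9, 62.5, 63.1, 62.8, 62.2, 61.7, 61.9]

def rolling_average(values, window=7):
    """Average each reading with up to `window - 1` preceding readings."""
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

trend = rolling_average(daily_weight_kg)
for day, (raw, avg) in enumerate(zip(daily_weight_kg, trend), start=1):
    print(f"day {day:2d}: raw {raw:.1f} kg, 7-day trend {avg:.2f} kg")
```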

"You learn not to make it mean anything; it's just data. And it takes all the emotion out of weight gain."

The other thing Kim's Oura Ring tracks is her daily temperature, so she can see how it rises during the month and peaks during her cycle. "This is usually when my sleep is the most restless," says Kim. "I can also push more load mid-month when my temperature is lower and my recovery is stronger. Oura helps me with all of this tracking."


Kim's data tracking, of course, would not be very useful without incredible time management skills and the self-discipline to do it all. "I have a husband and four kids who are home-schooled. If I commit to training five days a week, I do it. If I say I'm going to make a million dollars from my business, I do it."

In a recent Forbes article, the author writes: "In some ways, an AI-driven fitness coach can be better than a human trainer. It has access to more data, knows more exercises, and can track your progress more precisely."

In response, Kim states, "I don't believe that it could ever replace a coach standing in front of you, intuitively advising based on what they see. One of the ways humans process data is through intuition. It has been studied extensively and shown to be alarmingly accurate."

While Kim acknowledges that data-tracking and AI-suggested adjustments have been a key aspect of her fitness (and business) success, she fears that people could become too reliant on the data and stop trusting their intuition, learning deductively rather than inductively.

"As a bodybuilder, I have a very strict workout timetable. I train 5 days a week, and do cardio 6 days. If my Oura Ring tells me my recovery is bad and I should rest, but it's my day for training legs, I'm not going to give legs a miss. AI can respond to your body on a day-to-day basis, but unless it can help you to accomplish a specific goal in the future, and therefore tell you to push through even when you don't feel like it, it's going to work against you and not for you."

Kim continues, "AI is great. But it has its limitations. Unless it's able to help you work towards a specific goal, which often involves pushing through your tiredness or poor recovery just to get the work done, it will have serious limits for professional athletes or the non-average person."

While her personal goals to build an incredible body and inspire the women who follow her align with her strong moral code, she continues to seek out the uncomfortable situations that inform her writing, her posts, and her videos.

"Bodybuilding and business have a lot in common; they both involve hard work and often failure. I love failure; I call it learning. Even though I stand firmly behind my goals, I'm not afraid to change lanes if something isn't working."

Creating a roadmap for women to build confidence, muscle, and success while being brutally honest about what that will involve is part of her appeal. She is not for every woman, but she believes every woman can be her if they are willing to put in the work and never give up.

For many privileged individuals in our society, these fitness data-tracking devices are readily available. Once relegated to billion-dollar science labs, sports tech is now available for us to buy at the local Costco. However, having a watch or ring that tracks your data and gives you suggestions is just one part of the equation.

Kim concludes, "What you do with that data is up to you."


Not Just Chatbots: How AI Is Helping Organizations Better Serve Their Customers – Forbes

Posted: at 10:35 pm

Chatbot On Mobile App

When the pandemic required that retailers shut their doors, consumers still needed a way to reach the brands they use and interact with every day. Due to the need to go remote, the contact center quickly took the place of in-person retailing, enabling consumers to connect to brands by phone, chat, email and more.

Contact centers had long been run on-premise, but these resources also had to change to accommodate the remote nature of daily life, and many retailers quickly shifted those systems to the cloud. In addition to enabling agents to service customers remotely, the cloud-enabled contact center also allows brands to harness data from all those interactions and make them smarter and more effective.

I recently connected with Genefa Murphy, CMO of Five9, an intelligent cloud contact center solution, who spoke to me about how brands are reimagining the customer experience and the technology they use to facilitate it. She explained that AI is a massive part of that, helping retailers to connect with their customers more successfully beyond chatbots, across all channels.

Gary Drenik: Customer service has been hugely important for companies during the pandemic. What types of channels are consumers relying on most?

Genefa Murphy: The pandemic has driven a surge of customer service traffic to the digital storefront of enterprises. When customers couldn't engage with businesses in person, they increasingly looked to digital channels, such as email, web chat, SMS, and social messaging, to resolve their requests before picking up the phone. New research from Prosper Insights & Analytics exploring the importance of various services when shopping online found that 81% of adult consumers believe it's very important to have a website that's easy to use, while just over half (52%) rate toll-free live assistance as very important. The importance of an easy-to-use website is even greater among Gen X (83%) and Boomers (84%), according to the research.

Prosper - Importance of Services When Shopping Online

With more consumers reaching out across more channels than ever before, businesses are looking for solutions to help them quickly scale their service teams to keep up with the demand. They are not just deploying AI-powered self-service applications like Intelligent Virtual Agents (IVAs) to automate the more routine and repetitive service tasks. They are also looking to AI and automation to help with dynamic interaction routing, matching customer demand with contact center supply. In some cases, that supply comes in the form of a live agent; in many cases, it comes from IVAs and online digital support. The key thing is that customers are asking for these channels, and they are fast becoming the norm. In fact, our own research has found that 82% of consumers are now comfortable using digital channels. Therefore, the change has moved from a push to a pull.

Drenik: What about chatbots? Why aren't they as popular as some may have anticipated just a few years ago?

Murphy: The first generation of chatbots was somewhat underwhelming because most of them lacked the Natural Language Understanding (NLU) to intelligently process and respond to customer requests. This in turn led to more frustration. However, as conversational AI technology has matured, chatbots have evolved beyond that limited scope. They can now handle more complex tasks and transactions and actually provide a better customer experience. Today, many chatbots have the intelligence and context they need to respond in a much more conversational manner. The tooling and training of AI have also improved. This is important because it enables chatbots to engage with greater accuracy. Given this, we are starting to see a major uptick in chatbot usage and value for both the business and the end customer.
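As a rough illustration of the intent-matching step an NLU-driven chatbot performs, here is a deliberately simple keyword-overlap sketch. Real systems use trained language models rather than word lists, and the intents and phrases below are hypothetical, not drawn from any vendor's product.

```python
# Toy illustration of intent matching, the first step a chatbot's NLU
# layer performs. Production systems use trained language models; this
# bag-of-words overlap only sketches the idea. Intents are hypothetical.

INTENTS = {
    "track_order":    {"where", "order", "package", "delivery", "shipped"},
    "reset_password": {"reset", "password", "login", "locked", "account"},
    "store_hours":    {"open", "close", "hours", "holiday"},
}

def classify(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the intent whose keyword set overlaps the utterance most.
    scores = {name: len(words & kws) for name, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    # No overlap at all: hand off rather than guess.
    return best if scores[best] > 0 else "escalate_to_live_agent"

print(classify("Where is my package? It says shipped"))  # track_order
print(classify("I am locked out of my account"))         # reset_password
print(classify("Tell me a joke"))                        # escalate_to_live_agent
```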

Drenik: Are there any other channels where AI is playing a big role currently? Which ones?

Murphy: AI is not just for digital channels; AI is being increasingly adopted over the voice channel. Consumers are getting used to having conversations with their smart devices, and the same technology that powers those devices is now more available and accessible in the cloud. In fact, Gartner predicts that by 2023, customers will prefer to use speech interfaces to initiate 70% of self-service interactions, rising from 40% in 2019. Today, companies can launch cost-effective IVAs to provide conversational self-service over the phone with minimal investment and a faster time to value than ever before.

We're also seeing AI play a big role in agent-facing support applications. Intelligent assistants can utilize machine learning and Natural Language Processing to automate call transcriptions and notetaking, quickly surface the data agents need to resolve customer requests or even drive more business, and provide real-time coaching and reminders that can improve the overall customer experience.

Drenik: How is AI playing a role in these channels?

Murphy: AI is enabling businesses to reimagine the kind of customer experience they can provide across voice, web, and messaging channels. IVAs allow businesses to quickly scale their service operations, empowering customers to engage on their channel of choice any time of day or night and resolve many requests without waiting for live assistance. As a result, live agents have more time to focus on conversations where they create value by applying their human touch. Additionally, because IVAs integrate so easily into CRM systems, knowledge bases and other back-end systems, service interactions can be more personalized and escalate across every support channel more seamlessly.

For all these reasons, AI-powered service automation is helping businesses boost interaction handling efficiencies, customer loyalty, agent satisfaction, and overall cost savings and revenue. This is especially true when IVAs are connected into the broader contact center ecosystem so that context and data can be preserved, especially when there is a need to escalate to a live agent.

Drenik: Whats important for businesses to know about using AI in their customer service strategies?

Murphy: Businesses can reap huge rewards by implementing AI across the customer journey, as long as they choose a practical approach to deploying and managing the technology. AI isn't pixie dust. They should look for platforms and tools that are self-managed, simple and intuitive, and that will easily integrate across the channels and back-end systems that are most essential to their customer journey. They should also adopt strategies that allow them to implement these solutions incrementally, so they achieve quick wins, drive tangible results and solve real business challenges. We call this practical AI.

The most forward-thinking service organizations understand the potential of AI and automation to work alongside their human teams, rather than as a means of replacing them to cut costs. The workforce of the future is multi-modal. It works across channels, it leverages AI and automation, it's digital and human, and it enables work from anywhere. Using AI to automate more of the routine, repetitive service tasks, and to provide real-time assistance during calls, can also go a long way to improve the agent experience, which is a key factor in providing great CX, especially as we all know the employee experience often reflects in the customer experience.

Drenik: Thank you, Genefa, for taking the time to share your insights on how AI is helping brands to better serve their customers across channels. It's exciting to learn about the ways businesses are using technology to help bolster the customer experience.


Meet the real Alexa: voice actor reportedly responsible for Amazons AI assistant revealed – The Verge

Posted: at 10:35 pm

Amazon's Alexa has a voice familiar to millions: calm, warm, and measured. But like most synthetic speech, its tones have a human origin. There was someone whose voice had to be recorded, analyzed, and algorithmically reproduced to create Alexa as we know it now. Amazon has never revealed who this original Alexa is, but journalist Brad Stone says he tracked her down, and she is Nina Rolle, a voiceover artist based in Boulder, Colorado.

The claim comes from Stone's upcoming book on the tech giant, Amazon Unbound, an excerpt of which is published here in Wired. Neither Amazon nor Rolle confirmed or denied Stone's reporting, which he says is based on conversations with the professional voiceover community, but Rolle's voice alone makes for a compelling case.

Listen to the videos below: the first an advertisement for Cherry Creek North, Denver's premier outdoor retail destination, and the second an introduction to Hapyn, a social app that seems to now be defunct (its Play Store entry was last updated in 2017). You can absolutely hear Alexa's reassuring tones in Rolle's voice. Or, to be more precise, you can absolutely hear where Alexa's reassuring tones come from when listening to Rolle.

Here's how Stone writes up the process of selecting Alexa's voice:

Believing that the selection of the right voice for Alexa was critical, [then-Amazon exec Greg] Hart and colleagues spent months reviewing the recordings of various candidates that GM Voices produced for the project, and presented the top picks to Bezos. The Amazon team ranked the best ones, asked for additional samples, and finally made a choice. Bezos signed off on it. Characteristically secretive, Amazon has never revealed the name of the voice artist behind Alexa. I learned her identity after canvassing the professional voice-over community: Boulder, Colorado-based voice actress and singer Nina Rolle. Her professional website contains links to old radio ads for products such as Mott's Apple Juice and the Volkswagen Passat, and the warm timbre of Alexa's voice is unmistakable. Rolle said she wasn't allowed to talk to me when I reached her on the phone in February 2021. When I asked Amazon to speak with her, they declined.

We've pinged Amazon and Rolle to confirm her involvement in creating Alexa, but don't expect to hear much back. If the company isn't interested in confirming Stone's account, it suggests this is a bit of history they'd rather not draw attention to, for whatever reason.

Providing the voice for such a ubiquitous product can have its own drawbacks, too. The original voice artist behind Siri, Susan Bennett, revealed herself in 2013 (after seeing an article from The Verge dissecting the process behind the creation of synthesized voices, incidentally) but said she'd been wary about being associated with Siri. "I was conservative about it for a long time [...] then this Verge video came out [...] and it seems like everyone was clamoring to find out who the real voice behind Siri is, and so I thought, well, you know, what the heck? This is the time," Bennett told CNN.

Of course, although we can hear both Bennett's and Rolle's voices in their AI doppelgängers, it's impossible to say without inside knowledge exactly what traces of the original remain. Creating a synthetic voice starts with real audio samples, but this data is exhaustively quantized and remastered to such a degree that answering the question of whether the final product is the same as the original is best reserved for the shipbuilders of Theseus.

What is fun, though, is listening to the other examples of Rolle's voiceover work on her website. Although she offers a restrained performance in the videos above, she's much more animated and lively in other commercial samples. It really shows how, despite the ever-increasing sophistication of Alexa's voice, it still lacks the range of the real thing.


UK lagging in post-Covid AI-driven business recovery – ComputerWeekly.com

Posted: at 10:35 pm

A survey of senior UK and European IT chiefs, conducted by Morning Consult for IBM's Global AI adoption index 2021, has found that deployments of artificial intelligence (AI) are accelerating.

The study, based on a survey of 2,500 senior decision-makers from the UK, France, Germany, Italy and Spain, found that more than a third (36%) reported that their companies had accelerated their roll-out of AI as a direct result of the Covid-19 pandemic. However, only 27% of UK IT professionals reported that their company had accelerated the roll-out of AI in response to the virus.

Significantly, 38% of the IT decision-makers from the UK who took part in the survey said their employer had made no change to their adoption of the technology as a response to the global health crisis, compared with a European average of 33%.

However, there is general recognition among IT decision-makers that AI has a role to play in supporting organisations as they plot a path through post-pandemic business recovery. Many see AI as a way to enhance competitiveness and streamline productivity through the use of automation tools.

The study reported that automating processes to empower higher-value work was the single biggest reason for the adoption of AI across Europe, picked by 43% of the study group. The numbers for the UK were lower, with 35% of UK IT decision-makers selecting it as a reason to invest in AI.

Jean-Philippe Desbiolles, global vice-president, data and AI at IBM, said 2021 will deliver a real AI return on investment for businesses. "We are at a juncture point," he said. "AI deployment is now here and there is clear acceleration." Across Europe and the UK, the study found that 44% of organisations that have run AI pilots plan to deploy AI.

However, the increasing complexity of data is a significant roadblock for widespread adoption. In the survey, almost a quarter (24%) of UK IT decision-makers identified increasing data complexity and the existence of siloed data as barriers to adoption, compared with 29% of the European IT decision-makers.

The study also found that the proliferation of data across the enterprise has resulted in six out of 10 UK IT professionals drawing from more than 20 different data sources to inform their AI.

There is general consensus among the survey respondents that there is a lack of expertise in AI. One-third (33%) of the IT decision-makers surveyed said their organisations planned to upskill their workforce. There is also growing interest in off-the-shelf AI to help organisations address the skills gap and overcome the technical challenges of infusing AI into business processes.

Desbiolles said: "From my experience, the focus has been to demonstrate some real AI use cases. This has driven the market, but for the first time, I can see that the market now wants business AI off the shelf."

He added that IBMs customers and prospects wanted to see the business impact of AI more than ever, and were moving beyond the experimental phase of AI to production deployments that generate business value.

"As an industry, this is where we need to adapt to deliver platforms, tools and also the right business applications," he said.


Did you know these 10 everyday services rely on AI? – World Economic Forum

Posted: at 10:35 pm

Artificial intelligence (AI) has transformed many aspects of our lives for the better. It even played a role in developing vaccines against COVID-19. But you may be surprised just how many things we take for granted that rely on AI.

As IBM explains, "at its simplest form, artificial intelligence is a field, which combines computer science and robust datasets to enable problem-solving." It includes the sub-fields of machine learning and deep learning. These two fields use algorithms that are designed to make predictions or classifications based on input data.

This is how AI is used in our everyday lives.

Image: European Parliament

Of course, as technology becomes more sophisticated, literally millions of decisions need to be made every day and AI speeds things up and takes the burden off humans. The World Economic Forum describes AI as a key driver of the Fourth Industrial Revolution.

Forecasted shipments of edge artificial intelligence (AI) chips worldwide in 2020 and 2024, by device.

Image: Statista

The Forum's platform, Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning, is bringing together key stakeholders to design and test policy frameworks that accelerate the benefits and mitigate the risks of AI and machine learning.

Here are 10 examples of AI we encounter every day.

Your email provider almost certainly uses AI algorithms to filter mail into your spam folder. Quite helpful when you consider that 77% of global email traffic is spam. Google says less than 0.1% of spam makes it past its AI-powered filters.
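For a sense of the mechanics, here is a minimal spam classifier built on scikit-learn's multinomial Naive Bayes. Production filters, Google's included, are vastly more sophisticated; the handful of training emails below are invented purely for illustration.

```python
# A minimal spam filter in the spirit described above, using scikit-learn's
# multinomial Naive Bayes. Training data is invented for illustration and
# far too small for real use.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now", "cheap meds limited offer",
    "claim your reward click here",
    "meeting moved to 3pm", "lunch tomorrow?", "quarterly report attached",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()          # bag-of-words features
features = vectorizer.fit_transform(emails)

model = MultinomialNB()
model.fit(features, labels)

test = vectorizer.transform(["free prize, click here to claim"])
print("spam" if model.predict(test)[0] == 1 else "ham")  # -> spam
```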

But there are concerns that algorithms that read content to target advertising are invading our privacy.

AI automates a host of functions on your smartphone, from predictive text that learns the words you commonly use to voice-activated personal assistants which listen to the world around them and try to learn your keywords.

The way your phone screen adjusts to ambient light or the battery life is optimized is also down to AI. But if the personal assistant absorbs everything you say, whether you're on the phone or not, some critics say it creates opportunities for surveillance, however benign the intention.

In many parts of the world, online and app-based banking are the norm. From onboarding new customers and checking their identity to countering fraud and money laundering, AI is in charge. Want a loan? An AI-powered system will assess your creditworthiness and decide.

This is how AI is used in banking.

Image: Business Insider

AI also monitors transactions and AI chatbots can answer questions about your account. More than two-thirds of banks in a recent survey by SAS Institute say they use AI chatbots and almost 63% said they used AI for fraud detection.

Going for an x-ray? Forget the idea of a clinician in a white coat studying the results. The initial analysis is most likely to be done by an AI algorithm. In fact, these algorithms turn out to be rather good at diagnosing problems.

In a trial, an AI algorithm called DLAD beat 17 out of a panel of 18 doctors in detecting potential cancers in chest x-rays.

However, critics say AI diagnosis must not become an impenetrable black box. Doctors need to know how these systems work in order to trust them. Issues around privacy, data protection and fairness have also been raised.

As in banking, chatbots are also being deployed in healthcare to engage with patients - for example, to book an appointment - or even as virtual assistants to physicians. This presents numerous issues though, from miscommunication to wrong diagnoses.

The World Economic Forum's Chatbots RESET programme brings together stakeholders from multiple areas to explore these opportunities and challenges to govern the use of chatbots.

AI is at the heart of the drive towards autonomous vehicles, adoption of which has accelerated due to the pandemic. Delivery services are one area being targeted, while China now has a robotaxi fleet operating in Shanghai.

There are still safety issues to be ironed out, however. There have been accidents involving self-driving cars, some of them fatal.

The Netherlands is the best prepared for autonomous cars.

Image: Statista

Conventional trackside railway signals are being replaced by AI-powered in-cab signalling systems which automatically control trains. The European Train Control System allows more trains to use the same stretch of track while maintaining safe distances between them.

To date, the use of AI in controlling aircraft has been limited to drones, although flying taxis that use AI to navigate have already been flight-tested. Experts say a human is still better at flying an airliner but AI is widely used in route planning, optimizing schedules and managing bookings.

7. Ride sharing and travel apps

Ride sharing apps use AI to resolve the conflicting needs of drivers and passengers. The latter want a ride immediately, while drivers value their freedom to start and stop working when they choose. Learning how these patterns interact, AI can send you a ride when you ask for it.

Travel apps use AI to personalize what they offer users as algorithms learn our preferences. Hotel search engine Trivago even bought an AI platform that customizes search results based on the user's social media likes.

Uncanny how social media seems to know what you like, isn't it? Of course, it's all down to AI. Facebook's machine learning can recognize your face in pictures posted on the platform, as well as everyday objects, to target content and advertising that interests and engages you.

Job seekers using LinkedIn benefit from AI which analyzes their profile and engagement with other users to offer job recommendations. The platform says AI is "woven into the fabric of everything that we do."

Unexpected breakdowns are every factory manager's nightmare. So AI is playing a key role in monitoring machine performance, enabling maintenance to be planned rather than reactive. Experts say it's cutting the time machines are offline by 75% and repair costs by almost a third.
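One common way to implement this kind of planned maintenance is to train an anomaly detector on sensor readings from healthy machines and flag departures before they become breakdowns. The sketch below uses scikit-learn's IsolationForest on synthetic vibration and temperature data; it illustrates the general technique, not any vendor's actual system.

```python
# Sketch of predictive maintenance via anomaly detection: learn what
# "healthy" sensor readings look like, then flag departures. The
# vibration/temperature data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal operation: vibration ~1.0 mm/s, temperature ~60 C.
normal = rng.normal(loc=[1.0, 60.0], scale=[0.1, 2.0], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

readings = np.array([
    [1.05, 61.0],   # looks healthy
    [2.40, 78.0],   # bearing wearing out?
])
for reading, flag in zip(readings, detector.predict(readings)):
    status = "OK" if flag == 1 else "schedule maintenance"
    print(reading, "->", status)
```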

AI can also predict changes in demand for products, optimizing production capacity. AI is currently used in about 9% of factories worldwide but Deloitte says 93% of companies believe AI will be a pivotal technology to drive growth and innovation in the sector.

Google says AI can enhance the value of wind power by 20%.

Image: Pixabay/enriquelopezgarre

10. Regulating power supply

Wind and solar power may be green but what happens when the wind doesn't blow and the sky is cloudy? AI-powered smart technology can balance supply and demand, controlling devices like water heaters to ensure they only draw power when demand is low and supply plentiful.

Google's DeepMind created an AI neural network, trained using weather forecasts and turbine data, to predict the output from a wind farm 36 hours ahead. By making output to the power grid more predictable, Google says it increased the value of its wind energy by 20%.
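DeepMind's actual system is a neural network trained on real forecasts and turbine telemetry; the sketch below only shows the shape of the task, fitting a deliberately simple linear model on invented data to map a weather forecast to expected farm output (real wind-power curves are nonlinear).

```python
# Shape-of-the-task sketch: map forecast conditions to expected wind farm
# output. DeepMind uses a neural network on real telemetry; this linear
# model on invented data only illustrates the idea of forecasting output
# ahead of time so the grid can plan around it.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features: (forecast wind speed in m/s, hour of day); target: output in MW.
X = np.array([[4, 3], [6, 9], [8, 14], [10, 18], [12, 2], [7, 21]])
y = np.array([12.0, 30.0, 55.0, 80.0, 95.0, 42.0])

model = LinearRegression().fit(X, y)

# A 36-hour-ahead forecast: 9 m/s winds at noon.
tomorrow = np.array([[9, 12]])
print(f"predicted output: {model.predict(tomorrow)[0]:.1f} MW")
```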

The views expressed in this article are those of the author alone and not the World Economic Forum.


Outreach Propels Sales Engagement Forward with Integrated AI-Powered Conversation Intelligence, Buyer Sentiment Insights, and Success Plans, Unveiled…

Posted: at 10:35 pm

SEATTLE, May 11, 2021 /PRNewswire/ -- Outreach, the largest and fastest-growing sales engagement provider, announced three new products that expand the definition of sales engagement further than ever before. By focusing on real-time conversation intelligence, buyer sentiment, and buyer engagement, Outreach now supports sellers across the entire revenue cycle while capturing rich buyer behavioral data that provides managers and revenue leaders with greater visibility into pipelines and forecasts.

"In a world that has changed so dramatically, it's what you do now that matters," said Manny Medina, CEO and co-founder of Outreach. "That's why we built real-time conversation and made revenue intelligence actionable. Because we must empower revenue organizations to continuously evolve so they can thrive in this new environment. We're living up to our vision of reimagining the category and paving the way for a new buyer and seller engagement model."

At Unleash 2021, before an audience of thousands of sales practitioners and industry thought leaders, Outreach unveiled Outreach Kaia, Outreach Insights, and Outreach Success Plans, all designed to support sales reps, managers, and leaders:

Outreach Kaia provides sellers with in-the-moment coaching, leading to faster seller ramp times, shorter deal cycles, and better buyer experiences. Leveraging industry-leading artificial intelligence (AI), Kaia is a voice- and video-enabled virtual sales assistant that surfaces relevant sales enablement content based on real-time buyer prompts, transcribes conversations with more than 90% accuracy, and captures notes and action items. By providing real-time coaching and capturing notes and next steps, Kaia makes every sales rep more effective and every customer conversation more productive.

Outreach Insights helps sellers better understand the level of interest a prospect or customer has in engaging in a sales conversation. Outreach Insights is an integrated reporting and analytics solution that leverages cutting-edge AI to detect buyer sentiment and more accurately measure buyer emotion, allowing sellers to optimize what, when, and how they communicate with their buyers. Sentiment analysis represents a significant leap forward relative to legacy and ineffective activity metrics like reply, click, and open rates, which are incorrect 50% of the time.
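Outreach's sentiment model is proprietary, but a toy lexicon-based scorer illustrates why reading the words of a reply beats counting replies: the content of a response carries signal that the mere fact of a response does not. Everything below, including the word lists, is invented for illustration.

```python
# Illustrative only: a lexicon-based sentiment score for buyer replies.
# Outreach's actual model is proprietary; this sketch just shows why the
# words in a reply beat a raw "reply happened" activity metric.
POSITIVE = {"interested", "great", "yes", "love", "schedule", "demo"}
NEGATIVE = {"unsubscribe", "stop", "not", "never", "no", "remove"}

def buyer_sentiment(reply: str) -> str:
    words = reply.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    return "negative" if score < 0 else "neutral"

# Both count identically as "replies" under an activity metric:
print(buyer_sentiment("Great, I would love to schedule a demo"))  # positive
print(buyer_sentiment("Please remove me, do not contact me"))     # negative
```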

Outreach Success Plans allows sellers and buyers to collaborate on the Outreach platform to create dynamic action plans leading to mutually successful business outcomes. The first-of-its-kind product, now available in public beta, coordinates and aligns sell-side and buy-side teams involved in the deal. Data captured from these interactions will provide revenue leaders with the ability to course-correct deals that are off track and forecast with more accuracy.

"Over the last 15 months, as revenue leaders strived to meet and exceed targets amid a global pandemic, they faced the most challenging environment of their careers. As the economy bounces back, those leaders who made bets on the right technologies and providers are thriving, while those that didn't are missing a huge opportunity," said Mary Shea, global innovation evangelist at Outreach. "Having a comprehensive technology platform and the right partner is now more essential than ever. By integrating real-time conversation intelligence, buyer sentiment analytics, and a dynamic environment that enhances buyer and seller collaboration, Outreach is transforming the sales engagement category while delivering unparalleled value to its users across the entire revenue cycle."

For more information about Outreach, including product demos, images, and videos, please visit http://www.outreach.io/GameOn.

About Outreach

Outreach is the largest and fastest-growing sales engagement platform that helps companies dramatically increase productivity and drive smarter, more insightful engagement with their customers. The only sales engagement platform to make the Forbes Cloud 100, Outreach was also the fastest-growing Sales Engagement Platform on the Deloitte Technology Fast 500. More than 4,600 companies such as Adobe, Tableau, Okta, Splunk, DocuSign, and SAP depend on Outreach's enterprise-scale, unparalleled customer adoption, and robust AI-powered innovation. Outreach is a privately held company based in Seattle, Washington. To learn more, please visit www.outreach.io.

PR Contact: Amanda Woolley [emailprotected]

SOURCE Outreach

http://www.outreach.io


AI-Focused Veterans Affairs Organization Shares Best Practices With Other Agencies – ExecutiveGov

Posted: at 10:35 pm

Nichols Martin | May 11, 2021 | News, Technology

Department of Veterans Affairs

The Department of Veterans Affairs' National Artificial Intelligence Institute is sharing its expertise with other agencies to develop a catalog of AI use cases, FedScoop reported Monday.

The institute has shared input with the Veteran Engagement Board, the Data Governance Council and other federal agencies to create a reference for AI uses.

Gil Alterovitz, VA's director of AI, said at the SNG Live: Enhancing AI in Government event that the department plans to trial a set of modules that can support an AI-focused internal review board.

Agencies have engaged in AI data sharing as part of efforts to align with former President Trump's executive order on trustworthy AI, issued in late 2020.



Neurotechnology Releases New Version of StockGeist.ai Platform for Real-Time Monitoring of Publicly Traded Companies – PRNewswire

Posted: at 10:35 pm

"With StockGeist.ai, you can quickly feel the spirit of the most recent developments in the business world."

StockGeist is a web-based platform that uses deep learning models to provide a convenient, real-time monitor of the sentiment and context behind developments in the stock market as reflected in the media and social media. StockGeist.ai's intuitive interface lets users quickly build customized watchlists with companies of interest to observe the dynamics in their rankings and track other up-to-date information.

"With StockGeist.ai, you can quickly feel the spirit of the most recent developments in the business world," said Dr. Vytautas Abramavicius, StockGeist.ai team lead from Neurotechnology. "Our web platform aggregates a tremendous amount of data from various media and social media sources. Using modern deep learning-based NLP (Natural Language Processing) models, StockGeist.ai allows users to derive meaningful insights from noisy data. It reflects our best efforts to bring this information to investors and traders in an intuitive and efficient way, so they can make faster decisions regarding stocks of interest," Abramavicius added.

StockGeist.ai was first released as a free-to-use platform on October 7, 2020. Since then, a number of new features have been added.

For more information about StockGeist.ai, please visit: http://www.stockgeist.ai.

About Neurotechnology

Neurotechnology is a developer of high-precision algorithms and software based on deep neural networks and other AI-related technologies. The company was launched in 1990 in Vilnius, Lithuania, with the key idea of using neural networks for various applications, such as biometric person identification, computer vision, robotics and artificial intelligence. Since the first release of its fingerprint identification system in 1991, the company has delivered more than 200 products and version upgrades. More than 3,000 system integrators, security companies and hardware providers in more than 140 countries integrate Neurotechnology's algorithms into their products. The company's algorithms have achieved top results in independent technology evaluations, including NIST MINEX, PFT, FRVT, IREX and FVC-onGoing.

SOURCE Neurotechnology


Trustworthy AI versus ethical AI – what’s the difference, and why does it matter? – Diginomica

Posted: at 10:35 pm

( Andrey_Popov - Shutterstock)

I've written before about semantic ambiguity in natural languages and how difficult, if not impossible, it would be to render natural languages into a digital object.

The reasons: the gestalt of a silicon processor versus the human brain and brain-to-brain communication:

Something is ambiguous when it can be understood in two or more possible senses or ways. If the ambiguity is in a single word, it is called lexical ambiguity. ... In everyday speech, ambiguity can sometimes be understood as something witty or deceitful.

Today, the terms trustworthy AI and ethical AI are used interchangeably. The problem is that trustworthy AI is not necessarily ethical, and ethical AI is not necessarily trustworthy. The casual commingling of the terms has unfortunate consequences.

Let's break down trust and trustworthiness. There is a difference between 'trust,' which can be described in pretty straightforward factual terms, and 'trustworthiness,' which is a very different matter and has a value component. It is about what or who should be trusted. Unfortunately, we can both trust those who are not trustworthy, and not trust those who are. Trust and transparency go together: we can only trust what we are clear is being asked of us.

Ethics determines whether a strategy should be chosen because, in simple utilitarian terms, it secures the best overall aggregate balance of benefits over harms and costs, or whether it rests on a belief that there are fundamental human rights that should never be sacrificed. Values inform a judgment of a proper or proportionate balancing of the loss of individual liberty and privacy against the gain of certain public goods, or of whether it is fair to expect some social and age groups to suffer disproportionately in any public health initiative.

Here is one attempt to define ethical AI:

Organizations ready to embrace AI and thrive in the Age of With must start by putting trust at the center. First, they must thoroughly assess whether their organization meets the criteria for trustworthy and ethical AI; it's a necessary step in increasing the returns and managing the risks that constitute the transformational promise of AI.

This is messed up. Trust is something given based on transparency, reputation, and sometimes, unfortunately, blind faith in wholly untrustworthy characters. It is not a consummate good. Putting trust at the center implies ethics are of secondary concern.

Here are a few examples of trustworthy but potentially unethical models:

Predictive policing: City government is in an endless cycle of allocating resources for all of the things it has to do. One area that gets cut is policing. In an attempt to introduce some element of fairness (or at least science) into how police are deployed and redeployed, cities have turned to several AI solutions that predict where police need to be. Implementing these systems is an act of trust in their operation and outcomes, but experience has shown that they can produce unethical results.

The models themselves are, for the most part, free from bias, but they calculate occurrences of crimes in segments of the city and assign more policing accordingly. The problem is that, though organized to fight Class 1 crimes (homicides, arson and assaults), more boots on the ground begin to pick up more Class 2 offenses, such as vagrancy, trespass, curfew violations or possession of small amounts of drugs. As this data flows back to the system in an endless feedback loop, more police are assigned, and recorded crime rates soar. Moreover, since these phenomena occur in neighborhoods with a high proportion of people of color, the result is entirely unethical.
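The feedback loop is easy to demonstrate. The toy model below, with invented numbers, assumes offending is identical and plentiful in two districts, that each patrol unit records roughly the same number of low-level offenses wherever it is sent, and that patrols are reallocated each year in proportion to recorded crime; it is a sketch of the mechanism, not of any deployed system.

```python
# Toy model of the feedback loop described above. Underlying offending is
# identical and plentiful in both districts, so each patrol unit records
# about 5 low-level (Class 2) offenses a year regardless of where it is.
# Patrols are then reallocated in proportion to recorded crime.
patrols = {"district_a": 11, "district_b": 9}  # small initial skew

for year in range(1, 6):
    recorded = {d: 5 * patrols[d] for d in patrols}   # records track patrols
    total = sum(recorded.values())
    patrols = {d: round(20 * recorded[d] / total) for d in recorded}
    print(year, recorded, patrols)

# The 11/9 split repeats every year: the initial skew is perfectly
# self-confirming. Recorded crime reflects where police were sent, not
# where offending differs, so the data can never correct the imbalance.
```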

Intrusive personalization: Handing your most intimate information to ad servers and marketers, clicking through websites, ordering online, talking to Siri, people tend to trust these applications even though subsequent use of the data can be highly unethical, fueling persuasion, digital phenotyping and the disruption of civil society.

Life insurance: Life insurance is the paradigm of trust. Its commercials promise "good hands," "the future is safe," "partners afterlife too," "for a secure life." The assumption when purchasing life insurance is that the face value will be distributed promptly to your beneficiaries. There are circumstances, clearly elucidated in the contract, where that would not happen, such as suicide in the first two years, or acts of war. But the two-year exclusion doesn't offer complete protection. Another exclusion is the matter of "material representations on the application." Effectively, it gives the insurance company the right to deny the death claim over material misrepresentations, which can be minor.

The cynical part of this is that insurers typically do not investigate these situations at the outset. Still, when a claim is large, or falls just beyond the two years, they will dig into thousands of sources to invalidate the claim and return the premium plus interest, but not the death benefit. This is an example of the perception of a trustworthy instrument ("If I die, my family will be taken care of") that is, in fact, an unethical process.

Ethical, but not trustworthy: One prominent example is the use of machine learning in radiology. After some early gaffes, when Stanford Medical's radiation oncology model produced noticeably different results across ethnic groups, the team went back to the drawing board. They developed a system that identified tumors that most of a panel of radiologists did not, and the false positives and negatives were evenly distributed across groups. They'd developed an ethical system, cleansed of bias, but trust was a different issue. First of all, doctors are a conservative bunch, and many refused to accept the results. Then there was an unanticipated problem: as the software was licensed to other hospitals, the accuracy of the system dropped dramatically. The reason was that Stanford had state-of-the-art imaging technology that other hospitals did not. Trust in the system plummeted, and took some time to regain.
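The kind of audit the Stanford team ran can be sketched as a per-group comparison of error rates. The labels, predictions, and group names below are invented; a real audit would use held-out clinical data, but the computation is the same in spirit.

```python
# Sketch of a bias audit: compare false-positive and false-negative rates
# across patient groups. Data is invented; 1 = tumor present.
from collections import defaultdict

cases = [  # (group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, truth, pred in cases:
    c = counts[group]
    if truth == 0:
        c["neg"] += 1
        c["fp"] += pred == 1   # healthy patient flagged as having a tumor
    else:
        c["pos"] += 1
        c["fn"] += pred == 0   # tumor missed

for group, c in counts.items():
    print(f"{group}: FPR={c['fp']/c['neg']:.2f}  FNR={c['fn']/c['pos']:.2f}")
# Large gaps between groups in either rate are the bias signal an
# evenly-distributed-errors requirement is meant to rule out.
```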

Let's drop "trustworthy" as a criterion for ethical AI. Ethics is about knowing what to do and doing it. Trust is about what or who should be trusted, or how to create trust, whether or not it's ethical. Though they are commingled in specific ways, pursuing trust to the exclusion of ethics is dangerous.


Ethics of AI: Benefits and risks of artificial intelligence – ZDNet

Posted: May 4, 2021 at 8:10 pm

In 1949, at the dawn of the computer age, the French philosopher Gabriel Marcel warned of the danger of naively applying technology to solve life's problems.

Life, Marcel wrote in Being and Having, cannot be fixed the way you fix a flat tire. Any fix, any technique, is itself a product of that same problematic world, and is therefore problematic, and compromised.

Marcel's admonition is often summarized in a single memorable phrase: "Life is not a problem to be solved, but a mystery to be lived."

Despite that warning, seventy years later, artificial intelligence is the most powerful expression yet of humans' urge to solve or improve upon human life with computers.

But what are these computer systems? As Marcel would have urged, one must ask where they come from, whether they embody the very problems they would purport to solve.

Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.

That questioning is made all the more urgent because of scale. AI systems are reaching tremendous size in terms of the compute power they require, and the data they consume. And their prevalence in society, both in the scale of their deployment and the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras. At the same time, increasing scale means many aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners.

Ethical concerns range from the esoteric, such as who is the author of an AI-created work of art; to the very real and very disturbing matter of surveillance in the hands of military authorities who can use the tools with impunity to capture and kill their fellow citizens.

Somewhere in the questioning is a sliver of hope that with the right guidance, AI can help solve some of the world's biggest problems. The same technology that may propel bias can reveal bias in hiring decisions. The same technology that is a power hog can potentially contribute answers to slow or even reverse global warming. The risks of AI at the present moment arguably outweigh the benefits, but the potential benefits are large and worth pursuing.

As Margaret Mitchell, formerly co-lead of Ethical AI at Google, has elegantly encapsulated, the key question is, "what could AI do to bring about a better society?"

Mitchell's question would be interesting on any given day, but it comes within a context that has added urgency to the discussion.

Mitchell's words come from a letter she wrote and posted on Google Drive following the departure of her co-lead, Timnit Gebru, in December. Gebru made clear that she was fired by Google, a claim Mitchell backs up in her letter. Jeff Dean, head of AI at Google, wrote in an internal email to staff that the company accepted the resignation of Gebru. Gebru's former colleagues offer a neologism for the matter: Gebru was "resignated" by Google.

Margaret Mitchell [right] was fired on the heels of the removal of Timnit Gebru.

I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I've been immediately fired 🙂

Timnit Gebru (@timnitGebru) December 3, 2020

Mitchell, who expressed outrage at how Gebru was treated by Google, was fired in February.

The departure of the top two ethics researchers at Google cast a pall over Google's corporate ethics, to say nothing of its AI scruples.

As reported by Wired's Tom Simonite last month, two academics invited to participate in a Google conference on safety in robotics in March withdrew from the conference in protest of the treatment of Gebru and Mitchell. A third academic said that his lab, which has received funding from Google, would no longer apply for money from Google, also in support of the two professors.

Google staff quit in February in protest of Gebru and Mitchell's treatment, CNN's Rachel Metz reported. And Samy Bengio, a prominent scholar on Google's AI team who helped to recruit Gebru, resigned this month in protest over Gebru and Mitchell's treatment, Reuters has reported.

A petition on Medium signed by 2,695 Google staff members and 4,302 outside parties expresses support for Gebru and calls on the company to "strengthen its commitment to research integrity and to unequivocally commit to supporting research that honors the commitments made in Google's AI Principles."

Gebru's situation is an example of how technology is not neutral, as the circumstances of its creation are not neutral, as MIT scholars Katlyn Turner, Danielle Wood, and Catherine D'Ignazio discussed in an essay in January.

"Black women have been producing leading scholarship that challenges the dominant narratives of the AI and Tech industry: namely that technology is ahistorical, 'evolved', 'neutral' and 'rational' beyond the human quibbles of issues like gender, class, and race," the authors write.

During an online discussion of AI in December, AI Debate 2, Celeste Kidd, a professor at UC Berkeley, reflecting on what had happened to Gebru, remarked, "Right now is a terrifying time in AI."

"What Timnit experienced at Google is the norm, hearing about it is what's unusual," said Kidd.

The questioning of AI and how it is practiced, and the phenomenon of corporations snapping back in response, comes as the commercial and governmental implementation of AI make the stakes even greater.

Ethical issues take on greater resonance when AI expands to uses that are far afield of the original academic development of algorithms.

The industrialization of the technology is amplifying the everyday use of those algorithms. A report this month by Ryan Mac and colleagues at BuzzFeed found that "more than 7,000 individuals from nearly 2,000 public agencies nationwide have used technology from startup Clearview AI to search through millions of Americans' faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members."

Clearview neither confirmed nor denied BuzzFeed's findings.

New devices are being put into the world that rely on machine learning forms of AI in one way or another. For example, so-called autonomous trucking is coming to highways, where a "Level 4 ADAS" tractor trailer is supposed to be able to move at highway speed on certain designated routes without a human driver.

A company making that technology, TuSimple, of San Diego, California, is going public on Nasdaq. In its IPO prospectus, the company says it has 5,700 reservations so far in the four months since it announced availability of its autonomous driving software for the rigs. When a truck is rolling at high speed, carrying a huge load of something, making sure the AI software safely conducts the vehicle is clearly a priority for society.

TuSimple says it has almost 6,000 pre-orders for a driverless semi-truck. When a truck is rolling at high speed, carrying a huge load of something, making sure the AI software safely conducts the vehicle is clearly a priority for society.

Another area of concern is AI applied in the area of military and policing activities.

Arthur Holland Michel, author of an extensive book on military surveillance, Eyes in the Sky, has described how ImageNet has been used to enhance the U.S. military's surveillance systems. For anyone who views surveillance as a useful tool to keep people safe, that is encouraging news. For anyone worried about the issues of surveillance unchecked by any civilian oversight, it is a disturbing expansion of AI applications.

Calls are rising for mass surveillance, enabled by technology such as facial recognition, not to be used at all.

As ZDNet's Daphne Leprince-Ringuet reported last month, 51 organizations, including AlgorithmWatch and the European Digital Society, have sent a letter to the European Union urging a total ban on surveillance.

And it looks like there will be some curbs after all. After an extensive report on the risks a year ago, and a companion white paper, and solicitation of feedback from numerous "stakeholders," the European Commission this month published its proposal for "Harmonised Rules On Artificial Intelligence For AI." Among the provisos is a curtailment of law enforcement use of facial recognition in public.

"The use of 'real time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply," the report states.

The backlash against surveillance keeps finding new examples to which to point. The paradigmatic example had been the monitoring of ethnic Uyghurs in China's Xinjiang region. Following a February military coup in Myanmar, Human Rights Watch reports that human rights are in the balance given the surveillance system that had just been set up. That project, called Safe City, was deployed in the capital Naypyidaw in December.

As one researcher told Human Rights Watch, "Before the coup, Myanmar's government tried to justify mass surveillance technologies in the name of fighting crime, but what it is doing is empowering an abusive military junta."

Also: The US, China and the AI arms race: Cutting through the hype

The National Security Commission on AI's Final Report in March warned the U.S. is not ready for global conflict that employs AI.

As if all those developments weren't dramatic enough, AI has become an arms race, and nations have now made AI a matter of national policy to avoid what is presented as existential risk. The U.S.'s National Security Commission on AI, staffed by tech heavy hitters such as former Google CEO Eric Schmidt, Oracle CEO Safra Catz, and Amazon's incoming CEO Andy Jassy, last month issued its 756-page "final report" for what it calls the "strategy for winning the artificial intelligence era."

The authors "fear AI tools will be weapons of first resort in future conflicts," they write, noting that "state adversaries are already using AI-enabled disinformation attacks to sow division in democracies and jar our sense of reality."

The Commission's overall message is that "The U.S. government is not prepared to defend the United States in the coming artificial intelligence era." To get prepared, the White House needs to make AI a cabinet-level priority, and "establish the foundations for widespread integration of AI by 2025." That includes "building a common digital infrastructure, developing a digitally-literate workforce, and instituting more agile acquisition, budget, and oversight processes."

Why are these issues cropping up? There are issues of justice and authoritarianism that are timeless, but there are also new problems with the arrival of AI, and in particular its modern deep learning variant.

Consider the incident between Google and scholars Gebru and Mitchell. At the heart of the dispute was a research paper the two were preparing for a conference, a paper that crystallized a questioning of the state of the art in AI.

The paper that touched off a controversy at Google: Bender, Gebru, McMillan-Major, and Mitchell argue that very large language models such as Google's BERT present two dangers: massive energy consumption and the perpetuation of biases.

The paper, coauthored by Emily Bender of the University of Washington, Gebru, Angelina McMillan-Major, also of the University of Washington, and Mitchell, titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" focuses on a topic within machine learning called natural language processing, or NLP.

The authors describe how language models such as GPT-3 have gotten bigger and bigger, culminating in very large "pre-trained" language models, including Google's Switch Transformer, also known as Switch-C, which appears to be the largest model published to date. Switch-C uses 1.6 trillion neural "weights," or parameters, and is trained on a corpus of 745 gigabytes of text data.

The authors identify two risk factors. One is the environmental impact of larger and larger models such as Switch-C. Those models consume massive amounts of compute, and generate increasing amounts of carbon dioxide. The second issue is the replication of biases in the generation of text strings produced by the models.

The environmental issue is one of the most vivid examples of the matter of scale. As ZDNet has reported, the state of the art in NLP, and, indeed, much of deep learning, is to keep using more and more GPU chips, from Nvidia and AMD, to operate ever-larger software programs. Accuracy of these models seems to increase, generally speaking, with size.

But there is an environmental cost. Bender and team cite previous research that has shown that training a large language model, a version of Google's Transformer that is smaller than Switch-C, emitted 284 tons of carbon dioxide, which is 57 times as much CO2 as a human being is estimated to be responsible for releasing into the environment in a year.
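To see where the "57 times" figure comes from, here is a quick back-of-the-envelope check (a sketch; the per-person baseline is derived from the two reported numbers, not quoted from the research):

```python
# Sanity-check the "57 times" comparison using the two reported figures.
training_emissions_tons = 284   # CO2 from training one large Transformer variant
ratio_vs_person_year = 57       # multiple of one person's annual footprint

implied_person_year = training_emissions_tons / ratio_vs_person_year
print(f"Implied per-person annual footprint: {implied_person_year:.1f} tons CO2")
# -> about 5 tons of CO2 per person per year, the baseline the comparison assumes
```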

It's ironic, the authors note, that the ever-rising environmental cost of such huge GPU farms falls most immediately on the communities at the forefront of climate risk, whose dominant languages aren't even accommodated by such language models, in particular the population of the Maldives archipelago in the Indian Ocean, whose official language is Dhivehi, a branch of the Indo-Aryan family:

Is it fair or just to ask, for example, that the residents of the Maldives (likely to be underwater by 2100) or the 800,000 people in Sudan affected by drastic floods pay the environmental price of training and deploying ever larger English LMs [language models], when similar large-scale models aren't being produced for Dhivehi or Sudanese Arabic?

The second concern has to do with the tendency of these large language models to perpetuate biases that are contained in the training set data, which are often publicly available writing that is scraped from places such as Reddit. If that text contains biases, those biases will be captured and amplified in generated output.

The fundamental problem, again, is one of scale. The training sets are so large that the biases they contain cannot be properly documented, nor can the sets be properly curated to remove bias.

"Large [language models] encode and reinforce hegemonic biases, the harms that follow are most likely to fall on marginalized populations," the authors write.

The risk of the huge cost of compute for ever-larger models has been a topic of debate for some time now. Part of the problem is that measures of performance, including energy consumption, are often cloaked in secrecy.

Some benchmark tests in AI computing are getting a little bit smarter. MLPerf, the main measure of performance of training and inference in neural networks, has been making efforts to provide more representative measures of AI systems for particular workloads. This month, the organization overseeing MLPerf, the MLCommons, for the first time asked vendors to list not just performance but energy consumed for those machine learning tasks.

Regardless of the data, the fact is systems are getting bigger and bigger in general. The response to the energy concern within the field has been twofold: to build computers that are more efficient at processing the large models, and to develop algorithms that will compute deep learning in a more intelligent fashion than just throwing more computing at the problem.

Cerebras's Wafer Scale Engine, the world's biggest chip, represents the state of the art in AI computing and is designed for the ever-increasing scale of things such as language models.

On the first score, a raft of startups have arisen to offer computers dedicated to AI that they say are much more efficient than the hundreds or thousands of GPUs from Nvidia or AMD typically required today.

They include Cerebras Systems, which has pioneered the world's largest computer chip; Graphcore, the first company to offer a dedicated AI computing system, with its own novel chip architecture; and SambaNova Systems, which has received over a billion dollars in venture capital to sell both systems and an AI-as-a-service offering.

"These really large models take huge numbers of GPUs just to hold the data," Kunle Olukotun, Stanford University professor of computer science who is a co-founder of SambaNova, told ZDNet, referring to language models such as Google's BERT.

"Fundamentally, if you can enable someone to train these models with a much smaller system, then you can train the model with less energy, and you would democratize the ability to play with these large models," by involving more researchers, said Olukotun.

Those designing deep learning neural networks are simultaneously exploring ways the systems can be more efficient. For example, the Switch Transformer from Google, the very large language model referenced by Bender and team, can reach some optimal spot in its training with far fewer than its maximum 1.6 trillion parameters, lead author William Fedus and colleagues at Google state.

The software "is also an effective architecture at small scales as well as in regimes with thousands of cores and trillions of parameters," they write.

The key, they write, is to use a property called sparsity, which prunes which of the weights get activated for each data sample.
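As a rough illustration of that idea (a minimal sketch, not Google's implementation; the sizes and names here are invented for the example), a switch-style layer routes each token to a single "expert" slice of the weights, leaving the rest untouched:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts = 8, 4
router_w = rng.normal(size=(d_model, n_experts))                 # routing weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def switch_layer(token_vec):
    """Top-1 routing: only one expert's weights are activated per token."""
    logits = token_vec @ router_w        # score each expert for this token
    k = int(np.argmax(logits))           # pick the single best-scoring expert
    return experts[k] @ token_vec, k

x = rng.normal(size=d_model)
y, chosen = switch_layer(x)
print(f"token routed to expert {chosen}; the other {n_experts - 1} stayed idle")
```

Because only one expert runs per token, the compute per sample stays roughly constant even as the total parameter count grows with the number of experts.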

Scientists at Rice University and Intel propose slimming down the computing budget of large neural networks by using a hash table that selects the neural net activations for each input, a kind of pruning of the network.

Another approach to working smarter is a technique called hashing. That approach is embodied in a project called "Slide," introduced last year by Beidi Chen of Rice University and collaborators at Intel. They use something called a hash table to identify individual neurons in a neural network that can be dispensed with, thereby reducing the overall compute budget.

Chen and team call this "selective sparsification", and they demonstrate that running a neural network can be 3.5 times faster on a 44-core CPU than on an Nvidia Tesla V100 GPU.
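A minimal sketch of that hashing idea (illustrative only, not the Slide codebase): hash each neuron's weight vector with random hyperplanes, and for each input compute only the neurons that land in the same bucket:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_neurons = 16, 1000
W = rng.normal(size=(n_neurons, n_in))       # one weight vector per neuron

# Random hyperplanes give a locality-sensitive hash: similar vectors
# tend to land in the same bucket.
planes = rng.normal(size=(8, n_in))

def lsh_bucket(v):
    return tuple((planes @ v > 0).astype(int))

# Pre-bucket neurons by hashing their weight vectors.
table = {}
for i in range(n_neurons):
    table.setdefault(lsh_bucket(W[i]), []).append(i)

def sparse_forward(x):
    """Activate only the neurons whose hash bucket matches the input's."""
    active = table.get(lsh_bucket(x), [])
    out = np.zeros(n_neurons)
    out[active] = W[active] @ x              # skip every other neuron
    return out, len(active)

x = rng.normal(size=n_in)
_, n_active = sparse_forward(x)
print(f"computed {n_active} of {n_neurons} neurons for this input")
```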

As long as large companies such as Google and Amazon dominate deep learning in research and production, it is possible that "bigger is better" will dominate neural networks. If smaller, less resource-rich users take up deep learning in smaller facilities, then more-efficient algorithms could gain new followers.

The second issue, AI bias, runs in a direct line from the Bender et al. paper back to a paper in 2018 that touched off the current era in AI ethics, the paper that was the shot heard 'round the world, as they say.

Buolamwini and Gebru brought international attention to the matter of bias in AI with their 2018 paper, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," authored by Gebru, then at Microsoft, along with MIT researcher Joy Buolamwini. The paper revealed "substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems": commercially available facial recognition systems had high accuracy on images of light-skinned men but were catastrophically inaccurate on images of darker-skinned women. The authors' critical question was why such inaccuracy was tolerated in commercial systems.

Buolamwini and Gebru presented their paper at the Association for Computing Machinery's Conference on Fairness, Accountability, and Transparency. That is the same conference where in February Bender and team presented the Parrot paper. (Gebru is a co-founder of the conference.)

Both Gender Shades and the Parrot paper deal with a central ethical concern in AI, the notion of bias. AI in its machine learning form makes extensive use of principles of statistics. In statistics, bias is when an estimate of something systematically fails to match the true quantity of that thing.

If, for example, a political pollster surveys voters' preferences but only gets responses from the people willing to talk to poll takers, the result may suffer from what is called response bias: the estimate of a candidate's popularity will not accurately reflect preference in the broader population.
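A small simulation makes the mechanism concrete (a sketch with invented numbers, not real polling data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth: 48% of the full population favors the candidate.
population = rng.random(100_000) < 0.48

# Response bias: supporters answer pollsters more often (70% vs. 50%).
answer_prob = np.where(population, 0.70, 0.50)
responded = rng.random(100_000) < answer_prob

estimate = population[responded].mean()
print(f"true support: 48.0%, polled estimate: {estimate:.1%}")
# The poll overstates support by several points: a biased estimator.
```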

Also: AI and ethics: One-third of executives are not aware of potential AI bias

The Gender Shades paper in 2018 broke ground in showing how an algorithm, in this case facial recognition, can be extremely out of alignment with the truth, a form of bias that hits one particular sub-group of the population.

Flash forward, and the Parrot paper shows how that statistical bias has become exacerbated by scale effects in two particular ways. One way is that data sets have proliferated and increased in scale, obscuring their composition. Such obscurity makes it hard to see how the data may already be biased versus the truth.

Second, NLP programs such as GPT-3 are generative: they flood the world with an enormous quantity of created technological artifacts, such as automatically generated writing. Those artifacts replicate the biases in the training data and amplify them, thereby proliferating such biases.

On the first score, the scale of data sets, scholars have argued for going beyond merely tweaking a machine learning system to mitigate bias, and instead investigating the data sets used to train such models, to explore the biases that are in the data itself.

One example is an approach created by Mitchell and her team at Google, before she was fired from the Ethical AI group, called model cards, designed to excavate biases hidden in data sets. Each model card reports metrics for a given neural network model; a card for an algorithm that automatically finds "smiling photos," say, would report its rate of false positives and other measures. As explained in the introductory paper, "Model cards for model reporting," data sets need to be regarded as infrastructure. Doing so exposes the "conditions of their creation," which are often obscured. The research suggests treating data sets as a matter of "goal-driven engineering," and asking critical questions such as whether data sets can be trusted and whether they build in biases.
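As a rough sketch of what such a card might record (the model name and error rates below are hypothetical, not from Google's paper):

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal, illustrative model card: metrics reported per subgroup,
    so skewed error rates are visible rather than averaged away."""
    model_name: str
    intended_use: str
    false_positive_rate: dict = field(default_factory=dict)

card = ModelCard(
    model_name="smiling-photo-detector",     # hypothetical model
    intended_use="flagging 'smiling' photos in a consumer photo app",
    false_positive_rate={
        "lighter-skinned subjects": 0.02,    # illustrative numbers only
        "darker-skinned subjects": 0.09,
    },
)

for group, fpr in card.false_positive_rate.items():
    print(f"{card.model_name}: false positive rate for {group}: {fpr:.0%}")
```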

Another example is a paper last year, featured in The State of AI Ethics, by Emily Denton and colleagues at Google, "Bringing the People Back In," in which they propose what they call a genealogy of data, with the goal "to investigate how and why these datasets have been created, what and whose values influence the choices of data to collect, the contextual and contingent conditions of their creation, and the emergence of current norms and standards of data practice."

Scholars have already shed light on the murky circumstances of some of the most prominent data sets used in dominant machine learning models. For example, Vinay Uday Prabhu, chief scientist at startup UnifyID Inc., in a virtual talk at Stanford University last year examined the ImageNet data set, a collection of 15 million images that have been labeled with descriptions.

The introduction of ImageNet in 2009 arguably set in motion the deep learning epoch. There are problems, however, with ImageNet, particularly the fact that it appropriated personal photos from Flickr without consent, Prabhu explained.

Those non-consensual pictures, said Prabhu, fall into the hands of thousands of entities all over the world, and that leads to a very real personal risk: what he called the "susceptibility phase," a massive invasion of privacy.

Using what's called reverse image search, via a commercial online service, Prabhu was able to take ImageNet pictures of people and "very easily figure out who they were in the real world." Companies such as Clearview, said Prabhu, are merely a symptom of that broader problem of a kind-of industrialized invasion of privacy.

An ambitious project has sought to catalog that misappropriation. Called Exposing.ai, it is the work of Adam Harvey and Jules LaPlace, and it formally debuted in January. The authors have spent years tracing how personal photos were appropriated without consent for use in machine learning training sets.

The site is a search engine where one can "check if your Flickr photos were used in dozens of the most widely used and cited public face and biometric image datasets [...] to train, test, or enhance artificial intelligence surveillance technologies for use in academic, commercial, or defense related applications," as Harvey and LaPlace describe it.

Some argue the issue goes beyond simply the contents of the data to the means of its production. Amazon's Mechanical Turk service is ubiquitous as a means of employing humans to prepare vast data sets, such as by applying labels to pictures for ImageNet or rating chat bot conversations.

An article last month by Vice's Aliide Naylor quoted Mechanical Turk workers who felt coerced in some instances to produce results in line with a predetermined objective.

Turkopticon's feedback aims to arm workers on Amazon's Mechanical Turk with honest appraisals of the working conditions offered by various Turk clients.

A project called Turkopticon has arisen to crowd-source reviews of the parties who contract with Mechanical Turk, to help Turk workers avoid abusive or shady clients. It is one attempt to ameliorate what many see as the troubling plight of an expanding underclass of piece workers, what Mary Gray and Siddharth Suri of Microsoft have termed "ghost work."

There are small signs the message of data set concern has gotten through to large organizations practicing deep learning. Facebook this month announced a new data set that was created not by appropriating personal images but rather by making original videos of over three thousand paid actors who gave consent to appear in the videos.

The paper by lead author Caner Hazirbas and colleagues explains that the "Casual Conversations" data set is distinguished by the fact that "age and gender annotations are provided by the subjects themselves." Skin type of each person was annotated by the authors using the so-called Fitzpatrick Scale, the same measure that Buolamwini and Gebru used in their Gender Shades paper. In fact, Hazirbas and team prominently cite Gender Shades as precedent.

Hazirbas and colleagues found that, among other things, when machine learning systems are tested against this new data set, some of the same failures crop up as identified by Buolamwini and Gebru. "We noticed an obvious algorithmic bias towards lighter skinned subjects," they write.
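The underlying disparity check is simple to express (a sketch with toy labels, not the Casual Conversations data): compute accuracy separately for each annotated group and compare:

```python
def accuracy_by_group(y_true, y_pred, group_labels):
    """Per-group accuracy: the basic disparity check behind Gender Shades."""
    out = {}
    for g in set(group_labels):
        idx = [i for i, lbl in enumerate(group_labels) if lbl == g]
        out[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return out

# Toy example with illustrative labels (not real data):
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
groups = ["lighter", "lighter", "lighter", "darker",
          "darker", "darker", "lighter", "darker"]
print(accuracy_by_group(y_true, y_pred, groups))
# -> e.g. {'lighter': 1.0, 'darker': 0.5}; the gap between groups is the bias.
```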

See the original post:

Ethics of AI: Benefits and risks of artificial intelligence - ZDNet
