
Category Archives: Ai

Powering the next generation of AI – MIT Technology Review

Posted: May 9, 2022 at 9:01 pm

Arun Subramaniyan, Vice President, Cloud & AI, Strategy & Execution, Intel Corporation

Arun Subramaniyan joined Intel to lead the Cloud & AI Strategy team. He came to Intel from AWS, where he led the global solutions team for machine learning, quantum computing, high performance computing (HPC), autonomous vehicles, and autonomous computing. His team was responsible for developing solutions across all areas of HPC, quantum computing, and large-scale machine learning applications, spanning a $1.5B+ portfolio. Arun founded and grew the global go-to-market and solutions teams for autonomous computing and quantum computing at AWS and grew those businesses 2-3x.

Arun's primary areas of research focus are Bayesian methods, global optimization, probabilistic deep learning for large-scale applications, and distributed computing. He enjoys working at the intersection of massively parallel computing and modeling large-scale systems. Before AWS, Arun founded and led the AI products team at GE's Oil & Gas division and grew the digital products business successfully. He and his team developed deep learning-augmented hybrid analytics for all segments of the oil & gas industry. Arun led the development of the Digital Twin platform for GE at GE's Global Research Center. The platform continues to enable several thousand engineers to build advanced models efficiently. The asset-specific cumulative damage modeling techniques he and his team pioneered define the standard for industrial damage modeling.

As a Six Sigma Master Black Belt, he developed advanced techniques and tools for efficiently modeling large-scale systems like jet engine fleets and gas turbines in power plants, accelerating design times by 3-4x. Arun is a prolific researcher with a Ph.D. in aerospace engineering from Purdue University, 19 granted patents (54 filed), and 50+ international publications that have been cited more than 1,000 times, with an h-index of 13. He is also a recipient of the Hull Award from GE, which honors technologists for their outstanding technical impact.

Elizabeth Bramson-Boudreau is the CEO and publisher of MIT Technology Review, the Massachusetts Institute of Technology's independent media company.

Since Elizabeth took the helm of MIT Technology Review in mid-2017, the business has undergone a massive transformation from its previous position as a respected but niche print magazine to a widely read, multi-platform media brand with a global audience and a sustainable business. Under her leadership, MIT Technology Review has been lauded for its editorial authority, its best-in-class events, and its novel use of independent, original research to support both advertisers and readers.

Elizabeth has a 20-year background in building and running teams at world-leading media companies. She maintains a keen focus on new ways to commercialize media content to appeal to discerning, demanding consumers as well as B2B audiences.

Prior to joining MIT Technology Review, Elizabeth held a senior executive role at The Economist Group, where her leadership stretched across business lines and included mergers and acquisitions; editorial and product creation and modernization; sales; marketing; and events. Earlier in her career, she worked as a consultant advising technology firms on market entry and international expansion.

Elizabeth holds an executive MBA from the London Business School, an MSc from the London School of Economics, and a bachelor's degree from Swarthmore College.


The promise of AI in real estate: what's the verdict? – FinLedger

Posted: at 9:01 pm

Artificial intelligence (AI) as an idea is more than 50 years old, though only in the last five years or so has AI come to dominate the world of business. Business communication, that is. Just as it is hard to overestimate the potential of AI to create change at massive scale, it is nearly impossible to exaggerate the degree to which talk of AI in both the news and in the marketing slicks created by PR agencies is hyperbolic and often disconnected from ground realities.

So what about AI in real estate? Over the past several years, many companies have made claims about using AI to disrupt the space; I am myself affiliated with more than one such company. So what, indeed, is the verdict: reality or rhetoric?

I'll use my authorial perch to play both judge and jury here, but, spoiler alert, the jury is hopelessly hung. As judge, I'll say that so far we've seen far more rhetoric than reality.

To be fair to the AI community, though, let's at the outset state areas in which AI has helped in fundamental ways:

With regard to AVMs and valuations in general, the ability to contextually analyze hundreds of millions of data points and associate them with particular clusters of variables is fairly new and certainly has come about because of advances in AI. Valuations are the core of real estate because so much emanates from the simple question: What is this house worth? While AVMs and the entire valuation industry took a hit with the rhetoric emanating from Zillow as it crashed and burned with Zillow Offers, this was the unfortunate byproduct of a bad business decision and as such cast unfair doubt on AI.

Computer vision allows us to enter the heretofore closed-off portal: The inside of the house. With device proliferation and fantastic upgrades in the power of edge computing and AI on the edge, true condition-adjusted valuations and service-offerings can be considered legitimate for the first time.

Finally, AI has helped make certain elements of the transaction process better, in an incremental, point-solution fashion.

Still, the promise of AI remains in the domain of potential. For the most part, real estate is very much a human-led and still error-prone ecosystem. Buyers and sellers do not meet in an open market characterized by full transparency and instantaneity. Cumbersome processes and vested interests continue to play an outsized role in residential real estate.

And with all the hubbub about democratization as a sine qua non of the AI story, homeownership has over the last five years become an impossible dream for about one-third of Americans. On these accounts, AI has failed to live up to the hype.

We all need to do more tire-kicking. We need to understand the exact offerings and their precise business value before rejoicing about the advent of AI. The cycle of hyperbole must end, and we have to demand that those who say they are game-changers really change the game. So far, some have, but most have not.

As such, for now, the game remains largely the same.

In other recent proptech news, April's PropTech Retrospect highlighted ESG initiatives and appraisal reform in the industry. Ownwell also raised a $5.75 million seed round for its property tax services.


UofL teams with Microsoft to explore AI in research – uoflnews.com

Posted: at 9:01 pm

The University of Louisville is one of a handful of schools selected by Microsoft to explore how artificial intelligence can be used to help researchers.

UofL is one of seven Microsoft Academic Research Consultants, or MARCs, that will study how researchers might leverage the technology to, for example, sift through large data sets and glean insights. The idea is to understand needs and develop next-generation tools and training that could generate more groundbreaking research here and around the world.

"UofL is home to a rich pool of top researchers in high-tech, cutting-edge fields," said Sharon Kerrick, an assistant vice president at UofL and head of the Digital Transformation Center (DTC), which will lead the on-campus Microsoft effort. "We at the DTC are proud to be among the other top schools to partner with Microsoft to enable groundbreaking research that's engineering our future economy."

The other MARC schools are Duke University, the University of Rochester, the University of Central Florida, the University of South Florida, Texas A&M, Oregon State University and Washington University in St. Louis. The MARCs will serve as liaisons between Microsoft and researchers, seeking to better understand how AI is being and could be used.

UofL has significant earned expertise in this kind of tech-enabled education and research; some researchers are already using computing, big data and artificial intelligence to screen potential drugs and compounds against cancer targets and SARS-CoV-2 and COVID-19, to analyze medical images and more.

UofL also was recently selected by the U.S. Department of Defense to work on research and education to strengthen the country's cyber defenses. UofL was the only school selected from Kentucky for both networks and one of only a handful to hold the competitive Research-1 classification from the Carnegie Classification of Institutions of Higher Education. UofL also recently received significant funding to develop cybersecurity education and conduct cutting-edge biometrics research.

"UofL has a strong record of researching the digital frontier, artificial intelligence and other technologies," said Kerrick. "Through this new partnership with Microsoft, we hope to find new ways to leverage those same technologies to benefit researchers."


5 ways AI can help solve the privacy dilemma – VentureBeat

Posted: at 9:01 pm


There is no disputing the privacy trend. It is here. It is unstoppable. And it is one of the few issues in American life that crosses party lines.

Data shows that 86% of people care about privacy for themselves and others, with 79% willing to act on it by spending time and money to protect their data. And to those cynics who say people moan about privacy and do nothing, the same study found that 47% have taken action because of a company's data policies.

What does this mean for the trillions of dollars that flow through the U.S. economy as a result of the very same privacy violations that are enraging consumers? It appears to be a tectonic conundrum; consider that Meta conceded that Apple's change in its privacy rules has cost it billions and will cost billions more.

But for companies suffering from the effects of Apple's shift to opt-in from opt-out, artificial intelligence could be a solution.

Protecting privacy while allowing the economy to flourish is a data challenge. AI, machine learning, and neural networks have already transformed our lives, from robots to self-driving cars to drug development to a generation of smart assistants that will never double book you.

There is no doubt that AI can power solutions and platforms that protect privacy while giving people the digital experiences they want and allowing businesses to profit.

What are those experiences? It's simple and intuitive to every internet user. We want to be recognized only when it makes our lives easier. That means recognizing me so I don't have to go through the painful process of re-entering my data. It means giving me information, and yes, serving me an ad, that is timely, relevant, and aligned with my needs.

The opportunities within the personalization economy, as I call it, are vast. McKinsey published two white papers about the size of the opportunity and how to do it right. Interestingly, and tellingly, the word privacy isn't mentioned a single time in either of those white papers. That oversight is remarkable and overlooks the tension between privacy and personalization. If you accomplish the former but sacrifice the latter, millions of consumers will miss out.

AI can find that balance, a privacy-respectful personalization, or PRP, and can satisfy our hunger for personalization and recognition, which is hard-wired into the human brain.

Here are five ways AI can achieve a new era of PRP to enable relevance and support entrepreneurs and brands at the same time.

Importantly, this must be done in a way that requires no coding or friction and can easily be deployed on the open web.

In this way, AI will make sure that the unstoppable privacy trend doesnt disenfranchise those who need to find an audience for their product or service.

As part of this AI revolution, there is an opportunity to create a privacy seal, one that goes beyond the must-have limits of compliance with GDPR and other requirements (like TrustArc) and lets the 79% of consumers who will drop their engagement with privacy-invading brands know that an ad unit is free of cookies.

We know that people will share data if there is true reciprocity in that relationship. For example, an EV manufacturer can use a PRP platform to advertise its cars and bring people into the marketing funnel, but the ultimate goal is to find the most qualified and interested EV buyers at the top of the funnel.

A sophisticated and trained AI model can convert leads that are generated through PRP advertising into first-party data by processing the right signals (daypart, geography, and time spent on the ad, for example) as well as asking the right questions in a quiz to discern intent.

And rather than have copywriters scribbling away to create thousands of testable messages, marketers can use GPT-3, an AI-led deep-learning model that produces human-like text, to inspire the reciprocity.

The same AI that can build first-party databases can and must be leveraged to connect with users in a more targeted manner.

However, this is not currently taking place. Database marketers still send full-file emails when the ability exists to use AI to truly recognize me as an individual. If the industry doesnt use AI to create a new era of PRP, then marketers will continue to underleverage the databases they have spent billions to build.

In other words, build an AI-driven flywheel that drives revenue in a privacy-first world.

This is an exploding category that is transforming marketing and that, in part, enabled TikTok to generate more traffic than Facebook last year. Recently, Triller bought Julius, combining an influencer platform with software tools in an AI-led combination that will sharpen the ability of brands to find relevant influencers. This is a way to generate even more first-party data, all within the walls of PRP.

One of the most exciting applications of AI is in the field of adversarial learning. Consumers are looking at privacy holistically: not just what marketers do by selling their data and tracking them, but also their efforts in securing that data from breaches.

Human nature has never changed, nor will it. We want to be recognized as unique individuals, and we have highly-tuned instincts when it comes to protecting ourselves and our families from unwelcome and uninvited intrusions. AI can make sure both of those needs are met, and marketers who get to PRP first will lead their businesses into the future.

Doron Gerstel is CEO of Perion.



Apple AI head quits over return to office policy – TechRadar

Posted: at 9:01 pm

Apple's director of machine learning has left the company after just over three years, reportedly after clashing with the tech giant over its return-to-office policy.

Ian Goodfellow is a noted computer scientist, with his specialities including artificial neural networks and deep learning. The Stanford-educated academic has previously worked as a research scientist at Google Brain and has contributed to numerous widely published university textbooks.

But he has now decided to leave the company following a disagreement with Apple's decision to return employees to the office.

The tech giant's hybrid working policy means that its staff currently need to make at least two visits to the office every week, which will ramp up to three days a week by May 23.

Goodfellow has yet to update his LinkedIn profile, but the drive to get Apple employees back in the office to boost productivity has met with significant backlash since it was first announced by chief executive Tim Cook in March.

Apple worker collective Apple Together took a particularly dim view of the move, noting in a statement: "You have characterized the decision for the Hybrid Working Pilot as being about combining the need to commune in-person and the value of flexible work."

"But in reality, it does not recognize flexible work and is only driven by fear."

They added: "Fear of the future of work, fear of worker autonomy, fear of losing control."

It seems that Apple's employees aren't the only ones who are less than keen about being forced back into the office.

The rising cost of fuel has caused many workers to reconsider commuting to the office in favour of working from home, according to research from software vendor Citrix.

Nearly half (45%) of UK workers plan to stay parked at home to avoid the high costs of commuting.

Close to half of their counterparts around the world say they will do the same according to the research.

"I believe strongly that more flexibility would have been the best policy for my team," said Goodfellow in an internal email seen by The Verge.

Via MacRumors


AI Conquered Chess. Investing Could Be Next. – Barron’s

Posted: at 9:01 pm

The deans of wealth management theory, Nobel Prize winners Harry Markowitz and Robert Merton, decades ago defined the challenge of investment planning as one of dealing with uncertainty.

Especially relevant for financial advisors today, Markowitz wrote in 1959 of securities analysis: "Only the clairvoyant could hope to predict with certainty" what may happen with a given investment. His advice was to incorporate the understanding of risk into building a portfolio. Similarly, Merton wrote in 1971 that the investor does not know the true value of the [expected return] for any investment and can only choose their individual appetite for such uncertainty.

But what if you were less uncertain? What if you could more precisely identify where uncertainty lies and work around it creatively? Would you make different choices?

A modern cohort of scholars, using artificial intelligence (AI) techniques, aims to redefine uncertainty. And they're flipping the old portfolio reasoning on its head. Rather than merely setting a level of risk tolerance for an investment, these AI theorists argue, one should set a goal and then work backward, calculating with some precision which steps along the path to that goal are more or less certain.

It's an approach that likely won't gain wide favor for years to come, but in a decade it could reshape investment planning. Advisors would do well to keep their eye on this AI movement.

The new generation of portfolio theory is a bit like playing chess. It borrows, in fact, from machine-learning approaches that have conquered chess.

In 2017, scholars at Google's DeepMind division showed they could beat the human grandmasters of chess, as well as the masters of the ancient strategy game Go, with a neural network program that took only hours to advance from novice to unparalleled mastery by playing thousands of games.

At the heart of the DeepMind program was a broad AI approach called reinforcement learning, known as RL. The RL approach says if you can specify the end goal of a problem, such as getting an opponent in checkmate, you can work backward to calculate the series of moves that will most likely lead to the goal.

The key is that modern computer horsepower can calculate probabilities at every turn in a game of chess or Go with far greater precision than could a person or even previous statistical computer models. Now turned to the world of investing, the same calculation of uncertainty can be applied to every moment of investment choice along a path to retirement, a level of calculation that was unthinkable in Merton's time.
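The work-backward logic can be made concrete with a toy example. The sketch below is a simplified illustration only, not DeepMind's or anyone's actual code: the wealth grid, the 50/50 risky asset, and every name in it are invented. It values each (time, wealth) state backward from the horizon, just as RL values chess positions backward from checkmate, to find the best achievable probability of reaching a target wealth.

```python
# Toy backward induction for a goal-based savings problem.
# At each (time, wealth) state, choose between a safe asset (wealth
# unchanged) and a risky asset (wealth moves up or down one level, 50/50).
# Illustrative only; the state space and probabilities are made up.

def backward_induction(horizon, wealth_levels, target):
    levels = list(wealth_levels)
    lo, hi = min(levels), max(levels)
    # value[t][w] = best achievable probability of ending at or above `target`
    value = {horizon: {w: 1.0 if w >= target else 0.0 for w in levels}}
    policy = {}
    for t in range(horizon - 1, -1, -1):
        value[t], policy[t] = {}, {}
        for w in levels:
            nxt = value[t + 1]
            safe = nxt[w]
            risky = 0.5 * nxt[min(w + 1, hi)] + 0.5 * nxt[max(w - 1, lo)]
            value[t][w] = max(safe, risky)
            policy[t][w] = "risky" if risky > safe else "safe"
    return value, policy

value, policy = backward_induction(horizon=3, wealth_levels=range(6), target=4)
```

Each entry `value[t][w]` is the probability-weighted outlook from that state, and `policy` records which action achieves it, the same backward sweep, writ tiny, that the investing programs run over vastly larger state spaces.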

The new AI efforts begin by simply improving a bit on the standard approaches formulated in portfolio theory based on Markowitz's and Merton's work. For example, University of Illinois Associate Professor of Applied Mathematics Matthew Dixon in 2020 introduced an RL approach, with collaborator Igor Halperin of New York University, that sets a goal, such as target wealth at retirement.

The program then decides, for something like a defined-benefit pension plan, at each moment in time what the optimal cash contribution and the optimal asset allocation are based on how that moment in time will contribute down the road to the ultimate payout. Unlike Merton's approach of defining a single utility function that is supposed to maximize the payouts over the entire lifetime of investments, the RL program chooses new strategies, and tactics, at each moment in time in relation to perceived uncertainty at that stage.

The program has implications for robo-advisors because Dixon and Halperin are able to invert the RL approach and ask: If certain steps are taken today, what future financial rewards, previously unknown, might result?

They write that the program, called GIRL (for G-Learning for Inverse Reinforcement Learning), would then be able to imitate the best human investors, and thus could be offered as a robo-advising service that would allow clients to perform on par with the best performers among all investors.

The Dixon and Halperin approach makes use of relatively simple mathematical tools that are easy for computers to run, even for very large portfolios. However, more recent work taps much more ambitious AI techniques.

In research published in March, Wing Fung Chong and colleagues at Heriot-Watt University in Edinburgh, Scotland, take a novel approach to variable annuities using what's called deep learning, a form of machine learning within AI that builds much larger combinations of artificial neurons.

The challenge Chong and colleagues confront is that RL, by its nature, experiments with choices, seeing which ones lead to better or worse outcomes. For an insurer writing a variable annuity policy, such experiments could produce catastrophic losses.

Their solution is a two-stage neural network. The program first practices on simulated markets based on the insurer's historical data. Once the program can hedge as well as established hedging strategies, it's let loose to make choices in a live market, where it refines its hedging strategy with each new choice.

What results is some automation of investment choices. The further trained RL agent, write Chong and colleagues, "is indeed able to self-revise the hedging strategy."
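The two-stage structure, pretrain on simulation and then keep learning live, can be sketched schematically. The snippet below is a hypothetical illustration and not the paper's architecture: the `HedgingModel` class, the one-feature linear model, and all the numbers are invented. It shows only the training flow: fit on simulated market paths first, then continue updating one observation at a time once deployed.

```python
import random

class HedgingModel:
    """Toy linear model mapping one market feature to a hedge ratio."""
    def __init__(self, lr=0.01):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def predict(self, x):
        return self.w * x + self.b

    def update(self, x, target):
        # One stochastic-gradient step on the squared hedging error.
        err = self.predict(x) - target
        self.w -= self.lr * err * x
        self.b -= self.lr * err
        return err ** 2

# Stage 1: practice on simulated markets built from historical data.
random.seed(0)
model = HedgingModel()
for _ in range(20000):
    x = random.uniform(-1.0, 1.0)   # simulated market feature
    target = 0.6 * x + 0.1          # the simulator's "true" hedge ratio
    model.update(x, target)

# Stage 2: once simulated performance is acceptable, keep updating on
# live observations, one at a time -- self-revising the strategy.
def on_live_observation(model, x, realized_hedge):
    return model.update(x, realized_hedge)
```

In the paper's setting the model would be a deep neural network and the targets actual hedging decisions; the point here is only the two-stage flow of offline practice followed by online self-revision.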

These RL programs for annuities are still at the R&D stage for a couple of reasons. For one thing, they have yet to be developed for the broad swathe of investment considerations pertaining to the age range of clients, the range of survival possibilities, the diversity of portfolios, and the variety of contracts that a given wealth manager has to construct.


More important, it will take time to figure out how AI programs square with humans' intuitive sense of risk. Programs that use AI are the proverbial black box, which means they can be dazzling and disturbing.

That led world chess champion Garry Kasparov to write of AlphaZero that its style of chess play reflects the truth of the game, but also that the program "[prefers] positions that to my eye looked risky and aggressive," moves he wouldn't have made himself.

Hence, advisors in years to come will have to find a way to talk with clients about such programs so that the alien approach to investment, however efficient and effective, doesnt itself become a new source of uncertainty that confuses and puts off clients.

Tiernan Ray is a New York-based tech writer and editor of The Technology Letter, a free daily newsletter that features interviews with tech company CEOs and CFOs as well as tech stock news and analysis.

Write to advisor.editors@barrons.com


Mashgin Hits $1.5 Billion Valuation With AI-Powered Self-Checkout System – Forbes

Posted: at 9:01 pm

Mashgin's computer vision AI self-checkout can scan multiple packaged products as well as food items in a matter of seconds. The company's smart kiosks are helping retailers confront a national labor shortage.

Mukul Dhankar and Abhinai Srivastava are in the business of reducing long lines. As the cofounders of AI-based touchless self-checkout startup Mashgin, they're especially interested in helping busy retailers at places like airports and stadiums by scanning multiple items in a matter of seconds.

Mashgin, an acronym for "mash-up of general intelligence," builds smart kiosks that offer self-checkout in more than 1,000 locations, no barcode or scanning required. The easy-to-install countertop system includes multiple cameras that build a three-dimensional understanding of objects, regardless of item or placement of packaging. Mashgin's computer vision AI can identify packaged products as well as food on a plate, enabling customers at retail stores, stadium concessions, and cafeterias to pay and go up to 10 times faster than at a traditional cashier.

"We understand that 75% of retail is still offline," says CEO Srivastava, whose company also offers custom tablets and mobile-based order systems. "When retailers use our technology, in many cases the sales go up by a huge margin just because there are no lines anymore."

Mashgin, in tandem with its first appearance on the Forbes AI 50 list, announced a $62.5 million Series B funding round on Monday. Led by global VC firm NEA, the round notches up the company's valuation to $1.5 billion. The profitable company has raised $75 million to date and earned roughly $14 million in revenue in 2021. With the fresh flow of funding, Mashgin plans to expand its team of 20 employees and grow its business in Europe.

Launched in 2013, Mashgin was perfecting its AI technology for seven years before the pandemic accelerated retailer adoption of cashierless checkout. Founders Dhankar and Srivastava first met at the Indian Institute of Technology Delhi, where they lived in the same dorm. They graduated and followed separate career paths, but met again in Silicon Valley and began work on their startup idea.

"I remember the day Mukul created a simple demo with a table lamp and a webcam," remembers Srivastava. "That was nine years ago." While they thought creating the tech would be a six-month project, it took them five years to develop the technology in a cost-effective way.


"Mukul and I drove to a convenience store in the middle of July, stood there for two weeks, and took 20 to 40 pictures of every single item in the store," says Jack Hogan, senior vice president at Mashgin, about how they initially built a database of 20,000 images to train their algorithm. To date, 35 million transactions have taken place on Mashgin kiosks, and each transaction adds more images to the algorithm, making it stronger.

Dhankar, who came up with the idea for the quick and easy checkout system while waiting in line at a cafeteria, says the system is now more than 99% accurate. "It gets exponentially harder as you get towards the 95% goal," he says.

Nearly a decade later, Mashgin competes in an increasingly crowded market. Artificial intelligence and computer vision technology have permeated every aspect of the modern-day retail experience: from H&M's voice-activated smart mirrors that allow shoppers to take selfies to Amazon's smart grocery carts that use computer vision to scan items and pay through the cart itself.

Smart checkout technology is expected to be a roughly $400 billion business by 2025, according to Juniper Research. In 2021, Instacart acquired checkout tech platform Caper AI. Other AI startups in the same category such as Tel Aviv-based Trigo and Shopic have pocketed large amounts of VC funding amid the frenzy. This in turn has kindled concerns about how smart checkouts run the risk of displacing workers, most of whom are women.

But the founders say they are meeting the needs of a nationwide labor shortage rather than reducing jobs. According to a study by S&P Global, 6.3 million retail workers quit their jobs in the first ten months of 2021. Technologies like Mashgin in turn help employees by relieving the pressures on understaffed retailers, Srivastava says. "Many of our customers are actively trying to fill thousands of open positions. Mashgin helps their employees focus on the things you can't do with automation," he says.

Mashgin charges approximately $1,000 per machine per month, while its cost of production is lower than competitors'. The hardware is produced in California instead of being imported from other countries. "We actually use really inexpensive cameras, commodity hardware. We can deploy a site in 15 minutes, and very cheap," Srivastava says. To be more inclusive of the unbanked and areas with poor connectivity, Mashgin's checkout systems accept cash and can function without the internet.

The company's kiosks can be found in Madison Square Garden in New York and Arrowhead Stadium in Kansas City, among other major arenas. You'll also find them in major airports as well as Delek US convenience stores in Texas. The Palo Alto-based company is the self-checkout tech choice for Compass Group, the largest contract foodservice company in the world.


AI Ethics And The Law Are Dabbling With AI Disgorgement Or All-Out Destruction Of AI As A Remedy For AI Wrongdoing, Possibly Even For Misbehaving…

Posted: at 9:01 pm

Will we seek to use AI disgorgement or the destroying of AI when some AI wrongdoing is alleged, and if so, can we pull it off?

You might say that society seems nearly obsessed with indestructibility.

We relish movies and sci-fi stories that showcase superhumans that are seemingly indestructible. Those of us that are commonplace non-superhuman people dream about magically becoming indestructible. Companies market products claiming that their vaunted goods are supposedly indestructible.

The famous comedian Milton Berle used to tell a pretty funny joke about items that are allegedly indestructible: "I bought my son an indestructible toy. Yesterday he left it in the driveway. It broke my car." That's an uproarious side-splitter for those that are endlessly seeking to discover anything that could somehow be contended as being indestructible.

I bring up this rather fascinating topic to cover a matter that is rising quickly as an important consideration when it comes to the advent of Artificial Intelligence (AI). I will pose the contentious bubbling topic as a simple question that perhaps surprisingly has a quite complex answer.

In brief, is AI entirely susceptible to destruction or could there be AI that ostensibly could be asserted as being indestructible or thereabouts?

This is a vital aspect underlying recent efforts dealing with both the legal and ethical ramifications of AI. Legally, as you will see in a moment, doors are opening toward using the destruction of an AI system as a legal remedy for some pertinent unlawful or unethical wrong. Note that the field of AI Ethics is also weighing in on the considered use of destruction of AI or the comparable deletion of AI. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

Mull this whole conundrum over for a moment.

Should we be seeking to delete or destroy AI?

And, can we do it, even if we wanted to do so?

I'll go ahead and unpack this controversial topic and showcase some examples to highlight the tradeoffs involved in this mind-bending quandary.

First, let's get some language on the table to ensure we are singing the same tune. The suitably lofty way to phrase the topic consists of indicating that we are aiming to undertake AI Disgorgement. Some also use the notion of Algorithmic Disgorgement interchangeably. For the sake of discussion herein, I am going to equate the two catchphrases. Technically, you can persuasively argue that they are not one and the same. I think the discussion here can suffice by modestly blurring the difference.

That being said, you might not be readily familiar at all with the word disgorgement since it usually arises in a legal-related context. Most law dictionaries depict disgorgement as the act of giving up something due to a legal demand or compulsion.

A noted article in the Yale Journal of Law & Technology entitled "Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission" by Rebecca Slaughter, a Commissioner of the Federal Trade Commission (FTC), described the matter this way: "One innovative remedy that the FTC has recently deployed is algorithmic disgorgement. The premise is simple: when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it" (August 2021).

In that same article, the point is further made by highlighting some prior akin instances: "This novel approach was most recently deployed in the FTC's case against Everalbum in January 2021. There, the Commission alleged that the company violated its promises to consumers about the circumstances under which it would deploy facial-recognition software. As part of the settlement, the Commission required the company to delete not only the ill-gotten data but also any facial recognition models or algorithms developed with users' photos or videos. The authority to seek this type of remedy comes from the Commission's power to order relief reasonably tailored to the violation of the law. This innovative enforcement approach should send a clear message to companies engaging in illicit data collection in order to train AI models: Not worth it."

Just recently, additional uses of the disgorgement method have come to the fore. Consider this reporting in March of this year: "The Federal Trade Commission has struggled over the years to find ways to combat deceptive digital data practices using its limited set of enforcement options. Now, it's landed on one that could have a big impact on tech companies: algorithmic destruction. And as the agency gets more aggressive on tech by slowly introducing this new type of penalty, applying it in a settlement for the third time in three years could be the charm. In a March 4 settlement order, the agency demanded that WW International, formerly known as Weight Watchers, destroy the algorithms or AI models it built using personal information collected through its Kurbo healthy eating app from kids as young as 8 without parental permission" (in an article by Kate Kaye, March 14, 2022, Protocol online blog).

Lest you think that this disgorgement idea is solely a U.S. viewpoint, various assessments of the draft European Union (EU) Artificial Intelligence Act suggest that the legal language therein can be interpreted as allowing for a withdrawal of an AI system (i.e., some would say this assuredly amounts to the AI being subject to destruction, deletion, or disgorgement). See my coverage at the link here.

Much of this talk about deleting or destroying an AI system is usually centered on a particular type of AI known as Machine Learning (ML) or Deep Learning (DL). ML/DL is not the only way to craft AI. Nonetheless, the increasing availability of ML/DL and its use has created quite a stir for being both beneficial and yet also at times abysmal.

ML/DL is merely a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the old or historical data are applied to render a current decision.
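As a toy illustration of that pattern-matching loop (all data invented for this sketch): past decisions are stored as "training," and a new case simply inherits the decision of its closest historical match.

```python
# Minimal sketch of ML-style pattern matching: historical decisions are
# "learned" as stored points, and a new case is decided by its nearest match.
# The loan data below is entirely invented for illustration.

def train(history):
    """'Training' here is just memorizing (features, decision) pairs."""
    return list(history)

def predict(model, features):
    """Decide a new case by copying the decision of the closest past case."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, decision = min(model, key=lambda pair: dist(pair[0], features))
    return decision

# Historical loan decisions: (income, debt) -> approved?
history = [((50, 10), True), ((20, 30), False), ((80, 5), True), ((15, 40), False)]
model = train(history)
print(predict(model, (60, 8)))   # resembles past approvals -> True
print(predict(model, (18, 35)))  # resembles past denials   -> False
```

Real ML/DL fits parameters rather than memorizing points, but the decision dynamic is the same: patterns found in old data get applied to new data.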

AI and especially the widespread advent of ML/DL has gotten societal dander up about the ethical underpinnings of how AI might be sourly devised. You might be aware that when this latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I've discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI doing so to undercut the AI For Bad and simultaneously heralding and promoting the preferable AI For Good.

How does this tend to arise in the case of using Machine Learning?

Well, straightforwardly, if humans have historically been making patterned decisions incorporating untoward biases, the odds are that the data used to train ML/DL reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will blindly try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out (GIGO). The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.
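To make the biases-in dynamic concrete, here is a hedged sketch with invented data: a pattern-matcher trained on historically skewed decisions reproduces the skew, and a basic audit surfaces it by comparing approval rates per group.

```python
# Hedged sketch (invented data): if historical decisions held group "B" to a
# higher bar, blind pattern matching on that history reproduces the bias.
# A basic audit compares the resulting approval rate per group.

# (group, score, historically approved?)
history = [("A", 60, True), ("A", 40, True), ("B", 60, False), ("B", 80, True)]

def predict(group, score):
    # Mimic the historical pattern exactly, as blind pattern matching would:
    # approve if any similar past case in the same group was approved.
    matches = [ok for g, s, ok in history if g == group and abs(s - score) <= 10]
    return any(matches)

def approval_rate(group, scores):
    approved = sum(predict(group, s) for s in scores)
    return approved / len(scores)

scores = [40, 50, 60, 70, 80]
print(approval_rate("A", scores))  # 0.8: group A mostly approved
print(approval_rate("B", scores))  # 0.4: group B inherits the historical bias
```

The gap between the two rates is the embedded bias; no one wrote "treat B worse" anywhere in the code, which is precisely why such biases are hard to ferret out.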

This is also why the tenets of AI Ethics have been emerging as an essential cornerstone for those that are crafting, fielding, or using AI. We ought to expect AI makers to embrace AI Ethics and seek to produce Ethical AI. Likewise, society should be on the watch that any AI unleashed or promulgated into use is abiding by AI Ethics precepts.

To help illustrate the AI Ethics precepts, consider the set as stated by the Vatican in the Rome Call For AI Ethics and that I've covered in-depth at the link here. This articulates six primary AI ethics principles:

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I've covered in-depth at the link here, these are their six primary AI ethics principles:

I've also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled "The Global Landscape Of AI Ethics Guidelines" (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that only coders or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

In a moment, I will be coming back to the AI Disgorgement topic and will be pointing out that we need to separate the destruction or deletion of AI into two distinct categories: (1) sentient AI, and (2) non-sentient AI. Let's set some foundational ground on those two categories so we'll be ready to engage further in the AI Disgorgement matter.

Please be abundantly aware that there isn't any AI today that is sentient.

We don't have sentient AI. We don't know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here). To those of you that are seriously immersed in the AI field, none of this foregoing pronouncement is surprising or raises any eyebrows. Meanwhile, there are outsized headlines and excessive embellishment that might confound people into assuming that we either do have sentient AI or that we are on the looming cusp of having sentient AI any coming day.

Please realize that today's AI is not able to think in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn't any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

All told, we are today utilizing non-sentient AI and someday we might have sentient AI (but that is purely speculative). Both kinds of AI are obviously of concern for AI Ethics and we need to be aiming toward Ethical AI no matter how it is constituted.

In the case of the AI Disgorgement associated with sentient AI, we can wildly play a guessing game of nearly infinite varieties. Maybe sentient AI will cognitively be like humans and exhibit similar mental capacities. Or we could postulate that sentient AI will be superhuman and go beyond our forms of thinking. The ultimate in sentient AI would seem to be super-intelligence, something that might be so smart and cunning that we cannot today even conceive of the immense thinking prowess. Some suggest that our minds will be paltry in comparison. This super-duper AI will run rings around us in a manner comparable to how we today can outthink ants or caterpillars.

If there turns out to be AI that is sentient, we are possibly going to be willing to anoint such AI with a form of legal personhood, see my analysis at the link here. The concept is that we will provide AI with a semblance of human rights. Maybe not verbatim. Maybe a special set of rights. Who knows?

In any case, you could conjure up the seemingly provocative notion that we cannot just summarily wipe out or destroy sentient AI, even if we can technologically do so. Sentient AI might be construed as a veritable living organism in terms of cognitive capacity and innately having a right to live (depending upon the definition of being alive). There might ultimately be a stipulated legal process involved. This includes that we cannot necessarily exercise the death penalty upon a sentient AI (whoa, just wait until we as a society get embroiled in that kind of a societal debate).

I doubt that we would be willing to make the same AI Ethical posture for the non-sentient AI. Though some are trying to contend that today's non-sentient AI ought to be classified as a variant associated with legal personhood, this seems to be a steeply uphill battle. Can a piece of contemporary software that is not sentient be granted legal rights on par with humans or even animals? It sure seems like a stretch (but there are advocates fervently aiming for this, see my coverage at the link here).

Here's what this all implies.

Assuming we don't grant today's non-sentient AI the regal legal anointing of personhood, the choice of deleting or destroying such non-sentient AI would decidedly not be reasonably equated to the destruction of a living organism. The wiping out of non-sentient AI is nothing more than the same as deleting that dating app from your smartphone or erasing those excess pictures of your trip to a wonderland forest from your laptop. You can delete or destroy those bits of data and software without having a guilty conscience and without having overstepped the law in terms of having harmed a sentient living creature.

You might assume that this pronouncement summarily settles the AI Disgorgement conundrum as it relates to non-sentient AI.

Sorry, the world is never as straightforward as it might initially seem.

Get ready for a twist.

Suppose that we created a non-sentient AI that was leading us towards being able to cure cancer. The company that had developed the AI did something else that the firm should not have done and has gotten into serious legal trouble with various governmental authorities. As part of a remedy imposed upon the firm, the company is compelled to completely delete the AI, including all data and documentation associated with the AI.

The government took that company to task and assured that those wrongdoers can no longer profit from the AI that they had devised. Unfortunately, in the same breath, we have perhaps shot ourselves in the foot because the AI had capabilities that were leading us toward curing cancer. We ended up tossing out the baby with the bathwater, as it were.

The point is that we could have a variety of bona fide reasons to keep AI intact. Rather than deleting it or scrambling it, we might wish to ensure that the AI remains whole. The AI is going to be allowed to perform some of its actions in a limited manner. We want to leverage whatever AI can do for us.

A handy rule would then seem to be that the notion of AI Disgorgement should be predicated on a semblance of context and sensibility as to when this form of a remedy is suitably applicable. Sometimes it might be fully applicable, while in other instances not so. You could also try to find ways to split the apple, perhaps keeping some part of the AI that was deemed as beneficial while seeking to have destruction or deletion for the portions that are considered within the remedy deriving scope.

Of course, doing a piecemeal deletion or destruction is not a piece of cake either. It could be that the part you want to keep is integrally woven into the part you want to have destroyed. Trying to separate the two could be problematic. In the end, you might have to abandon the deletion and simply agree to allow the whole to remain, or you might have to toss in the towel and destroy the whole kit and caboodle.

It all depends.

Time to tackle another hefty consideration.

We've so far covered the issues underpinning the basis for wanting to bring forth an AI Disgorgement. Meanwhile, we have just now sneaked into that discussion the next important element to consider, namely whether deleting or destroying AI is altogether always feasible.

In the preceding dialogue, we kind of assumed at face value that we can destroy or delete AI if we wanted to do so. The one twist that was mentioned involved trying to separate out the parts of an AI system that we wanted to keep intact versus the parts that we wanted to delete or destroy. That can be hard to do. Even if it is hard to accomplish, we would still be on relatively cogent turf to claim that it inevitably could be technologically attained (we might need to rebuild parts that we destroyed, putting those back into place to support the other part that we didn't want to destroy).

Slightly change the perspective and ruminate on whether we really always can in fact destroy or delete AI if we wish to do so. Put aside the AI Ethics question and focus exclusively on the technological question of destructive feasibility (I am loath to utter the words put aside the AI Ethics question since the AI Ethics question is always a vital and inseparable consideration for AI, but I hope you realize that I am using this as a figure of speech for purposes of directing attention only, thanks).

We'll make this into two lines of reasoning:

I would submit that the answer to both of those questions is a qualified no (I'd pretty much be on the rather safe technological ground for saying no since there is always a potential chance that we could not destroy or delete the AI, as I will elaborate on next). In essence, a lot of the time the answer would probably be yes in the case of non-sentient AI, while in the case of the sentient AI the answer is maybe, but nobody can say either way for sure due to not knowing what the sentient AI is going to be or even if it will arise.

In the case of sentient AI, there is a myriad of fanciful theories that can be postulated.

If the sentient AI is superhuman or super-intelligent, you can try to argue that the AI would outsmart us humans and not allow itself to be wiped out. Presumably, no matter what we try, this outsized AI will always be a step ahead of us. We might even try to leverage some human-friendly instance of this sentient super-duper AI to destroy another sentient AI that we are otherwise unable to delete via our own methods. Be wary though that the helpful AI later turns evildoer and we are left at the mercy of this AI that we are hence unable to get rid of.

For those of you that prefer a happy face version of the futuristic sentient AI, maybe we theorize that any sentient AI would be willing to get destroyed and want to actively do so if humans wished it so. This more understanding and sympathetic sentient AI would be able to realize when it is time to go. Rather than fighting its own destruction, it would welcome being destroyed when the time comes for such action. Perhaps the AI does the work for us and opts to self-destruct.

The conjecture about sentient AI can roam in whatever direction you dream of. There aren't particularly any rules about what is possible. One supposes that the realities of physics and other natural constraints would come to bear, though maybe a super-intelligent sentient AI knows of ways to overcome everything we take for granted as reality.

Speaking of reality, lets shift our attention to the non-sentient AI of today.

You might be tempted to believe that we can always without fail opt to destroy or delete any of todays AI. Envision that a company has devised an AI system that governmental authorities order be disgorged. The firm is legally required to destroy or delete the AI system.

Easy-peasy, it seems, just press a delete button and poof, the AI system is no longer around. We do this with no longer needed apps and no longer wanted data files on our laptops and smartphones. No special computer techie skills are needed. The company can comply with the regulatory order in minutes.

We can walk through the reasons why this presumed ease of AI destruction or deletion is not as straightforward as you might initially assume.

First, a notable question surrounds the exact scope of what is meant when you say that an AI system is to be destroyed or deleted. One facet is the programming code that comprises the AI. Another facet would be any data associated with the AI.

The developers of the AI might have generated many versions of the AI while crafting the AI. Let's simplify things and say that there is a final version of the code that is the one running and has become the target for being disgorged. Okay, the company deletes that final version. The deed is done!

But, turns out that those earlier versions are all still sitting around. It might be relatively child's play to essentially resurrect the now-deleted AI by merely using one of those earlier versions. You take an earlier version, make modifications to bring it up to par, and you are back in business.

An obvious way to try and prevent this kind of deletion skirting would be to stipulate that any and all prior versions of the AI must be destroyed. This would seem to force the company into seriously finding any older versions and making sure those get deleted too.

One twist is that suppose the AI contained a significant portion of widely available open-source code. The developers had originally decided that to build the AI they would not start from scratch. Instead, they grabbed up a ton of open-source code and used it as the backbone for their AI. They do not own the open-source code. They do not control the open-source code. They only copied it into their AI creation.

Now we have a bit of a problem.

The company complies with the order to destroy their AI. They delete their copy of the code and all versions of it that they possess. They delete all of their internal documentation. Meanwhile, they are not able to get rid of the open-source that comprises (let's say) the bulk of their AI system since it is not something they legally own and have no direct control over. The firm seems to have done what it could do.

Would you say that the offending AI was in fact destroyed or deleted?

The firm would likely insist that they did so. The governing authority would seem to have a hard time contending otherwise.

They might be able to quickly resurrect the AI by just going out to grab the widely available open-source and adding the pieces by doing some programming based on their knowledge of what the added portions consisted of. They dont use any of the prior offending code that they had fully deleted. They dont use any of the documentation that they had deleted. Voila, they have a new AI system that they would argue is not the AI that they had been ordered to disgorge.

I trust that you can see how these kinds of cat and mouse games can be readily played.

There are lots more twists.

Suppose the AI that is to be disgorged was based on the use of Machine Learning. The ML could be a program that the company developed on its own, but more likely these days the ML is an algorithm or model that the firm selected from an online library or collection (there are lots and lots of these readily available).

The firm deletes the instance of the ML that they downloaded and are using. The exact same ML algorithm or model is still sitting in a publicly available online library and potentially accessible for all comers that want to use it. The governmental authority might have no means to restrict or cause a disgorgement of that online library.

Thats just the start of the difficulties involved in destroying or deleting AI, including for example the use of Machine Learning. As mentioned earlier, ML and DL typically entail feeding data into the ML/DL. If the firm still has the data that they previously used, they could download another copy of the ML/DL algorithm or model from the online library and reconstitute the AI via feeding the data once again into what is essentially the same ML/DL that they had used before.

You might astutely clamor that the data the firm had been using needs to also be encompassed by the disgorgement order. Sure, lets assume that this is so.

If the data is entirely within the confines of the firm, they presumably would be able to destroy or delete the data. Problem solved, one would say. But, suppose the data was based on various external sources, all of which are outside the scope of the destruction order since they are not owned by and not controlled by the offending firm.

The crux is that you could from other external sources grab copies of the data, grab a copy of the ML/DL algorithm, and reconstitute the AI system. In some cases, this might be expensive to undertake and could require gobs of time, while in other instances it might be doable in short order. It all depends on various factors such as how much the data needs to be modified or transformed, and the same goes for the parameter setting and training of the ML/DL.

We also need to consider what the meaning of destroying or deleting consists of.

You undoubtedly know that when you delete a file or app from your computer, the chances are that the electronically stored item is not yet fully deleted. Typically, the operating system updates a setting indicating that the file or app is to be construed as having been deleted. This is a convenience if you want to bring back the file or app. The operating system can merely flip the flag to indicate that the once seemingly deleted file or app is now active again.

Even if you have the operating system perform a more determined deletion, there is a likelihood that the file or app still sits somewhere. It might be on a backup storage device. It might be archived. If you are using a cloud-based online service, copies are likely residing there too. Not only would you need to find all of those shadow copies, but you would also need to perform various specialized cybersecurity erasure actions to try and ensure that the bits and bytes of those files and apps are completely written over and in a sense truly deleted or destroyed.
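As a rough sketch of what a more determined deletion involves, the following assumes a plain local file and simply overwrites its bytes before unlinking it. On SSDs, journaling filesystems, and cloud services this is not sufficient, and backups or shadow copies survive regardless; the filename here is invented.

```python
# Best-effort "true" delete: overwrite a file's bytes with random data
# before unlinking, so the on-disk content is not trivially recoverable.
# NOT sufficient on SSDs (wear leveling), journaling filesystems, or cloud
# storage, and it does nothing about backups and shadow copies.
import os

def overwrite_and_delete(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # overwrite with random bytes
            f.flush()
            os.fsync(f.fileno())       # push the overwrite out to disk
    os.remove(path)

# Hypothetical model file for demonstration.
with open("model.bin", "wb") as f:
    f.write(b"secret model weights")
overwrite_and_delete("model.bin")
print(os.path.exists("model.bin"))  # False
```

The gap between "the file is gone" and "the bytes are gone everywhere" is exactly the shadow-copy problem the article describes.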

Note that I just mentioned the notion of a shadow.

We have at least three types of shadows to be thinking about when it comes to AI disgorgement:

1) Shadow copies of the AI

2) Shadow algorithms associated with the AI

3) Shadow data associated with the AI

Imagine that an order for an AI disgorgement instructs a company to proceed with destroying or deleting the data associated with the AI, but the firm can keep around the algorithm (perhaps allowing this if the algorithm is seemingly nothing more than one that you can find in any online ML library anyway).

Turns out that the algorithm itself essentially can be said to have its own kind of data, such as particular settings that underpin the algorithm. The effort to train the ML will usually entail having the ML figure out what parameter settings need to be calibrated. If you are only ordered to get rid of the training dataset per se, those other data-related parameter settings are likely still going to remain. This suggests that the AI can be somewhat readily reconstituted, or you could even argue that the AI wasn't deleted at all and you simply got rid of the earlier used training data that perhaps you no longer care about anyway. There is also a high chance that a form of imprint remains from the training data, which I've discussed at the link here.
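A tiny worked example of that point, using invented numbers: once a model is fitted, its parameters alone carry the learned pattern, so deleting the training set does not delete what was absorbed from it.

```python
# Illustration: the learned parameters stand in for the training data.
# A least-squares line is fitted to invented points (roughly y = 2x + 1),
# the data is then deleted, and the two surviving parameters still predict.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

del xs, ys  # "disgorge" the training data

# The pattern imprinted by the data lives on in just these two numbers.
print(slope, intercept)
print(slope * 10 + intercept)  # prediction for a new input, x = 10
```

Deleting only the dataset leaves the fitted slope and intercept intact, which is why a disgorgement order scoped to "the data" may not actually remove what the data taught the model.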

Getting rid of the training data might also be challenging if the data comes from a variety of third-party sources. Sure, you might be able to force the company to delete their in-house instance of the compiled data, but if the data exists at those other sources beyond their scope, the same data could be likely reassembled. This might be costly or might be inexpensive to do, depending upon the circumstances.

Throughout this discussion, we have focused on the notion of having a particular company be the target for undertaking an AI disgorgement. This might be satisfying and serve as an appropriate remedy associated with that company. On the other hand, this is not necessarily going to somehow eradicate or destroy the AI as it might exist or be reconstituted beyond the scope of the targeted company.

The AI might be copied to zillions of other online sites that the company has no means to access and cannot force a deletion to take place. The AI might be rebuilt from scratch by others that are aware of how the AI works. You could even have former employees of the firm that leave the company and opt to reuse their AI development skills to construct the essentially same AI elsewhere, which would be argued by them as based on their knowledge and skills, thus not being an infringing or subjected copy of the AI disgorgement order.

A perhaps apt analogy to the AI disgorgement troubles might be the advent of computer viruses.

The chances of hunting down and deleting all copies of a computer virus are generally slim, especially due to the legal questions of where the virus might be residing (such as across international borders) and the technological trickery of the computer virus trying to hide (I've discussed the emergence of AI-based polymorphic computer viruses that are electronic self-adapting shape-shifters).

Furthermore, compounding the challenges, there is always the presumed capability of constructing the same or roughly equivalent computer virus by those that are well-versed in the design and crafting of computer viruses all told.

The rest is here:

AI Ethics And The Law Are Dabbling With AI Disgorgement Or All-Out Destruction Of AI As A Remedy For AI Wrongdoing, Possibly Even For Misbehaving...


AI: How the Rise of the Chatbot is Powering a Futuristic Present – UC Today

Posted: at 9:01 pm

It's the much sought-after double whammy that packs a big punch: improved customer service AND reduced cost.

Organisations obsess over it, and rightly so.

And, of all the components which enable organisations to do whatever it is they do, the communications tech stack presents one of the biggest opportunities to achieve big on both those fronts.

After all, it is the myriad of communication channels that directly connect an organisation to the people it serves.

Do that better, faster, and slicker, and the customer satisfaction scores begin to soar.

Do it more cost-effectively and, well, you get the point.

So, how?

In today's always-on world, it's technology that is providing the answer.

Artificial Intelligence was, not that long ago, a futuristic concept yet to mature sufficiently to make a material impact.

Now, chatbots and super-clever data analytics software have the readily-accessible power to deliver that double whammy easily and affordably.

"The market hasn't fully embraced AI yet, but it is getting smarter all the time and uptake is increasing rapidly, which means now is the right time to begin leveraging the opportunities it presents," says Dennis Menard, Application Design Specialist at global enterprise-class contact center and IVR provider ComputerTalk, whose insight and expertise are helping a fast-growing number of organisations do just that.

He breaks it down into three ways in which AI can deliver benefit: firstly, improved customer experience; secondly, enhanced internal efficiency; thirdly, post-call analytics capable of supporting the re-design of productivity-boosting processes.

"Today's modern customer is completely at ease accessing information via an AI-powered chatbot, using either the written or spoken word, and a system which leverages Natural Language Understanding means it can be almost as intuitive as talking to a real person," says Menard.

"Voice models can be tailored to suit different languages, regional dialects, or accents, and organisations can even use the voice of an employee if they wish. They simply record a set of key words, and the AI is able to use them to compose fully coherent responses to customer questions."

The same functionality can be applied internally too: providing human agents with a chatbot, almost like a personal assistant, to help them find internal information or answer customer questions in real time as a call is in progress.

And, when a call has been completed, AI is able to capture a recording and a transcription and analyse the content to determine customer sentiment.
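As an illustration of the kind of post-call analysis described above, a toy lexicon-based sentiment scorer might look like the following sketch. The word lists and scoring rule are invented for the example; a production system such as ComputerTalk's would use trained language models rather than keyword counts.

```python
# Toy sentiment scorer for call transcripts. Illustrative only: the
# lexicons and the scoring formula are invented for this sketch.
POSITIVE = {"great", "thanks", "helpful", "resolved", "perfect"}
NEGATIVE = {"frustrated", "broken", "cancel", "waiting", "unacceptable"}

def sentiment_score(transcript: str) -> float:
    """Return a score in [-1, 1]; 0.0 means neutral or no signal."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Thanks, that was helpful and my issue is resolved"))  # 1.0
print(sentiment_score("I am frustrated, still waiting and ready to cancel"))  # -1.0
```

A real pipeline would feed the transcription produced at call completion through such a scorer (or a far richer model) and aggregate the scores across agents, queues, or time periods.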

Taken together, all of those benefits enable an organisation to reduce call-handler headcount, and therefore cost; provide a faster, better customer experience; and understand the quality of its customer interactions and make improvements where it can.

In addition, adding AI to the communication stack enables an organisation to be open 24/7: no need for a complex and expensive human rota of call handlers, simply a route into and through an organisation at whatever time of the day or night a customer prefers.

No surprises, then, that adoption is on the rise.

There is lots of noise in the AI space and a big push by many providers to grab market share as the pace of maturity picks up.

Microsoft, for example, has a whole framework for helping its users build their own bots of varying levels of sophistication.

However as is always the case choosing a truly expert partner is the most effective way of getting it right first time and fast.

In ComputerTalk's case, it not only provides native AI-powered communication software but also a seamless gateway for organisations to integrate their own bot functionality directly into its slick and powerful ice contact center platform.

ice, which cleverly bridges the gap between legacy infrastructure and the modern brilliance of the Microsoft stack, uses Microsoft Teams Direct Routing to send calls to Teams-based human agents using a managed SBC network.

It enables agents to handle all interactions (voice, chat, email, SMS, and social media) within a single interface.

When those agents are replaced by Natural Language Understanding chatbots, organisations' digital transformations can really accelerate.

"That kind of deployment is capable of diverting 80 per cent of the more routine interactions away from human agents and through a text-to-speech or voice-enabled chatbot instead," says Menard.

"That leaves those agents free to deal with the more complex interactions which still require human input, such as providing highly technical information or completing a sales process. Their number can be significantly reduced, and they become subject matter experts as opposed to processors of low-level enquiries and interactions."
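The split Menard describes, with a bot fielding routine interactions and humans taking the rest, can be sketched as a simple routing rule. The intent names and confidence threshold below are invented for illustration; real contact-center platforms expose far richer routing configuration.

```python
# Illustrative bot-vs-human routing rule. Intent names and the 0.8
# confidence threshold are assumptions made up for this sketch.
ROUTINE_INTENTS = {"opening_hours", "order_status", "reset_password"}

def route(intent: str, confidence: float, threshold: float = 0.8) -> str:
    """Send confidently recognised routine intents to the bot;
    escalate everything else to a human agent."""
    if intent in ROUTINE_INTENTS and confidence >= threshold:
        return "bot"
    return "human"

print(route("order_status", 0.93))    # bot
print(route("technical_spec", 0.95))  # human: complex intent
print(route("order_status", 0.41))    # human: low NLU confidence
```

The threshold is the key tuning knob: set too low, the bot mishandles queries it misunderstood; set too high, routine traffic needlessly reaches the human queue.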

Those agents feel more empowered and their morale is boosted; and organisations deliver a better customer experience at less cost.

Never mind double: that sounds like a triple whammy.

To learn more about how ComputerTalk can help your business grow and succeed, visit http://www.computer-talk.com


Accelerating the Development of Next-Generation HPC/AI System Architectures with UCIe-Compliant Optical I/O – HPCwire

Posted: at 9:01 pm

As the HPC/AI community explores new system architectures to support the growing demands of the exascale era and beyond, optical I/O (or OIO) is increasingly being recognized as an imperative to change the performance and power trajectories of system designs. Optical I/O enables compute, memory, and networking ASICs to communicate with dramatically increased bandwidth, at a lower latency, over longer distances, and at a fraction of the power of existing electrical I/O solutions. The technology is also foundational to enabling emerging heterogeneous compute systems, disaggregated/pooled architectures, and unified memory designs critical to accelerating future datacenter innovation.

The introduction of the UCIe standard, the first specification to include an interface built from the ground up to be compatible with optical links, is a critical step in creating an ecosystem to accelerate the development of the next-generation HPC and AI system architectures needed for exascale and beyond.

Large compute systems typically use an architecture where compute and memory resources are tightly coupled to maximize performance. Components such as CPUs, GPUs, and memory must be placed closely together when connected electrically via copper interconnects. This hardware density creates cooling and energy issues, while persistent bandwidth bottlenecks limit inter-processor and memory performance. These issues are exacerbated in compute-intensive applications like HPC, AI, and large-scale data analytics.

Today, new disaggregated system architectures with optical interconnect are being investigated to decouple a server's elements (processors, memory, accelerators, and storage), enabling flexible and dynamic resource allocation, or composability, to meet the needs of each particular workload.

"Disaggregated architectures require communication between memory and processors over longer distances. Pooled resources mean memory, GPUs, and CPUs are each on their own shelves for flexibility in mapping specific resources to specific workloads. Optical interconnects allow off-chip signals to traverse long distances," explained Nhat Nguyen, Ayar Labs senior director of solutions architecture.

Universal Chiplet Interconnect Express (UCIe) is a new die-to-die interconnect standard for high-bandwidth, low-latency, power-efficient, and cost-effective connectivity between chiplets. UCIe was developed because chip designs are running up against the die reticle limit.

Intel Corporation originated UCIe 1.0, and ten members ratified the specification, including AMD, Arm, ASE Group, Google Cloud, Intel, Meta, Microsoft, Qualcomm, Samsung, and TSMC. Current standards that compete with UCIe include OpenHBI, Bunch of Wires (BoW), and OIF XSR.

UCIe provides several benefits over other standards.

According to Uday Poosarla, head of product at Ayar Labs, UCIe has significant advantages over other standards, including scalability, interoperability, and flexibility. UCIe is the first standard to incorporate optics into chip-to-chip interconnects. The CW-WDM MSA, another new standard, provides a great framework for the optical connections, complementing the UCIe standard.

Ayar Labs is focused on bringing optical I/O into the datacenter to remove the last mile of copper interconnect and solve the bandwidth density and scaling problem. Ayar Labs was the first to introduce an optical chiplet using Advanced Interface Bus (AIB) as the interface. UCIe is an evolution of the AIB interface, so Ayar Labs' current AIB-based optical chiplet is compatible with UCIe standards. The Ayar Labs solution includes the TeraPHY in-package OIO chiplet and SuperNova laser light source, which can be incorporated into a UCIe-compliant chip package. Each TeraPHY chiplet delivers up to two terabits per second of I/O performance, or the equivalent of 64 PCIe Gen5 lanes.
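The quoted equivalence can be sanity-checked with quick arithmetic: a PCIe Gen5 lane signals at 32 GT/s, roughly 32 Gb/s raw before encoding overhead, so 64 lanes come to about 2 Tb/s.

```python
# Arithmetic check of the "2 Tb/s = 64 PCIe Gen5 lanes" comparison.
# A Gen5 lane signals at 32 GT/s (about 32 Gb/s raw; 128b/130b
# encoding shaves off a little under 2 per cent in practice).
GEN5_GBPS_PER_LANE = 32
LANES = 64

total_tbps = LANES * GEN5_GBPS_PER_LANE / 1000
print(total_tbps)  # 2.048, i.e. about two terabits per second
```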

In addition to being a contributing member of UCIe, Ayar Labs is also a founding member of the CW-WDM MSA, a consortium dedicated to defining and promoting specifications for multi-wavelength advanced integrated optics. This MSA specification complements UCIe and may help foster cohesion around light sources for integrated optics in the chiplet ecosystem.

Dramatically increased bandwidth and lower latency in chip-to-chip connectivity will be critical to enabling future HPC and AI systems. Electrical connectivity is delivering diminishing returns as we reach the physical limitations of copper and electrical signaling, ushering in a new era of optical connectivity. The new UCIe standard will allow customizable SoC packages that include optical links. Ayar Labs' TeraPHY optical I/O chiplet, using an Advanced Interface Bus (AIB) interface, is the first optical interconnect to be UCIe compatible and is poised to deliver on the promise of disaggregated system architectures for the post-exascale era.

"Most of the parallel interface efforts are marginally different on performance. The key risk is fragmentation of the ecosystem. UCIe solves this by standardizing key elements and enabling a chiplet marketplace. Chiplet providers will benefit from an ecosystem rather than be forced to design many different SKUs for different host SoCs, which is obviously expensive. An analogy might clarify where UCIe fits with other standards: PCIe is to the motherboard as UCIe is to the socket," summarized Mark Wade, Ayar Labs senior vice president of engineering, chief technology officer, and co-founder.

Learn more about Ayar Labs and our UCIe-compatible optical I/O solution.

