The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: May 4, 2021
Beshear stresses convenience of getting COVID-19 vaccine – Associated Press
Posted: May 4, 2021 at 8:12 pm
FRANKFORT, Ky. (AP) - Getting a COVID-19 shot can be as easy as walking into some vaccination sites without an appointment, Kentucky's governor said Tuesday in his latest plea to boost inoculation rates.
More than 1.8 million Kentuckians have received at least one dose of vaccine, but the pace needs to pick up, especially among younger people, Gov. Andy Beshear said.
"There are vaccination appointments available every week, at many different times throughout the day," he said. "At some sites, you don't even need an appointment. Get it done, for yourself and for your community, so we can reach our goal and relax more restrictions."
Younger Kentuckians have lagged behind in getting vaccinated.
Data released Monday showed 27% of Kentucky residents between ages 18-29 had gotten the shots. The vaccination rate was 37% among Kentuckians ages 30-39 and 43% in the 40-49 age group, the data showed. Nearly 80% of people ages 65 and older were vaccinated.
Once 2.5 million Kentuckians receive at least their first COVID-19 shot, Beshear has pledged to lift capacity and physical distancing restrictions for nearly all businesses, venues and events catering to 1,000 or fewer patrons. The governor indicated Monday that he will consider relaxing more coronavirus-related restrictions before the state reaches that vaccination target.
The state's inoculation rate slowed in recent weeks, and the Democratic governor has repeatedly pleaded with Kentuckians to take the shots to defeat the pandemic.
U.S. Senate Minority Leader Mitch McConnell, R-Ky., made another pitch Monday for Kentuckians to get vaccinated, saying: "I want to encourage everybody: finish the job."
Anyone 16 or older is eligible to receive the vaccine in Kentucky.
Among Kentucky's 120 counties, the top five vaccination rates are in Woodford, Franklin, Fayette, Scott and Jefferson counties, the state said. The lowest vaccination rates are in Christian, Spencer, Ballard, McCreary and Lewis counties, it said.
The state reported 776 new coronavirus cases Tuesday and seven more virus-related deaths. At least 6,532 Kentuckians have died from COVID-19.
More than 430 virus patients are hospitalized in Kentucky, including 102 in intensive care units, the state said. The statewide rate of positive cases was 3.47%.
___
Find AP's full coverage of the coronavirus pandemic at https://apnews.com/hub/coronavirus-pandemic.
Read the original:
Beshear stresses convenience of getting COVID-19 vaccine - Associated Press
Posted in Covid-19
Why treating Covid-19 with drugs is harder than you think – BBC News
Posted: at 8:12 pm
Unlike broad-spectrum antibiotics, which can be used to treat a wide range of bacterial infections, drugs that work against one type of virus rarely work at treating other viruses. For example, remdesivir, originally developed for treating hepatitis C, was at one point suggested as a treatment for Covid-19, but clinical trials have shown that it has only a limited effect against this coronavirus.
The reason there are few effective broad-spectrum antivirals is that viruses are much more diverse than bacteria, including in how they store their genetic information (some in the form of DNA and some as RNA). Unlike bacteria, viruses have fewer of their own protein building blocks that can be targeted with drugs.
For a drug to work, it has to reach its target. This is particularly difficult with viruses because they replicate inside human cells by hijacking our cellular machinery. The drug needs to get inside these infected cells and act on processes that are essential for the normal functioning of the human body. Unsurprisingly, this often results in collateral damage to human cells, experienced as side-effects.
Targeting viruses outside cells to stop them from gaining a foothold before they can replicate is possible, but is also difficult because of the nature of the virus shell. The shell is extraordinarily robust, resisting the negative effects of the environment on the way to its host. Only when the virus reaches its target does its shell decompose or eject its contents, which contain its genetic information.
This process may be a weak spot in the virus's lifecycle, but the conditions that control the release are very specific. While drugs targeting the virus shell sound appealing, some may still be toxic to humans.
Despite these difficulties, drugs that treat viruses such as influenza and HIV have been developed. Some of these drugs target the processes of viral replication and the viral shell assembly. Promising drug targets of coronaviruses have been identified as well. But developing new drugs takes a long time, and viruses mutate quickly. So even when a drug is developed, the ever-evolving virus might soon develop resistance towards it.
More:
Why treating Covid-19 with drugs is harder than you think - BBC News
Posted in Covid-19
May is Mental Health Awareness Month; impacts of COVID-19 on mental health (BOCC) – Larimer County
Posted: at 8:12 pm
Recognizing May as National Mental Health Awareness Month is particularly important this year, as the impacts of COVID-19 on mental health linger. The observance raises awareness of mental health and its effect on the well-being of individuals, families, and communities.
The Board of Larimer County Commissioners today proclaimed May as National Mental Health Awareness Month in Larimer County.
"It is critical that we keep Mental Health at the forefront of our conversations. Talking about it is one of the best ways to reduce the stigma around it and encourage people to seek help before a crisis hits,"said Larimer County Director of Behavioral Health Services Laurie Stolen.
A study conducted by Mental Health America in 2020 highlights the connection between the pandemic and mental health. Key findings from the study show that the number of people looking for help with anxiety and depression has skyrocketed, more people are reporting frequent thoughts of suicide and self-harm, and young people are struggling the most with their mental health.
Mental Health America also collects state-by-state mental health data to create state rankings. The state rankings for 2020 show that 20% of Coloradans live with mental illness and that Colorado ranks 43rd out of 50 states with a higher prevalence of mental health issues and lower rates of access to care for adults.
"We know that this issue touches all of us in our lives, and it is important for us to address this and reduce the stigma and see what we can do to raise awareness with our youth," said Larimer County Commissioner Jody Shadduck-McNally.
Larimer County continues to advance mental health initiatives to support the mental and behavioral wellbeing of county residents. In June, Behavioral Health Services will announce its annual behavioral health grant funding available to area organizations through the Impact Fund Grant Program. Meanwhile, work continues on the new behavioral health facility scheduled to open in early 2023, expanding the availability of acute behavioral health services to county residents.
Mental Health Awareness Month has been observed in May in the United States since 1949 to educate communities about psychological disorders while reducing the stigma around mental health.
Do you need someone to talk to? Call the Connections Emotional Support Line: 1-970-221-5551. Support is available 24 hours a day, 7 days a week.
Are you or someone you know experiencing a mental health crisis? Call 1.844.493.TALK or text TALK to 38255.
Continue reading here:
May is Mental Health Awareness Month; impacts of COVID-19 on mental health (BOCC) - Larimer County
Posted in Covid-19
Next Generation of Covid-19 Vaccines Could Be Pill or Spray – The Wall Street Journal
Posted: at 8:12 pm
The next generation of Covid-19 vaccines in development could come as a pill or a nasal spray and be easier to store and transport than the current handful of shots that form the backbone of the world-wide vaccination effort.
These newer vaccines, from U.S. government labs and companies including Sanofi SA, Altimmune Inc. and Gritstone Oncology Inc., also have the potential to provide longer-lasting immune responses and be more potent against newer and multiple viral variants, possibly helping to head off future pandemics, the companies say.
Vaccines currently authorized for use in the U.S. from Pfizer Inc. and its partner BioNTech SE, as well as Moderna Inc., must be transported and stored at low temperatures and require two doses administered weeks apart.
New vaccines could constitute some improvement over those limitations and more easily accommodate vaccination efforts in rural areas, said Gregory Poland, professor and vaccine researcher at the Mayo Clinic in Rochester, Minn. "You will see second-generation, third-generation vaccines," he said.
There are 277 Covid-19 vaccines in development globally, of which 93 have entered human testing, according to the World Health Organization. Most of the vaccines in clinical testing are injected, but there are two oral formulations and seven nasal-spray formulations.
Read the original here:
Next Generation of Covid-19 Vaccines Could Be Pill or Spray - The Wall Street Journal
Posted in Covid-19
Bentz urges Gov. Brown to lift renewed COVID-19 restrictions – KTVZ
Posted: at 8:12 pm
WASHINGTON (KTVZ) -- Rep. Cliff Bentz, R-Ore., issued a statement Monday calling on Governor Kate Brown to end the recently reinstituted COVID-19 restrictions.
Here's the statement, in full:
"In the normal course, as a United States Representative, I would not enter debates regarding state-level politics. However, the Governors most recent response to the COVID-19 situation is not only historically broad in impact, but an action that causes far more serious damage than benefit.Additionally, in recent days, the Oregon Health Authority actually invited public input from Oregons congressional delegation.
"Governor Kate Browns decision to again lock down huge parts of Oregon has caused incredible frustration for many in my district, and I share their frustration. In a recent letter by Governor Brown, she commended Oregonians for helping make our state among the lowest COVID-19 case rates, hospitalizations, and deaths in the nation, to which she gave creditinlarge part to the actions of Oregonians to take seriously the health and safety measures.
"And indeed,today, nearly 70 percent of Oregons older population is fully vaccinated and many communities across our state were well on their way to safely returning to some sort of normal. However, Governor Brown has now done completely the opposite of many other states: imposing yet another lockdown.
"Sadly, Governor Browns proposed $20 million safety net for those harmed by this most recent lockdown is woefully inadequate for those Oregon businesses struggling to survive. I believe Oregon must reopen and stay open.
"The reinstatement of the Governors shutdown solution will do more harm than good to our loved ones, communities, and our state especially as risk drops with an ever increasing number of Oregonians being vaccinated.I am calling upon Governor Brown to reverse this unfortunate decision and focus her attention instead on vaccinations and making sure that COVID aid sent to Oregon by the Federal Government be quickly allocated to those in need."
Meanwhile, Sandy Mayor Stan Pulliam, who's exploring a possible Republican run for governor, says a lawsuit is being filed in federal court against Gov. Kate Brown, on behalf of several businesses and a union.
Pulliam, who says it's time to end the restrictions, said the suit will challenge Brown's authority to extend the state of emergency by executive order.
Continue reading here:
Bentz urges Gov. Brown to lift renewed COVID-19 restrictions - KTVZ
Posted in Covid-19
Why American individualism is perfectly suited to the doomsday prepper movement – KCRW
Posted: at 8:11 pm
When the pandemic hit last year, it didn't take long for grocery stores to have their shelves wiped clean. Long lines sprang up as shoppers scrambled to stock up on canned goods, water, and toilet paper. For some, however, preparing for the apocalypse is a serious business. Doomsday preppers are more than just fringe survivalists. High-end luxury apocalypse shelters are increasingly popular among the mega-rich, and, fuelled by times of chaos and uncertainty, end-times preparedness has become a multimillion-dollar industry.
In Notes from an Apocalypse: A Personal Journey to the End of the World and Back, Irish author Mark O'Connell explores survivalist destinations around the globe and examines why people feel the need to go to extreme lengths preparing for The End of Time. Though survivalism has global appeal, he says that America's rugged individualism is particularly suited to the prepper movement.
KCRW's Jonathan Bastian talks with O'Connell about the need and practice of prepping, from luxury underground bunkers to garages stuffed with canned goods, water, and gasoline, and why battening down the hatches has as much to do with fantasy as it does with fear.
Mark O'Connell. Photo by Richard Gilligan.
KCRW: Is survivalism a 20th century phenomenon, or something that goes far back into history?
Mark O'Connell: Survivalism, as in the kinds of doomsday prepper movements that I look at in my book, is, I think, very much a modern, contemporary phenomenon. But it has its roots in cyclical moments of apocalyptic fervor that have cropped up throughout history. It's been something that tends to rear its head at times of particular social and political upheaval. Apocalyptic myths tend to be something that people grab on to as a way of explaining, I suppose, the sense of chaos and uncertainty around the future.
It seems like psychologically, the human mind is drawn to this idea of end-of-times. Do you think its one that thematically plays out over and over again?
To put it in sort of pop psychological terms, we are creatures who thrive and move through the world via stories. We're narrative-telling creatures. And one of the things that the apocalypse does is it creates a sense of narrative. The literary critic Frank Kermode has an amazing book called The Sense of an Ending, where he talks about the fact that we're born as, he says, in the midst of things, in medias res. And we have no sense of really where we came from, or where we're going. And what the apocalypse does is it allows us to kind of project ourselves into an end. So it gives a kind of a sense of narrative coherence to times of chaos and uncertainty.
As you began this exploration into survivalism, did you find that those feelings were particularly fervent in the US, that this is where a lot of the action was taking place?
Yes, for sure. I started the book really looking at the whole scene of doomsday preppers, of people who are digging bunkers and stockpiling tinned goods, and talking about the imminence of the End Times. And it's an international movement; there are a lot of preppers in Ireland and in Britain and across Europe, but really, the most kind of fervent and intense stuff tends to come, unsurprisingly, out of the U.S. And there's a couple of reasons for that.
One of which is that America seems, to me, to be a country with a particularly kind of intense relationship historically and culturally with the apocalypse. The United States, as a colonial enterprise, was born out of a moment of apocalyptic fervor in Europe, with the Pilgrims, and so on. And there's something about the prepper movement that sort of recapitulates that sense of fervor of the first European colonizers of America and the Pilgrims, and so on.
When preppers talk about the collapse of civilization, they're often talking about a situation where there's no more government, where you can't rely on society, you can't rely on your fellow people. And it's just you, the kind of rugged individual, pitting yourself against the wilderness or other people, savage people, and so on. And there's a sense of a return to some of the darker mythologies at the heart of that moment in American history with the prepper movement, I think.
When you began to sample these different prepper movements across the U.S., did you find that there was an archetypal figure or a certain personality that kept cropping up across the landscape?
Yes. To some extent, it's a broad church, but certain kinds of pre-existing ideological conditions kept cropping up. One of which is a real investment in the idea of the individual, as opposed to community. So many of the people that I looked at in the book, and so many of the movements, are predicated around the idea that you can't rely on other people to get you through times of difficulty and catastrophe. So preppers tend to be all about looking after themselves, their families, battening down the hatches, stocking up on as much stuff as they can, and to hell with everyone else, but also kind of defending themselves against others.
That was a strand that I saw cropping up in lots of different kinds of movements that I looked at. One of the things I did was I spent some time in a very remote part of South Dakota ... on a former dairy farm that a guy called Robert Vicino, who is kind of an apocalyptic real estate entrepreneur, had bought.
He specializes in, you could say, luxury apocalyptic solutions. Very well-appointed, sort of five star quality bunkers with things like private cinemas and wine cellars and hydroponic vegetable gardens and so on. And he had bought this place in South Dakota and was converting it into what he called the world's largest survival community. And really, that sort of sales rhetoric around it, and also quite conspiratorial political rhetoric, was that some kind of collapse scenario was coming. You could almost take your pick of what you were most afraid of, or most sort of fantasizing about, whether it's your nuclear war, or viral pandemic, or whatever it might be, but in these scenarios, the government is not going to protect you. And you're going to need to band together with a small group of other like-minded individualists, if that's not too much of a contradiction in terms, and protect yourself against humanity at large.
All of these things seem, to me, to be implicitly political, in that if you're arguing that you need to protect yourself, and that other people are what you need to protect yourself against, whether it be sort of urban populations or what have you, that that seems to be quite a political standpoint. And it's no coincidence. I think that most of the kind of doomsday preppers and apocalyptic preparedness aficionados that I'd looked at in the book tend to, though not exclusively, come from quite right wing political milieus.
You said that these folks envision many different ways in which the world could be turned upside down, and they'll have to rely on their own kind. Are there any more common theories you came up with? Would it be a plague? Would it be some kind of civil war? What does the end of times look like for a lot of these people?
I started the book out of a place of my own anxiety about the future for myself and for my family. And for me, climate change, of course, is the big locus of apocalyptic unease. But what I started to find was that, actually, the more fervent kind of apocalyptic obsessives, for want of a better term, tended not to be all that concerned about climate change.
They tended to be concerned about the prospect of nuclear strike from North Korea, for instance. A viral pandemic is quite a common one. Asteroids hitting things, electromagnetic pulse attacks, all these kinds of things that your average person would not spend that much time thinking about. But if you have an anxious disposition, learning about these things is quite a trip.
How big is the doomsday prepper economy?
It depends on who you listen to. If you listen to the people who are selling it, you're going to find that it's a boom area. I think it's still pretty niche. But things like wealthy individuals buying land in New Zealand, that is a genuine thing. The other thing is dedicated companies whose whole thing is to build these compounds with golf courses and defensive provisions and so on. And there's quite a few of those companies. Unsurprisingly, most of them tend to be American. And a lot of their customers tend to be American, but they have facilities all over the world, including, including Europe.
So Robert Vicino, who was the guy that I spent time with in South Dakota, he's got a number of these facilities. The place that I visited was on the sort of less luxurious end of the scale, the kind of lower middle class apocalyptic solution. But this was a situation where you buy a bunker that is basically an empty shell, and you fit it out to your own specifications. And the idea would be that it would develop into a community of like minded individuals. But there are all different kinds, depending on your level of wealth, and how much you want to spend and the various different levels you can go into.
Did you ever find anything that kind of blew your mind, or anything very bizarre, when you entered some of these bunkers?
The only bunkers that I physically entered were those empty shells in South Dakota. That's an extraordinary place. And I write quite a lot in the book about the landscape and almost surreal aspect of the place. It was built initially in the Second World War as a munitions storage facility. So there are all these hexagonal bunkers, I think 550 of them, across the ranch, which is about three quarters the size of Manhattan. It almost seems like an alien landscape. Incredibly beautiful, and also very strange, and a little bit uncanny.
See the original post:
Why American individualism is perfectly suited to the doomsday prepper movement - KCRW
Posted in Survivalism
Artificial Intelligence Models For Sale, Another Step In The Spread Of AI Accessibility – Forbes
Posted: at 8:10 pm
A regular message in this column is that artificial intelligence (AI) won't spread widely as long as it requires programmers who can work at the model level. That challenge won't be solved instantly, but it's slowly changing. While technical knowledge is still too often required, there are ways in which development time can be shortened. One way that's been happening is the increased availability of pre-built models.
A few years back, a tech CEO loved to talk about the Cambrian Explosion of deep learning models, as if a lot of models meant real progress in the business world. It doesn't. What matters is the availability of models useful for business. In the usual meaning of the cliché, the 80/20 paradigm still matters for business. While a large number of models might be of interest to academics, a much smaller subset will provide significant value to people attempting to gain insight in the real world.
In an attempt to help companies avoid reinventing the wheel, ElectrifAi has built a body of AI models that can be called by applications. Those models are identified by use case, so developers can quickly narrow down options and choose to test models close to their needed use. I first became aware of the company when it issued a press release about entering the marketplace on Amazon SageMaker. They are also on the Google Cloud Marketplace.
Having worked with other companies using the major cloud marketplaces, I was curious. While there are still long-term questions about such marketplaces, including how acquisitions of companies might impact partner applications on those marketplaces, it was important to find out more.
One key issue about buying models is the fact that privacy is increasingly important. Yet another company seeing data could be a compliance weak spot. "We build and support models for our customers," said Luming Wang, CTO, ElectrifAi. "However, our business model is that we don't see their data and they don't see our code. While we pre-structure and partially train models, we provide support and services that help customers tune models to their own use with their own data, without us needing to see any information." Outside of those marketplaces, the company also works with systems integrators and other partners who work with their clients in implementation.
Mentioned earlier was customers being able to choose appropriate models. That also extends to the fact that when AI is mentioned, we're not only discussing deep learning. The models are built with a variety of AI techniques, including rules engines, xgboost, and neural networks (deep learning). "Different domains require different techniques," said Mr. Wang. Still, rules engines can work seamlessly with neural networks for complex problems. Past one hundred rules, a neural network has advantages. In between, depending on context and data, either technique or other technologies can be used.
Given the focus on building a library of models for business, it is no surprise that very few of the models have a UI to present their own data. The models are accessed as function calls by the controlling applications. This is a key step in the evolution of AI accessibility. There needs to be some AI knowledge in order to evaluate the appropriate models to choose, but once that decision has been made, non-AI programmers only need to understand the calls and then can use the results in the wrapper application to address the business solution.
This attitude is excellent for the current state of AI in business. It presents AI not as something scary, or something requiring expensive and unique personnel, but rather as another easy-to-call function that can be accessed quickly by existing programmers working to solve a problem. The more programmers can access AI by calls, without having to know the details of a neural network or random forest, the faster AI will spread through the corporate technology infrastructure.
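Because the models are exposed as function calls rather than UIs, consuming one looks like any ordinary service call. Here is a minimal sketch of that pattern against an Amazon SageMaker endpoint; the endpoint name, input payload, and response fields are hypothetical stand-ins, not ElectrifAi's documented interface:

```python
# Minimal sketch: a wrapper application calling a pre-built model deployed
# as a SageMaker endpoint. The endpoint name and the payload/response
# shapes are hypothetical; the vendor's documentation defines the schema.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"invoice_text": "PO-1234 net-30 $5,400.00"}  # made-up example input

response = runtime.invoke_endpoint(
    EndpointName="electrifai-spend-classifier",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

result = json.loads(response["Body"].read())  # fields defined by the vendor
print(result)
```

The calling code needs no knowledge of how the model was trained; to the programmer it is just another function that takes JSON in and returns JSON out.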
View original post here:
Artificial Intelligence Models For Sale, Another Step In The Spread Of AI Accessibility - Forbes
Posted in Ai
Ethics of AI: Benefits and risks of artificial intelligence – ZDNet
Posted: at 8:10 pm
In 1949, at the dawn of the computer age, the French philosopher Gabriel Marcel warned of the danger of naively applying technology to solve life's problems.
Life, Marcel wrote in Being and Having, cannot be fixed the way you fix a flat tire. Any fix, any technique, is itself a product of that same problematic world, and is therefore problematic, and compromised.
Marcel's admonition is often summarized in a single memorable phrase: "Life is not a problem to be solved, but a mystery to be lived."
Despite that warning, seventy years later, artificial intelligence is the most powerful expression yet of humans' urge to solve or improve upon human life with computers.
But what are these computer systems? As Marcel would have urged, one must ask where they come from, whether they embody the very problems they would purport to solve.
Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.
That questioning is made all the more urgent because of scale. AI systems are reaching tremendous size in terms of the compute power they require, and the data they consume. And their prevalence in society, both in the scale of their deployment and the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras. At the same time, increasing scale means many aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners.
Ethical concerns range from the esoteric, such as who is the author of an AI-created work of art; to the very real and very disturbing matter of surveillance in the hands of military authorities who can use the tools with impunity to capture and kill their fellow citizens.
Somewhere in the questioning is a sliver of hope that with the right guidance, AI can help solve some of the world's biggest problems. The same technology that may propel bias can reveal bias in hiring decisions. The same technology that is a power hog can potentially contribute answers to slow or even reverse global warming. The risks of AI at the present moment arguably outweigh the benefits, but the potential benefits are large and worth pursuing.
As Margaret Mitchell, formerly co-lead of Ethical AI at Google, has elegantly encapsulated, the key question is, "what could AI do to bring about a better society?"
Mitchell's question would be interesting on any given day, but it comes within a context that has added urgency to the discussion.
Mitchell's words come from a letter she wrote and posted on Google Drive following the departure of her co-lead, Timnit Gebru, in December. Gebru made clear that she was fired by Google, a claim Mitchell backs up in her letter. Jeff Dean, head of AI at Google, wrote in an internal email to staff that the company accepted the resignation of Gebru. Gebru's former colleagues offer a neologism for the matter: Gebru was "resignated" by Google.
Margaret Mitchell [right] was fired on the heels of the removal of Timnit Gebru.
I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I've been immediately fired 🙂
Timnit Gebru (@timnitGebru) December 3, 2020
Mitchell, who expressed outrage at how Gebru was treated by Google, was fired in February.
The departure of the top two ethics researchers at Google cast a pall over Google's corporate ethics, to say nothing of its AI scruples.
As reported by Wired's Tom Simonite last month, two academics invited to participate in a Google conference on safety in robotics in March withdrew from the conference in protest of the treatment of Gebru and Mitchell. A third academic said that his lab, which has received funding from Google, would no longer apply for money from Google, also in support of the two professors.
Google staff quit in February in protest of Gebru and Mitchell's treatment, CNN's Rachel Metz reported. And Samy Bengio, a prominent scholar on Google's AI team who helped to recruit Gebru, resigned this month in protest over Gebru and Mitchell's treatment, Reuters has reported.
A petition on Medium signed by 2,695 Google staff members and 4,302 outside parties expresses support for Gebru and calls on the company to "strengthen its commitment to research integrity and to unequivocally commit to supporting research that honors the commitments made in Google's AI Principles."
Gebru's situation is an example of how technology is not neutral, as the circumstances of its creation are not neutral, as MIT scholars Katlyn Turner, Danielle Wood, and Catherine D'Ignazio discussed in an essay in January.
"Black women have been producing leading scholarship that challenges the dominant narratives of the AI and Tech industry: namely that technology is ahistorical, 'evolved', 'neutral' and 'rational' beyond the human quibbles of issues like gender, class, and race," the authors write.
During an online discussion of AI in December, AI Debate 2, Celeste Kidd, a professor at UC Berkeley, reflecting on what had happened to Gebru, remarked, "Right now is a terrifying time in AI."
"What Timnit experienced at Google is the norm, hearing about it is what's unusual," said Kidd.
The questioning of AI and how it is practiced, and the phenomenon of corporations snapping back in response, comes as the commercial and governmental implementation of AI makes the stakes even greater.
Ethical issues take on greater resonance when AI expands to uses that are far afield of the original academic development of algorithms.
The industrialization of the technology is amplifying the everyday use of those algorithms. A report this month by Ryan Mac and colleagues at BuzzFeed found that "more than 7,000 individuals from nearly 2,000 public agencies nationwide have used technology from startup Clearview AI to search through millions of Americans' faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members."
Clearview neither confirmed nor denied BuzzFeed's findings.
New devices are being put into the world that rely on machine learning forms of AI in one way or another. For example, so-called autonomous trucking is coming to highways, where a "Level 4 ADAS" tractor trailer is supposed to be able to move at highway speed on certain designated routes without a human driver.
A company making that technology, TuSimple, of San Diego, California, is going public on Nasdaq. In its IPO prospectus, the company says it has 5,700 reservations so far in the four months since it announced availability of its autonomous driving software for the rigs. When a truck is rolling at high speed, carrying a huge load of something, making sure the AI software safely conducts the vehicle is clearly a priority for society.
Another area of concern is AI applied in the area of military and policing activities.
Arthur Holland Michel, author of an extensive book on military surveillance, Eyes in the Sky, has described how ImageNet has been used to enhance the U.S. military's surveillance systems. For anyone who views surveillance as a useful tool to keep people safe, that is encouraging news. For anyone worried about the issues of surveillance unchecked by any civilian oversight, it is a disturbing expansion of AI applications.
Calls are rising for mass surveillance, enabled by technology such as facial recognition, not to be used at all.
As ZDNet's Daphne Leprince-Ringuet reported last month, 51 organizations, including AlgorithmWatch and the European Digital Society, have sent a letter to the European Union urging a total ban on surveillance.
And it looks like there will be some curbs after all. After an extensive report on the risks a year ago, and a companion white paper, and solicitation of feedback from numerous "stakeholders," the European Commission this month published its proposal for "Harmonised Rules On Artificial Intelligence For AI." Among the provisos is a curtailment of law enforcement use of facial recognition in public.
"The use of 'real time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply," the report states.
The backlash against surveillance keeps finding new examples to which to point. The paradigmatic example had been the monitoring of ethnic Uyghurs in China's Xinjiang region. Following a February military coup in Myanmar, Human Rights Watch reports that human rights are in the balance given the surveillance system that had just been set up. That project, called Safe City, was deployed in the capital Naypyidaw in December.
As one researcher told Human Rights Watch, "Before the coup, Myanmar's government tried to justify mass surveillance technologies in the name of fighting crime, but what it is doing is empowering an abusive military junta."
The National Security Commission on AI's Final Report in March warned the U.S. is not ready for global conflict that employs AI.
As if all those developments weren't dramatic enough, AI has become an arms race, and nations have now made AI a matter of national policy to avoid what is presented as existential risk. The U.S.'s National Security Commission on AI, staffed by tech heavy hitters such as former Google CEO Eric Schmidt, Oracle CEO Safra Catz, and Amazon's incoming CEO Andy Jassy, last month issued its 756-page "final report" for what it calls the "strategy for winning the artificial intelligence era."
The authors "fear AI tools will be weapons of first resort in future conflicts," they write, noting that "state adversaries are already using AI-enabled disinformation attacks to sow division in democracies and jar our sense of reality."
The Commission's overall message is that "The U.S. government is not prepared to defend the United States in the coming artificial intelligence era." To get prepared, the White House needs to make AI a cabinet-level priority, and "establish the foundations for widespread integration of AI by 2025." That includes "building a common digital infrastructure, developing a digitally-literate workforce, and instituting more agile acquisition, budget, and oversight processes."
Why are these issues cropping up? There are issues of justice and authoritarianism that are timeless, but there are also new problems with the arrival of AI, and in particular its modern deep learning variant.
Consider the incident between Google and scholars Gebru and Mitchell. At the heart of the dispute was a research paper the two were preparing for a conference that crystallizes a questioning of the state of the art in AI.
The paper that touched off a controversy at Google: Bender, Gebru, McMillan-Major and Mitchell argue that very large language models such as Google's BERT present two dangers: massive energy consumption and perpetuating biases.
The paper, coauthored by Emily Bender of the University of Washington, Gebru, Angelina McMillan-Major, also of the University of Washington, and Mitchell, titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" focuses on a topic within machine learning called natural language processing, or NLP.
The authors describe how language models such as GPT-3 have gotten bigger and bigger, culminating in very large "pre-trained" language models, including Google's Switch Transformer, also known as Switch-C, which appears to be the largest model published to date. Switch-C uses 1.6 trillion neural "weights," or parameters, and is trained on a corpus of 745 gigabytes of text data.
The authors identify two risk factors. One is the environmental impact of larger and larger models such as Switch-C. Those models consume massive amounts of compute, and generate increasing amounts of carbon dioxide. The second issue is the replication of biases in the generation of text strings produced by the models.
The environment issue is one of the most vivid examples of the matter of scale. As ZDNet has reported, the state of the art in NLP, and, indeed, much of deep learning, is to keep using more and more GPU chips, from Nvidia and AMD, to operate ever-larger software programs. Accuracy of these models seems to increase, generally speaking, with size.
But there is an environmental cost. Bender and team cite previous research that has shown that training a large language model, a version of Google's Transformer that is smaller than Switch-C, emitted 284 tons of carbon dioxide, which is 57 times as much CO2 as a human being is estimated to be responsible for releasing into the environment in a year.
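A quick back-of-the-envelope check puts those figures in context; the implied per-person baseline is roughly the commonly cited global average of about five tons of CO2 per person per year:

```python
# Sanity check on the quoted figures: 284 tons over 57 "person-years"
# implies ~5 tCO2 per person per year, consistent with commonly cited
# global per-capita emissions averages.
print(f"{284 / 57:.1f} tCO2 per person per year")  # -> 5.0
```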
It's ironic, the authors note, that the ever-rising environmental cost of such huge GPU farms impacts most immediately the communities on the forefront of risk from climate change, communities whose dominant languages aren't even accommodated by such language models, in particular the population of the Maldives archipelago in the Indian Ocean, whose official language is Dhivehi, a branch of the Indo-Aryan family:
Is it fair or just to ask, for example, that the residents of the Maldives (likely to be underwater by 2100) or the 800,000 people in Sudan affected by drastic floods pay the environmental price of training and deploying ever larger English LMs [language models], when similar large-scale models aren't being produced for Dhivehi or Sudanese Arabic?
The second concern has to do with the tendency of these large language models to perpetuate biases that are contained in the training set data, which are often publicly available writing that is scraped from places such as Reddit. If that text contains biases, those biases will be captured and amplified in generated output.
The fundamental problem, again, is one of scale. The training sets are so large, the issues of bias in code cannot be properly documented, nor can they be properly curated to remove bias.
"Large [language models] encode and reinforce hegemonic biases, the harms that follow are most likely to fall on marginalized populations," the authors write.
The risk of the huge cost of compute for ever-larger models has been a topic of debate for some time now. Part of the problem is that measures of performance, including energy consumption, are often cloaked in secrecy.
Some benchmark tests in AI computing are getting a little bit smarter. MLPerf, the main measure of performance of training and inference in neural networks, has been making efforts to provide more representative measures of AI systems for particular workloads. This month, the organization overseeing MLPerf, the MLCommons, for the first time asked vendors to list not just performance but energy consumed for those machine learning tasks.
Regardless of the data, the fact is systems are getting bigger and bigger in general. The response to the energy concern within the field has been two-fold: to build computers that are more efficient at processing the large models, and to develop algorithms that will compute deep learning in a more intelligent fashion than just throwing more computing at the problem.
Cerebras's Wafer Scale Engine is the state of the art in AI computing, the world's biggest chip, designed for the ever-increasing scale of things such as language models.
On the first score, a raft of startups have arisen to offer computers dedicated to AI that they say are much more efficient than the hundreds or thousands of GPUs from Nvidia or AMD typically required today.
They include Cerebras Systems, which has pioneered the world's largest computer chip; Graphcore, the first company to offer a dedicated AI computing system, with its own novel chip architecture; and SambaNova Systems, which has received over a billion dollars in venture capital to sell both systems but also an AI-as-a-service offering.
"These really large models take huge numbers of GPUs just to hold the data," Kunle Olukotun, Stanford University professor of computer science who is a co-founder of SambaNova, told ZDNet, referring to language models such as Google's BERT.
"Fundamentally, if you can enable someone to train these models with a much smaller system, then you can train the model with less energy, and you would democratize the ability to play with these large models," by involving more researchers, said Olukotun.
Those designing deep learning neural networks are simultaneously exploring ways the systems can be more efficient. For example, the Switch Transformer from Google, the very large language model that is referenced by Bender and team, can reach some optimal spot in its training with far fewer than its maximum 1.6 trillion parameters, author William Fedus and colleagues of Google state.
The software "is also an effective architecture at small scales as well as in regimes with thousands of cores and trillions of parameters," they write.
The key, they write, is to use a property called sparsity, which prunes which of the weights get activated for each data sample.
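To make the sparsity idea concrete, here is a minimal sketch of top-1 expert routing in the spirit of mixture-of-experts layers such as the Switch Transformer. The dimensions, expert count, and plain-NumPy implementation are illustrative only, not the published architecture:

```python
# Minimal sketch of sparse top-1 routing: a router picks one expert per
# token, so only that expert's weights are activated for each sample.
# All sizes are toy values chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 8, 4, 5

router_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
tokens = rng.normal(size=(n_tokens, d_model))

logits = tokens @ router_w                          # (n_tokens, n_experts)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
choice = probs.argmax(axis=1)                       # top-1 expert per token

out = np.empty_like(tokens)
for i, tok in enumerate(tokens):
    e = choice[i]
    out[i] = probs[i, e] * (tok @ experts[e])       # only one expert runs
```

Only one expert's matrix multiply runs per token, which is how parameter counts can grow into the trillions while per-token compute stays roughly constant.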
Scientists at Rice University and Intel propose slimming down the computing budget of large neural networks by using a hashing table that selects the neural net activations for each input, a kind of pruning of the network.
Another approach to working smarter is a technique called hashing. That approach is embodied in a project called "Slide," introduced last year by Beidi Chen of Rice University and collaborators at Intel. They use something called a hash table to identify individual neurons in a neural network that can be dispensed with, thereby reducing the overall compute budget.
Chen and team call this "selective sparsification", and they demonstrate that running a neural network can be 3.5 times faster on a 44-core CPU than on an Nvidia Tesla V100 GPU.
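As a rough sketch of how a hash table can stand in for a full forward pass, the SimHash-style bucketing below groups a layer's weight vectors by random-hyperplane sign patterns and computes activations only for neurons whose bucket matches the input's. The sizes and the 8-bit signature are illustrative; this shows the general locality-sensitive-hashing idea, not SLIDE's actual implementation:

```python
# Minimal sketch of hashing-based neuron selection: SimHash buckets the
# weight vectors once, then each input activates only the neurons that
# share its bucket, skipping the rest of the layer.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
d_in, n_neurons, n_bits = 16, 256, 8

W = rng.normal(size=(n_neurons, d_in))      # one weight row per neuron
planes = rng.normal(size=(n_bits, d_in))    # random hyperplanes for SimHash

def simhash(v):
    return tuple((planes @ v) > 0)          # sign pattern = bucket key

buckets = defaultdict(list)                 # built once, before inference
for j, w in enumerate(W):
    buckets[simhash(w)].append(j)

x = rng.normal(size=d_in)
active = buckets.get(simhash(x), [])        # neurons likely to fire for x
out = np.zeros(n_neurons)
out[active] = np.maximum(W[active] @ x, 0)  # ReLU on the selected subset only
```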
As long as large companies such as Google and Amazon dominate deep learning in research and production, it is possible that "bigger is better" will dominate neural networks. If smaller, less resource-rich users take up deep learning in smaller facilities, then more-efficient algorithms could gain new followers.
The second issue, AI bias, runs in a direct line from the Bender et al. paper back to a paper in 2018 that touched off the current era in AI ethics, the paper that was the shot heard 'round the world, as they say.
Buolamwini and Gebru brought international attention to the matter of bias in AI with their 2018 paper "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," which revealed that commercial facial recognition systems showed "substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems."
That 2018 paper, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," was also authored by Gebru, then at Microsoft, along with MIT researcher Joy Buolamwini. They demonstrated how commercially available facial recognition systems had high accuracy when dealing with images of light-skinned men, but catastrophic inaccuracy when dealing with images of darker-skinned women. The authors' critical question was why such inaccuracy was tolerated in commercial systems.
Buolamwini and Gebru presented their paper at the Association for Computing Machinery's Conference on Fairness, Accountability, and Transparency. That is the same conference where in February Bender and team presented the Parrot paper. (Gebru is a co-founder of the conference.)
Both Gender Shades and the Parrot paper deal with a central ethical concern in AI, the notion of bias. AI in its machine learning form makes extensive use of principles of statistics. In statistics, bias is when an estimation of something turns out not to match the true quantity of that thing.
So, for example, if a political pollster takes a poll of voters' preferences, if they only get responses from people who talk to poll takers, they may get what is called response bias, in which their estimation of the preference for a certain candidate's popularity is not an accurate reflection of preference in the broader population.
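A toy simulation makes the polling example concrete; the response rates below are invented purely to show how differential response skews the estimate away from the true value:

```python
# Toy response-bias simulation: supporters of candidate A answer pollsters
# more readily, so the naive poll estimate drifts from the true 40% support.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
prefers_a = rng.random(n) < 0.40                 # true support: 40%

# Illustrative response rates: 60% for A-supporters, 40% for everyone else.
responds = rng.random(n) < np.where(prefers_a, 0.60, 0.40)

print(f"poll estimate: {prefers_a[responds].mean():.3f}")  # ~0.500, not 0.400
```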
The Gender Shades paper in 2018 broke ground in showing how an algorithm, in this case facial recognition, can be extremely out of alignment with the truth, a form of bias that hits one particular sub-group of the population.
Flash forward, and the Parrot paper shows how that statistical bias has become exacerbated by scale effects in two particular ways. One way is that data sets have proliferated, and increased in scale, obscuring their composition. Such obscurity can obfuscate how the data may already be biased versus the truth.
Second, NLP programs such as GPT-3 are generative, meaning that they are flooding the world with an amazing amount of created technological artifacts such as automatically generated writing. By creating such artifacts, biases can be replicated, and amplified in the process, thereby proliferating such biases.
On the first score, the scale of data sets, scholars have argued for going beyond merely tweaking a machine learning system in order to mitigate bias, and to instead investigate the data sets used to train such models, in order to explore biases that are in the data itself.
Before she was fired from Google's Ethical AI team, Mitchell led her team to develop a system called "Model Cards" to excavate biases hidden in data sets. Each model card would report metrics for a given neural network model, such as looking at an algorithm for automatically finding "smiling photos" and reporting its rate of false positives and other measures.
One example is an approach created by Mitchell and team at Google called model cards. As explained in the introductory paper, "Model cards for model reporting," data sets need to be regarded as infrastructure. Doing so will expose the "conditions of their creation," which is often obscured. The research suggests treating data sets as a matter of "goal-driven engineering," and asking critical questions such as whether data sets can be trusted and whether they build in biases.
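As an illustration of the kind of record a model card captures, here is a minimal, hypothetical card for the "smiling photos" example above; every field name and number is invented:

```python
# Hypothetical model card in the spirit of "Model Cards for Model Reporting":
# metrics are disaggregated by subgroup so hidden biases become visible.
model_card = {
    "model": "smiling-photo-detector-v1",
    "intended_use": "flag candidate 'smiling' photos for human review",
    "training_data": "internal photo corpus; collection conditions documented",
    "metrics": {
        "overall": {"false_positive_rate": 0.08, "false_negative_rate": 0.11},
        "by_skin_type": {  # disaggregated, Gender Shades-style
            "fitzpatrick_I-III": {"false_positive_rate": 0.05},
            "fitzpatrick_IV-VI": {"false_positive_rate": 0.14},
        },
    },
    "caveats": "not evaluated on low-light or occluded images",
}
```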
Another example is a paper last year, featured in The State of AI Ethics, by Emily Denton and colleagues at Google, "Bringing the People Back In," in which they propose what they call a genealogy of data, with the goal "to investigate how and why these datasets have been created, what and whose values influence the choices of data to collect, the contextual and contingent conditions of their creation, and the emergence of current norms and standards of data practice."
Vinay Prabhu, chief scientist at UnifyID, in a talk at Stanford last year described being able to take images of people from ImageNet, feed them to a search engine, and find out who people are in the real world. It is the "susceptibility phase" of data sets, he argues, when people can be targeted by having had their images appropriated.
Scholars have already shed light on the murky circumstances of some of the most prominent data sets used in the dominant NLP models. For example, Vinay Uday Prabhu, who is chief scientist at startup UnifyID Inc., in a virtual talk at Stanford University last year examined the ImageNet data set, a collection of 15 million images that have been labeled with descriptions.
The introduction of ImageNet in 2009 arguably set in motion the deep learning epoch. There are problems, however, with ImageNet, particularly the fact that it appropriated personal photos from Flickr without consent, Prabhu explained.
Those non-consensual pictures, said Prabhu, fall into the hands of thousands of entities all over the world, and that leads to a very real personal risk, he said, what he called the "susceptibility phase," a massive invasion of privacy.
Using what's called reverse image search, via a commercial online service, Prabhu was able to take ImageNet pictures of people and "very easily figure out who they were in the real world." Companies such as Clearview, said Prabhu, are merely a symptom of that broader problem of a kind-of industrialized invasion of privacy.
An ambitious project has sought to catalog that misappropriation. Called Exposing.ai, it is the work of Adam Harvey and Jules LaPlace, and it formally debuted in January. The authors have spent years tracing how personal photos were appropriated without consent for use in machine learning training sets.
The site is a search engine where one can "check if your Flickr photos were used in dozens of the most widely used and cited public face and biometric image datasets [...] to train, test, or enhance artificial intelligence surveillance technologies for use in academic, commercial, or defense related applications," as Harvey and LaPlace describe it.
Some argue the issue goes beyond simply the contents of the data to the means of its production. Amazon's Mechanical Turk service is ubiquitous as a means of employing humans to prepare vast data sets, such as by applying labels to pictures for ImageNet or to rate chat bot conversations.
An article last month by Vice's Aliide Naylor quoted Mechanical Turk workers who felt coerced in some instances to produce results in line with a predetermined objective.
Turkopticon's feedback aims to arm workers on Amazon's Mechanical Turk with honest appraisals of the work conditions of contracting for various Turk clients.
A project called Turkopticon has arisen to crowd-source reviews of the parties who contract with Mechanical Turk, to help Turk workers avoid abusive or shady clients. It is one attempt to ameliorate what many see as the troubling plight of an expanding underclass of piece workers, what Mary Gray and Siddharth Suri of Microsoft have termed "ghost work."
There are small signs the message of data set concern has gotten through to large organizations practicing deep learning. Facebook this month announced a new data set that was created not by appropriating personal images but rather by making original videos of over three thousand paid actors who gave consent to appear in the videos.
The paper by lead author Caner Hazirbas and colleagues explains that the "Casual Conversations" data set is distinguished by the fact that "age and gender annotations are provided by the subjects themselves." Skin type of each person was annotated by the authors using the so-called Fitzpatrick Scale, the same measure that Buolamwini and Gebru used in their Gender Shades paper. In fact, Hazirbas and team prominently cite Gender Shades as precedent.
Hazirbas and colleagues found that, among other things, when machine learning systems are tested against this new data set, some of the same failures crop up as identified by Buolamwini and Gebru. "We noticed an obvious algorithmic bias towards lighter skinned subjects," they write.
See the original post:
Ethics of AI: Benefits and risks of artificial intelligence - ZDNet
Posted in Ai
Yet another Google AI leader has defected to Apple – Ars Technica
Posted: at 8:10 pm
AI researcher Samy Bengio (left) poses with his brother Yoshua Bengio (right) for a photo tied to a report from cloud-platform company Paperspace on the future of AI.
Apple has hired Samy Bengio, a prominent AI researcher who previously worked at Google. Bengio will lead "a new AI research unit" within Apple, according to a recent report in Reuters. He is just the latest in a series of prominent AI leaders and workers Apple has hired away from the search giant.
Apple uses machine learning to improve the quality of photos taken with the iPhone, surface suggestions of content and apps that users might want to use, power smart search features across its various software offerings, assist in palm rejection for users writing with the iPad's Pencil accessory, and much more.
Bengio was part of a cadre of AI professionals who left Google to protest the company's firings of its own AI ethics researchers (Margaret Mitchell and Timnit Gebru) after those researchers raised concerns about diversity and Google's approach to ethical considerations around new applications of AI and machine learning. Bengio voiced his support for Mitchell and Gebru, and he departed of his own volition after they were let go.
In his 14 years at Google, Bengio worked on AI applications like speech and image analysis, among other things. Neither Bengio nor Apple has said exactly what he will be researching in his new role in Cupertino.
See the article here:
Yet another Google AI leader has defected to Apple - Ars Technica
Posted in Ai
Three Ways That Organizations Are Under Utilizing AI In Their Customer Experience – Forbes
Posted: at 8:10 pm
Over the last 12 months, we have seen a surge in investment in Artificial Intelligence (AI) enabled customer self-service technologies as brands have put in place tools that have helped deflect calls away from their support teams and allow customers to self-serve.
However, despite these investments, we have also seen how the phone is still an important and vital channel for many organizations regarding customer service. According to Salesforce data, daily call volume reached an all-time high last year, up 24% compared to 2019 levels. Meanwhile, Accenture found that 58% of customers prefer to speak to a support agent if they need to solve an urgent or complex issue, particularly during times of crisis.
Now, consider one of those calls.
When a customer gets through to an agent, they are not thinking about how many calls they have already answered that day, what those calls have been like and how it may have impacted them. The customer, in the moment, is only thinking about solving their particular problem.
That's all very well, you might say.
But, in the face of consistently high call volumes and the strains of working remotely for an extended period, reports are now starting to emerge that many contact center agents are beginning to experience a phenomenon similar to what many nurses and doctors often go through: compassion fatigue. This is the situation where, due to consistently high workloads, they become emotionally exhausted, verge on burnout, and become unable to deliver a high level of service.
That, in turn, feeds directly through to the service and experience that the patient or customer receives.
However, Dr Skyler Place, Chief Behavioural Science Officer at Cogito, believes that compassion fatigue is avoidable, and organizations should be using AI to enable and support their agents whilst on a call and, at the same time, manage their well-being and performance.
He believes that there are three areas where organizations are underutilizing AI when trying to improve their customer experience (CX).
The first is that brands should be leveraging AI technology to provide real-time feedback whilst an agent is on a call to support and empower them in the moment.
Secondly, given that many support teams are still working remotely, AI technology can replace the tradition of walking the floor and help supervisors understand how their teams are doing and what sort of coaching and support they need from call to call.
Thirdly, when you combine that data with customer outcome data and apply AI technology, you can identify insights that, as Place puts it, "can help you improve your business processes, your business outcomes and drive macro strategies beyond the call and beyond the call center."
A system that both provides in-call, real-time support for agents and intelligently understands call demand, an agent's experience, and in-shift call profiles, so that it can optimize call matching toward positive customer and employee outcomes, is nothing but a good thing.
Compassion fatigue is real, and organizations need to manage their agents' performance and well-being if they are to deliver excellent phone-based customer service.
Visit link:
Three Ways That Organizations Are Under Utilizing AI In Their Customer Experience - Forbes
Posted in Ai