The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
Bigger Screens, Smaller AI: What CES 2020 Told Us About the Future of Cars – Robb Report
Posted: March 24, 2020 at 5:34 am
I hate auto shows. Cars are meant to move, make noise and be felt. Seeing a Bugatti caged on an auto-show stand is worse than watching a lion at a particularly cruel zoo. At least you get some sense of the lion's power from the languid roll of its shoulders as it pads miserably around its pen. At an auto show? Nothing.
The luxury marques know this. Dragging you out to a cold, soulless exhibition hall in a grim part of town isn't, I'm guessing, going to get you in a spendy frame of mind. Ferrari or Aston would far rather show you its latest model on some sunlit lawn at Pebble Beach or Goodwood, with a glass of good Champagne in your hand, or better yet, let you drive it on a track or along the scenic, scented roads of Carmel or the Mediterranean coast.
And so it seems strange that luxury carmakers make a show of going to CES, the dauntingly vast tech extravaganza in Vegas. Pressure on floor space means their booths at CES are typically far smaller and less showy than those at an auto show. But in truth, the manufacturers don't actually want their customers to visit them. They just want you to know they're there. They are often unfairly maligned as industrial dinosaurs whose grip on how we get around will soon be broken by Uber, Waymo and whichever of the raft of new mobility start-ups actually hit pay dirt. The automakers hope that, by appearing at CES alongside tech giants and start-ups, they'll appropriate a little of their mojo.
The other reason for carmakers to attend CES is to learn stuff. I bumped into Daimler CEO Ola Kallenius three times as he stalked the Las Vegas Convention Center. Despite having an R&D budget in the billions, he told me, his people might spot an idea or a possible collaboration among the myriad tiny booths of exhibitors that would not have presented itself otherwise.
Experiencing the Audi AI:ME's VR technology from Holoride. Photo: Courtesy of Audi AG.
For me, two trends stood out: screens and AI. The full-width displays in Sony's shock Vision-S concept car and in the production version of automotive-tech newcomer Byton's M-Byte electric SUV made Tesla's hallmark iPad-style displays look kind of puny, and I'd bet on others following Sony's and Byton's big-screen lead. Endlessly configurable, the screens allow you to display a bunch of your in-car apps at once.
Artificial intelligence is making huge strides, but some applications of it might still be fictional at this point. Audi's AI:ME concept car appeared at CES with a working interior for the first time and added eye control for use with VR headsets, so you'll need your Audi to be autonomous before you can free your eyes from the road for long enough to scroll through Spotify with a glance.
The show also saw the launch of affordable LiDAR laser sensors that can build a 3-D image of your car's surroundings and will make systems for driver assistance and crash avoidance almost supernatural in their abilities. It's just a pity that, despite the braininess of AI, the data it generates may never be used to drive our cars for us: At the show, in private, car-industry leaders were saying that full autonomy seems to be driving off into the distance rather than getting closer, with the emphasis switching to self-driving trucks, unmanned delivery vehicles and driverless ride-hailing services.
CES is enlightening, but its scale kills the fun. As with most auto shows, unless you have a professional interest I can't recommend you go. It's better to read about than to attend. You enjoy the Champagne at that Porsche private party on a manicured lawn, and let me rack up the steps and peer into the ever-bigger screens of the future in Vegas for you.
How AI May Prevent The Next Coronavirus Outbreak – Forbes
Posted: March 5, 2020 at 6:24 pm
AI can be used for the early detection of virus outbreaks that might result in a pandemic. (Photo by Emanuele Cremaschi/Getty Images)
AI detected the coronavirus long before the world's population really knew what it was. On December 31st, a Toronto-based startup called BlueDot identified the outbreak in Wuhan, several hours after the first cases were diagnosed by local authorities. The BlueDot team confirmed the info its system had relayed and informed their clients that very day, nearly a week before Chinese and international health organisations made official announcements.
Thanks to the speed and scale of AI, BlueDot was able to get a head start over everyone else. If nothing else, this reveals that AI will be key in forestalling the next coronavirus-like outbreak.
BlueDot isn't the only startup harnessing AI and machine learning to combat the spread of contagious viruses. One Israel-based medtech company, Nanox, has developed a mobile digital X-ray system that uses AI cloud-based software to diagnose infections and help prevent epidemic outbreaks. Dubbed the Nanox System, it incorporates a vast image database, radiologist matching, diagnostic reviews and annotations, and also assistive artificial intelligence systems, which combine all of the above to arrive at an early diagnosis.
Nanox is currently building on this technology to develop a new standing X-ray machine that will supply tomographic images of the lungs. The company plans to market the machine so that it can be installed in public places, such as airports, train stations, seaports, or anywhere else where large groups of people rub shoulders.
Given that the new system, as well as the existing Nanox System, are lower cost mobile imaging devices, it's unsurprising to hear that Nanox has attracted investment from funds looking to capitalise on AI's potential for thwarting epidemics. This month, the company announced a $26 million strategic investment from Foxconn. It also signed an agreement this week to supply 1,000 of its Nanox Systems to medical imaging services across Australia, New Zealand and Norway. Coronavirus be warned.
Its CEO and co-founder, Ran Poliakine, explains that such deals are a testament to how the future of epidemic prevention lies with AI-based diagnostic tools. "Nanox has achieved a technological breakthrough by digitizing traditional X-rays, and now we are ready to take a giant leap forward in making it possible to provide one scan per person, per year, for preventative measures," he tells me.
Importantly, the key feature of AI in terms of preventing epidemics is its speed and scale. As Poliakine explains, "AI can detect conditions instantly which makes it a great source of power when trying to prevent epidemics. If we talk about 1,000 systems scanning 60 people a day on average, this translates to 60,000 scans that need to be processed daily by the professional teams."
Poliakine also affirms that no human force available today can support this volume with the necessary speed and efficiency. Time and again, this is a point made forcefully by other individuals and companies working in this burgeoning sector.
"When it comes to detecting outbreaks, machines can be trained to process vast amounts of data in the same way that a human expert would," explains Dr Kamran Khan, the founder and CEO of BlueDot, as well as a professor at the University of Toronto. "But a machine can do this around the clock, tirelessly, and with incredible speed, making the process vastly more scalable, timely, and efficient. This complements human intelligence to interpret the data, assess its relevance, and consider how best to apply it with decision-making."
Basically, AI is set to become a giant firewall against infectious diseases and pandemics. And it won't only be because of AI-assisted screening and diagnostic techniques. As Sergey Young, a longevity expert and founder of the Longevity Vision Fund, tells me, artificial intelligence will also be pivotal in identifying potential vaccines and treatments against the next coronavirus, as well as COVID-19 itself.
"AI has the capacity to quickly search enormous databases for an existing drug that can fight coronavirus or develop a new one in literally months," he says. "For example, Longevity Vision Funds portfolio company Insilico Medicine, which specializes in AI in the area of drug discovery and development, used its AI-based system to identify thousands of new molecules that could serve as potential medications for coronavirus in just four days. The speed and scalability of AI is essential to fast-tracking drug trials and the development of vaccines."
This kind of treatment-discovery will prove vitally important in the future. And in conjunction with screening, it suggests that artificial intelligence will become one of the primary ingredients in ensuring that another coronavirus won't have an outsized impact on the global economy. Already, the COVID-19 coronavirus is likely to cut global GDP growth by $1.1 trillion this year, in addition to having already wiped around $5 trillion off the value of global stock markets. Clearly, avoiding such financial destruction in the future would be more than welcome, and artificial intelligence will prove indispensable in this respect. Especially as the scale of potential pandemics increases with an increasingly populated and globalised world.
Sergey Young also explains that AI could play a substantial role in the area of impact management and treatment, at least if we accept its increasing encroachment into society. He notes that, in China, robots are being used in hospitals to alleviate the stresses currently being piled on medical staff, while ambulances in the city of Hangzhou are assisted by navigational AI to help them reach patients faster. Robots have even been dispatched to a public plaza in Guangzhou in order to warn passersby who aren't wearing face masks. Even more dystopian, China is also allegedly using drones to ensure residents are staying at home and reducing the risk of the coronavirus spreading further.
Even if we don't reach that strange point in human history where AI and robots police our behaviour during possible health crises, artificial intelligence will still become massively important in detecting outbreaks before they spread and in identifying possible treatments. Companies such as BlueDot, Nanox, and Insilico Medicine will prove increasingly essential in warding off future coronavirus-style pandemics, and with it they'll provide one very strong example of AI being a force for good.
How AI and Neuroscience Can Help Each Other Progress? – Analytics Insight
Posted: at 6:24 pm
Artificial Intelligence has progressed immensely in the past few years. From science fiction to a presence in people's everyday lives, AI has brought transformation in several ways. Such advancements are the output of various factors, including the application of new statistical approaches and enhanced computing power. However, a 2017 Perspective by DeepMind researchers in the journal Neuron argues that people often discount the contribution of ideas from experimental and theoretical neuroscience.
The DeepMind report's researchers believe that drawing inspiration from neuroscience in AI research is important for two reasons. First, neuroscience can help validate AI techniques that already exist. As they put it: "Put simply, if we discover one of our artificial algorithms mimics a function within the brain, it suggests our approach may be on the right track." Second, neuroscience can provide a rich source of inspiration for new types of algorithms and architectures to employ when building artificial brains. Traditional approaches to AI have historically been dominated by logic-based methods and theoretical mathematical models.
Moreover, in a recent blog post, DeepMind suggests that the human brain and AI learning methods are closely linked when it comes to learning through reward.
Computer scientists have developed algorithms for reinforcement learning in artificial systems. These algorithms enable AI systems to learn complex strategies without external instruction, guided instead by reward predictions.
As noted by the post, a recent development in computer science, which yields significant improvements in performance on reinforcement learning problems, may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain, and opens up new avenues of research into the brain's dopamine system, with potential implications for learning and motivation disorders.
DeepMind found that dopamine neurons in the brain were each tuned to different levels of pessimism or optimism. If they were a choir, they wouldn't all be singing the same note, but harmonizing, each with a consistent vocal register, like bass and soprano singers. In artificial reinforcement learning systems, this diverse tuning creates a richer training signal that greatly speeds learning in neural networks, and researchers speculate that the brain might use it for the same reason.
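One simple way to see this "choir" of pessimists and optimists is a population of value estimators that weight positive and negative prediction errors asymmetrically, in the style of expectile-based distributional reinforcement learning. The sketch below uses invented rewards and learning rates (it is not DeepMind's code) and shows how diverse tuning recovers the spread of a bimodal reward distribution that a single average would hide:

```python
import random

def train_value_distribution(rewards, taus, lr=0.05, steps=5000):
    """Learn one value estimate per simulated 'neuron'. Each unit has its
    own optimism level tau in (0, 1): high-tau (optimistic) units scale up
    positive prediction errors, low-tau (pessimistic) units scale up
    negative ones, as in expectile-style distributional RL."""
    values = [0.0] * len(taus)
    for _ in range(steps):
        r = random.choice(rewards)            # sample one reward outcome
        for i, tau in enumerate(taus):
            err = r - values[i]               # reward prediction error
            scale = tau if err > 0 else (1.0 - tau)
            values[i] += lr * scale * err
    return values

random.seed(0)
# A bimodal reward source: its mean (5.0) hides the two distinct outcomes.
rewards = [0.0, 10.0]
estimates = train_value_distribution(rewards, taus=[0.1, 0.5, 0.9])
# Pessimistic, neutral, and optimistic units settle near 1, 5, and 9,
# together sketching the shape of the whole reward distribution.
print(estimates)
```

Reading the set of converged values off together, rather than a single average, is what lets the learner (or, on DeepMind's hypothesis, the dopamine system) represent the full distribution of possible rewards.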
The existence of distributional reinforcement learning in the brain has interesting implications both for AI and neuroscience. Firstly, this discovery validates distributional reinforcement learning: it gives researchers increased confidence that AI research is on the right track, since this algorithm is already at work in the most intelligent entity they are aware of, the brain.
Therefore, a shared framework for intelligence spanning artificial intelligence and neuroscience will allow scientists to build smarter machines, and enable them to understand humankind better. This collaborative drive to propel both fields could expand human cognitive capabilities while bridging the gap between humans and machines.
Smriti is a Content Analyst at Analytics Insight. She writes Tech/Business articles for Analytics Insight. Her creative work can be confirmed @analyticsinsight.net. She adores crushing over books, crafts, creative works and people, movies and music from eternity!!
Creating a Curious, Ethical, and Diverse AI Workforce – War on the Rocks
Posted: at 6:24 pm
Does the law of war apply to artificially intelligent systems? The U.S. Department of Defense is taking this question seriously, and in February 2020 adopted ethical principles for artificial intelligence (AI) based on the set of AI ethical guidelines the Defense Innovation Board proposed last year. However, just as defense organizations must abide by the law of war and other norms and values, individuals are responsible for the systems they create and use.
Ensuring that AI systems are as committed as we are to responsible and lawful behavior requires changes to engineering practices. Considerations of ethical, moral, and legal implications are not new to defense organizations, but they are starting to become more common in AI engineering teams. AI systems are revolutionizing many commercial sector products and services, and are applicable to many military applications from institutional processes and logistics to those informing warfighters in the field. As AI becomes ubiquitous, changes are necessary to integrate ethics into AI development now, before it is too late.
The United States needs a curious, ethical AI workforce working collaboratively to make trustworthy AI systems. Members of AI development teams must have deep discussions regarding the implications of their work on the warfighters using them. This work does not come easily. In order to develop AI systems effectively and ethically, defense organizations should foster an ethical, inclusive work environment and hire a diverse workforce. This workforce should include curiosity experts (people who focus on human needs and behaviors), who are more likely to imagine the potential unwanted and unintended consequences associated with the systems use and misuse, and ask tough questions about those consequences.
Create an Ethical, Inclusive Environment
People with similar concepts of the world and a similar education are more likely to miss the same issues due to their shared bias. The data used by AI systems are similarly biased, and people collecting the data may not be aware of how that bias is conveyed through the data they create. An organization's bias will be pervasive in the data it provides, and the AI systems developed with that data will perpetuate the bias.
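The claim that biased data yields biased systems can be made concrete with a deliberately trivial model. In the hypothetical sketch below (the data and labels are invented and represent no real system), a model "trained" on skewed historical decisions simply reproduces the skew it was given:

```python
from collections import Counter

def train_majority(labels):
    """A trivially 'trained' model: predict the most common label seen."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical historical decisions, skewed against group "B".
training = [("A", "approve")] * 90 + [("B", "deny")] * 10

# Fit one per-group predictor from the historical record.
model_for = {
    g: train_majority([y for x, y in training if x == g])
    for g in ("A", "B")
}
print(model_for)  # → {'A': 'approve', 'B': 'deny'}
```

Nothing in the code is malicious; the unfairness enters entirely through the training data, which is why diverse teams who question the data's provenance matter.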
Bias can be mitigated with ethical workforces that value diverse human intelligence and the wide set of possible life experiences. Diversity doesn't just mean making sure that there is a mix of genders on the project team, or that people look different, though those attributes are important. A project team should have a wide set of life experiences, disability statuses, social statuses, and experience being "the other." Diversity also means including a mix of people in uniform, civilians, academic partners, and contractors, as well as individuals who have diverse life experiences. This diversity does not mean lowering the bar of experience or talent, but rather extending it. To be successful, all of these individuals need to be engaged as full members of the project team in an inclusive environment.
Individuals coming from different backgrounds will be more capable of imagining a broad set of uses, and more importantly, misuses of these systems. Assembling a diverse workforce that brings talented, experienced people together will reinforce technology ethics. Imbuing the workforce with curiosity, empathy, and understanding for the warfighters using the systems and affected by the systems will further support the work.
Diverse and inclusive leadership is key to an organization's success. When leadership in the organization isn't diverse, it is less likely to attract and, more importantly, retain talent. This is thought to be primarily because talented individuals may assume that the organization is not inclusive or that there is no future in the organization for them. If leadership is lacking in diversity, an organization can promote someone early or hire from the outside if necessary.
Adopting a set of technology ethics is a first step to supporting project teams in making better, more confident decisions that are ethical. Technology ethics are ethics designed specifically for the development of software and emerging technologies. They help align diverse project teams and assist them in setting appropriate norms for AI systems. Much like physicians adhere to a version of the American Medical Association's Code of Medical Ethics, technology ethics help guide a project team working on AI systems that have the potential for harm (most AI systems do). These project teams need to have early, difficult conversations about how they will manage a variety of situations.
A shared set of technology ethics serves as a central point to guide decision-making. We are all unique, and many of us have shared knowledge and experiences. These are what naturally draw people together, making it feel like a bigger challenge to work with people who have completely different experiences. However, the experience of working with people who are significantly different builds the capacity for innovation and creative thinking. Using ethics as a bridge between differences strengthens the team by creating shared knowledge and common ground. Technology ethics must be woven into the work at a very early stage, and the AI workforce must continue to advocate technology ethics as the AI system matures. Human involvement (a human-in-the-loop) is required throughout the life cycle of AI systems; an AI system cannot simply be turned on and left to run. Technology ethics should be considered throughout the entire life cycle.
Without technology ethics it is harder for project teams to align, and important discussions may be inadvertently skipped. Technology ethics bring into focus the obligation for the project team to take its work and its implications seriously, and can also empower individuals to ask tough questions about the unwanted and unintended consequences they imagine arising from the system's use and misuse. By aligning on a set of technology ethics, the development team can define clear directives with regard to system functionality.
Identifying a set of technology ethics is an intimidating task and one that should be approached carefully. Some project teams will want to adopt guidance initially from organizations such as the Association for Computing Machinery's Code of Ethics and Professional Conduct, and the Montreal Declaration for a Responsible Development of Artificial Intelligence, while others like IBM and Microsoft are developing their own guidance. The Defense Department's newly adopted five AI ethics principles are: responsible, equitable, traceable, reliable, and governable. The original Defense Innovation Board recommendation is described in detail in the supporting document.
In the past, ethics have only been referenced in, and not directly part of, software development efforts. The knowledge that AI systems can cause much broader harm more quickly than software technologies could in the past raises new ethical questions that need to be addressed by the AI workforce. A skilled and diverse workforce, bursting with curiosity and engaged with the AI system, will result in AI systems that are accountable to humans, de-risked, respectful, secure, honest, and usable.
Value Curiosity
AI systems will be created and used by a wide range of individuals, and misuse will come from potentially unexpected sources: individuals and organizations with completely different experiences and with potentially unlimited resources. Adversaries already use techniques that are very difficult to anticipate. The adoption of technology ethics isn't enough to make AI systems safe. Making sure that the teams building these systems are able to imagine and then mitigate issues is profoundly important.
The term curiosity experts is shorthand for people who have a broad range of skills and job titles, including cognitive psychologists, digital anthropologists, human-machine interaction and human-computer interaction professionals, and user experience researchers and designers. Curiosity experts' core responsibility is to be curious and speculative within the ethical and inclusive environment an organization has created. Curiosity experts will partner with defense experts, and may already be part of your team, doing research and helping to make interactions more usable.
Curiosity experts help connect the human needs, the initial problem to be solved, and the solution to an engineering problem. Working with defense experts (and ideally the warfighters themselves), they will enable a project team to uncover potential issues before they arise by focusing on understanding how the system will be used, the situation and constraints for using it, and the abilities of the people who will use it. Curiosity experts can conduct a variety of proven qualitative and quantitative methods, and once they have a solid understanding, they share that information with the project team in easy-to-consume formats such as stories. The research they conduct is necessary to understand the needs being addressed, so that the team builds the right thing. This may sound familiar: wargaming uses very similar tactics, and storytelling is an important component.
It's important for curiosity experts to lead (and then teach others to lead) co-design activities such as abusability testing and other speculative exercises, in which the project team imagines the misuse of the AI system they are considering building. AI systems need to be interpretable and usable by warfighters, and this has been recognized as a priority by the Defense Advanced Research Projects Agency, which is working on the Explainable AI program. Curiosity experts with interaction design experience can contribute materially to this effort as they help keep the people using these systems in mind, and call out the AI workforce when necessary. When the project team asks, "Why don't they get it?" curiosity experts can nudge the project team to pivot instead to "What can we do better to meet the warfighters' needs?" As individuals on the team become more comfortable with this mindset, they become curiosity experts at heart, even when their primary responsibility is something else.
Hire a Diverse Workforce
Building diverse project teams helps to increase each individuals creativity and effectiveness. Diversity in this sense relates to skill sets, education (with regard to school and program), and problem-framing approach. Coming together with different ways of looking at the world will help teams and organizations solve challenging problems faster.
Building a diverse project team to advance this ethical framework will take time and effort. Organizations that represent minority groups, such as the National Society of Black Engineers, and technical conferences that embrace diversity, such as the Grace Hopper Celebration, can be a great resource. Prospective candidates should ask hard questions about the organization, including about the organization's ethics, diversity, and inclusion. These questions are indicative of the curious individuals you want on your team. Once you recruit more diverse individuals, you can set progress goals. For example, Atlassian introduced a new approach to diversity reporting in 2016 that focused on team dynamics and shared how people from underrepresented backgrounds were spread across the company's teams.
It is common in technology, and AI specifically, to value specific degrees and learning styles. Some employers have staffed their organization with class after class of graduates from particular degree programs at particular universities. These organizations benefit from the ability of these graduates to easily bond and rely on shared knowledge. However, these same benefits can become weaknesses for the project team. The peril of creating high-risk products and services with a homogeneous team is that they may all miss the same critical piece of information; have the same gaps in technical knowledge; assume the same things about the process; or not be able to think differently enough to imagine unintended consequences. They won't even realize their mistake until it is too late.
In many organizations this risk is disguised by adding one or two individuals to a group who are significantly different from the majority in an aspect such as gender, race, or culture. Unfortunately, their presence isn't enough to significantly reduce the risk of groupthink, and if there are not enough socially distinct individuals in the group, their experience will be dismissed because it is different. Eventually, due to many of these factors, retention becomes a significant concern. Project teams need to be built with diversity from the start, or be quickly adjusted.
A diverse team of thoughtful and talented machine learning experts, programmers, and curiosity experts (among others) is not yet complete. The AI workforce needs direct access to experts in the military or defense industry who are familiar with the situations and organizations the AI system is being designed for, and who can spot assumptions and issues early. These individuals, be they in uniform, civilians, or consultants, may also be able to act as liaisons to the warfighters so that more direct contact can be made with those closest to the work.
Rethinking the Workforce
Encouraging project teams to be curious and speculative in imagining scenarios at the edges of AI will help to prepare for actual system use. As the AI workforce considers how to manage a variety of use cases, framing conversations with technology ethics will provoke serious and contentious discussions. These conversations are precious with regard to aligning the team prior to facing a difficult situation. A clear understanding of what the expectations are in specific situations helps the team to create mitigation plans for how they will respond, both during the creation of the AI system and once it is in production.
The AI sector needs to think about the workforce in different ways. As Prof. Hannah Fry suggests in The Guardian, diversity and inclusion in the workforce is just as important as a technology ethics pledge (if not more so) to ensure that we are reducing unwanted bias and unintended consequences. Creating an ethical, inclusive environment, valuing curiosity, and hiring a diverse workforce are necessary steps to make ethical AI. Clear communication and alignment on ethics is the best way to bring disparate groups of people into a shared understanding and to create AI systems that are accountable to humans, de-risked, respectful, secure, honest, and usable.
Over the next several years, my organization, Carnegie Mellon University's Software Engineering Institute, is advancing a professional discipline of AI Engineering to help the defense and national security communities develop, deploy, operate, and evolve game-changing mission capabilities that leverage rapidly evolving artificial intelligence and machine learning technologies. At the core of this effort is supporting the AI workforce in designing trustworthy AI systems by successfully integrating ethics in a diverse workforce.
Carol Smith (@carologic) is a senior research scientist in Human-Machine Interaction at Carnegie Mellon University's Software Engineering Institute and an adjunct instructor for CMU's Human-Computer Interaction Institute. She has been conducting user experience research to improve the human experience across industries for 19 years and working to improve AI systems since 2015. Carol is recognized globally as a leader in user experience and has presented over 140 talks and workshops in over 40 cities around the world, served two terms on the User Experience Professionals Association international board, and is currently an editor for the Journal of Usability Studies and the upcoming Association for Computing Machinery Digital Threats: Research and Practice journal's Special Issue on Human-Machine Teaming. She holds an M.S. in Human-Computer Interaction from DePaul University.
This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center.
The views, opinions, and/or findings contained in this material are those of the author(s) and should not be construed as an official government position, policy, or decision, unless designated by other documentation.
Image: U.S. Air Force (Photo by J.M. Eddins Jr.)
See the article here:
Creating a Curious, Ethical, and Diverse AI Workforce - War on the Rocks
Is Artificial Intelligence (AI) A Threat To Humans? – Forbes
Posted: at 6:24 pm
Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? The question has been with us since the 1940s, when computer scientist Alan Turing came to believe that machines could one day have an unlimited impact on humanity through a process that mimicked evolution.
When Oxford University Professor Nick Bostrom's New York Times best-seller, Superintelligence: Paths, Dangers, Strategies, was first published in 2014, it struck a nerve at the heart of this debate with its focus on all the things that could go wrong. However, in my recent conversation with Bostrom, he also acknowledged there's an enormous upside to artificial intelligence technology.
Since Bostrom's book was published in 2014, progress in artificial intelligence and in machine and deep learning has been rapid. Artificial intelligence is now part of the public discourse, and most governments have some sort of strategy or road map to address AI. In his book, Bostrom likened AI to children playing with a bomb that could go off at any time.
Bostrom explained, "There's a mismatch between our level of maturity in terms of our wisdom, our ability to cooperate as a species on the one hand and on the other hand our instrumental ability to use technology to make big changes in the world. It seems like we've grown stronger faster than we've grown wiser."
There are all kinds of exciting AI tools and applications that are beginning to affect the economy in many ways. These shouldn't be overshadowed by hype about a hypothetical future point where AIs gain the same general learning and planning abilities that humans have, or become superintelligent machines. These are two different contexts that require attention.
Today, the more imminent threat isn't from a superintelligence, but from the useful, yet potentially dangerous, applications AI is put to presently.
How is AI dangerous?
If we focus on what's possible today with AI, here are some of the potential negative impacts of artificial intelligence that we should consider and plan for:
Change the jobs humans do/job automation: AI will change the workplace and the jobs that humans do. Some jobs will be lost to AI technology, so humans will need to embrace the change and find new activities that provide the social and mental benefits their jobs once did.
Political, legal, and social ramifications: As Bostrom advises, rather than avoid pursuing AI innovation, "Our focus should be on putting ourselves in the best possible position so that when all the pieces fall into place, we've done our homework. We've developed scalable AI control methods, we've thought hard about the ethics and the governments, etc. And then proceed further and then hopefully have an extremely good outcome from that." If our governments and business institutions don't spend time now formulating rules, regulations, and responsibilities, there could be significant negative ramifications as AI continues to mature.
AI-enabled terrorism: Artificial intelligence will change the way conflicts are fought, from autonomous drones and robotic swarms to remote and nanorobot attacks. In addition to being concerned with a nuclear arms race, we'll need to monitor the global autonomous weapons race.
Social manipulation and AI bias: So far, AI is still at risk of being biased by the humans who build it. If there is bias in the data sets the AI is trained on, that bias will affect AI behavior. In the wrong hands, AI can be used, as it was in the 2016 U.S. presidential election, for social manipulation and to amplify misinformation.
AI surveillance: AI's face recognition capabilities give us conveniences such as unlocking phones and entering buildings without keys, but they have also launched what many civil liberties groups believe is alarming surveillance of the public. In China and other countries, police and governments are invading public privacy by using face recognition technology. Bostrom explains that AI's ability to monitor global information systems, drawing on surveillance data, cameras, and mined social network communication, has great potential for good and for bad.
Deepfakes: AI technology makes it very easy to create "fake" videos of real people. These can be used without an individual's permission to spread fake news, create pornography in the likeness of someone who never appeared in it, and more, damaging not only an individual's reputation but their livelihood. The technology is getting so good that the likelihood of people being duped by it is high.
As Nick Bostrom explained, "The biggest threat is the longer-term problem: introducing something radical that's superintelligent and failing to align it with human values and intentions. This is a big technical problem. We'd succeed at solving the capability problem before we succeed at solving the safety and alignment problem."
Today, Bostrom describes himself as a "frightful optimist" who is very excited about what AI can do if we get it right. He said, "The near-term effects are just overwhelmingly positive. The longer-term effect is more of an open question and is very hard to predict. If we do our homework, and the more we get our act together as a world and a species in whatever time we have available, the better we are prepared for this, the better the odds for a favorable outcome. In that case, it could be extremely favorable."
For more on AI and other technology trends, see Bernard Marr's new book Tech Trends in Practice: The 25 Technologies That Are Driving the 4th Industrial Revolution, which is available to pre-order now.
The Pentagon’s AI Shop Takes A Venture Capital Approach to Funding Tech – Defense One
The Joint Artificial Intelligence Center will take a "Series A, B" approach to building tech for customers, with product managers and mission teams. By Patrick Tucker
Military leaders who long to copy the way Silicon Valley funds projects should know: the Valley isn't the hit machine people think it is, says Nand Mulchandani, chief technical officer of the Pentagon's Joint Artificial Intelligence Center. The key is to follow the right venture capital model.
Mulchandani, a veteran of several successful startups, aims to ensure JAIC's investments in AI software and tools actually work out. So he is bringing a very specific venture-capital approach to the Pentagon.
Here's the plan: when a DoD agency or military branch asks JAIC for help with some mission or activity, the Center will assign a mission team of, essentially, customer representatives to figure out what agency data might be relevant to the problem.
Next, the JAIC will assign a product manager: not DoD's customary program manager, but a role imported from the tech industry.
He or she handles the actual building of the product, not the administrative logistics of running a program. "The product manager will gather customer needs, make those into product features, work with the program manager, ask, 'What does the product do? How is it priced?'" Mulchandani told Defense One in a phone conversation on Thursday.
The mission team and product manager will take a small part of the agency's data to the software vendors or programs that they hire to solve the problem. These vendors will need to prove their solution works before scaling up to take on all available data.
"We're going to have a Series A, a seed amount of money. You [the vendor] get a half a million bucks to curate the data, which tends to be the problem. Do the problem x in a very tiny way, taking sample data, seeing if an algorithm applies to it, and then scale it," Mulchandani said on Wednesday at an event hosted by the Intelligence and National Security Alliance, or INSA.
"In the venture capital industry, you take a large project, identify core risk factors, like team risk, customer risk, etc. You fund enough to take care of these risks and see if you can overcome the risks through a prototype or simulation, before you try to scale," he added later.
The customer must also plan to turn the product into a program of record or give it some other life outside of the JAIC.
That's very different from the way the Defense Department pays for tech today, he said. "The unit of currency in the DoD seems to be, 'Well, this was a great idea; let's stick a couple million bucks on it, see what happens.' We're not doing it that way anymore," he said on Wednesday.
The JAIC is working with the General Services Administration Centers of Excellence to create product manager roles in DoD and to figure out how to scale small solutions up. Recently, some members of the JAIC and the Centers of Excellence participated in a series of human-centered design workshops to determine essential roles and responsibilities for managing data assets across the areas in which the JAIC will be developing products, such as cybersecurity, healthcare, predictive maintenance, and business automation, according to a statement.
Mulchandani urges the Pentagon not to make a fetish of Silicon Valley. Without the right business and funding processes, many venture-backed startups fail just as badly as poorly thought-out government projects. You just don't hear about them.
"When you end up in a situation where there's too much capital chasing too few good ideas that are real, you end up in a situation where you are funding a lot of junk. What ends up happening [in Silicon Valley] is many of those companies just fail," he said Wednesday. "The problem in DOD is similar. How do you apply discipline up front, on a venture model, to fund the good stuff as opposed to funding a lot of junk and then seeing two or three products that become successful?"
Predicting the coronavirus outbreak: How AI connects the dots to warn about disease threats – GCN.com
Canadian artificial intelligence firm BlueDot has been in the news in recent weeks for warning about the new coronavirus days ahead of the official alerts from the Centers for Disease Control and Prevention and the World Health Organization. The company was able to do this by tapping different sources of information beyond official statistics about the number of cases reported.
BlueDot's AI algorithm, a type of computer program that improves as it processes more data, brings together news stories in dozens of languages, reports from plant and animal disease tracking networks, and airline ticketing data. The result is an algorithm that's better at simulating disease spread than algorithms that rely on public health data alone -- better enough to be able to predict outbreaks. The company uses the technology to predict and track infectious diseases for its government and private-sector customers.
Traditional epidemiology tracks where and when people contract a disease to identify the source of the outbreak and which populations are most at risk. AI systems like BlueDot's model how diseases spread in populations, which makes it possible to predict where outbreaks will occur and forecast how far and fast diseases will spread. So while the CDC and laboratories around the world race to find cures for the novel coronavirus, researchers are using AI to try to predict where the disease will go next and how much of an impact it might have. Both play a key role in facing the disease.
However, AI is not a silver bullet. The accuracy of AI systems is highly dependent on the amount and quality of the data they learn from. And how AI systems are designed and trained can raise ethical issues, which can be particularly troublesome when the technologies affect large swathes of a population about something as vital as public health.
It's all about the data
Traditional disease outbreak analysis looks at the location of an outbreak, the number of disease cases and the period of time -- the where, what and when -- to forecast the likelihood of the disease spreading in a short amount of time.
More recent efforts using AI and data science have expanded the "what" to include many different data sources, which makes it possible to make predictions about outbreaks. With the advent of Facebook, Twitter and other social and micro media sites, more and more data can be associated with a location and mined for knowledge about an event like an outbreak. The data can include medical worker forum discussions about unusual respiratory cases and social media posts about being out sick.
Much of this data is highly unstructured, meaning that computers can't easily understand it. Unstructured data can take the form of news stories, flight maps, messages on social media, check-ins from individuals, video and images. Structured data, on the other hand, such as numbers of reported cases by location, is more tabulated and generally doesn't need as much preprocessing for computers to interpret it.
Newer techniques such as deep learning can help make sense of unstructured data. These algorithms run on artificial neural networks, which consist of thousands of small interconnected processors, much like the neurons in the brain. The processors are arranged in layers, and data is evaluated at each layer and either discarded or passed on to the next layer. By cycling data through the layers in a feedback loop, a deep learning algorithm learns how to, for example, identify cats in YouTube videos.
Researchers teach deep learning algorithms to understand unstructured data by training them to recognize the components of particular types of items. For example, researchers can teach an algorithm to recognize a cup by training it with images of several types of handles and rims. That way it can recognize multiple types of cups, not just cups that have a particular set of characteristics.
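The layer-by-layer flow just described can be sketched in a few lines of NumPy. This is a toy illustration only: the layer sizes, random weights, and ReLU activation below are assumptions chosen for brevity, not any specific company's system.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Keep positive signals and discard the rest -- the "discard or pass on" step.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Evaluate the data at each layer in turn, passing results onward.
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# Three small layers: 4 input features -> 8 -> 8 -> 2 outputs.
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 8)), np.zeros(8)),
          (rng.normal(size=(8, 2)), np.zeros(2))]

batch = rng.normal(size=(16, 4))   # 16 examples, 4 features each
out = forward(batch, layers)
print(out.shape)  # prints (16, 2)
```

A real system would learn the weight matrices from labeled examples, as the paragraph above describes, rather than drawing them at random.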
Any AI model is only as good as the data used to train it. Too little data, and the results these disease-tracking models deliver can be skewed. Similarly, data quality is critical. It can be particularly challenging to control the quality of unstructured data, including crowd-sourced data. This requires researchers to carefully filter the data before feeding it to their models. This is perhaps one reason some researchers, including those at BlueDot, choose not to use social media data.
One way to assess data quality is by verifying the results of the AI models. Researchers need to check the output of their models against what unfolds in the real world, a process called ground truthing. Inaccurate predictions in public health, especially false positives, can lead to mass hysteria about the spread of a disease.
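As a concrete illustration, ground truthing can be as simple as lining up model alerts against recorded outcomes and counting the errors. The alert data below is made up for the example:

```python
# Hypothetical alerts from a disease model vs. what actually happened.
predicted = [True, True, False, True, False, False, True, False]
actual    = [True, False, False, True, False, True, True, False]

# A false positive is an alert with no real outbreak behind it.
false_positives = sum(p and not a for p, a in zip(predicted, actual))
true_negatives = sum(not p and not a for p, a in zip(predicted, actual))
fpr = false_positives / (false_positives + true_negatives)
print(f"false positive rate: {fpr:.2f}")  # prints 0.25
```

Tracking this rate against real-world outcomes is what lets researchers decide whether a model's alerts can be trusted.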
YC-backed Turing uses AI to help speed up the formulation of new consumer packaged goods – TechCrunch
One of the more interesting and useful applications of artificial intelligence technology has been in the world of biotechnology and medicine, where more than 220 startups (not to mention universities and bigger pharma companies) are now using AI to accelerate drug discovery, playing out the many permutations resulting from drug and chemical combinations, DNA and other factors.
Now, a startup called Turing, part of the current Y Combinator cohort due to present at the next Demo Day on March 22, is taking a similar principle but applying it to the world of building (and discovering) new consumer packaged goods products.
Using machine learning to simulate different combinations of ingredients plus desired outcomes to figure out optimal formulations for different goods (hence the Turing name, a reference to Alan Turing's mathematical model, the Turing machine), Turing is initially addressing the creation of products in home care (e.g. detergents), beauty, and food and beverage.
Turing's founders claim that it is able to save companies millions of dollars by reducing the average time it takes to formulate and test new products, from an average of 12 to 24 months down to a matter of weeks.
Specifically, the aim is to reduce all the time it takes to test combinations, giving R&D teams more time to be creative.
"Right now, they are spending more time managing experiments than they are innovating," Manmit Shrimali, Turing's co-founder and CEO, said.
Turing is in theory coming out of stealth today, but in fact it has already amassed an impressive customer list. It is already generating revenue by working with eight brands owned by one of the world's biggest CPG companies, and it is also being trialed by another major CPG behemoth. (Turing is not disclosing their names publicly, but suffice it to say, they and their brands are household names.)
"Turing aims to become the industry norm for formulation development, and we are here to play the long game," Shrimali said. "This requires creating an ecosystem that can help at each stage of growing and scaling the company, and YC just does this exceptionally well."
Turing is co-founded by Shrimali and Ajith Govind, two data science specialists who worked together on a previous startup called Dextro Analytics. Dextro had set out to help businesses use AI and other kinds of business analytics to identify trends and support decision making around marketing, business strategy and other operational areas.
While there, they identified a very specific use case for the same principles that was perhaps even more acute: the research and development divisions of CPG companies, which have (ironically, given their focus on the future) often been behind the curve when it comes to the digital transformation that has swept up a lot of other corporate departments.
"We were consulting for product companies and realised that they were struggling," Shrimali said. Add to that the fact that CPG is precisely the kind of legacy industry that is not natively technological but can most definitely benefit from implementing better technology, and that spells out an interesting opportunity for how (and where) to introduce artificial intelligence into the mix.
R&D labs play a specific and critical role in the world of CPG.
Before eventually being shipped into production, this is where products are discovered; tested; tweaked in response to input from customers, marketing, budgetary and manufacturing departments and others; then tested again; then tweaked again; and so on. One of the big clients that Turing works with spends close to $400 million on testing alone.
But R&D is under a lot of pressure these days. While these departments are seeing their budgets cut, the demands on them keep growing. They are still expected to meet timelines in producing new products (or, more often, extensions of products) to keep consumers interested. A new host of environmental and health concerns around goods with long lists of unintelligible ingredients means they have to figure out how to simplify and improve the composition of mass-market products. And smaller direct-to-consumer brands are undercutting their larger competitors by getting to market faster with competitive offerings that meet new consumer tastes and preferences.
"In the CPG world, everyone was focused on marketing, and R&D was a blind spot," Shrimali said, referring to the extensive investments that CPG companies have made into figuring out how to use digital channels to track and connect with users, and how better to distribute their products. "To address how to use technology better in R&D, people need strong domain knowledge, and we are the first in the market to do that."
Turing's focus is to speed up the formulation and testing aspects that go into product creation to cut down on some of the extensive overhead that goes into putting new products into the market.
Part of the reason it can take up to two years to create a new product is all the permutations that go into building something and making sure it works as consistently as a consumer would expect (while still being consistent in production and coming in within budget).
"If just one ingredient is changed in a formulation, it can change everything," Shrimali noted. And so in the case of something like a laundry detergent, this means running hundreds of tests on hundreds of loads of laundry to make sure that it works as it should.
The Turing platform brings in historical data from a number of past permutations and tests to essentially virtualise all of this: it suggests optimal mixes and predicted outcomes without the need to run costly physical tests, and in turn each result teaches the platform to address future tests and formulations. Shrimali said that the platform has already saved one of the brands some $7 million in testing costs.
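In spirit, this kind of in-silico screening amounts to scoring candidate formulations with a model fitted to historical test data and keeping the best. A minimal sketch; the ingredients, concentration grid, and scoring function below are invented for illustration and are not Turing's actual model:

```python
from itertools import product

# Stand-in for a model fitted to historical test results: predicted
# cleaning performance as a function of two ingredient concentrations.
# (Invented here; a real system would learn this from past experiments.)
def predicted_performance(surfactant, enzyme):
    return -(surfactant - 0.30) ** 2 - (enzyme - 0.10) ** 2

# Screen a grid of candidate formulations instead of running lab tests.
levels = [round(0.05 * i, 2) for i in range(1, 11)]   # 0.05 .. 0.50
best = max(product(levels, levels), key=lambda c: predicted_performance(*c))
print(best)  # prints (0.3, 0.1)
```

Each physical test that does get run can then be fed back into the model, which is the feedback loop the paragraph above describes.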
Turing's place working with R&D gives the company some interesting insights into shifts the wider industry is undergoing. Currently, Shrimali said, one of the biggest priorities for CPG giants is addressing the demand for more traceable, natural and organic formulations.
While no single DTC brand will ever fully eat into the market share of any CPG brand, collectively their presence and resonance with consumers is clearly causing a shift. Sometimes that will lead to acquisitions of the smaller brands, but more generally it reflects a change in consumer demands that the CPG companies are trying to meet.
Longer term, the plan is for Turing to apply its platform to other aspects that are touched by R&D beyond the formulations of products. The thinking is that changing consumer preferences will also lead to a demand for better formulations for the wider product, including more sustainable production and packaging. And that, in turn, represents two areas into which Turing can expand, introducing potentially other kinds of AI technology (such as computer vision) into the mix to help optimise how companies build their next generation of consumer goods.
Hailo raises $60 million to accelerate the launch of its AI edge chip – VentureBeat
Hailo, a startup developing hardware designed to speed up AI inferencing at the edge, today announced that it has raised $60 million in series B funding led by previous and new strategic investors. CEO Orr Danon says the tranche will be used to accelerate the rollout of Hailo's Hailo-8 chip, which was officially detailed in May 2019 ahead of an early 2020 ship date: a chip that enables devices to run algorithms that previously would have required a datacenter's worth of compute. Hailo-8 could give edge devices far more processing power than before, enabling them to perform AI tasks without the need for a cloud connection.
"The new funding will help us [deploy to] areas such as mobility, smart cities, industrial automation, smart retail and beyond," said Danon in a statement, adding that Hailo is in the process of attaining certification for ASIL-B at the chip level (and ASIL-D at the system level) and that it is AEC-Q100 qualified.
Hailo-8, which Hailo says it has been sampling for over a year with select partners, features an architecture (Structure-Defined Dataflow) that ostensibly consumes less power than rival chips while incorporating memory, software control, and a heat-dissipating design that eliminates the need for active cooling. Under the hood of the Hailo-8, resources including memory, control, and compute blocks are distributed throughout the whole of the chip. Hailo's software, which supports Google's TensorFlow machine learning framework and ONNX (an open format built to represent machine learning models), analyzes the requirements of each AI algorithm and allocates the appropriate modules.
Hailo-8 is capable of 26 tera-operations per second (TOPS), which works out to 2.8 TOPS per watt. Here's how that compares with the competition:
In a recent benchmark test conducted by Hailo, the Hailo-8 outperformed hardware like Nvidia's Xavier AGX on several AI semantic segmentation and object detection benchmarks, including ResNet-50. At an image resolution of 224 x 224, it processed 672 frames per second compared with the Xavier AGX's 656 frames, and drew only 1.67 watts (equating to 2.8 TOPS per watt) versus the Nvidia chip's 32 watts (0.14 TOPS per watt).
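Taking the quoted benchmark figures at face value, the throughput-per-watt gap works out as follows:

```python
# Throughput-per-watt from the benchmark figures quoted above.
hailo_fps, hailo_watts = 672, 1.67
xavier_fps, xavier_watts = 656, 32.0

hailo_eff = hailo_fps / hailo_watts     # frames per second per watt
xavier_eff = xavier_fps / xavier_watts

print(f"Hailo-8:    {hailo_eff:.0f} fps/W")   # roughly 402 fps/W
print(f"Xavier AGX: {xavier_eff:.1f} fps/W")  # 20.5 fps/W
```

On these numbers the throughput is nearly identical; the claimed advantage is almost entirely in power draw.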
Hailo says it's working to build the Hailo-8 into products from OEMs and tier-1 automotive companies in fields such as advanced driver-assistance systems (ADAS) and industries like robotics, smart cities, and smart homes. In the future, Danon expects the chip will make its way into fully autonomous vehicles, smart cameras, smartphones, drones, AR/VR platforms, and perhaps even wearables.
In addition to existing investors, NEC Corporation, Latitude Ventures, and the venture arm of industrial automation and robotics company ABB (ABB Technology Ventures) also participated in the series B. It brings three-year-old, Tel Aviv-based Hailo's total venture capital raised to date to $88 million.
It's worth noting that Hailo has plenty in the way of competition. Startups AIStorm, Esperanto Technologies, Quadric, Graphcore, Xnor, and Flex Logix are developing chips customized for AI workloads, and they're far from the only ones. The machine learning chip segment was valued at $6.6 billion in 2018, according to Allied Market Research, and it is projected to reach $91.1 billion by 2025.
Mobileye, the Tel Aviv company Intel acquired for $15.3 billion in March 2017, offers a computer vision processing solution for AVs in its EyeQ product line. Baidu in July unveiled Kunlun, a chip for edge computing on devices and in the cloud via datacenters. Chinese retail giant Alibaba said it launched an AI inference chip for autonomous driving, smart cities, and logistics verticals in the second half of 2019. And looming on the horizon is Intel's Nervana, a chip optimized for image recognition that can distribute neural network parameters across multiple chips, achieving very high parallelism.
AI Is Growing, But The Robots Are Not Coming For Customer Service – Forbes
Recent data out of the World Economic Forum in Davos has shed new light on the role that AI and customer service are playing in shaping the future of work. Jobs of Tomorrow: Mapping Opportunity in the New Economy provides much-needed insights into emerging global employment opportunities and the skill sets needed to maximize those opportunities. Interestingly, the report, supported by data from LinkedIn, found that demand for both digital and human factors is fueling growth in the jobs of tomorrow, raising important considerations for a breadth of industries worldwide.
The report predicts that in the next three years, 37% of job openings in emerging professions will be in the care economy; 17% in sales, marketing and content; 16% in data and AI; 12% in engineering and cloud computing; and 8% in people and culture. The roles with the fastest projected growth include specialists in both AI and customer success, underscoring the need for technology, yes, but technology that incorporates the human touch.
Taking A Closer Look At The DTC Landscape
This increasing demand for digital-human hybrid solutions is all around us. We don't need to look further than the rising crop of DTC retail brands (the Dollar Shave Clubs, Bonobos and Glossiers of the world) to see and understand the critical role this hybrid approach can play, especially when it comes to transforming the customer experience. Today's DTC brands have figured out how to harness AI technology to provide intelligent and personalized customer service from start to finish, moving the customer experience from a back-end cost center to a front-and-center brand differentiator, loyalty builder and, ultimately, profit center.
There's been much chatter and speculation about how so many DTC brands have been able to go from zero to 60 in a relatively short amount of time, and, of course, about the ones that have attained elite unicorn status. While many factors have contributed to this growth, one of the most interesting (and obvious) is the tremendous opportunity that selling directly to the consumer offers. By eliminating the middleman, the salesperson of yore, brands are able to put the consumer center stage and focus on meeting the full spectrum of their needs through a richer understanding of each stage of their journey. Nowadays, an entire ecosystem is forming around the customer, with a suite of platforms and services designed to handle everything from marketing to payments to delivery and shipping. But without customer service as the human touch point, this ecosystem would crumble like a precarious house of cards.
This brings us back to the Davos report and why the growing demand for AI and customer service specialists makes so much sense. It's projected that AI will create nearly $3 trillion in business value by 2021 and that AI usage in customer service will increase by 143% by late 2020. At the same time, leading companies understand that AI solutions are most effective when they work hand in hand with humans, not instead of them. And with more and more customer service departments on the frontlines, serving as the main voice of the company, the need for practical AI solutions becomes more urgent. After all, 77% of customers expect their problem to be solved immediately upon contacting customer service, but most brands simply can't afford to have unlimited agents working 24 hours a day, seven days a week. By relying on AI, companies can promote more self-service and eliminate agents' tedious and menial tasks, freeing them to focus on the bigger picture: building long-lasting customer relationships and more authentic engagement.
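To make the hybrid idea concrete: the simplest form of AI-assisted self-service is a triage layer that answers routine questions automatically and escalates everything else to a person. The sketch below is purely illustrative (all keywords, answers and function names are hypothetical, not any vendor's actual product); real systems use trained intent classifiers rather than keyword matching, but the division of labor is the same.

```python
# Minimal sketch of a hybrid self-service flow: the bot handles
# routine questions itself and hands everything else to a human agent.
FAQ_ANSWERS = {
    "refund": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "password": "Use the 'Forgot password' link on the login page.",
}

def triage(message: str) -> tuple[str, bool]:
    """Return (reply, escalated). Escalate when no keyword matches."""
    text = message.lower()
    for keyword, answer in FAQ_ANSWERS.items():
        if keyword in text:
            return answer, False               # bot resolves it instantly
    return "Connecting you to an agent...", True  # human takes over

# Routine question: answered with no agent involved.
reply, escalated = triage("Where is my shipping confirmation?")
```

The point of the design is the fallback: the bot only deflects what it can answer confidently, so agents see the ambiguous, high-stakes conversations where the human touch actually matters.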
Beyond DTC: How AI Is Transforming Other Industries
While the growth of AI-powered customer service is prominent across the DTC landscape, DTC is far from the only industry experiencing the digital-human crossover. In healthcare, for example, AI is being used to augment patient care and develop drugs. The startup Sense.ly, for instance, has developed Molly, a digital nurse that helps monitor patient wellness between doctor visits. And during the recent Ebola scare, a program powered by AI was used to scan existing medicines that could be redesigned to fight the disease, instead of waiting for lengthy and costly clinical trial programs to be completed.
The travel industry has also been disrupted by AI, which is helping travel companies provide personalized and intelligent travel solutions and recommendations tailored to customer needs. The AI system at the Dorchester Collection hotel chain pores through thousands of online customer reviews to pinpoint what matters most to customers, a process that would otherwise take weeks. Google Flights uses AI to predict flight delays before the airlines even announce them, and Lufthansa's bot Mildred helps customers find the cheapest flights, freeing up time for airline employees to focus on more creative tasks.
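The review-mining idea can be illustrated in a few lines. This is a toy sketch, not the Dorchester Collection's actual system (which would rely on far more sophisticated natural-language models); simple frequency counting is enough to show how software can surface what guests mention most across thousands of reviews in seconds rather than weeks.

```python
from collections import Counter

# Toy sketch: surface the most-mentioned topics in guest reviews
# by counting non-trivial words across the whole corpus.
STOPWORDS = {"the", "was", "and", "a", "is", "to", "it", "very"}

def top_topics(reviews: list[str], n: int = 3) -> list[str]:
    counts = Counter(
        word
        for review in reviews
        for word in review.lower().split()
        if word not in STOPWORDS
    )
    return [word for word, _ in counts.most_common(n)]

reviews = [
    "the breakfast was wonderful",
    "breakfast service was slow",
    "wonderful service and breakfast",
]
# top_topics(reviews) ranks "breakfast" first across these reviews.
```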
In these and other industries, the possibilities for AI seem limitless. But the need for human oversight of AI cannot be discounted.
What Can We Learn?
Although some still fear that AI will eventually automate everything and humans will be replaced by robots, this really isn't true. If anything, the current climate demands human involvement to help the industries and brands of today navigate the evolving business landscape. While AI is changing the skills that the jobs of tomorrow will require, we are also reaching a point in time when AI is elevating the role of people, not vice versa. When technology and humans interact seamlessly to improve the way we work, both businesses and their consumers will be able to reap more and more benefits. Whether it's to enhance the DTC customer journey, discover new medicines or better plan a trip, we know that the jobs of the future will be filled with a healthy balance of advancing technology and human interaction to ensure customer satisfaction at all costs.
Originally posted here:
AI Is Growing, But The Robots Are Not Coming For Customer Service - Forbes