Here's why AI didn't save us from COVID-19 – The Next Web

When the COVID-19 pandemic began, we were all so full of hope. We assumed our technology would save us from a disease that could be stymied by such modest steps as washing our hands and wearing face masks. We were so sure that artificial intelligence would become our champion in a trial by combat with the coronavirus that we abandoned any pretense of fear the moment the curve appeared to flatten in April and May. We let our guard down.

Pundits and experts back in January and February very carefully explained how AI solutions such as contact tracing, predictive modeling, and chemical discovery would lead to a truncated pandemic. Didn't most of us figure we'd be back to business as usual by mid to late June?

But June turned to July, and now we're seeing record case numbers on a daily basis. August looks to be brutal. Despite being home to nearly all of the world's largest technology companies, the US has become the epicenter of the outbreak. Other nations with advanced AI programs aren't necessarily faring much better.

Among the countries experts would consider competitive with the US in the field of AI, nearly all have lost their handle on the outbreak: China, Russia, the UK, South Korea, and so on. It's bad news all the way down.

Figuring out why requires a combination of retrospect and patience. We're not far enough through the pandemic to understand exactly what's gone wrong; this thing is far too alive and kicking for a post-mortem. But we can certainly see where AI hype is currently leading us astray.

Among the many early promises made by the tech community and the governments depending on it was the idea that contact tracing would make targeted reopenings possible. The big idea was that AI could sort out who else a person who contracted COVID-19 may have infected. More magical AI would then figure out how to keep the healthies away from the sicks, and we'd be able to both quarantine and open businesses at the same time.

This is an example of the disconnect between AI devs and general reality. A system wherein people allow the government to track their every movement can only work with complete participation from a population with absolute faith in its government. Worse, the more infections you have, the less reliable contact tracing becomes.

That's why only a handful of small countries even went so far as to try it, and as far as we know, there isn't any current data showing this approach actually mitigates the spread of COVID-19.

The next big area where AI was supposed to help was in modeling. For a time, the entire technology news cycle was dominated by headlines declaring that AI had first discovered the COVID-19 threat and that machine learning would determine exactly how the virus would spread.

Unfortunately, modeling a pandemic isn't an exact science. You can't train a neural network on data from past COVID-19 pandemics because there aren't any; this coronavirus is novel. That means our models started with guesses and were subsequently trained on up-to-date data from the unfolding pandemic.
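
It may help to make that concrete. Below is a minimal, purely illustrative sketch (the case counts are invented, and this is not any agency's actual model) of what "training on up-to-date data" means in practice: a simple exponential-growth fit whose estimate keeps shifting as each new day of observations arrives.

```python
# Illustrative only: fit a simple exponential-growth curve to case counts as they
# arrive, to show how an estimate changes when a model only sees the data so far.
import numpy as np

def fit_growth_rate(daily_cases):
    """Least-squares fit of log(cases) = a + r*t; returns the daily growth rate r."""
    t = np.arange(len(daily_cases))
    log_cases = np.log(np.asarray(daily_cases, dtype=float))
    r, a = np.polyfit(t, log_cases, 1)  # slope first, then intercept
    return r

# Hypothetical reported counts for the first two weeks of an outbreak.
cases = [5, 7, 11, 16, 22, 31, 40, 58, 75, 102, 130, 170, 215, 280]

# Re-fit the model as each new day of data "arrives" and watch the estimate move.
for day in range(5, len(cases) + 1):
    r = fit_growth_rate(cases[:day])
    print(f"day {day:2d}: est. growth {r:.2f}/day, doubling every {np.log(2) / r:.1f} days")
```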

To put this in perspective: using on-the-fly data to model a novel pandemic is the equivalent of knowing you have at least a million dollars in pennies, but only being able to talk about the amount you've physically counted in any given period of time.

In other words: our AI models haven't proven much better than our best guesses. And they can only show us a tiny part of the overall picture, because we're only working with the data we can actually see. Up to 80 percent of COVID-19 carriers are asymptomatic, and a mere fraction of all possible carriers have been tested.

What about testing? Didn't AI make testing easier? Kind of, but not really. AI has made a lot of things easier for the medical community, but perhaps not in the way you think. There isn't a test bot that you can pour a vial of blood into to get an instant green-or-red infected indicator. The best we've got, for the most part, is background AI that generally helps the medical world run.

Sure, there are some targeted solutions from the ML community helping frontline professionals deal with the pandemic. We're not taking anything away from the thousands of developers working hard to solve problems. But, realistically, AI isn't providing game-changing solutions that face up against the major pandemic problems.

It's making sure truck drivers know which supplies to deliver first. It's helping nurses autocorrect their emails. It's managing traffic lights in some cities, which helps get ambulances and emergency responders around.

And it's even making pandemic life easier for regular folks too. The fact that you're still getting packages (even if they're delayed) is a testament to the power of AI. Without algorithms, Amazon and its delivery pipeline would not be able to maintain the infrastructure necessary to ship you a set of fuzzy bunny slippers in the middle of a pandemic.

AI is useful during the pandemic, but it's not out there finding the vaccine. We've spent the last few years here at TNW talking about how AI will one day make chemical compound discovery a trivial matter. Surely finding the proper sequence of proteins or figuring out exactly how to mutate a COVID-killer virus is all in a day's work for today's AI systems, right? Not so much.

Despite the fact that Google and NASA told us we'd reached quantum supremacy last year, we haven't seen useful quantum algorithms running on cloud-accessible quantum computers like we've been told we would. Scientists and researchers almost always tout chemical discovery as one of the hard problems that quantum computers can solve. But nobody knows when. What we do know is that today, in 2020, humans are still painstakingly building a vaccine. When it's finished, it'll be squishy meatbags who get the credit, not quantum robots.

In times of peace, every new weapon looks like the end-all-be-all solution until you test it. We haven't had many giant global emergencies to test our modern AI on. It's done well with relatively small-scale catastrophes like hurricanes and wildfires, but it's been relegated to the rear echelon of the pandemic fight because AI simply isn't mature enough yet to think outside of the boxes we build it in.

At the end of the day, most of our pandemic problems are human problems. The science is extremely clear: wear a mask, stay more than six feet away from each other, and wash your hands. This isn't something AI can directly help us with.

But that doesn't mean AI isn't important. The lessons learned by the field this year will go a long way towards building more effective solutions in the years to come. Here's hoping this pandemic doesn't last long enough for these yet-undeveloped systems to become important in the fight against COVID-19.

Published July 24, 2020 19:21 UTC

How Hospitals Are Using AI to Battle Covid-19 – Harvard Business Review

Executive Summary

The spread of Covid-19 is stretching operational systems in health care and beyond. The reason is both simple and terrifying: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models. While we race to dampen the virus's spread, we can optimize our response mechanisms, digitizing as many steps as possible. Here's how some hospitals are employing artificial intelligence to handle the surge of patients.

On Monday, March 9, in an effort to address soaring patient demand in Boston, Partners HealthCare went live with a hotline for patients, clinicians, and anyone else with questions and concerns about Covid-19. The goals are to identify and reassure the people who do not need additional care (the vast majority of callers), to direct people with less serious symptoms to relevant information and virtual care options, and to direct the smaller number of high-risk and higher-acuity patients to the most appropriate resources, including testing sites, newly created respiratory illness clinics, or in certain cases, emergency departments. As the hotline became overwhelmed, the average wait time peaked at 30 minutes. Many callers gave up before they could speak with the expert team of nurses staffing the hotline. We were missing opportunities to facilitate pre-hospital triage to get the patient to the right care setting at the right time.

The Partners team, led by Lee Schwamm, Haipeng (Mark) Zhang, and Adam Landman, began considering technology options to address the growing need for patient self-triage, including interactive voice response systems and chatbots. We connected with Providence St. Joseph Health system in Seattle, which served some of the country's first Covid-19 patients in early March. In collaboration with Microsoft, Providence built an online screening and triage tool that could rapidly differentiate between those who might really be sick with Covid-19 and those who appear to be suffering from less threatening ailments. In its first week, Providence's tool served more than 40,000 patients, delivering care at an unprecedented scale.

Our team saw potential for this type of AI-based solution and worked to make a similar tool available to our patient population. The Partners Covid-19 Screener provides a simple, straightforward chat interface, presenting patients with a series of questions based on content from the U.S. Centers for Disease Control and Prevention (CDC) and Partners HealthCare experts. In this way, it too can screen enormous numbers of people and rapidly differentiate between those who might really be sick with Covid-19 and those who are likely to be suffering from less threatening ailments. We anticipate this AI bot will alleviate high volumes of patient traffic to the hotline, and extend and stratify the system's care in ways that would have been unimaginable until recently. Development is now under way to facilitate triage of patients with symptoms to the most appropriate care setting, including virtual urgent care, primary care providers, respiratory illness clinics, or the emergency department. Most importantly, the chatbot can also serve as a near-instantaneous dissemination method for supporting our widely distributed providers, as we have seen the need for frequent clinical triage algorithm updates based on a rapidly changing landscape.
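
For readers curious about what sits behind a screener like this, here is a deliberately simplified, hypothetical sketch of a rule-based triage flow. The questions, thresholds, and routing below are invented for illustration; they are not the Partners screener's actual logic, and they are not medical guidance.

```python
# Hypothetical triage rules for a screening chatbot; illustrative only.
def triage(answers):
    """answers: dict of booleans collected by the bot's question flow."""
    if answers.get("severe_breathing_difficulty") or answers.get("chest_pain"):
        return "emergency department"
    if answers.get("fever") and answers.get("cough") and answers.get("high_risk_condition"):
        return "respiratory illness clinic / testing site"
    if answers.get("fever") or answers.get("cough"):
        return "virtual urgent care"
    return "self-care information and monitoring"

print(triage({"fever": True, "cough": True, "high_risk_condition": False}))
# -> virtual urgent care
```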

Similarly, at both Brigham and Women's Hospital and Massachusetts General Hospital, physician researchers are exploring the potential use of intelligent robots developed at Boston Dynamics and MIT to deploy in Covid surge clinics and inpatient wards to perform tasks (obtaining vital signs or delivering medication) that would otherwise require human contact, in an effort to mitigate disease transmission.

Several governments and hospital systems around the world have leveraged AI-powered sensors to support triage in sophisticated ways. Chinese technology company Baidu developed a no-contact infrared sensor system to quickly single out individuals with a fever, even in crowds. Beijing's Qinghe railway station is equipped with this system to identify potentially contagious individuals, replacing a cumbersome manual screening process. Similarly, Florida's Tampa General Hospital deployed an AI system in collaboration with Care.ai at its entrances to intercept individuals with potential Covid-19 symptoms from visiting patients. Through cameras positioned at entrances, the technology conducts a facial thermal scan and picks up on other symptoms, including sweat and discoloration, to ward off visitors with fever.

Beyond screening, AI is being used to monitor Covid-19 symptoms, provide decision support for CT scans, and automate hospital operations. Zhongnan Hospital in China, for example, uses an AI-driven CT scan interpreter that identifies Covid-19 when radiologists aren't available. China's Wuhan Wuchang Hospital established a smart field hospital staffed largely by robots. Patient vital signs were monitored using connected thermometers and bracelet-like devices. Intelligent robots delivered medicine and food to patients, alleviating physician exposure to the virus and easing the workload of health care workers experiencing exhaustion. And in South Korea, the government released an app allowing users to self-report symptoms, alerting them if they leave a quarantine zone, in order to curb the impact of super-spreaders who would otherwise go on to infect large populations.

The spread of Covid-19 is stretching operational systems in health care and beyond. We have seen shortages of everything, from masks and gloves to ventilators, and from emergency room capacity to ICU beds to the speed and reliability of internet connectivity. The reason is both simple and terrifying: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models.

While we race to dampen the virus's spread, we can optimize our response mechanisms, digitizing as many steps as possible. This is because traditional processes, those that rely on people to function in the critical path of signal processing, are constrained by the rate at which we can train, organize, and deploy human labor. Moreover, traditional processes deliver decreasing returns as they scale. Digital systems, on the other hand, can be scaled up without such constraints, at virtually infinite rates. The only theoretical bottlenecks are computing power and storage capacity, and we have plenty of both. Digital systems can keep pace with exponential growth.
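
A toy calculation makes the scaling argument vivid. The numbers below are invented; the point is only that capacity added at a fixed rate is quickly overtaken by demand that compounds.

```python
# Linear capacity growth (hiring and training staff) versus exponential demand growth.
capacity, demand = 1000, 100          # hypothetical starting calls per day
for week in range(1, 9):
    capacity += 200                   # linear: add a fixed number of trained staff
    demand *= 2                       # exponential: call volume doubles weekly
    print(f"week {week}: capacity {capacity:5d}, demand {int(demand):6d}")
# Demand overtakes capacity within a few weeks, no matter how steady the hiring.
```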

Importantly, AI for health care must be balanced by the appropriate level of human clinical expertise for final decision-making, to ensure we are delivering high-quality, safe care. In many cases, human clinical reasoning and decision-making cannot be easily replaced by AI; rather, AI is a decision aid that helps humans improve effectiveness and efficiency.

Digital transformation in health care has been lagging behind other industries. Our response to Covid today has accelerated the adoption and scaling of virtual and AI tools. From the AI bots deployed by Providence and Partners HealthCare to the Smart Field Hospital in Wuhan, rapid digital transformation is being employed to tackle the exponentially growing Covid threat. We hope and anticipate that after Covid-19 settles, we will have transformed the way we deliver health care in the future.

Artificial Intelligence’s Power, and Risks, Explored in New Report – Education Week

Picture this: a small group of middle school students are learning about ancient Egypt, so they strap on a virtual reality headset and, with the assistance of an artificial intelligence tour guide, begin to explore the Pyramids of Giza.

The teacher, also journeying to one of the oldest known civilizations via a VR headset, has assigned students to gather information to write short essays. During the tour, the AI guide fields questions from students, points them to specific artifacts, and discusses what they see.

In preparing the AI-powered lesson on Egypt, the teacher would have worked beforehand with the AI program to craft a lesson plan that not only dives deep into the subject, but also figures out how to keep the group moving through the virtual field trip and how to create more equal participation during the discussion. In that scenario, the AI listens, observes, and interacts naturally to enhance a group learning experience, and to make a teacher's job easier.

That classroom scenario doesn't quite exist yet, but it's one example of AI's potential to transform students' academic experiences, as described in a new report that also warns of the risks to privacy and of students being treated unfairly by the technology's algorithms. Experts in the field of K-12 and AI say the day is coming when teachers will engage with AI in a way that goes beyond simply reading metrics off a dashboard, to form actual partnerships that achieve end goals for students together.

The recently released report is from the Center for Integrative Research in Computing and Learning Sciences, a hub for National Science Foundation-funded projects that focus on cyberlearning. It looks at how AI could shape K-12 education in the future, along with pressing questions centered on privacy, bias, transparency, and fairness.

The report summarizes a two-day online panel that featured 22 experts in AI and learning and provides a set of recommendations for school leaders, policymakers, academics and education vendors to consider as general AI research progresses in leaps and bounds and technology is integrated into classrooms at an accelerated pace due to COVID-19.

It also provides some concrete visions for new and expanded uses of AI in K-12, from reimagined automated essay scoring and next-level assessments to AI used in combination with virtual reality and voice- or gesture-based systems.

Researchers expect it to be about five to 10 years before AI can work in lockstep with teachers as classroom partners, in a process they have dubbed orchestration.

That describes an educator offloading time-consuming classroom tasks to AI, such as forming groups, creating lesson plans, and helping students work together to revise essays, and, eventually, monitoring progress toward bigger goals.

The report cautions, however, that experts are concerned about the tendency to overpromise what AI can do and to overgeneralize beyond today's limited capabilities.

The researchers also touched on longstanding risks related to AI and education, such as privacy, security, bias, transparency, and fairness, and went further to discuss design risks and how poor design practices of AI systems could unintentionally harm users.

While the fusion of AI and K-12 is far from new, the technology's impact in the classroom so far has been small in scale, according to the report.

That's set to potentially change, and researchers who participated in the CIRCLS online panel made clear that decision-makers must not delay when it comes to planning and ensuring that AI in K-12 is used in a manner that is equitable, ethical, and effective, and that its weaknesses, risks, and potential harms are mitigated.

"We do not yet know all of the uses and applications of AI that will emerge; new innovations are appearing regularly and the most consequential applications of AI to education are likely not even invented yet," the report says. "In a future where technology is ubiquitous in education, AI will also become pervasive in learning, teaching, and assessment. Now is the time to begin responding to the novel capabilities and challenges this will bring."

AI Could Revolutionize War as Much as Nukes – WIRED

In 1899, the world's most powerful nations signed a treaty at The Hague that banned military use of aircraft, fearing the emerging technology's destructive power. Five years later the moratorium was allowed to expire, and before long aircraft were helping to enable the slaughter of World War I. "Some technologies are so powerful as to be irresistible," says Greg Allen, a fellow at the Center for a New American Security, a non-partisan Washington, DC think tank. "Militaries around the world have essentially come to the same conclusion with respect to artificial intelligence."

Allen is coauthor of a new 132-page report on the effect of artificial intelligence on national security. One of its conclusions is that the impact of technologies such as autonomous robots on war and international relations could rival that of nuclear weapons. The report was produced by Harvard's Belfer Center for Science and International Affairs, at the request of IARPA, the research agency of the Office of the Director of National Intelligence. It lays out why technologies like drones with bird-like agility, robot hackers, and software that generates photo-real fake video are on track to make the American military and its rivals much more powerful.

New technologies like those can be expected to bring with them a series of excruciating moral, political, and diplomatic choices for America and other nations. Building up a new breed of military equipment using artificial intelligence is one thing; deciding what uses of this new power are acceptable is another. The report recommends that the US start considering what uses of AI in war should be restricted using international treaties.

The US military has been funding, testing, and deploying various shades of machine intelligence for a long time. In 2001, Congress even mandated that one-third of ground combat vehicles should be uncrewed by 2015, a target that has been missed. But the Harvard report argues that recent, rapid progress in artificial intelligence that has invigorated companies such as Google and Amazon is poised to bring an unprecedented surge in military innovation. "Even if all progress in basic AI research and development were to stop, we would still have five or 10 years of applied research," Allen says.

In the near term, America's strong public and private investment in AI should give it new ways to cement its position as the world's leading military power, the Harvard report says. For example, nimbler, more intelligent ground and aerial robots that can support or work alongside troops would build on the edge in drones and uncrewed ground vehicles that has been crucial to the US in Iraq and Afghanistan. That should mean any given mission requires fewer human soldiers, if any at all.

The report also says that the US should soon be able to significantly expand its powers of attack and defense in cyberwar by automating work like probing and targeting enemy networks or crafting fake information. Last summer, to test automation in cyberwar, Darpa staged a contest in which seven bots attacked each other while also patching their own flaws.

As time goes on, improvements in AI and related technology may also shake up the balance of international power by making it easier for smaller nations and organizations to threaten big powers like the US. Nuclear weapons may be easier than ever to build, but they still require resources, technologies, and expertise in relatively short supply. Code and digital data tend to get cheap, or end up spreading around for free, fast. Machine learning has become widely used, and image and facial recognition now crop up in science fair projects.

The Harvard report warns that the commoditization of technologies such as drone delivery and autonomous passenger vehicles could turn them into powerful tools of asymmetric warfare. ISIS has already started using consumer quadcopters to drop grenades on opposing forces. Similarly, techniques developed to automate cyberwar can probably be expected to find their way into the vibrant black market in hacking tools and services.

You could be forgiven for starting to sweat at the thought of nation-states fielding armies of robots that decide for themselves whether to kill. Some people who have helped build up machine learning and artificial intelligence already are. More than 3,000 researchers, scientists, and executives from companies including Microsoft and Google signed a 2015 letter to the Obama administration asking for a ban on autonomous weapons. "I think most people would be very uncomfortable with the idea that you would launch a fully autonomous system that would decide when and if to kill someone," says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence and a signatory to the 2015 letter. He concedes, though, that it might just take one country deciding to field killer robots to set others changing their minds about autonomous weapons. "Perhaps a more realistic scenario is that countries do have them, and abide by a strict treaty on their use," he says. In 2012, the Department of Defense set a temporary policy requiring a human to be involved in decisions to use lethal force; it was updated to be permanent in May this year.

The Harvard report recommends that the National Security Council, DoD, and State Department start studying now what internationally agreed-on limits ought to be imposed on AI. Miles Brundage, who researches the impacts of AI on society at the University of Oxford, says there's reason to think that AI diplomacy can be effective, if countries can avoid getting trapped in the idea that the technology is a race in which there will be one winner. "One concern is that if we put such a high premium on being first, then things like safety and ethics will go by the wayside," he says. "We saw in the various historical arms races that collaboration and dialog can pay dividends."

Indeed, the fact that there are only a handful of nuclear states in the world is proof that very powerful military technologies are not always irresistible. "Nuclear weapons have proven that states have the ability to say, 'I don't even want to have this technology,'" Allen says. Still, the many potential uses of AI in national security suggest that the self-restraint of the US, its allies, and its adversaries is set to get quite a workout.

UPDATE 12:50 pm ET 07/19/17: An earlier version of this story incorrectly said the Department of Defense's directive on autonomous weapons was due to expire this year.

An IBM AI Invented Perfumes That’ll Soon Sell in 4,000 Stores

Eau de AI

We already knew that IBM’s artificial intelligence (AI) Watson was a “Jeopardy” champion that dabbled in cancer diagnosis and life insurance.

Now, according to a new story in Vox, a cousin of Watson has accomplished an even finer task: creating perfumes. Soon, these AI-invented perfumes will go on sale in about 4,000 locations in Brazil.

Smell Curve

Fragrances have traditionally been created by sought-after perfumers. But Symrise, a German fragrance company with clients including Avon and Coty, recently struck a deal with IBM to see how its AI tools could be used to modernize the process.

IBM created an algorithm — its name is Philyra, according to Datanami — that studies fragrance formulas and customer data, then spits out new perfumes inspired by those training materials.

Nose Warmer

The first results of the collaboration are two scents, and the AI-invented perfumes will soon go on sale at O Boticário, a prominent Brazilian beauty chain. Spokespeople declined to confirm to Vox precisely which fragrances were invented by the AI.

It’s a sure sign, though, that AI creations are starting to leak out into the world of consumer products — and next time you sniff a sampler, it’s possible that it wasn’t formulated by a human at all.

READ MORE: Is AI the Future of Perfume? IBM Is Betting on It [Vox]

More on IBM AI: IBM’s New AI Can Predict Psychosis in Your Speech

What Salesforce Einstein teaches us about enterprise AI – VentureBeat

Every business has customers. Every customer needs care. That's why CRM is so critical to enterprises, but between incomplete data and clunky workflows, sales and marketing operations at most companies are less than optimal.

At the same time, companies that aren't Google or Facebook don't have billion-dollar R&D budgets to build out AI teams to take away our human inefficiencies. Even companies with the right technical talent don't have the petabytes of data that the tech titans use to train cutting-edge neural network models.

Salesforce hopes to plug this AI knowledge gap with Einstein. According to Chief Scientist Richard Socher, Einstein is an AI layer, not a standalone product, that infuses AI features and capabilities across all the Salesforce Clouds.

The 150,000+ companies that already use Salesforce should be able to simply flip a switch and deploy AI capabilities to their organizations. Organizations with data science and machine learning teams of their own can extend the base functionality through predictive APIs such as Predictive Vision and Predictive Sentiment Services, which allow companies to understand how their products feature in images and video and how consumers feel about them.

The improvements are already palpable. According to Socher, Salesforce Marketing Cloud's predictive audiences feature helps marketers home in on high-value outreach as well as re-engage users who might be in danger of unsubscribing. The technology has led to an average 25% lift in clicks and opens. Customers of Salesforce's Sales Cloud have seen a 300% increase in conversions from leads to opportunities with predictive lead scoring, while customers of Commerce Cloud have seen a 7-15% increase in revenue per site visitor.

Achieving these results has not been cheap. Salesforce's machine learning and AI buying spree includes RelateIQ ($390 million), BeyondCore ($110 million), and PredictionIO ($58 million), as well as deep learning specialist MetaMind, of which Socher was previously founder and CEO/CTO. Marc Benioff has spent over $4 billion to acquire the right talent and tech in 2016.

Even with all the right money and the right people, rolling out AI for enterprises is fraught with peril due to competition and high expectations. Gartner analyst Todd Berkowitz pointed out that Einstein's capabilities were not nearly as sophisticated as standalone solutions on the market. Other critics say the technology is at least a year and a half from being fully baked.

Infer is one of those aforementioned standalone solutions offering predictive analytics for sales and marketing, putting it in direct competition with Salesforce. In a detailed article about the current AI hype, CEO Vik Singh argues that big companies like Salesforce are making machine learning feel like AWS infrastructure, which won't result in sticky adoption. "Machine learning is not like AWS, which you can just spin up and magically connect to some system," Singh adds.

Chief Scientist Socher acknowledges that challenges exist, but believes they are surmountable.

Communication is at the core of CRM, but while computers have surpassed humans in many key computer vision tasks, natural language processing (NLP) and natural language understanding (NLU) approaches fall short of being performant in high-stakes enterprise environments.

The problem with most neural network approaches is that they train models on a single task and a single data type to solve a narrow problem. Conversation, on the other hand, requires different types of functionality. "You have to be able to understand social cues and the visual world, reason logically, and retrieve facts. Even the motor cortex appears to be relevant for language understanding," explains Socher. "You cannot get to intelligent NLP without tackling multi-task approaches."

That's why the Salesforce AI Research team is innovating on a joint many-task learning approach that leverages transfer learning, where a neural network applies knowledge of one domain to other domains. In theory, understanding linguistic morphology should also accelerate understanding of semantics and syntax.

In practice, Socher and his deep learning research team have been able to achieve state-of-the-art results on academic benchmark tests for named entity recognition (can you identify key objects, locations, and persons?) and semantic similarity (can you identify words and phrases that are synonyms?). Their approach can solve five NLP tasks (chunking, dependency parsing, semantic relatedness, textual entailment, and part-of-speech tagging) all at once, and also builds in a character model to handle incomplete, misspelled, or unknown words.
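
For readers who want a concrete picture of the pattern being described, here is a minimal sketch in PyTorch. It is not Salesforce's actual architecture; the layer sizes and task heads are assumptions chosen only to show the core idea of a shared encoder feeding several task-specific outputs, so that every task's gradients shape the shared representation.

```python
# Minimal joint multi-task model: one shared text encoder, several task heads.
import torch
import torch.nn as nn

class JointManyTaskModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256,
                 num_pos_tags=45, num_chunk_tags=23, num_entailment_classes=3):
        super().__init__()
        # Shared layers: embeddings plus a bidirectional LSTM encoder.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Task-specific heads on top of the shared encoder states.
        self.pos_head = nn.Linear(2 * hidden_dim, num_pos_tags)              # per token
        self.chunk_head = nn.Linear(2 * hidden_dim, num_chunk_tags)          # per token
        self.entail_head = nn.Linear(2 * hidden_dim, num_entailment_classes) # per sentence

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))   # (batch, seq, 2*hidden)
        sentence_vec = states.mean(dim=1)                 # crude sentence summary
        return {
            "pos": self.pos_head(states),
            "chunking": self.chunk_head(states),
            "entailment": self.entail_head(sentence_vec),
        }

# Training would sum the per-task losses so all tasks update the shared encoder:
# loss = loss_pos + loss_chunking + loss_entailment
```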

Socher believes that AI researchers will achieve transfer learning capabilities in more comprehensive ways in 2017, and that speech recognition will be embedded in many more aspects of our lives. "Right now consumers are used to asking Siri about the weather tomorrow, but we want to enable people to ask natural questions about their own unique data."

For Salesforce Einstein, Socher is building a comprehensive Q&A system on top of multi-task learning models. To learn more about Salesforce's vision for AI, you can hear Socher speak at the upcoming AI By The Bay conference in San Francisco (VentureBeat discount code VB20 for 20% off).

Solving difficult research problems is only step one. "What's surprising is that you may have solved a critical research problem, but operationalizing your work for customers requires so much more engineering work and talented coordination across the company," Socher reveals.

"Salesforce has hundreds of thousands of customers, each with their own analyses and data," he explains. "You have to solve the problem at a meta level and abstract away all the complexity of how you do it for each customer. At the same time, people want to modify and customize the functionality to predict anything they want."

Socher identifies three key phases of enterprise AI rollout: data, algorithms, and workflows. Data happens to be the first and biggest hurdle for many companies to clear. In theory, companies have the right data, but then you find the data is distributed across too many places, doesn't have the right legal structure, is unlabeled, or is simply not accessible.

Hiring top talent is also non-trivial, as computer scientists like to say. Different types of AI problems have different complexity. While some AI applications are simpler, challenges with unstructured data such as text and vision mean experts who can handle them are rare and in demand.

The most challenging piece is the last part: workflows. What's the point of fancy AI research if nobody uses your work? Socher emphasizes that you have to think very carefully about how to empower users and customers with your AI features. This is complex and very context-specific: workflow integration for sales processes is very different from that for self-driving cars.

Until we invent AI that invents AI, iterating on our data, research, and operations is a never-ending job for us humans. Einstein will never be fully complete. "You can always improve workflows and make them more efficient," Socher concludes.

This article appeared originally at Topbots.

Mariya Yao is the Head of R&D at Topbots, a site devoted to chatbots and AI.

AI will boost global GDP by nearly $16 trillion by 2030, with much of the gains in China – Quartz

Much has already been made about how artificial intelligence is going to transform our lives, ranging from visions of the future in which robots make humans obsolete to utopias in which technology solves intractable problems and frees up people to pursue their passions. Consultancy firm PwC ran the numbers and came up with a relatively rosy scenario with regard to the impact AI will have on the global economy. By 2030, global GDP could increase by 14%, or $15.7 trillion, because of AI, the firm said in a report today (pdf).

Almost half of these economic gains will accrue to China, where AI is projected to give the economy a 26% boost over the next 13 years, the equivalent of an extra $7 trillion in GDP. North America can expect a 14.5% increase in GDP, worth $3.7 trillion.
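
Those headline figures also imply baseline (no-AI) 2030 GDP levels that are easy to back out with a little arithmetic; the snippet below does so, treating the quoted percentage boosts and dollar gains as given.

```python
# Back out the baseline GDP implied by each quoted gain (gain = baseline * boost).
figures = {
    "Global":        (15.7, 0.14),    # ($ trillion gain, percentage boost)
    "China":         (7.0,  0.26),
    "North America": (3.7,  0.145),
}
for region, (gain_tn, boost) in figures.items():
    print(f"{region}: implied baseline 2030 GDP of about ${gain_tn / boost:.0f} trillion")
```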

According to PwC, North America will get the biggest economic boost in the next few years, as its consumers and industries are more ready to incorporate AI. By the mid-2020s, however, China will rise to the top. Since China's economy is heavy on manufacturing, there is a lot of potential for technology to boost productivity, but it will take time for new technology and the necessary expertise to come up to speed. When this happens, AI-enabled technologies will be exported from China to North America. What's more, Chinese firms tend to re-invest more capital than North American and European ones, further boosting business growth in AI.

PwC's study defines four types of AI: automated intelligence, which performs tasks on its own; assisted intelligence, which helps people perform tasks faster and better; augmented intelligence, which helps people make better decisions; and autonomous intelligence, which automates decision-making entirely. Their forecasts isolate the potential growth from AI, keeping all other factors in the economy equal.

A large part of the forecast GDP gains, $6.6 trillion, is expected to come from increased labor productivity, with businesses automating processes or using AI to assist their existing workforce. This suggests PwC believes AI will generate a productivity boost bigger than previous technological breakthroughs; despite recent advancements, global productivity growth is very low, and economists are puzzled about how to get out of this trap.

The rest of the projected economic growth would come from increased consumer demand for personalized and AI-enhanced products and services. The sectors that have the most to gain on this front are health care, financial services, and the auto industry.

Notably, there is no panic in the report about AI leading to excessive job losses; in previous reports, PwC has already tried to debunk some of the scarier forecasts about that. Instead, the researchers recommend that businesses prepare for a hybrid workforce where humans and AI work side by side. In harmony? TBD.

Nepal should gamble on AI – The Phnom Penh Post

A statue of Beethoven by German artist Ottmar Hoerl stands in front of a piano during a presentation of a part of the completion of Beethoven's 10th symphony made using artificial intelligence, at the Telekom headquarters in Bonn, western Germany. AFP

Artificial intelligence (AI) is an essential part of the fourth industrial revolution, and Nepal can still catch up to global developments.

The last decade has seen two significant, if unrelated, events: Nepal promulgated a new constitution after decades of instability and is now aiming for prosperity, while AI saw a resurgence through deep learning, impacting a wide variety of fields. One can help the other: AI can help Nepal in its quest for development and prosperity.

AI was conceptualised during the 1950s and has seen various phases since. The concept caught the public's imagination, and hundreds, if not thousands, of movies and novels were created based on the idea of a machine's intelligence being on par with a human's.

But human intelligence is a very complex phenomenon, diverse in its abilities, from rationalisation to recognising a person's face. Even the seemingly simple task of recognising faces, when captured at different camera angles, was considered a difficult challenge for AI as late as the first decade of this century.

However, thanks to better algorithms, greater computation capabilities, and loads of data from the magic of the internet and social media giants, current AI systems are now capable of performing such facial recognition tasks better than humans. Some other exciting landmarks include surpassing trained medical doctors in diagnosing diseases from X-ray images and a self-taught AI beating professionals at the strategy board game Go. Although AI may still be far from general human intelligence, these examples should be more than enough reason to start paying attention to the technology.

The current leap in AI is now considered an essential ingredient of the fourth industrial revolution. The first industrial revolution, via the steam engine, started in Great Britain during the late 1700s, quickly expanded to other European countries and America, and led to rapid economic prosperity. This further opened floodgates of innovation and wealth creation, leading to the second and third industrial revolutions. A case study of this could be the relationship between Nokia and Finland.

Both were failing miserably in economic terms in the late 1980s. But both the company and the country gambled on GSM technology, which later went on to become the world's dominant network standard. In the single decade that followed, Finland achieved unprecedented economic growth, with Nokia accounting for more than 70 per cent of the Helsinki stock exchange's market capital. That decade transformed Finland into one of the most specialised countries in information and communication technology, despite it having suffered its most severe economic crisis since World War II.

The gamble involved not just the motivation to support new technology, but a substantial investment, through the Finnish Funding Agency for Technology and Innovation, into Nokia's research and development projects. This funding was later returned in the form of colossal tax revenue, employment opportunities and further demand for skilled human resources. All of this resulted in an ecosystem with a better educational system and more entrepreneurial opportunities.

Owing to years of political turmoil and instability, Nepal missed out on these past industrial revolutions. But overlooking the current one might leave us far behind.

Global AI phenomenon

A recent study of the global AI phenomenon has shown that developed countries have invested heavily in talent and the associated market and have already started to see a direct contribution from AI to their economies. Some African countries are making sure that they are not left behind, with proper investment in AI research and development. AI growth in Africa has seen applications in agriculture and healthcare. Google, positioning itself as an AI-first company, has caught this trend in Africa and opened its first African AI lab in Accra, Ghana.

So is Nepal too late to this party? Perhaps. But Nepal still has a chance of catching up. Instead of scattering its focus and the available resources, the country now needs to narrow its investments into AI and technology. It should all start with the central government drawing up a concrete plan for AI development for the upcoming decade.

Similar policies have already been released by many other countries, including Nepal's neighbours India and China. It is unfortunate to note that the AI strategy from China, reported at the 19th Congress by Chinese President Xi Jinping in 2017, received close to no attention in Nepal, in comparison to the Belt and Road Initiative (BRI) strategy that was announced in 2013.

An essential component of such a strategic plan should be enhancing Nepal's academic institutions. Fortunately, any such programme from the government could be facilitated by recent initiatives like Naami Nepal, a research organisation for AI, or NIC Nepal, an innovation centre started by Mahabir Pun.

Moreover, thanks to the private sector, Nepal has also begun to see AI-based companies like Fusemachines or Paaila Technologies that are attempting to close the gap. It has now become necessary to leverage AI for inclusive economic growth to fulfil our dreams of prosperity.

THE KATHMANDU POST/ASIA NEWS NETWORK

All Organizations Developing AI Must Be Regulated, Warns Elon Musk – Analytics Insight

Through the development of artificial intelligence (AI) in the past few years, Tesla's Elon Musk has been expressing serious concerns and warnings regarding its negative effects. The Tesla and SpaceX CEO is once again sounding a warning note regarding the development of AI. He tweeted recently that all organizations developing advanced AI should be regulated, including Tesla.

Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman. At first, OpenAI was formed as a non-profit backed by US$1 billion in funding from its pooled initial investors, with the aim of pursuing open research into advanced AI and ensuring it was pursued in the interest of benefiting society, rather than leaving its development in the hands of a small and narrowly interested few (i.e., for-profit technology companies).

He also responded to a tweet posted back in July about how OpenAI originally billed itself as a nonprofit but is now seeking to license its closed technology. In response, Musk, who was one of the company's founders but is no longer a part of the company, said that there were reasonable concerns.

Musk later exited the company, reportedly due to disagreements about its direction.

Back in April, Musk said during an interview at the World Artificial Intelligence Conference in Shanghai that computers would eventually surpass us in every single way.

"The first thing we should assume is we are very dumb," Musk said. "We can definitely make things smarter than ourselves."

As examples, Musk pointed to computer programs that allow computers to beat chess champions, as well as technology from Neuralink, his own brain-interface company, which may eventually be able to help people boost their cognitive abilities in some spheres.

AI is being criticized by others besides Musk, however. Digital rights groups and the American Civil Liberties Union (ACLU) have called for either a complete ban on, or more transparency in, AI technology such as facial recognition software. Even Google's CEO, Sundar Pichai, has warned of the dangers of AI, calling for more regulation of the technology.

The Tesla and SpaceX CEO has been outspoken about the potential dangers of AI before. During a talk sponsored by South by Southwest in Austin in 2018, Musk talked about the dangers of artificial intelligence.

Moreover, he tweeted in 2014 that it could be more dangerous than nukes, and told an audience at an MIT Aeronautics and Astronautics symposium that year that AI was our biggest existential threat and that humanity needs to be extremely careful. "With artificial intelligence, we are summoning the demon," he said. "In all those stories where there's the guy with the pentagram and the holy water, it's like, yeah, he's sure he can control the demon. Didn't work out."

However, not all of his Big Tech contemporaries agree. Facebook's chief AI scientist Yann LeCun described his call for prompt AI regulation as nuts, while Mark Zuckerberg said his comments on the risks of the tech were pretty irresponsible. Musk responded by saying the Facebook founder's understanding of the subject is limited.

Global AI in Healthcare Diagnosis Market 2020-2027 – AI in Future Epidemic Outbreaks Prediction and Response – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Global Analysis by Diagnostic Tool; Application; End User; Service; and Geography" report has been added to ResearchAndMarkets.com's offering.

The global artificial intelligence (AI) in healthcare diagnosis market was valued at US$ 3,639.02 million in 2019 and is projected to reach US$ 66,811.97 million by 2027; it is expected to grow at a CAGR of 44% during 2020-2027.
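
As a quick sanity check on those figures, the compound annual growth rate implied by the two endpoints can be computed directly; it comes out to roughly the 44% the report quotes.

```python
# CAGR implied by growing from US$3,639.02M (2019) to US$66,811.97M (2027).
start, end, years = 3639.02, 66811.97, 2027 - 2019
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR over {years} years: {cagr:.1%}")   # roughly 44%
```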

The growth of the market is mainly attributed to factors such as the rising adoption of AI in disease identification and diagnosis, and increasing investments in AI healthcare startups. However, the lack of a skilled workforce and ambiguity in regulatory guidelines for medical software are the factors hindering the growth of the market.

Artificial intelligence in healthcare is one of the most significant technological advancements in medicine so far. The involvement of multiple startups in the development of AI-driven imaging and diagnostic solutions is a major factor contributing to the growth of the market. China, the US, and the UK are emerging as popular hubs for healthcare innovation.

Additionally, the British government has announced the establishment of a National Artificial Intelligence Lab that would collaborate with the country's universities and technology companies to conduct research on cancer, dementia, and heart disease. UK-based startups have received benefits from the government's robust library of patient data, as British citizens share their anonymous healthcare data with the British National Health Service. As a result, the number of artificial intelligence startups in the healthcare sector has grown significantly in the past few years, and the trend is expected to continue in the coming years.

Based on diagnostic tool, the global artificial intelligence in healthcare diagnosis market is segmented into medical imaging tool, automated detection system, and others. The medical imaging tool segment held the largest share of the market in 2019, and the market for automated detection system is expected to grow at the highest CAGR during the forecast period.

Based on application, the global artificial intelligence in healthcare diagnosis market is segmented into eye care, oncology, radiology, cardiovascular, and others. The oncology segment held the largest share of the market in 2019, and the radiology segment is expected to register the highest CAGR during the forecast period.

Based on service, the global artificial intelligence in healthcare diagnosis market is segmented into tele-consultation, tele-monitoring, and others. The tele-consultation segment held the largest share of the market in 2019; however, the tele-monitoring segment is expected to report the highest CAGR in the market during the forecast period.

Based on end-user, the global artificial intelligence in healthcare diagnosis market is segmented into hospital and clinic, diagnostic laboratory, and home care. The hospital and clinic segment held the largest share of the market in 2019 and is expected to register the highest CAGR in the market during the forecast period.

Key Topics Covered

1. Introduction

1.1 Scope of the Study

1.2 Report Guidance

1.3 Market Segmentation

1.3.1 Artificial Intelligence in Healthcare Diagnosis Market - By Diagnostic Tool

1.3.2 Artificial Intelligence in Healthcare Diagnosis Market - By Application

1.3.3 Artificial Intelligence in Healthcare Diagnosis Market - By Service

1.3.4 Artificial Intelligence in Healthcare Diagnosis Market - By End User

1.3.5 Global Artificial Intelligence in Healthcare Diagnosis Market - By Geography

2. Artificial Intelligence in Healthcare Diagnosis Market - Key Takeaways

3. Research Methodology

3.1 Coverage

3.2 Secondary Research

3.3 Primary Research

4. Artificial Intelligence in Healthcare Diagnosis Market - Market Landscape

4.1 Overview

4.2 PEST Analysis

4.2.1 North America - PEST Analysis

4.2.2 Europe - PEST Analysis

4.2.3 Asia-Pacific - PEST Analysis

4.2.4 Middle East & Africa - PEST Analysis

4.2.5 South & Central America

4.3 Expert Opinion

5. Artificial Intelligence in Healthcare Diagnosis Market - Key Market Dynamics

5.1 Market Drivers

5.1.1 Rising Adoption of Artificial Intelligence (AI) in Disease Identification and Diagnosis

5.1.2 Increasing Investment in AI Healthcare Start-ups

5.2 Market Restraints

5.2.1 Lack of skilled AI Workforce and Ambiguous Regulatory Guidelines for Medical Software

5.3 Market Opportunities

5.3.1 Increasing Potential in Emerging Economies

5.4 Future Trends

5.4.1 AI in Epidemic Outbreak Prediction and Response

5.5 Impact Analysis

6. Artificial Intelligence in Healthcare Diagnosis Market - Global Analysis

6.1 Global Artificial Intelligence in Healthcare Diagnosis Market Revenue Forecast and Analysis

6.2 Global Artificial Intelligence in Healthcare Diagnosis Market, By Geography - Forecast and Analysis

6.3 Market Positioning of Key Players

7. Artificial Intelligence in Healthcare Diagnosis Market - By Diagnostic Tool

7.1 Overview

7.2 Artificial Intelligence in Healthcare Diagnosis Market Revenue Share, by Diagnostic Tool (2019 and 2027)

7.3 Medical Imaging Tool

7.4 Automated Detection System

7.5 Others

8. Artificial Intelligence in Healthcare Diagnosis Market Analysis, By Application

8.1 Overview

8.2 Artificial Intelligence in Healthcare Diagnosis Market Revenue Share, by Application (2019 and 2027)

8.3 Eye Care

8.4 Oncology

8.5 Radiology

8.6 Cardiovascular

8.7 Others

9. Artificial Intelligence in Healthcare Diagnosis Market - By End-User

9.1 Overview

9.2 Artificial Intelligence in Healthcare Diagnosis Market, by End-User, 2019 and 2027 (%)

9.3 Hospital and Clinic

9.4 Diagnostic Laboratory

9.5 Home Care

10. Artificial Intelligence in Healthcare Diagnosis Market - By Service

10.1 Overview

10.2 Artificial Intelligence in Healthcare Diagnosis Market, by Service, 2019 and 2027 (%)

10.3 Tele-Consultation

10.4 Tele-Monitoring

10.5 Others

11. Artificial Intelligence in Healthcare Diagnosis Market - Geographic Analysis

11.1 North America: Artificial Intelligence in Healthcare Diagnosis Market

11.2 Europe: Artificial Intelligence in Healthcare Diagnosis Market

11.3 Asia-Pacific: Artificial Intelligence in Healthcare Diagnosis Market

11.4 Middle East and Africa: Artificial Intelligence in Healthcare Diagnosis Market

11.5 South & Central America: Artificial Intelligence in Healthcare Diagnosis Market

12. Impact of COVID-19 Pandemic on Global Artificial Intelligence in Healthcare Diagnosis Market

12.1 North America: Impact Assessment of COVID-19 Pandemic

12.2 Europe: Impact Assessment of COVID-19 Pandemic

12.3 Asia-Pacific: Impact Assessment of COVID-19 Pandemic

12.4 Rest of the World: Impact Assessment of COVID-19 Pandemic

13. Artificial Intelligence in Healthcare Diagnosis Market - Industry Landscape

13.1 Overview

13.2 Growth Strategies Done by the Companies in the Market, (%)

13.3 Organic Developments

13.4 Inorganic Developments

14. Company Profiles

14.1 General Electric Company

14.2 Aidoc

14.3 Arterys Inc.

14.4 icometrix

14.5 IDx Technologies Inc.

14.6 MaxQ AI Ltd.

14.7 Caption Health, Inc.

14.8 Zebra Medical Vision, Inc.

14.9 Siemens Healthineers AG

$2.5 Million NIH Grant Will Support AI Approach to Study and Predict Excessive Drinking | SBU News – Stony Brook News

A unique data-driven scientific approach to studying and predicting excessive drinking using social media and mobile-phone data has won Andrew Schwartz, assistant professor in the Department of Computer Science, and his team a $2.5 million award from the National Institutes of Health (NIH). Their research will develop an innovative approach utilizing data from texting and social media, as well as mobile phone apps, to better understand how unhealthy drinking manifests in daily life and to push the envelope for the ability of artificial intelligence (AI) to predict human behaviors.

The cross-disciplinary team will build AI models that predict future drinking, as well as future effects of drinking on an individual's mood, among service industry workers. The award will be distributed over four years in collaboration with Richard Rosenthal, MD, Director of Addiction Psychiatry at Stony Brook Medicine, Christine DeLorenzo, Associate Professor in the Departments of Biomedical Engineering and Psychiatry, and a team at the University of Pennsylvania.

"Andy is blazing the trail in advancing AI tools for tackling major health challenges," said Fotis Sotiropoulos, Dean, College of Engineering and Applied Sciences. "His work is an ingenious approach using data-science tools, smart-phones and social media postings to identify early signs of alcohol abuse and alcoholism and guide interventions. This is AI-driven discovery and innovation at its very best!"

With the aid of the team's methods, social media content can be collected and analyzed faster and more cheaply, and can provide an unscripted, less biased psychological assessment. Traditional ecological studies of unhealthy drinking are done via costly and time-consuming phone interviews, which can also be subject to poor data quality and biases. Schwartz and his team will instead apply their novel AI-based approach to social media text content. Samir Das, Chair of the Department of Computer Science, said: Andrew has very successfully applied large-scale data and text analysis techniques for important and timely human health and well-being applications with very impactful results.

We now know analysis of everyday language can cover a wide array of daily factors affecting individual health, but their use over long timespans has been limited. The methods we will develop in this project should enable real-time study of how health plays out in each individual's own words and bring about the possibility of personalized mental health care, said Schwartz.

The technology will be developed with a focus on a population of frontline restaurant workers bartenders and servers a group that has among the highest rates of heavy alcohol use of all professions. This unhealthy drinking (defined as seven drinks a week for women and 14 for men, according to the National Institute on Alcohol Abuse and Alcoholism) creates the potential for extensive negative consequences related to work performance, relationships, and physical and psychological health. For example, the team will look at the effect of empathy, as measured through language. Psychologists on the team note that empathy can be both health-promoting (beneficial) and health-threatening (depleting). Distinguishing beneficial versus depleting empathy is an example where AI can capture something difficult to get at through questions. It's also a dimension of human psychology suspected to play a role in stress on servers and bartenders since they often listen to customers problems and provide advice, which could have a negative effect on them.

The team's research will include development of:

Schwartz's research has also focused on using social media to predict mental and physical health issues. He is also using Twitter to study COVID-19, and before that he focused on depression and social media posts.

About the Researcher: Andrew Schwartz is an Assistant Professor in the Department of Computer Science and a faculty member of the Institute for AI-Driven Discovery and Innovation. His research utilizes natural language processing and machine learning techniques to focus on large-scale language analysis for health and social sciences.

Dan Olawski

See the original post here:

$2.5 Million NIH Grant Will Support AI Approach to Study and Predict Excessive Drinking | SBU News - Stony Brook News

4 tips for transforming your customer communications with AI – VentureBeat

Machine learning and artificial intelligence (AI) are not just tools for streamlining customer engagement. They represent an opportunity for companies to completely rethink how they build context around each individual, ultimately creating a better experience and a more loyal customer.

By tapping into the potential of these new technologies, brands can cost-effectively enable and empower sophisticated, relevant types of two-way communications with existing and potential customers. These technologies also change how brands can use digital channels to reimagine their essential communications such as bills, statements, tax documents, and other important information. Traditionally, these touchpoints were static, generic mailings created to address the widest audience. They lacked personalization and relevance. They were informational, but not engaging.

An important brand message no longer needs to be a common document that looks the same to everyone who receives it. Personalized essential communications bring additional value to the customer. Brands can offer more than a sum-total utility or service bill by including a customer's usage compared to the previous billing period or year and providing tips on how to cut down on their use and cost. But this is just the tip of the iceberg.

With some strategic planning and investment, businesses can tap into machine learning and AI to transform customer communications by following these four steps:

Consumer expectations for how they engage with brands continue to evolve. Customers now demand that every piece of communication is tailored to their individual needs and preferences. These technologies enable brands to meet and exceed these constantly rising expectations.

It's no surprise that first on the list of Gartner's top 10 strategic technology trends for 2017 is AI and advanced machine learning. These solutions can sift through, analyze, and respond to volumes of data at a speed no number of humans can rival. As these technologies continue to advance, they have the power to benefit both companies and the customers they serve. They represent an opportunity for companies to completely rethink how they build context around each individual to create a better experience and a more loyal customer.

How will you embrace these new technological innovations to catapult your business into the next year and beyond?

Rob Krugman is the Chief Data Officer at Broadridge, a customer communication and data analytics company.

Read more here:

4 tips for transforming your customer communications with AI - VentureBeat

When robots learn to lie, then we worry about AI – The Australian Financial Review

Beware the hyperbole surrounding artificial intelligence and how far it has progressed.

Great claims are being made for artificial intelligence, or AI, these days.

Amazon's Alexa, Google's assistant, Apple's Siri, Microsoft's Cortana: these are all cited as examples of AI. Yet speech recognition is hardly new: we have seen steady improvements in commercial software like Dragon for 20 years.

Recently we have seen a series of claims that AI, with new breakthroughs like "deep learning", could displace 2 million or more Australian workers from their jobs by 2030.

Similar claims have been made before.

I was fortunate to discuss AI with a philosopher, Julius Kovesi, in the 1970s as I led the team that eventually developed sheep-shearing robots. With great insight, he argued that robots, in essence, were built on similar principles to common toilet cisterns and were nothing more than simple automatons.

"Show me a robot that deliberately tells you a lie to manipulate your behaviour, and then I will accept you have artificial intelligence!" he exclaimed.

That's the last thing we wanted in a sheep-shearing robot, of course.

To understand future prospects, it's helpful to see AI as just another way of programming digital computers. That's all it is, for the time being.

We have been learning to live with computers for many decades. Gradually, we are all becoming more dependent on them and they are getting easier to use. Smartphones are a good example.

Our jobs have changed as a result, and will continue to change.

Smartphones can also disrupt sleep and social lives, but so can many other things too. Therefore, claims that we are now at "a convergence" where AI is going to fundamentally change everything are hard to accept.

We have seen several surges in AI hyperbole. In the 1960s, machine translation of natural language was "just two or three years away". And we still have a long way to go with that one. In the late 1970s and early 1980s, many believed forecasts that 95 per cent of factory jobs would be eliminated by the mid-1990s. And we still have a long way to go with that one too. The "dot com, dot gone" boom of 2001 saw another surge. Disappointment followed each time as claims faded in the light of reality. And it will happen again.

Self-driving cars will soon be on our streets, thanks to decades of painstaking advances in sensor technology, computer hardware and software engineering. They will drive rather slowly at first, but will steadily improve with time. You can call this AI if you like, but it does not change anything fundamental.

The real casualty in all this hysteria is our appreciation of human intelligences ... plural. For artificial intelligence has only replicated performances like masterful game playing and mathematical theorem proving, or even legal and medical deduction. These are performances we associate with intelligent people.

Consider performances easily mastered by people we think of as the least intelligent, like figuring out what is and is not safe to sit on, or telling jokes. Cognitive scientists are still struggling to comprehend how we could begin to replicate these performances.

Even animal intelligence defies us, as we realised when MIT scientists perfected an artificial dog's nose sensitive enough to detect TNT vapour from buried landmines. When tested in a real minefield, this device detected TNT everywhere and the readings appeared to be unrelated to the actual locations of the mines. Yet trained mine detection dogs could locate the mines in a matter of minutes.

To appreciate this in a more familiar setting, imagine a party in a crowded room. One person lights up a cigarette and, to avoid being ostracised, keeps it hidden in an ashtray under a chair. Everyone in the room soon smells the cigarette smoke but no one can sense where it's coming from. Yet a trained dog would find it in seconds.

There is speculation that quantum computers might one day provide a real breakthrough in AI. At the moment, however, experiments with quantum computers are at much the same stage as Alan Turing was when he started tinkering with relays in the 1920s. There's still a long way to go before we will know whether these machines will tell deliberate lies.

In the meantime it might be worth asking whether the current surge of interest in AI is being promoted by companies like Google and Facebook in a deliberate attempt to seduce investors. Then again, it might just be another instance of self-deception group-think.

James Trevelyan is emeritus professor in the School of Mechanical and Chemical Engineering at the University of Western Australia.

Excerpt from:

When robots learn to lie, then we worry about AI - The Australian Financial Review

Microsoft's CTO explains how AI can help health care in the US right now – The Verge

This week for our Vergecast interview series, Verge editor-in-chief Nilay Patel chats with Microsoft chief technology officer Kevin Scott about his new book Reprogramming the American Dream: From Rural America to Silicon Valley - Making AI Serve Us All.

Scott's book tackles how artificial intelligence and machine learning can help rural America in a more grounded way, from employment to education to public health. In one chapter of his book, Scott focuses on how AI can assist with health care and diagnostic issues, a prominent concern in the US today, especially during the COVID-19 pandemic.

In the interview, Scott refocuses the solutions he describes in the book around the current crisis, specifically the supercomputers Microsoft has been using to train natural language processing models, which are now being used to search for vaccine targets and therapies for the novel coronavirus.

Below is a lightly edited excerpt of the conversation.

So let's talk about health care because it's something you do focus on in the book. It's a particularly poignant time to talk about health care. How do you see AI helping broadly with health care and then more specifically with the current crisis?

I think there are a couple of things going on.

One I think is a trend that I wrote about in the book and that is just getting more obvious every day is that we need to do more. So that particular thing is that if our objective as a society is to get higher-quality, lower-cost health care to every human being who needs it, I think the only way that you can accomplish all three of those goals simultaneously is if you use some form of technological disruption.

And I think AI can be exactly that thing. And you're already seeing an enormous amount of progress on the AI-powered diagnostics front. And just going into the crisis that we're in right now, one of the interesting things that a bunch of folks are doing (I think I read a story about the Chan Zuckerberg Initiative doing this) is the idea that if you have ubiquitous biometric sensing, like you've got a smartwatch or a fitness band or maybe something even more complicated that can sort of read off your heart-tick data, that can look at your body temperature, that can measure the oxygen saturation in your blood, that can basically get a biometric readout of how your body's performing. And it's sort of capturing that information over time. We can build diagnostic models that can look at those data and determine whether or not you're about to get sick and sort of predict with reasonable accuracy what's going on and what you should do about it.
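As a rough sketch of what such a diagnostic model could look at, the snippet below compares a day's wearable readings against a person's own rolling baseline. The thresholds, fields, and alert rule are illustrative assumptions, not a clinical model.

```python
# Illustrative sketch only: made-up readings and ad hoc alert thresholds.
import pandas as pd

readings = pd.DataFrame({
    "resting_hr": [58, 57, 59, 58, 60, 57, 58, 71],             # beats per minute
    "temp_c": [36.5, 36.6, 36.4, 36.5, 36.6, 36.5, 36.6, 37.9],  # body temperature
    "spo2": [98, 98, 97, 98, 98, 97, 98, 93],                    # blood oxygen saturation (%)
}, index=pd.date_range("2020-07-01", periods=8, freq="D"))

# Personal baseline: mean of the previous week, excluding today.
baseline = readings.rolling(window=7, min_periods=5).mean().shift(1)
today, expected = readings.iloc[-1], baseline.iloc[-1]

alert = (
    today["resting_hr"] > expected["resting_hr"] + 10   # elevated heart rate
    or today["temp_c"] > expected["temp_c"] + 1.0       # elevated temperature
    or today["spo2"] < expected["spo2"] - 3             # depressed oxygen saturation
)
print("Flag for follow-up" if alert else "Within personal baseline")
```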

Like you can't have a cardiologist following you around all day long. There aren't enough cardiologists in the world even to give you a good cardiological exam at your annual checkup.

I think this isn't a far-fetched thing. There is a path forward here for deploying this stuff on a broader scale. And it will absolutely lower the cost of health care and help make it more widely available. So that's one bucket of things. The other bucket of things is like just some mind-blowing science that gets enabled when you intersect AI with the leading-edge stuff that people are doing in the biosciences.

Give me an example.

So, two things that we have done relatively recently at Microsoft.

One is one of the big problems in biology that we've had, that immunologists have been studying for years and years and years, which is whether or not you could take a readout of your immune system by looking at the distribution of the types of T-cells that are active in your body. And from that profile, determine what illnesses your body may be actively dealing with. What is it prepared to deal with? Like what might you have recently had?

And that has been a hard problem to figure out because, basically, you're trying to build something called a T-cell receptor antigen map. And now, with our sequencing technology, we have the ability to get the profile so you can sort of see what your immune system is doing. But we have not yet figured out how to build that mapping of the immune system profile to diseases.

Except we're partnering with this company called Adaptive that is doing really great work with us, like bolting machine learning onto this problem to try to figure out what the mapping actually looks like. We are rushing out a serologic test right now, a blood test that we hope will be able to tell you whether or not you have had a COVID-19 infection.

So I think it's mostly going to be useful for understanding the sort of spread of the disease. I don't think it's going to be as good a diagnostic test as like a nasal swab and one of the sequence-based tests that are getting pushed out there. But it's really interesting. And the implications are not just for COVID-19, but if you are able to better understand that immune system profile, the therapeutic benefits of that are just absolutely enormous. We've been trying to figure this out for decades.
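For readers wondering what "bolting machine learning onto this problem" can look like in the simplest possible terms, here is a toy version of the repertoire-to-disease mapping: represent each repertoire by its amino-acid k-mer counts and fit a classifier. The sequences, labels, and featurization are invented for illustration and are not Adaptive's or Microsoft's actual approach.

```python
# Toy illustration only: invented CDR3-like sequences and labels.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def repertoire_features(sequences, k=3):
    """Count amino-acid k-mers across all receptor sequences in one repertoire."""
    counts = Counter()
    for seq in sequences:
        counts.update(seq[i:i + k] for i in range(len(seq) - k + 1))
    return counts

repertoires = [
    ["CASSLGTDTQYF", "CASSPGQGAYEQYF"],
    ["CASSQDRGTEAFF", "CASSLRGNQPQHF"],
    ["CASSPGQGAYEQYF", "CASSLGTDTQYF"],
    ["CASSLRGNQPQHF", "CASSQDRGTEAFF"],
]
labels = [1, 0, 1, 0]  # 1 = assumed prior exposure to a given pathogen

vec = DictVectorizer()
X = vec.fit_transform(repertoire_features(r) for r in repertoires)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))  # sanity check on the training repertoires
```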

The other thing that we're doing is, when you're thinking about SARS-CoV-2, which is the virus that causes COVID-19 and is raging through the world right now, we have never in human history had a better understanding of a virus and how it is attacking the body. And we've never had a better set of tools for precision-engineering potential therapies and vaccines for this thing. And part of that engineering process is using a combination of simulation and machine learning and these cutting-edge techniques of biosciences in a way where you're sort of leveraging all three at the same time.

So we've got this work that we're doing with a partner right now where I have taken a set of supercomputing clusters that we have been using to train natural language processing deep neural networks at just massive scale. And those clusters are now being used to search for vaccine targets and therapies for SARS-CoV-2.

We're one among a huge number of people who are very quickly searching for both therapies and potential vaccines. There are reasons to be hopeful, but we've got a way to go.

But it's just unbelievable to me to see how these techniques are coming together. And one of the things that I'm hopeful about as we deal with this current crisis and think about what we might be able to do on the other side of it is it could very well be that this is the thing that triggers a revolution in the biological sciences and investment in innovation that has the same sort of decades-long effect that the industrialization push around World War II had in the '40s that basically built our entire modern world.

Yeah, that's what I keep coming back to, this idea that this is a reset on a scale that very few people living today have ever experienced.

And you said out of World War II, a lot of basic technology was invented, deployed, refined. And now we kind of get to layer in things like AI in a way that is, quite frankly, remarkable. I do think, I mean, it sounds like we're going to have to accept that Cortana might be a little worse at natural language processing while you search for the protein surfaces. But I think it's a trade most people would make.

[Laughs] I think that's the right trade-off.

Read the original here:

Microsofts CTO explains how AI can help health care in the US right now - The Verge

Artificial Intelligence Isn’t an Arms Race With China, and the United States Shouldn’t Treat It Like One – Foreign Policy

At the last Democratic presidential debate, the technologist candidate Andrew Yang emphatically declared that we're in the process of potentially losing the AI arms race to China right now. As evidence, he cited Beijing's access to vast amounts of data and its substantial investment in research and development for artificial intelligence. Yang and others, most notably the National Security Commission on Artificial Intelligence, which released its interim report to Congress last month, are right about China's current strengths in developing AI and the serious concerns this should raise in the United States. But framing advances in the field as an arms race is both wrong and counterproductive. Instead, while being clear-eyed about China's aggressive pursuit of AI for military use and human rights-abusing technological surveillance, the United States and China must find their way to dialogue and cooperation on AI. A practical, nuanced mix of competition and cooperation would better serve U.S. interests than an arms race approach.

AI is one of the great collective Rorschach tests of our times. Like any topic that captures the popular imagination but is poorly understood, it soaks up the zeitgeist like a sponge.

It's no surprise, then, that as the idea of great-power competition has reengulfed the halls of power, AI has gotten caught up in the race narrative. China, Americans are told, is barreling ahead on AI, so much so that the United States will soon be lagging far behind. Like the fears that surrounded Japan's economic rise in the 1980s or the Soviet Union in the 1950s and 1960s, anxieties around technological dominance are really proxies for U.S. insecurity about its own economic, military, and political prowess.

Yet as a technology, AI does not naturally lend itself to this framework and is not a strategic weapon. Despite claims that AI will change nearly everything about warfare, and notwithstanding its ultimate potential, for the foreseeable future AI will likely only incrementally improve existing platforms, unmanned systems such as drones, and battlefield awareness. Ensuring that the United States outpaces its rivals and adversaries in the military and intelligence applications of AI is important and worth the investment. But such applications are just one element of AI development and should not dominate the United States' entire approach.

The arms race framework raises the question of what one is racing toward. Machine learning, the AI subfield of greatest recent promise, is a vast toolbox of capabilities and statistical methods, a bundle of technologies that do everything from recognizing objects in images to generating symphonies. It is far from clear what exactly would constitute winning in AI or even being better at a national level.

The National Security Commission is absolutely right that developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. U.S. leadership in AI is imperative. Leading, however, does not mean winning. Maintaining superiority in the field of AI is necessary but not sufficient. True global leadership requires proactively shaping the rules and norms for AI applications, ensuring that the benefits of AI are distributed worldwide, broadly and equitably, and stabilizing great-power competition that could lead to catastrophic conflict.

That requires U.S. cooperation with friends and even rivals such as China. Here, we believe that important aspects of the National Security Commission on AI's recent report have gotten too little attention.

First, as the commission notes, official U.S. dialogue with China and Russia on the use of AI in nuclear command and control, AI's military applications, and AI safety could enhance strategic stability, like arms control talks during the Cold War. Second, collaboration on AI applications by Chinese and American researchers, engineers, and companies, as well as bilateral dialogue on rules and standards for AI development, could help buffer the competitive elements of an increasingly tense U.S.-Chinese relationship.

Finally, there is a much higher bar to sharing core AI inputs such as data and software and building AI for shared global challenges if the United States sees AI as an arms race. Although commercial and military applications for AI are increasing, applications for societal good (addressing climate change, improving disaster response, boosting resilience, preventing the emergence of pandemics, managing armed conflict, and assisting in human development) are lagging. These would benefit from multilateral collaboration and investment, led by the United States and China.

The AI arms race narrative makes for great headlines, but the unbridled U.S.-Chinese competition it implies risks pushing the United States and the world down a dangerous path. Washington and Beijing should recognize the fallacy of a generalized AI arms race in which there are no winners. Instead, both should lead by leveraging the technology to spur dialogue between them and foster practical collaboration to counter the many forces driving them apart, benefiting the whole world in the process.

Read the original here:

Artificial Intelligence Isn't an Arms Race With China, and the United States Shouldn't Treat It Like One - Foreign Policy

Puma, PVH Corp. on AI and Forecasting, Merchandising in the COVID-19 Era – WWD

Artificial intelligence has come to mean more to retailers than just chatbots and recommendation engines. AI has found its place as a critical tool, particularly for companies managing a fleet of stores.

That's certainly the case for Puma and PVH Corp., which have both come to rely on the technology in the critical months of the coronavirus pandemic.

Katie Darling, Puma's vice president of merchandising, and Kate Nadolny, senior vice president of business strategy and innovation at PVH, weighed in on how their companies used AI to improve forecasting and merchandising during a tough year that has vexed brick-and-mortar retail like no other.

Nadolny explained to host Prashant Agrawal, of Impact Analytics, that when PVH Corp. began its journey with AI forecasting a couple of years ago, the company wasn't entirely sure about what it would entail.

We identified the clear need and opportunity for us to be smarter about how we're making our forecasting and prediction decisions, she said. But we weren't really sure about what the tools and capabilities were that we needed.

As the parent company of Van Heusen, Tommy Hilfiger, Calvin Klein, Izod, Geoffrey Beene and more, PVH had more to weigh than a chain of single-branded stores. It was dealing with a multibrand portfolio, with different target customers and locations, bringing an extra layer of complexity. But it's the sort of challenge that AI meets head on.

The company first looked to brainstorm with tech partners over areas like assortment and allocation, but COVID-19 quickly changed the priorities.

Suddenly, Nadolny said, PVH realized that it needed to make swift decisions, as its stores contended with varying rules that meant some stores closed, while others remained open or swung between the two as infection rates changed. Meanwhile, the rules of retail were being rewritten, as consumer behaviors morphed.

This was the period when stores were exploring curbside pickup and retailers that never before offered appointment shopping suddenly raced to meet new customer expectations. And such services may not work equally well in all areas, particularly in regions hard hit by the economic downturn, or perhaps work best for certain product categories or customer segments, which can vary by store.

Nadolny explained that when it comes to knowing where and how to shift inventories or change pricing in real time, AI is simply faster than humans at crunching the data and pulling out insights.

Darling agreed. She discovered at Puma that granular planning, down to the store level, across multiple doors is impossible without artificial intelligence. Additionally, it can find patterns you're not looking for, she explained, especially when compared to the way people dig through spreadsheets.

It's not only slower, but also less efficient at identifying critical insights.

If one product sells out, what's the next best item in stock that can fill the gap? If a certain item performs well, but what's really leading sales are the smaller sizes in that particular style or stockkeeping unit, could a human staffer pinpoint that? In a normal year, such questions would point to missed opportunities, but in 2020, those insights can determine survival.
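The size question in particular is easy to illustrate. The sketch below, with made-up numbers and column names, computes sell-through by size within one style, the kind of pattern that aggregate reports and manual spreadsheet work tend to hide.

```python
# Illustrative data only; columns and figures are invented.
import pandas as pd

sales = pd.DataFrame({
    "style": ["RS-Runner"] * 4,
    "size": ["S", "M", "L", "XL"],
    "units_sold": [140, 90, 35, 12],
    "units_received": [150, 150, 150, 150],
})

sales["sell_through"] = sales["units_sold"] / sales["units_received"]
print(sales.sort_values("sell_through", ascending=False)[["size", "sell_through"]])
# Aggregate sell-through looks middling, but the smaller sizes are nearly gone:
# a signal to re-order or re-allocate S and M rather than the whole style.
```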

The idea of using artificial intelligence to help us make smarter decisions, whether it be at a category level, a collection level or even down to a size level is really important, said Darling.

But before AI can be really useful in the retail setting, or in any setting, the fundamentals need to be in place. Nadolny pointed out that AI initiatives need to start out with good data, which was one of PVH's biggest early challenges.

As the saying goes, Garbage in, garbage out, she said. So how do you make sure that your data is right? That attributes are right, that the information that we have is correct and aligned? So while we have quite a bit of data that's very, very useful for us, it's not always in the same place, in the same structure, in the same format.

As we start to move towards being able to better utilize these types of tools, internally we're spending a lot of time focused on the clarity of our data governance and data structure, so we can therefore take that information and utilize that appropriately in the tool, she added.

The human element also remains important, Nadolny noted, in that staff should have appropriate training on how to best use the tools for the business.

The process could be a challenge for the humans, Nadolny acknowledged. It can even feel like a loss of control, but ideally they'll come to see and appreciate the tech. The machine can really learn more quickly and adapt to what's happening in the space more so than we can in our Excel-based toolset that we have today, she said.

Read more here:

Puma, PVH Corp. on AI and Forecasting, Merchandising in the COVID-19 Era - WWD

Go Beyond Artificial Intelligence: Why Your Business Needs Augmented Intelligence – Forbes

Augmented Intelligence

The nasal test for Covid-19 requires a nurse to insert a 6-inch long swab deep into your nasal passages. The nurse inserts this long-handled swab into both of your nostrils and moves it around for 15 seconds.

Now, imagine that your nurse is a robot.

A few months ago, a nasal swab robot was developed by Brain Navi, a Taiwanese startup. The company's intent was to minimize the spread of infection by reducing staff-patient contact. So, here we have a robot autonomously navigating the probe down into your throat, and carefully avoiding channels that lead up to the eyes.

The robot is supposed to be safe. But many patients would, understandably, be terrified.

Unfortunately, enterprise applications of artificial intelligence (AI) are often no less misguided. Today, AI has picked up remarkable capabilities. It's better than humans in tasks such as voice and image recognition, across disciplines from audio transcription to games.

But does this mean we should simply hand over the reins to machines and sit back? Not quite.

You need humans to make your AI solutions more effective, acceptable, and humane for your users. That's when they will be adopted and deliver ROI for your organization. When AI and humans combine forces, the whole can be greater than the sum of its parts.

This is called augmented intelligence.

Here are 4 reasons why you need augmented intelligence to transform your business:

A large computer manufacturer wanted to find out what made its customers happy. Gramener, a company providing data science solutions, analyzed tens of thousands of comments from the client's bi-annual voice of customer (VoC) survey. A key step in this text analytics process was to find out what the customers were talking about. Were they worried about billing or after-sales service?

The team used AI language models to classify comments into the right categories. The algorithm delivered an average accuracy of over 90%, but the business users weren't happy. While the algorithm aced most categories, there were a few where it stumbled, at around 60% accuracy. This led to poor decisions in those areas.

Algorithms perform best when they are trained on large volumes of data, with a representative variety of scenarios. The low-accuracy categories in this project had neither. The project team experimented by bringing in humans to handle those categories where the model's confidence was low.

With only modest manual effort, the overall solution accuracy shot up. This delivered an improvement of two percentage points in the client's Net Promoter Score.
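The mechanics of that experiment are simple to sketch: let the model keep the comments it is confident about and route the rest to a person. The categories, training comments, and the 0.75 threshold below are illustrative assumptions, not Gramener's production system.

```python
# Illustrative sketch of confidence-based routing; data and threshold are assumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "invoice was wrong two months in a row",
    "billing portal keeps timing out",
    "support closed my ticket without answering",
    "technician arrived late but fixed everything",
]
labels = ["billing", "billing", "after-sales", "after-sales"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(comments, labels)

def route(comment, threshold=0.75):
    """Auto-classify confident predictions; hand the rest to a human reviewer."""
    probs = model.predict_proba([comment])[0]
    best = probs.argmax()
    if probs[best] >= threshold:
        return model.classes_[best]
    return "needs_human_review"

print(route("the charge on my statement looks off"))
print(route("not sure who to even ask about this"))
```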

Algorithms detect online fraud by studying factors such as consumer behavior and historical shopping patterns. They learn from past examples to identify what's normal and what's not. With the onset of the pandemic, these algorithms started failing.

In today's new normal, consumers have gone remote. They spend more time online, and the spending patterns have shifted in unexpected ways. Suddenly, everything these algorithms have learned has become irrelevant. Covid-19 threw them a curveball.

Algorithms work well only in scenarios that they are trained for. In completely new situations, humans must step in. Organizations that have kept humans in the loop can quickly transition control to them in such situations. Humans can keep systems running smoothly by ensuring that they are resilient in the face of change.

Meanwhile algorithms can go back to the classroom to unlearn, relearn, and come back a little smarter. For example, a recent NIST study found that the use of face masks is breaking facial recognition algorithms, such as the ones used in border crossings. Most systems had error rates up to 50%, calling for manual intervention. The algorithms are being retrained to use areas visible around the eyes.

On March 18, 2018, Elaine Herzberg was walking her bike across Mill Avenue. It was around 10 p.m. in Tempe, Arizona. She crossed several lanes of traffic before being struck by a Volvo.

But this wasn't just any Volvo. It was a self-driving car, being tested by Uber.

The car's software had been trained to expect pedestrians at crosswalks. But Herzberg was crossing in the middle of the road, so the AI failed to detect her.

This tragic incident was the first pedestrian death caused by a self-driving car. It raised several questions. When AI makes a mistake, who should be held responsible? Is it the carmaker (Volvo), the AI system maker (Uber), the car driver (Rafaela Vasquez), or the pedestrian (Elaine Herzberg)?

Occasionally, high-precision algorithms will falter, even in familiar scenarios. Rather than roll back the advances made in automation, we must make efforts to improve accountability. Last month, the European Commission published recommendations from an independent expert report for self-driving cars.

The experts call for identifying ownership of all parties and for devising ways to attribute responsibility across scenarios. The report recommends improving human-machine interactions so that AI and drivers can communicate better and understand each other's limitations.

Will Siri, Alexa or Google Assistant discriminate against you? Earlier this year, researchers at Stanford University attempted to answer this question by studying the top voice recognition systems in the world. They found that these popular devices had more trouble understanding Black people than white people. They misidentified 35 percent of words spoken by Black users, but only 19 percent for white users.

Bias is a thorny issue in AI. But we must remember that algorithms are only as good as the data used to train them. Our world is anything but perfect. When algorithms learn from our data, they mimic these imperfections and magnify the bias. There is ongoing research in AI to improve fairness and ethics. However, no amount of model engineering will make algorithms perfect.

In the real world, if we are serious about fighting bias, we use our judgment. We make rules more inclusive and adopt measures to amplify suppressed voices. The same approach is needed in AI solutions. Design human intervention to check and address potential scenarios of discrimination. Use human judgment to fight a machine's learned bias.
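Checking for that kind of discrimination starts with measuring it. The sketch below computes word error rate separately for each speaker group, in the spirit of the Stanford audit; the transcripts and groups are invented, and a real audit would use large, carefully matched samples.

```python
# Illustrative audit sketch: invented transcripts, tiny sample.
def word_error_rate(reference, hypothesis):
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1] / len(ref)

samples = [  # (what was said, what the system heard, speaker group)
    ("turn the lights off in the kitchen", "turn the light off in the kitchen", "group_a"),
    ("play the new album by my cousin", "play the new album by my cousin", "group_a"),
    ("set a timer for ten minutes", "set a time for two minutes", "group_b"),
    ("call me a cab to the station", "calm me a cap to the station", "group_b"),
]

by_group = {}
for ref, hyp, group in samples:
    by_group.setdefault(group, []).append(word_error_rate(ref, hyp))
for group, rates in sorted(by_group.items()):
    print(group, round(sum(rates) / len(rates), 3))  # a gap here is a flag for human review
```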

Rethink Design

We often measure progress in AI by comparing AI's abilities to those of humans.

While that's a useful benchmarking exercise, it's a mistake to use this approach while designing AI solutions. Organizations often pit AI against humans. This doesn't do justice to either one. It leads to suboptimal performance, brittle solutions, untrustworthy applications and unfair decisions.

Augmented intelligence combines the strengths of humans with those of AI. It combines the speed, logic and consistency of machines with the common sense, emotional intelligence and empathy of humans.

To achieve augmented intelligence, you need humans in the loop. This must be planned upfront. Merely adding new processes or responsibilities to an existing technology solution leads to poor results. You must (re)design the solution workflow, and decide which areas are best handled by algorithms. You should define whether humans must make decisions or review decisions made by a machine.

Building augmented intelligence is an ongoing journey. With evolution in machine capabilities and changes in users' comfort and trust levels, you must continuously improve the design.

This will make AI-driven systems that do invasive medical procedures or that make high-stakes financial decisions more compassionate and trustworthy for your users.

Disclosure: I co-founded the company, Gramener, that's mentioned in one of the examples in this article.

More here:

Go Beyond Artificial Intelligence: Why Your Business Needs Augmented Intelligence - Forbes

R&D Special Focus: Robotics/AI – R & D Magazine

Robotics and artificial intelligence (A.I.) were once considered fantasies of the future.

Today, both technologies are being incorporated into many elements of everyday life, with applications popping up in everything from healthcare and education to communication and transportation.

In July, R&D Magazine took a deeper dive into this breakthrough area of research.

We kicked off our coverage by speaking to several experts about where the field of robotics is going in Robotics Industry Has Big Future as Applications Grow.

Susan Teele of the Advanced Robotics for Manufacturing Institute and Bob Doyle of the Robotics Industries Association discussed the impact that robots will have on the workforce and what technological advancements are needed for them to truly flourish.

We expanded on that idea in Creating Robots That Are More Like Humans, which features a research group at Northeastern University focused on creating software that makes robots more autonomous, so eventually they are able to perform tasks on their own with little human supervision or intervention.

The group's leader, Taskin Padir, told R&D Magazine how reliable robots with human-like dexterity and improved autonomy could take over jobs that are dangerous or difficult for humans to perform.

Robots can also be used to reduce danger. Our article, Creator of Suicidal Robot Explains How Robot Security Could Prevent the Next Sandy Hook, focused on the robotic security company Knightscope, which made headlines recently for a humorous mishap involving one of its robots falling into a fountain.

However, the real story is the true mission behind Knightscope.

The company was created by a former police officer who was deeply impacted by the Sandy Hook Elementary School shooting. Knightscope's robots now serve as intelligence-gathering tools, which law enforcement officials can utilize during, as well as after, an emergency, to better understand what is going on, de-escalate a dangerous situation, and potentially help capture or gather evidence against the perpetrator of the crime.

We wrapped up our robotics coverage with Robotic Teachers Can Adjust Style Based on Student Success, which focuses on the development of socially assistive robotics, a new field of robotics that focuses on assisting users through social rather than physical interaction. A research group at Yale University is designing these robots to work with children, including those with challenges such as autism, hearing impairment, or those whose first language is one other than English.

A.I. Advancements

Our A.I. coverage kicked off with Why Canada is Becoming a Hub for A.I. Research, which highlighted the significant commitment to A.I. research and development our neighbor to the north is making.

The Vector Institute, which received an estimated $150 million investment from both the Canadian government and Canadian businesses, is one example of that commitment.

The independent not-for-profit institution based in Ontario seeks to build and sustain A.I.-based innovation, growth and productivity in Canada by focusing on the transformative potential of deep learning and machine learning.

We also looked at the impact of A.I. in the healthcare space. One article, Startup Uses A.I. to Streamline Drug Discovery Process, features an interview with the CEO of Exscientia, which is using A.I.-fueled programs in conjunction with experienced drug developers to implement a rapid design-make-test cycle. This essentially ascertains how certain molecules will behave and then predicts how likely they are to become useful drugs.

Another startup, Potbotics, is using A.I. to comb through the different strains of medical marijuana to find the right one for a specific ailment with its app PotBot. Once a medical cannabis recommendation is calculated, the app helps patients find their recommended cannabis at a nearby dispensary or set up an appointment with a licensed medical cannabis clinic. We featured the company in our article, PotBot Uses A.I. to Match Medical Marijuana Users to Best Strain.

The use of A.I. to create autonomous vehicles is another area that is rapidly growing. In our article Algorithm Improves Energy Efficiency of Autonomous Underwater Vehicles, we focused on researchers from Oregon State University, who developed a new algorithm to direct autonomous underwater vehicles to ride the ocean currents when traveling from point to point.

Improving the A.I. of the vehicles extends their battery life by decreasing the amount of battery power wasted through inefficient trajectory cuts.
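The underlying idea is easy to see in miniature: make motion that aligns with the current cheap and motion against it expensive, then search for the lowest-energy route. The grid, the uniform eastward current, and the cost function below are invented for illustration and are not the Oregon State algorithm.

```python
# Toy current-aware planner: Dijkstra on a grid with an assumed uniform current.
import heapq

ROWS, COLS = 5, 5
CURRENT = (0, 1)  # assumed eastward current everywhere, as a (row, col) direction

def energy(move):
    """Cheaper when a move aligns with the current, costlier when it opposes it."""
    alignment = move[0] * CURRENT[0] + move[1] * CURRENT[1]
    return 1.0 - 0.5 * alignment  # 0.5 with the current, 1.0 across, 1.5 against

def cheapest_energy(start, goal):
    frontier = [(0.0, start)]
    best = {start: 0.0}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue
        for move in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nxt = (node[0] + move[0], node[1] + move[1])
            if 0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS:
                new_cost = cost + energy(move)
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost, nxt))
    return float("inf")

print(cheapest_energy((0, 0), (4, 4)))  # rides the eastward current where it can
```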

Also, we took a deep dive into Toyota's plan for A.I., featuring an exclusive interview with Jim Adler, the managing director of the Japanese car company's new venture fund, Toyota A.I. Ventures, in our article, How Toyota's New Venture Fund Will Tackle A.I. Investments.

The venture fund will use an initial fund of $100 million to collaborate with entrepreneurs from all over the world, in an effort to improve the quality of human life through artificial intelligence.

Toyota A.I. Ventures will work with startups at an early stage and offer a founder-friendly environment that won't impact their ability to work with other investors. They will also offer assistance with technology and product expertise to validate that the product being built is for the right market, and give these entrepreneurs access to Toyota's global network of affiliates and partners to ensure a successful market launch.

Next Month's Special Focus

In August, R&D Magazine will continue its special focus series, this time highlighting the many applications of virtual reality. The technology has expanded significantly outside of the video gaming world, and is now being used across multiple disciplines.

Continue reading here:

R&D Special Focus: Robotics/AI - R & D Magazine

AI Can Tell if You’re Depressed by Reading Your Facebook Posts

Dr. Facebook

It shouldn’t come as much of a surprise, but Facebook knows a lot about you.

And while the information it collects about you isn’t exactly in the safest of hands, it could give mental health care professionals a huge leg up in predicting your future mental well-being — if you’re willing to hand over your login information.

In research published this week in the Proceedings of the National Academy of Sciences, scientists at the University of Pennsylvania described how they were able to determine whether a particular Facebook user is likely to become depressed in the future, simply by analyzing their status updates.

Robot Psychologists

Wired reports that the researchers used machine learning algorithms to analyze almost half a million Facebook posts — spanning a period of seven years — by 684 willing patients at a Philadelphia emergency ward.

“We’re increasingly understanding that what people do online is a form of behavior we can read with machine learning algorithms, the same way we can read any other kind of data in the world,” UPenn psychologist Johannes Eichstaedt told the magazine.

The algorithms looked for markers of depression in the patients’ posts, and found that depressed individuals used more first-person language — a finding in line with many previous studies. The algorithm got so good at catching those markers that it could predict if a Facebook user was depressed up to three months prior to a formal diagnosis by a health care professional.
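The first-person marker is simple enough to illustrate directly. The snippet below computes the rate of first-person singular pronouns across a user's posts; a real model would combine many such linguistic signals, and the posts here are invented.

```python
# Illustrative only: one linguistic feature, invented example posts.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(posts):
    """Share of words that are first-person singular pronouns."""
    words = [w for post in posts for w in re.findall(r"[a-z']+", post.lower())]
    return sum(w in FIRST_PERSON for w in words) / max(len(words), 1)

print(first_person_rate(["I can't sleep and I keep blaming myself", "my week has been rough"]))
print(first_person_rate(["great game last night!", "the weather finally turned"]))
```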

The Human Touch

For now, take these results with a grain of salt — this won’t ever be a substitute for human psychologists, because there are far too many variables. But social media information — along with heart rates or sleep data collected by fitness trackers — could be a powerful tool to catch mental health problems early on.

That is, if we’re willing to share that kind of information with them in the first place.

Read More: Your Facebook Posts Can Reveal If You’re Depressed [Wired]

More on mental health and social media: Instagram is Trying to Make Users Feel Better Without Scaring Them Off

See the article here:

AI Can Tell if You’re Depressed by Reading Your Facebook Posts

Google to set up AI research lab in Bengaluru – The Hindu

U.S.-headquartered Google on Thursday announced the setting up of a research lab in Bengaluru that will work on advancing artificial intelligence-related research with an aim to solve problems in sectors such as healthcare, agriculture and education.

...we announced Google Research India, a new AI research team in Bangalore that will focus on advancing computer science and applying AI research to solve big problems in healthcare, agriculture, education and more, Sundar Pichai, CEO, Google, tweeted.

Caesar Sengupta, vice-president, Next Billion Users Initiative and Payments at Google, added that this team would focus on advancing fundamental computer science and AI research by building a strong team and partnering with the research community across the country. It will also be applying this research to tackle problems in fields such as healthcare, agriculture, and education.

The new lab will be a part of and support Google's global network of researchers. We're also exploring the potential for partnering with India's scientific research community and academic institutions to help train top talent and support collaborative programmes, tools and resources, Jay Yagnik, vice-president and Google Fellow, Google AI, said.

Google Pay

The technology giant announced a host of additions to its UPI-powered digital payments app Google Pay, which the company said had grown more than three times in the last 12 months to 67 million monthly active users, driving transactions worth over $110 billion on an annualised basis.

To start with, the company has introduced Spot Platform within Google Pay, which will enable merchants to create new experiences that bridge the offline and online world. Ambarish Kenghe, director, product management, Google Pay, said: A Spot is a digital front for a business that is created, branded and hosted by them, and powered by Google Pay. Users can discover a Spot online or at a physical location, and transact with the merchant easily and securely within the Google Pay app.

Users will now also be able to search for entry-level jobs that could not be easily discovered online via the application.

Google will also roll out tokenized cards, which will enable users to make payments using debit and credit cards without using the actual card number.
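Conceptually, tokenization means the merchant and the device only ever handle a surrogate value, while a vault held by the network or issuer maps it back to the real card. The sketch below is a bare-bones illustration of that idea, not Google Pay's implementation.

```python
# Conceptual sketch of card tokenization; not a real payment flow.
import secrets

_vault = {}  # token -> real card number, held only by the payment network / issuer

def tokenize(card_number):
    token = secrets.token_hex(8)      # random surrogate stored on the device / with the merchant
    _vault[token] = card_number
    return token

def authorize(token, amount):
    card = _vault.get(token)          # only the vault can resolve the token
    return card is not None and amount > 0  # stand-in for the issuer's real checks

t = tokenize("4111111111111111")
print(t, authorize(t, 499.00))        # the card number itself never leaves the vault
```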

View post:

Google to set up AI research lab in Bengaluru - The Hindu