China wants to be a $150 billion world leader in AI in less than 15 years – CNBC

Zhang Peng | LightRocket | Getty Images

Robots dance for the audience at the Beijing International Consumer Electronics Expo, held on July 8 at the China National Convention Center in Beijing.

The first part of the plan runs up to 2020 and proposes that China make progress in developing a "new generation" of AI theory and technology, which will be implemented in some devices and basic software. It will also involve the development of standards, policies, and ethics for AI across the world's second-largest economy.

The second step of the plan, which runs to 2025, expects China to achieve a "major breakthrough" in AI technology and its application, leading to "industrial upgrading and economic transformation."

The last step, which will happen between 2025 and 2030, sees China become the world leader in AI, with the industry worth 1 trillion yuan (roughly $150 billion).

Read the original post:

China wants to be a $150 billion world leader in AI in less than 15 years - CNBC

Local COVID-19 Forecasts by AI – The UCSB Current

Despite efforts throughout the United States last spring to suppress the spread of the novel coronavirus, states across the country have experienced spikes in the past several weeks. The number of confirmed COVID-19 cases in the nation has climbed to more than 3.5 million since the start of the pandemic.

Public officials in many states, including California, have now started to roll back the reopening process to help curb the spread of the virus. Eventually, state and local policymakers will be faced with deciding for a second time when and how to reopen their communities. A pair of researchers in UC Santa Barbara's College of Engineering, Xifeng Yan and Yu-Xiang Wang, have developed a novel forecasting model, inspired by artificial intelligence (AI) techniques, to provide timely information at a more localized level that officials and anyone in the public can use in their decision-making processes.

"We are all overwhelmed by the data, most of which is provided at national and state levels," said Yan, an associate professor who holds the Venkatesh Narayanamurti Chair in Computer Science. "Parents are more interested in what is happening in their school district and if it's safe for their kids to go to school in the fall. However, there are very few websites providing that information. We aim to provide forecasting and explanations at a localized level with data that is more useful for residents and decision makers."

The forecasting project, Interventional COVID-19 Response Forecasting in Local Communities Using Neural Domain Adaption Models, received a Rapid Response Research (RAPID) grant for nearly $200,000 from the National Science Foundation (NSF).

"The challenges of making sense of messy data are precisely the type of problems that we deal with every day as computer scientists working in AI and machine learning," said Wang, an assistant professor of computer science and holder of the Eugene Aas Chair. "We are compelled to lend our expertise to help communities make informed decisions."

Yan and Wang developed an innovative forecasting algorithm based on a deep learning model called Transformer. The model is driven by an attention mechanism that intuitively learns how to forecast by learning what time period in the past to look at and what data is the most important and relevant.

"If we are trying to forecast for a specific region, like Santa Barbara County, our algorithm compares the growth curves of COVID-19 cases across different regions over a period of time to determine the most-similar regions. It then weighs these regions to forecast cases in the target region," explained Yan.
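The team's actual Transformer model is detailed in their paper; purely as an illustration of the attention idea Yan describes here, a similarity-weighted forecast over hypothetical county case counts might look like the sketch below. The case numbers, the distance-based similarity score, and the softmax weighting are all invented for the example.

```python
import numpy as np

def attention_forecast(target_history, reference_regions, horizon=7):
    """Toy stand-in for the learned attention mechanism: weight reference regions
    by how closely their early growth curves match the target region, then take a
    weighted average of their later trajectories as the forecast."""
    window = len(target_history)
    target = np.asarray(target_history, dtype=float)
    scores, futures = [], []
    for curve in reference_regions:
        past = np.asarray(curve[:window], dtype=float)
        futures.append(curve[window:window + horizon])
        scores.append(-np.sum((past - target) ** 2))      # similarity = negative distance
    weights = np.exp(np.asarray(scores) - max(scores))     # softmax over similarity scores
    weights /= weights.sum()
    return np.average(np.asarray(futures, dtype=float), axis=0, weights=weights)

# Hypothetical daily case counts: one target county and three reference counties
target = [5, 8, 13, 21, 30]
references = [
    [6, 9, 14, 22, 31, 40, 52, 61, 70, 84, 95, 110],   # similar early growth
    [2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7],              # dissimilar, gets little weight
    [5, 7, 12, 20, 29, 38, 50, 60, 72, 85, 96, 108],   # similar early growth
]
print(attention_forecast(target, references))
```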

In addition to COVID-19 data, the algorithm also draws information from the U.S. Census to factor in hyper-local details when calibrating the forecast for a local community.

"The census data is very informative because it implicitly captures the culture, lifestyle, demographics and types of businesses in each local community," said Wang. "When you combine that with COVID-19 data available by region, it helps us transfer the knowledge learned from one region to another, which will be useful for communities that want data on the effectiveness of interventions in order to make informed decisions."

The researchers' models showed that, during the recent spike, Santa Barbara County experienced spread similar to what Mecklenburg, Wake, and Durham counties in North Carolina saw in late March and early April. Using those counties to forecast future cases in Santa Barbara County, the researchers' attention-based model outperformed the most commonly used epidemiological models: the SIR (susceptible, infected, recovered) model, which describes the flow of individuals through three mutually exclusive stages; and the autoregressive model, which makes predictions based solely on a series of data points displayed over time. The AI-based model had a mean absolute percentage error (MAPE) of 0.030, compared with 0.11 for the SIR model and 0.072 for the autoregressive model. MAPE is a common measure of prediction accuracy in statistics.
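MAPE itself is straightforward to compute: average the absolute error as a fraction of the actual value over the forecast window. A minimal sketch with made-up numbers (not the study's data):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, expressed as a fraction (0.03 == 3%)."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative only: forecasts that are each off by roughly 3%
actual = [100, 110, 120, 130]
predicted = [103, 107, 124, 126]
print(round(mape(actual, predicted), 3))   # -> 0.03
```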

Yan and Wang say their model forecasts more accurately because it eliminates key weaknesses associated with current models. Census data provides fine-grained details missing in existing simulation models, while the attention mechanism leverages the substantial amounts of data now available publicly.

"Humans, even trained professionals, are not able to process the massive data as effectively as computer algorithms," said Wang. "Our research provides tools for automatically extracting useful information from the data to simplify the picture, rather than making it more complicated."

The project, conducted in collaboration with Dr. Richard Beswick and Dr. Lynn Fitzgibbons from Cottage Hospital in Santa Barbara, will be presented later this month during the Computing Research Association (CRA) Virtual Conference. Formed in 1972 as a forum for the chairs of computer science departments across the country, the CRA has grown to include more than 200 member organizations active in computing research.

Yan and Wang's research efforts will not stop there. They plan to make their model and forecasts available to the public via a website and to collect enough data to forecast for communities across the country. "We hope to forecast for every community in the country because we believe that when people are well informed with local data, they will make well-informed decisions," said Yan.

They also hope their algorithm can be used to forecast what could happen if a particular intervention is implemented at a specific time.

"Because our research focuses on more fundamental aspects, the developed tools can be applied to a variety of factors," added Yan. "Hopefully, the next time we are in such a situation, we will be better equipped to make the right decisions at the right time."

See original here:

Local COVID-19 Forecasts by AI - The UCSB Current

This robotic glove uses AI to help people with hand weakness regain muscle grip – The Next Web

A Scottish biotech startup has invented an AI-powered robotic glove that helps people recover muscle grip in their hands.

BioLiberty designed the glove for people who suffer from hand weakness due to age or illnesses such as motor neurone disease and carpal tunnel syndrome.

The system detects the wearer's intention to grip by using electromyography (EMG) to measure the electrical activity generated by a nerve's stimulation of the muscle.

An algorithm then converts the intent into force to help the wearer strengthen their grip on an object.
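BioLiberty has not published its algorithm, but the general signal chain described here (rectify the EMG signal, smooth it into an envelope, and map anything above a resting threshold to an assist force) can be sketched roughly as follows. The threshold, gain, and force cap are invented for the illustration.

```python
def assist_force(emg_samples, threshold=0.05, gain=40.0, max_force=20.0):
    """Toy EMG-to-force mapping: rectify the samples, average them into an envelope,
    and convert anything above a rest-level threshold into newtons of grip assist.
    Threshold, gain, and force cap are illustrative values, not BioLiberty's."""
    envelope = sum(abs(s) for s in emg_samples) / len(emg_samples)
    if envelope <= threshold:
        return 0.0                                  # no intent to grip detected
    return min(gain * (envelope - threshold), max_force)

print(assist_force([0.01, -0.02, 0.015, -0.01]))    # resting hand -> 0.0
print(assist_force([0.4, -0.5, 0.45, -0.35]))       # active muscle -> proportional assist
```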

The glove could help users with a wide range of daily tasks, from driving to opening jars.


BioLiberty cofounder Ross Hanlon said he got the idea when an aunt with multiple sclerosis started struggling with simple tasks like drinking water:

Being an engineer, I decided to use technology to tackle these challenges head-on with the aim of helping people like my aunt to retain their autonomy. As well as those affected by illness, the population continues to age and this places increasing pressure on care services. We wanted to support independent living and healthy aging by enabling individuals to live more comfortably in their own homes for longer.

Hanlon's aunt is one of around 2.5 million UK citizens who suffer from hand weakness. An aging population means this number will only increase.

BioLiberty's robotic glove and digital therapy platform could help them regain their strength.

The company has already developed a working prototype of the glove. The team now plans to use support from Edinburgh Business School's Incubator to bring the glove into homes.

Ultimately, they want their tech to help people suffering from reduced mobility to regain their independence.

Published February 16, 2021 16:17 UTC

The rest is here:

This robotic glove uses AI to help people with hand weakness regain muscle grip - The Next Web

DeepMind hopes to teach AI to cooperate by playing Diplomacy – VentureBeat

DeepMind, the Alphabet-backed machine learning lab that's tackled chess, Go, StarCraft 2, Montezuma's Revenge, and beyond, believes the board game Diplomacy could motivate a promising new direction in reinforcement learning research. In a paper published on the preprint server arXiv.org, the firm's researchers describe an AI system that achieves high scores in Diplomacy while yielding consistent improvements.

AI systems have achieved strong competitive play in complex, large-scale games like Hex, shogi, and poker, but the bulk of these are two-player zero-sum games, where a player can win only by causing another player to lose. That doesn't necessarily reflect the real world; tasks like route planning around congestion, contract negotiations, and interacting with customers all involve compromise and consideration of how the preferences of group members coincide and conflict. Even when AI software agents are self-interested, they might gain by coordinating and cooperating, so interacting among diverse groups requires complex reasoning about others' goals and motivations.

The game Diplomacy forces these interactions by tasking seven players with controlling multiple units on a province-level map of Europe. Each turn, all players move all of their units simultaneously, and one unit may support another unit owned by the same or another player, allowing it to overcome resistance from other units. (Alternatively, units, which all have equal strength, can hold a province or move to an adjacent space.) Thirty-four of the provinces are supply centers, and units capture supply centers by occupying them. Owning more supply centers allows a player to build more units, and the game is won by owning a majority of the supply centers.

Due to the interdependencies between units, players must negotiate the moves of their own units. They stand to gain by coordinating their moves with those of other players, and they must anticipate how other players will act and reflect these expectations in their actions.

"We propose using games like Diplomacy to study the emergence and detection of manipulative behaviors to make sure that we know how to mitigate such behaviors in real-world applications," the coauthors wrote. "Research on Diplomacy could pave the way towards creating artificial agents that can successfully cooperate with others, including handling difficult questions that arise around establishing and maintaining trust and alliances."

Above: The performance of the DeepMind system over time compared with baselines.

Image Credit: DeepMind

DeepMind focused on the "no-press" variant of Diplomacy, where no explicit communication is allowed. It trained reinforcement learning agents (agents that take actions to maximize some reward) using an approach called Sampled Best Responses (SBR), which handles the very large number of actions players can take in Diplomacy, combined with a policy iteration technique that approximates best responses to other players' actions, as well as fictitious play.

At each iteration, DeepMind's system creates a data set of games, with actions chosen by a module called an improvement operator that uses a previous strategy (policy) and value function to find a policy that defeats the previous policy. It then trains the policy and value functions to predict the actions the improvement operator will choose as well as the game results.

The aforementioned SBR identifies policies that maximize the expected return for the system's agents against opponents' policies. SBR is coupled with Best Response Policy Iteration (BRPI), a family of algorithms tailored to using SBRs in many-player games, the most sophisticated of which trains the policies to predict only the latest BR and explicitly averages historical checkpoints to provide the current empirical strategy.
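As a rough schematic of the Sampled Best Responses idea (not DeepMind's implementation), the core step scores a handful of candidate actions against sampled opponent actions and keeps the best one. The toy one-shot game below, with an invented payoff and opponent policy, illustrates just that sampling step; in the full BRPI loop, the actions chosen this way across many self-play games become the training targets for the next policy and value networks.

```python
import random

def sampled_best_response(candidate_actions, sample_opponents, value, num_samples=32):
    """Schematic Sampled Best Response: estimate each candidate action's expected
    value against sampled opponent actions and keep the best candidate."""
    def estimate(action):
        return sum(value(action, sample_opponents()) for _ in range(num_samples)) / num_samples
    return max(candidate_actions, key=estimate)

# Toy one-shot game: pick a number close to the average of two opponents' picks.
random.seed(0)
opponent_policy = lambda: [random.gauss(5, 1), random.gauss(7, 1)]
payoff = lambda action, opponents: -abs(action - sum(opponents) / len(opponents))

best = sampled_best_response(candidate_actions=range(11),
                             sample_opponents=opponent_policy,
                             value=payoff)
print(best)   # typically 6, near the opponents' average
```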

To evaluate the system's performance, DeepMind measured head-to-head win rates against six agents from different algorithms and against a population of six players independently drawn from a reference corpus. They also considered meta-games between checkpoints of one training run to test for consistent improvement and examined the exploitability (the margin by which an adversary would defeat a population of agents) of the game-playing agents.

The system's win rates weren't especially high (averaged over five seeds of each game, they ranged between 12.7% and 32.5%), but DeepMind notes that they represent a large improvement over agents trained with supervised learning. Against one algorithm in particular, DipNet, in a 6-to-1 game where six of the agents were controlled by DeepMind's system, the win rates of DeepMind's agents improved steadily through training.

In future work, the researchers plan to investigate ways to reduce the agents' exploitability and build agents that reason about the incentives of others, potentially through communication. "Using [reinforcement learning] to improve game-play in Diplomacy is a prerequisite for investigating the complex mixed motives and many-player aspects of this game ... Beyond the direct impact on Diplomacy, possible applications of our method include business, economic, and logistics domains ... In providing the capability of training a tactical baseline agent for Diplomacy or similar games, this work also paves the way for research into agents that are capable of forming alliances and use more advanced communication abilities, either with other machines or with humans," the researchers wrote.

Go here to see the original:

DeepMind hopes to teach AI to cooperate by playing Diplomacy - VentureBeat

She was named one of the 100 most brilliant women in AI ethics – News@Northeastern

Computer science professor Tina Eliassi-Rad says she's proud to be named on an industry list of 100 Brilliant Women in AI Ethics, which identifies her as one of the top thinkers in the male-dominated field of artificial intelligence. But she's even prouder of what the carefully curated list represents.

"Part of the issue in a field such as computer science is that women and other under-represented minorities aren't always seen. Initiatives like this one show that there are a lot of women who are qualified to do this work," says Eliassi-Rad.

Mia Shah-Dand, the CEO of the Oakland, California-based research firm Lighthouse3, created the annual list in 2018. Shah-Dand says she wanted to provide a rebuttal to technology leaders who complained that they couldn't find accomplished, diverse women to hire.

"I was a little frustrated with all the times I would hear, 'There just aren't enough qualified women,'" says Shah-Dand. "It's the same old excuse. Well, we have an entire directory of qualified women now. There is no excuse. At this point in 2021, if you have only men on your staff, it's intentional."

According to recent research by the World Economic Forum, women hold only 26% of data and artificial intelligence jobs across the globe, and even fewer have senior roles.

Shah-Dand says she included Eliassi-Rad on her 2021 list because of the professor's extensive research on racial, gender and other baked-in biases in artificial intelligence algorithms.

"Her emphasis on algorithmic accountability and fairness was particularly interesting," says Shah-Dand.

Algorithms, which scan large amounts of data and find whatever information their creators want, are increasingly part of our everyday lives. For example, credit card fraud departments use algorithms to detect abnormal spending, while social media algorithms use viewer interests to determine which ads to run.

Eliassi-Rad's research at Northeastern focuses on the unseen but overwhelming influence that artificial intelligence algorithms can have on people's lives, especially in social media.

"Part of the problem with algorithms is that they can impact life-altering decisions if they're used in criminal justice or even your credit score," says Eliassi-Rad. Microlenders, or individuals who issue small loans, will often check a candidate's Facebook and Twitter feeds when deciding whether to grant a loan. A chance connection with someone who has defaulted on a loan could trigger a denial, says Eliassi-Rad.

"Sometimes if you don't get the right loan in life, you can't better yourself," she says.

Eliassi-Rad's career in computer science was sparked by her father's early work with autonomous vehicles. She avidly read the many magazines he brought home and decided computer science was the perfect balance between math and electrical engineering. Her focus recently sharpened as she learned about the different class, race, and gender biases in machine learning.

She likens the data used in algorithms to an iconic photo of a police officer's German shepherd attacking a Black high school student during a 1963 civil rights event in Birmingham, Alabama.

"The German shepherd isn't racist, it's the people teaching the dog," Eliassi-Rad says. Even if the data used in an algorithm isn't biased, the algorithm may still produce biased findings.

"As you are developing an algorithm you are making choices, and those choices have consequences," Eliassi-Rad says.

Eliassi-Rad and Shah-Dand say the list of top women in AI ethics does more than provide a roster of qualified computer science professionals who also happen to be female, LGTBQ, or women of color. It creates a community to foster networking and support while providing role models for future generations.

"It's sort of like a sisterhood," says Eliassi-Rad, who received an Outstanding Mentor Award from the Office of Science at the US Department of Energy in 2010. "I hope young women see this and think, 'I can be somebody like this person.'"

For media inquiries, please contact media@northeastern.edu.

See the rest here:

She was named one of the 100 most brilliant women in AI ethics - News@Northeastern

ZeroStack Launches AI Suite for Self-Driving Clouds – Yahoo Finance

MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--

ZeroStack, the leader in making self-driving private cloud affordable for all companies, today announced its roadmap and first suite of artificial intelligence (AI) capabilities derived from machine learning. These capabilities build the foundation for self-driving clouds, making deploying, running, and managing an on-premises cloud as hands-off as using a public cloud. While other on-premises clouds aimed at empowering application developers require major investments in IT infrastructure and internal skills, ZeroStack's intelligent cloud platform leverages self-healing software and algorithms developed from over one million datagrams. This economic disruption frees businesses to choose clouds for application development based on data locality, governance, performance and costs without technology adoption restricting their choices.

"With ZeroStack's vision for automated cloud, and this first release of real capabilities, I believe they are the only credible cloud vendor to employ artificial intelligence in the service of enterprise customers," said Torsten Volk, senior analyst at Enterprise Management Associates. "Given the increasing complexity of IT operations, deploying AI is an optimal way of managing costs."

"The future of the datacenter is AI because fewer and fewer companies want to manage any infrastructure. As a result, the responsibility to manage increasing complexity is shifting from the customer to the vendor," said Dr. Jim Metzler, principal analyst at Ashton, Metzler and Associates. "By incorporating AI technology into their software, ZeroStack is at the forefront of these tidal changes in IT."

ZeroStack's AI Suite

Designed by senior engineers from VMware and Google, ZeroStack's intelligent cloud platform collects operational data and leverages machine learning to help customers make decisions about capacity planning, troubleshooting and optimized placement of applications. ZeroStack's vision is to extend existing functionality in three phases:

"ZeroStack has continually worked to reduce IT's I&O burden for enterprise customers, and our AI software strategy points the way to the future of IT operations," said Kamesh Pemmaraju, vice president of product management at ZeroStack. "As placement and management of customer workloads increase datacenter complexity, AI will be a key requirement for cost-effective management, and we are at the forefront of using this technology."

About ZeroStack

ZeroStack uses smart software and artificial intelligence to deliver a self-driving, fully integrated private cloud platform that offers the agility and simplicity of public cloud at a fraction of the cost. On premises, ZeroStack's cloud operating system converts bare-metal servers into a reliable, self-healing cloud cluster. This cluster is consumed via a self-service SaaS portal. The SaaS portal also collects operational data and uses artificial intelligence to create models that help customers make decisions about capacity planning, troubleshooting and optimized placement of applications. The integrated AppStore enables 1-click deployment of many applications that provide the platform for most modern cloud native applications. This solution is fully integrated with public clouds to offer seamless migration between clouds. The company is funded by Formation 8 and Foundation Capital, and is based in Mountain View, California. For more information, visit http://www.zerostack.com or follow us on Twitter @ZeroStackInc.

View source version on businesswire.com: http://www.businesswire.com/news/home/20170206005249/en/

Continue reading here:

ZeroStack Launches AI Suite for Self-Driving Clouds - Yahoo Finance

A beginners guide to AI: The difference between video game AI and real AI – The Next Web

Welcome to TNW's beginner's guide to AI. This multi-part feature should provide you with a very basic understanding of what AI is, what it can do, and how it works. The guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, and artificial general intelligence.

Among the most common misconceptions surrounding machine learning technology is the idea that video games dating back to the 1970s and 1980s had built-in artificial intelligence capable of interacting with a human user.

If you're curious but in a hurry: video game AI, in the traditional sense, is not what people refer to in the modern era when they're talking about artificial intelligence. The bots in an online multiplayer game, the enemies in a first-person shooter, and the CPU-controlled characters in old-school Nintendo games are not examples of artificial intelligence; they're just clever programming tricks.

Artificial intelligence, in the form we discuss here at Neural, includes machine learning systems like the core neural networks behind Alexa, Siri, and Google Assistant. Adobe uses AI to predict what you want a correction to look like, Google uses it to find you a cheap flight, and Twitter uses AI to determine which ads to serve you.

But, at the risk of confusing things further, video game developers often use AI to create video games. The Unreal Engine, for example, uses AI to allow for real-time graphics rendering. AI is not usually used to control anything that interacts with the player though, because it would typically be a poor solution to most programming problems faced by developers.

When gamers think of the AI in a game, they're probably not imagining a set of image recognition algorithms. They're thinking of the CPU-controlled enemies that can recognize the player's actions and respond. We've seen CPU enemies take cover, call for backup, and respond in kind when players use new tactics. Again, this usually isn't accomplished with artificial intelligence.

There are only so many things an agent can do in a video game, so it's usually more cost-effective and simpler to just code an agent to perform certain tasks than it is to train a neural network to control the agent.
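Those "clever programming tricks" usually amount to hand-written rules or simple state machines rather than learned behavior. A minimal illustration with invented enemy logic, not taken from any particular game engine:

```python
def enemy_behavior(enemy, player):
    """Classic scripted 'game AI': a few hard-coded rules, no learning involved."""
    if enemy["health"] < 30:
        return "take_cover"
    if distance(enemy, player) < 5:
        return "melee_attack"
    if player["is_visible"]:
        return "shoot"
    return "patrol"

def distance(a, b):
    return ((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5

print(enemy_behavior({"health": 80, "x": 0, "y": 0},
                     {"x": 2, "y": 2, "is_visible": True}))   # -> "melee_attack"
```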

Perhaps in the future, as games continue to expand in size and features, it'll begin to make sense to create AI-powered agents to explore video game worlds in tandem with players. One of the most popular ways of developing robust AI systems is to let models loose in video game worlds. StarCraft and Super Mario Bros. are among the most popular gaming worlds for machine learning research.

But the purpose of such research has nothing to do with video game development. Researchers observe AI models in gaming worlds because they're often physics-based, and that helps AI learn how the real world works.

Though there are some exceptions, we can typically assume any AI reference in the gaming world that refers to the CPU's control over agents (i.e., the enemy orcs in Shadow of War or the AI companions in Fallout 4) is not actual artificial intelligence. Though the developers of both games likely used AI for myriad functions in their creation, the games themselves don't have AI baked in specifically to control NPCs, agents, monsters, allies, or bad guys.

Published August 10, 2020 21:16 UTC

View original post here:

A beginners guide to AI: The difference between video game AI and real AI - The Next Web

Artificial Intelligence in Medicine | IBM

Artificial intelligence in medicine is the use of machine learning models to search medical data and uncover insights to help improve health outcomes and patient experiences. Thanks to recent advances in computer science and informatics, artificial intelligence (AI) is quickly becoming an integral part of modern healthcare. AI algorithms and other applications powered by AI are being used to support medical professionals in clinical settings and in ongoing research.

Currently, the most common roles for AI in medical settings are clinical decision support and imaging analysis. Clinical decision support tools help providers make decisions about treatments, medications, mental health and other patient needs by providing them with quick access to information or research that's relevant to their patient. In medical imaging, AI tools are being used to analyze CT scans, x-rays, MRIs and other images for lesions or other findings that a human radiologist might miss.

The challenges that the COVID-19 pandemic created for many health systems also led many healthcare organizations around the world to start field-testing new AI-supported technologies, such as algorithms designed to help monitor patients and AI-powered tools to screen COVID-19 patients.

The research and results of these tests are still being gathered, and the overall standards for the use of AI in medicine are still being defined. Yet opportunities for AI to benefit clinicians, researchers and the patients they serve are steadily increasing. At this point, there is little doubt that AI will become a core part of the digital health systems that shape and support modern medicine.

Follow this link:

Artificial Intelligence in Medicine | IBM

The Wild Future of Artificial Intelligence – The Atlantic

  1. The Wild Future of Artificial Intelligence  The Atlantic
  2. Why tech insiders are so excited about ChatGPT, a chatbot that answers questions and writes essays  CNBC
  3. The artificial intelligence revolution in compliance isn't coming. It happened yesterday.  The FCPA Blog
  4. What is ChatGPT, the artificial intelligence text bot that went viral?  ABC News
  5. ChatGPT and Lensa: Why Everyone Is Playing With Artificial Intelligence  The Wall Street Journal

Follow this link:

The Wild Future of Artificial Intelligence - The Atlantic

The Increased Spending On Defense Will Propel The AI in Military Market Size To More Than $11 Billion By 2026 As Per The Business Research Company’s…

The Increased Spending On Defense Will Propel The AI in Military Market Size To More Than $11 Billion By 2026 As Per The Business Research Company's Artificial Intelligence In Military Global Market Report 2022  GlobeNewswire

Original post:

The Increased Spending On Defense Will Propel The AI in Military Market Size To More Than $11 Billion By 2026 As Per The Business Research Company's...

Is AI cybersecuritys salvation or its greatest threat? – VentureBeat

This article is part of a VB special issue. Read the full series here: AI and Security.

If you're uncertain whether AI is the best or worst thing to ever happen to cybersecurity, you're in the same boat as experts watching the dawn of this new era with a mix of excitement and terror.

AI's potential to automate security on a broader scale offers a welcome advantage in the short term. Yet unleashing a technology designed to eventually take humans out of the equation as much as possible naturally gives the industry some pause. There is an undercurrent of fear about the consequences if things run amok or attackers learn to make better use of the technology.

"Everything you invent to defend yourself can also eventually be used against you," said Geert van der Linden, an executive vice president of cybersecurity for Capgemini. "This time does feel different, because more and more, we are losing control as human beings."

In VentureBeat's second quarterly special issue, we explore this algorithmic angst across multiple stories, looking at how important humans remain in the age of AI-powered security, how deepfakes and deep media are creating a new security battleground even as the cybersecurity skills gap remains a concern, how surveillance powered by AI cameras is on the rise, how AI-powered ransomware is rearing its head, and more.

Each evolution of computing in recent decades has brought new security threats and new tools to fight them. From networked PCs to cloud computing to mobile, the trend is always toward more data stored in ways that introduce unfamiliar vulnerabilities, larger attack vectors, and richer targets that attract increasingly well-funded bad actors.

The AI security era is coming into focus quickly, and the design of these security tools, the rules that govern them, and the way they're deployed carry increasingly high stakes. The race is on to determine whether AI will help keep people and businesses secure in an increasingly connected world or push us into the digital abyss.

In a hair-raising prediction last year, Juniper Research forecast that the annual cost of data breaches will increase from $3 trillion in 2019 to $5 trillion in 2024. This will be due to a mix of fines for regulation violations, lost business, and recovery costs. But it will also be driven by a new variable: AI.

"Cybercrime is increasingly sophisticated; the report anticipates that cybercriminals will use AI, which will learn the behavior of security systems in a similar way to how cybersecurity firms currently employ the technology to detect abnormal behavior," reads Juniper's report. "The research also highlights that the evolution of deepfakes and other AI-based techniques is also likely to play a part in social media cybercrime in the future."

Given that every business is now a digital business to some extent, spending on infrastructure defense is exploding. Research firm Cybersecurity Ventures notes that the global cybersecurity market was worth $3.5 billion in 2004 but increased to $120 billion in 2017. It projects that spending will grow to an annual average of $200 billion over the next five years. Tech giant Microsoft alone spends $1 billion each year on cybersecurity.

With projections of a 1.8 million-person shortfall for the cybersecurity workforce by 2022, this spending is due in part to the growing costs of recruiting talent. AI boosters believe the technology will reduce costs by requiring fewer humans while still making systems safe.

"When we're running security operation centers, we're pushing as hard as we can to use AI and automation," said Dave Burg, EY Americas cybersecurity leader. "The goal is to take a practice that would normally maybe take an hour and cut it down to two minutes, just by having the machine do a lot of the work and decision-making."

In the short term, companies are bubbling with optimism that AI can help them turn the tide against the mounting cybersecurity threat.

In a report on AI and cybersecurity last summer, Capgemini reported that 69% of enterprise executives surveyed felt AI would be essential for responding to cyberthreats. Telecom led all other industries, with 80% of executives counting on AI to shore up defenses. Utilities executives were at the low end, with only 59% sharing that opinion.

Overall bullishness has triggered a wave of investments in AI cybersecurity, to bulk up defenses, but also to pursue a potentially lucrative new market.

Early last year, Comcast made a surprise move when it announced the acquisition of BluVector, a spinoff of defense contractor Northrop Grumman that uses artificial intelligence and machine learning to detect and analyze increasingly sophisticated cyberattacks. The telecommunications giant said it wanted to use the technology internally, but also continue developing it as a service it could sell to others.

Subsequently, Comcast launched Xfinity xFi Advanced Security, which automatically provides security for all the devices in a customer's home that are connected to its network. It created the service in partnership with Cujo AI, a startup based in El Segundo, California, that developed a platform to spot unusual patterns on home networks and send Comcast customers instant alerts.

Cujo AI founder Einaras von Gravrock said the rapid adoption of connected devices in the home and the broader internet of things (IoT) has created too many vulnerabilities to be tracked manually or blocked effectively by conventional firewall software. His startup turned to AI and machine learning as the only option to fight such a battle at scale.

Von Gravrock argued that spending on such technology is less of a cost and more of a necessity. If a company like Comcast wants to convince customers to use a growing range of services, including those arriving with the advent of 5G networks, the provider must be able to convince people they are safe.

"When we see the immediate future, all operators will have to protect your personal network in some way, shape, or form," von Gravrock said.

Capgemini's aforementioned report found that overall, 51% of enterprises said they were heavily using some kind of AI for detection, 34% for prediction, and 18% to manage responses. Detection may sound like a modest start, but it's already paying big dividends, particularly in areas like fraud detection.

Paris-based Shift has developed algorithms that focus narrowly on weeding out fraud in insurance. Shift's service can spot patterns in data such as contracts, reports, photos, and even videos that are processed by insurance companies. With more than 70 clients, Shift has amassed a huge amount of data that has allowed it to rapidly fine-tune its AI. The intended result is more efficiency for insurance companies and a better experience for customers, whose claims are processed faster.

The startup has grown quickly after raising $10 million in 2016, $28 million in 2017, and $60 million last year. Cofounder and CEO Jeremy Jawish said the key was adopting a narrow focus in terms of what it wanted to do with AI.

"We are very focused on one problem," Jawish said. "We are just dealing with insurance. We don't do general AI. That allows us to build up the data we need to become more intelligent."

While this all sounds potentially utopian, a dystopian twist is gathering momentum. Security experts predict that 2020 could be the year hackers really begin to unleash attacks that leverage AI and machine learning.

"The bad [actors] are really, really smart," said Burg of EY Americas. "And there are a lot of powerful AI algorithms that happen to be open source. And they can be used for good, and they can also be used for bad. And this is one of the reasons why I think this space is going to get increasingly dangerous. Incredibly powerful tools are being used to basically do the inverse of what the defenders [are] trying to do on the offensive side."

In an experiment back in 2016, cybersecurity company ZeroFox created an AI algorithm called SNAPR that was capable of posting 6.75 spear phishing tweets per minute that reached 800 people. Of those, 275 recipients clicked on the malicious link in the tweet. These results far outstripped the performance of a human, who could generate only 1.075 tweets per minute, reaching only 125 people and convincing just 49 individuals to click.
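Expressed with only the figures quoted above, the machine's advantage was throughput rather than persuasiveness: its click-through rate per recipient was actually slightly lower than the human's, but it produced tweets roughly six times faster.

```python
# Figures from the ZeroFox experiment quoted above
ai_tweets_per_min, ai_reached, ai_clicks = 6.75, 800, 275
human_tweets_per_min, human_reached, human_clicks = 1.075, 125, 49

print(f"AI click-through:    {ai_clicks / ai_reached:.1%}")        # ~34% of recipients
print(f"Human click-through: {human_clicks / human_reached:.1%}")  # ~39% of recipients
print(f"Throughput ratio:    {ai_tweets_per_min / human_tweets_per_min:.1f}x tweets per minute")
```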

Likewise, digital marketing firm Fractl demonstrated how AI could unleash a tidal wave of fake news and disinformation. Using publicly available AI tools, it created a website that includes 30 highly polished blog posts, as well as an AI-generated headshot for the non-existent author of the posts.

And then there is the rampant use of deepfakes, which employ AI to match images and sound to create videos that in some cases are almost impossible to identify as fake. Adam Kujawa, the director of Malwarebytes Labs, said he's been shocked at how quickly deepfakes have evolved. "I didn't expect it to be so easy," he said. "Some of it is very alarming."

In a 2019 report, Malwarebytes listed a number of ways it expects bad actors to start using AI this year. That includes incorporating AI into malware. In this scenario, the malware uses AI to adapt in real time if it senses any detection programs. Such AI malware will likely be able to target users more precisely, fool automated detection systems, and threaten even larger stashes of personal and financial information.

"I should be more excited about AI and security, but then I look at this space and look at how malware is being built," Kujawa said. "The cat is out of the bag. Pandora's box has been opened. I think this technology is going to become the norm for attacks. It's so easy to get your hands on and so easy to play with this."

Researchers in computer vision are already struggling to thwart attacks designed to disrupt the quality of their machine learning systems. It turns out that these learning systems remain remarkably easy to fool using adversarial attacks. External third parties can detect how a machine learning system works and then introduce code that confuses the system and causes it to misidentify images.
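A minimal sketch of how such an adversarial attack works, using a hand-built linear classifier rather than a real vision model: push each input feature a small, bounded amount in the direction that most increases the model's error (the "fast gradient sign" idea), and the prediction flips even though no feature moves by more than that bound. The weights and inputs below are invented for the illustration.

```python
import numpy as np

# A toy linear "classifier": score > 0 means class 1, otherwise class 0.
w = np.array([0.9, -0.6, 0.4, -0.8])
b = 0.05
predict = lambda x: int(w @ x + b > 0)

x = np.array([0.5, 0.1, 0.6, 0.2])       # a correctly classified input (class 1)
print(predict(x))                         # -> 1

# Fast-gradient-sign-style perturbation: for a linear model, the gradient of the
# score with respect to the input is just w, so step each feature against it.
epsilon = 0.35
x_adv = x - epsilon * np.sign(w)
print(predict(x_adv))                     # -> 0: bounded, structured noise flips the label
print(np.max(np.abs(x_adv - x)))          # no feature moved by more than epsilon
```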

Even worse is that leading researchers acknowledge we don't really have a solution for stopping mischief makers from wreaking havoc on these systems.

"Can we defend against these attacks?" asked Nicolas Papernot, an AI researcher at Google Brain, during a presentation in Paris last year. "Unfortunately, the answer is no."

In response to possible misuse of AI, the cybersecurity industry is doing what it's always done during such technology transitions: try to stay one step ahead of malicious players.

Back in 2018, BlackBerry acquired cybersecurity startup Cylance for $1.4 billion. Cylance had developed an endpoint protection platform that used AI to look for weaknesses in networks and shut them down if necessary. Last summer, BlackBerry created a new business unit led by its CTO that focuses on cybersecurity research and development (R&D). The resulting BlackBerry Labs has a dedicated team of 120 researchers. Cylance was a cornerstone of the lab, and the company said machine learning would be among the primary areas of focus.

Following that announcement, in August the company introduced BlackBerry Intelligent Security, a cloud-based service that uses AI to automatically adapt security protocols for employees' smartphones or laptops based on location and patterns of usage. The system can also be used for IoT devices or, eventually, autonomous vehicles. By instantly assessing a wide range of factors to adjust the level of security, the system is designed to keep a device just safe enough without always requiring maximum security settings, which an employee might be tempted to circumvent.

"Otherwise, you're left with this situation where you have to impose the most onerous security measures, or you have to sacrifice security," said Frank Cotter, senior vice president of product management at BlackBerry. "That was the intent behind Cylance and BlackBerry Labs, to get ahead of the malicious actors."

San Diego-based MixMode is also looking down the road and trying to build AI-based security tools that learn from the limitations of existing services. According to MixMode CTO Igor Mezic, existing systems may have some AI or machine learning capability, but they still require a number of rules that limit the scope of what they can detect and how they can learn and require some human intervention.

"We've all seen phishing emails, and they're getting way more sophisticated," Mezic said. "So even as a human, when I look at these emails and try to figure out whether this is real or not, it's very difficult. So, it would be difficult for any rule-based system to discover, right? These AI methodologies on the attack side have already developed to the place where you need human intelligence to figure out whether it's real. And that's the scary part."

AI systems that still include some rules also tend to throw off a lot of false positives, leaving security teams overwhelmed and eliminating any initial advantages that came with automation, Mezic said. MixMode, which has raised about $13 million in venture capital, is developing what it describes as "third-wave AI."

In this case, the goal is to make AI security more adaptive on its own rather than relying on rules that need to be constantly revised to tell it what to look for. MixModes platform monitors all nodes on a network to continually evaluate typical behavior. When it spots a slight deviation, it analyzes the potential security risk and rates it from high to low before deciding whether to send up an alert. The MixMode system is always updating its baseline of behavior so no humans have to fine-tune the rules.
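MixMode's platform is proprietary; the sketch below only illustrates the general pattern described here of learning a per-node baseline, measuring deviations from it, and rating them from low to high, with invented traffic numbers and thresholds.

```python
import statistics

def anomaly_rating(history, current):
    """Rate how far a node's current behavior sits from its own learned baseline.
    Purely illustrative thresholds; a real system models far richer features."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    z = abs(current - mean) / stdev
    if z < 2:
        return "normal"
    if z < 4:
        return "low"      # worth logging, not worth an alert
    return "high"         # send up an alert for analysts to review

connections_per_min = [42, 39, 45, 41, 44, 40, 43, 38]   # the node's baseline behavior
print(anomaly_rating(connections_per_min, 44))            # -> "normal"
print(anomaly_rating(connections_per_min, 260))           # -> "high"
```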

"Your own AI system needs to be very cognizant that an external AI system might be trying to spoof it or even learn how it operates," Mezic said. "How can you write a rule for that? That's the key technical issue. The AI system must learn to recognize whether there are any changes on the system that feel like they're being made by another AI system. Our system is designed to account for that. I think we are a step ahead. So let's try to make sure that we keep being a step ahead."

Yet this type of unsupervised AI starts to cross a frontier that makes some observers nervous. It will eventually be used not just in business and consumer networks, but also in vehicles, factories, and cities. As it takes on predictive duties and makes decisions about how to respond, such AI will balance factors like loss of life against financial costs.

Humans will have to carefully weigh whether they are ready to cede such power to algorithms, even though they promise massive efficiencies and increased defensive power. On the other hand, if malicious actors are mastering these tools, will the rest of society even have a choice?

"I think we have to make sure that as we use the technology to do a variety of different things, we also are mindful that we need to govern the use of the technology and realize that there will likely be unforeseen consequences," said Burg of EY Americas. "You really need to think through the impact and the consequences, and not just be a naive believer that the technology alone is the answer."

Read the original here:

Is AI cybersecuritys salvation or its greatest threat? - VentureBeat

AI and ‘Enormous Data’ Could Make Tech Giants Like Google … – WIRED


Originally posted here:

AI and 'Enormous Data' Could Make Tech Giants Like Google ... - WIRED

Google AI executive sees a world of trillions of devices untethered from human care – ZDNet

If artificial intelligence is going to spread to trillions of devices, those devices will have to operate in a way that doesn't need a human to run them, a Google executive who leads a key part of the search giant's machine learning software told a conference of chip designers this week.

"The only way to scale up to the kinds of hundreds of billions or trillions of devices we are expecting to emerge into the world in the next few years is if we take people out of the care and maintenance loop," said Pete Warden, who runs Google's effort to bring deep learning to even the simplest embedded devices.

"You need to have peel-and-stick sensors," said Warden, ultra-simple, dirt-cheap devices that require only tiny amounts of power and cost pennies.

"And the only way to do that is to make sure that you don't need to have people going around and doing maintenance."

Warden was the keynote speaker Tuesday at a microprocessor conference held virtually, The Linley Fall Processor Conference, hosted by chip analysts The Linley Group.

Warden offered the assembled, mostly chip industry executives, a wish list, as he put it, for hardware for devices.

That wish list includes ultra-low-power chips that do away with complex memory access and file access mechanisms, and instead focus on the repetitive arithmetic operations required in machine learning. Machine learning makes heavy use of linear algebra, consisting of vector-matrix and matrix-matrix multiplication operations.

Embedded deep learning needs chips that have "more arithmetic," said Warden.

"ML workloads are usually compute-bound," he told the audience. "We load a few activation and weight values, and do a lot of arithmetic on them in registers."

Warden's vision is that of self-sufficient devices that would run on battery power, perhaps for years, without needing to connect to a wall socket very often, perhaps not ever.

That would exclude the Raspberry Pi, said Warden, and anything else that requires "mains power," being plugged into a wall, and things that draw watts of power from a battery, such as a smartphone.

Instead, "We are aiming at the edge of the edge," said Warden, devices that are even more resource-constrained than cell phones, things such as peel-and-stick sensors that can be used in industrial applications.

"We are really looking at running on devices that are less than a dollar, maybe even 50 cents in price, that have a very small form factor."

Also: What is edge computing? Here's why the edge matters and where it's headed

Such devices might draw a single milliwatt to operate, he said, which "is really important, because that means you have a device that can run on double-A batteries for a year or two years, or even via energy harvesting from solar or vibration."
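The arithmetic behind that claim is simple, assuming a pair of AA alkaline cells holds very roughly 7 to 8 watt-hours in total (an assumption for the illustration, not a figure from Warden):

```python
# Rough, illustrative figures: ~2,500 mAh per AA cell at a nominal 1.5 V
cell_energy_wh = 2.5 * 1.5           # ~3.75 Wh per cell
pack_energy_wh = 2 * cell_energy_wh  # two AA cells, ~7.5 Wh
average_draw_w = 0.001               # the 1 milliwatt figure Warden cites

hours = pack_energy_wh / average_draw_w
print(f"{hours:.0f} hours ~= {hours / 24 / 365:.1f} years")   # ~7,500 hours, most of a year
```

Duty-cycling the device so its average draw falls well below a milliwatt is what stretches that figure toward the two-year mark, or removes the battery entirely in favor of harvested energy.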

The challenge at present for deep learning forms of machine learning, Warden told the audience, is that many deep learning neural networks can't run at all on embedded devices because of the diffuse requirements of all the many micro-controller platforms that exist.

"We interact with a lot of product teams inside Google trying to build very interesting new products, and product teams at companies all over the world, and we often have to say, 'No, that's not quite possible yet,'" Warden told the audience.

"Because what's happening is the technology around deep learning, and the kinds of models that you can actually build on the training side that would be useful for product features, they often can't actually be deployed on the kinds of devices that people have in their actual hardware platforms."

If such models could be made to run on those billions of devices, "they would enable a whole bunch of new experiences for users," he said.

Embedded machine learning of the kind Warden discussed is part of a broader movement called TinyML. Today, examples of TinyML are fairly limited, things such as the wake word that activates a phone, such as "Hey, Google," or "Hey, Siri." (Warden confided to the audience, with a chuckle, that he and colleagues have to refer to "Hey, Google" around the office as "Hey, G," in order not to have one another's phones going off constantly.)

Warden has been leading the software effort to make possible the kinds of ultra-light-weight devices he was talking about. That effort is called TensorFlow Lite Micro, or TF Micro.

Warden and colleagues built on the existing TensorFlow Lite framework that exports trained machine learning models to run on embedded devices. While TF Lite removes some of the complexity of TensorFlow to make it feasible in a smaller-footprint device, TF Micro goes even further, to make machine learning able to run in devices with as little as 20 kilobytes of RAM.

TF Micro was introduced this month in a formal research paper by Warden and colleagues. The researchers had to build a framework that works across numerous chip instruction sets and with low-power microcontrollers, and they had to design it to support a greatly reduced number of operations, excluding functions such as loading files from external locations.

The team also had to handle refinement of machine learning models for low-resource devices, which meant optimizing the quantization of models so that operands are represented as 8-bit integers, say, rather than 32-bit floating point.
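
For readers curious what that quantization step looks like in practice, here is a minimal sketch using TensorFlow Lite's post-training integer quantization in Python; the Keras model and calibration data are placeholders, and this is the generic TF Lite workflow rather than the TF Micro team's exact pipeline:

```python
import numpy as np
import tensorflow as tf

# Placeholder model: a tiny Keras network standing in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4),
])

# A handful of representative samples lets the converter calibrate
# int8 ranges for activations; real projects use real input data here.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantization: 8-bit weights, activations, and I/O.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Quantized model size: {len(tflite_model)} bytes")
```

The resulting flatbuffer stores weights as 8-bit integers and records the scales needed to run activations in 8-bit as well, which is what lets a small model approach the kilobyte-scale memory budgets TF Micro targets.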

What Warden and team settled on is an interpreter that runs multiple models simultaneously. Using an interpreter not only makes it possible to run across the plethora of embedded platforms, it also makes it possible to update machine learning models as they improve without having to recompile models for a given device.
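
As a rough illustration of the interpreter idea, the desktop Python interpreter can load and run the file produced above (the model file name is the placeholder from the previous sketch; on a microcontroller, the equivalent C++ TF Micro interpreter plays this role):

```python
import numpy as np
import tensorflow as tf

# Load the flatbuffer produced earlier; swapping in an improved model
# only requires replacing this file, not recompiling the application.
interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Feed one int8 input of the right shape and run inference.
sample = np.random.randint(-128, 128, size=input_details["shape"], dtype=np.int8)
interpreter.set_tensor(input_details["index"], sample)
interpreter.invoke()

print(interpreter.get_tensor(output_details["index"]))
```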

Chips to run TF Micro will have to do things that get around the limited nature of the embedded framework, Warden said. While full-blown TensorFlow supports 1,200 operations, TF Micro only supports a small fraction of those.

As a result, chips for running inference have to be able to "fall back to general-purpose code" rather than supporting every single last instruction.

"One of the real drawbacks of a lot of hardware accelerators is that they fail to run a lot of the models that people want to run on them," said Warden. "We want custom accelerators to fall back to run general-propose code without a massive performance penalty."

Summing up his wish list, Warden told the audience, "Really, what I'm looking for is tens or hundreds of billions of operations per second per milliwatt."
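
Expressed in the units accelerator vendors usually quote, that target is a straightforward conversion (not a figure Warden stated directly):

```latex
\[
10^{10}\text{--}10^{11}\ \tfrac{\text{ops/s}}{\text{mW}}
\;=\; 10^{13}\text{--}10^{14}\ \tfrac{\text{ops/s}}{\text{W}}
\;\approx\; 10\text{--}100\ \text{TOPS/W}
\]
```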

Some of the demands may be beyond what's feasible at present, he acknowledged. "I would love to have megabytes of model storage space instead of kilobytes," although, "I understand that's challenging."

"And, of course, I want it cheaper," he said.

The Linley conference, now in its fifteenth year, has over 1,000 attendees this year, conference organizer Linley Gwennap told ZDNet, more than three times as many as in prior years, when the event was held in hotel ballrooms in the Silicon Valley area.

The conference continues through today.

The rest is here:

Google AI executive sees a world of trillions of devices untethered from human care - ZDNet

PERSPECTIVE: Why Strong Artificial Intelligence Weapons Should Be Considered WMD – Homeland Security Today – HSToday

The concept of strong Artificial Intelligence (AI), or AI that is cognitively equivalent to (or better than) a human in all areas of intelligence, is a common science fiction trope.[1] From HAL's adversarial relationship with Dave in Stanley Kubrick's film 2001: A Space Odyssey[2] to the war-ravaged apocalypse of James Cameron's Terminator[3] franchise, Hollywood has vividly imagined what a dystopian future with super-intelligent machines could look like and what the ultimate outcome for humanity might be. While I would not argue that the invention of super-intelligent machines will inevitably lead to our Schwarzenegger-style destruction, rapid advances in AI and machine learning have raised the specter of strong AI instantiation within a lifetime,[4] and this requires serious consideration. It is becoming increasingly important that we have a real conversation about strong AI before it becomes an existential issue, particularly within the context of decision making for kinetic autonomous weapons and other military systems that can result in a lethal outcome. From these discussions, appropriate global norms and international laws should be established to prevent the proliferation and use of strong AI systems for kinetic operations.

With the invention of almost every new technology, changes to ethical norms surrounding its appropriate use lag significantly behind proliferation. Consider social media as an example. We imagined that social media platforms would bring people together and facilitate greater communication and community, yet the reality has become significantly less sanguine.[5] Instead of bringing people together, social media has deepened social fissures and enabled the proliferation of disinformation at a virulent rate. It has torn families apart, caused greater divide, and at times transformed the very definition of truth.[6] Only now are we considering ethical restraints on social media to prevent the poison from spreading.[7] It is highly probable that any technology we create will ultimately reflect the darker parts of our nature, unless we create ethical limits before the technology becomes ubiquitous. It would be foolish to believe that AI would be an exception to this rule. This becomes especially important when considering strong AI designed for warfare, which is distinguishable from other forms of artificial intelligence.

To fully examine the implications of strong AI, we need to understand how it differs from current AI technologies, which are what we would consider weak AI.[8] Your smartphone's ability to recognize images of your face is an example of weak AI. For a military example, an algorithm that can recognize a tank in an aerial video would be considered a weak AI system.[9] It can identify and label tanks, but it does not really know what a tank is, nor does it have any cognizance of how it itself relates to a tank. In contrast, a strong AI would be capable of the same task (as well as parallel tasks) with human-level proficiency (or beyond), but with an awareness of its own mind. This makes strong AI a more unpredictable threat. Not only would strong AI be highly proficient at rapidly processing battlefield data for pre- and post-strike decision making, but it would do so with an awareness of itself and its own motives, whatever they might be. Proliferation of weak AI systems for military applications is already becoming a significant issue. As an anecdotal example, Vladimir Putin has stated that the nation that leads in AI will be the ruler of the world.[10] Imagine what the outcome could be if military AI systems had their own motives. This would likely involve catastrophic failure modes beyond what could be realized from weak AI systems. Thus, military applications of strong AI deserve their own consideration.

At this point, one may be tempted to dismiss strong AI as being highly improbable and therefore not worth considering. Given the rapid pace of AI technology development, it could be argued that, while the precise probability of instantiating strong AI is unknown,[11] it is a safe assumption that it is greater than zero. But what is important in this case is not the probability of strong AI instantiation, but the severity of a realized risk. To understand this, one need only consider how animals of greater intelligence typically consider animals of lesser intelligence. Ponder this scenario: when we have ants in our garden, does their well-being ever cross our minds? From our perspective, the moral value of an insect is insignificant in relation to our goals, thus we would not hesitate to obliterate them simply for eating our tomatoes. Now imagine if we encountered a significantly more intelligent AI: how might it consider us in relation to its goals, whatever they might be? This meeting could yield an existential crisis if our existence hinders the AI's goal achievement; thus even this low-probability event could have a catastrophic outcome if it became a reality.

Understanding what might motivate a strong AI could provide some insight into how it might relate to us in such a situation. Human motivation is an evolved phenomenon. Everything that drives us (self-preservation, hunger, sex, desire for community, accumulation of resources, etc.) exists to facilitate our survival and that of our kin.[12] Even higher-order motives, like self-actualization, can be linked to the more fundamental goal of individual and species survival when viewed through the lens of evolutionary psychology.[13] However, a strong AI would not necessarily have evolved. It may simply be instantiated in situ as software or hardware. In this case, no evolutionary force would have existed over eons to generate a motivational framework analogous to what we, as humans, experience. In an instantiated strong AI, it might be prudent to assume that the AI's primary motive would be to achieve whatever goal it was initially programmed to do. Thus, self-preservation might not be the primary motivating factor. However, the AI would probably recognize that its continued existence is necessary for it to achieve its primary goal, thus self-preservation could become a meaningful sub-goal.[14] Other sub-goals may also exist, some of which would not be obvious to humans in the context of how we understand motivation. The AI's thought process by which sub-goals are generated or achieved might be significantly different from what humans would expect.

The existence of AI sub-goals that do not follow the patterns of human motivation implies the existence of a strong AI creative process that may be completely alien to us. One only needs to look at AI-generated art to see that AI creativity can manifest itself in often grotesque ways that are vastly different from what a human might expect.[15] While weird AI artistry hardly poses an existential threat to humanity, it illustrates the concept of "perverse instantiation,"[16] where the AI achieves a goal, but in an unexpected and potentially malignant way. As a military example, imagine a strong AI whose primary goal is to degrade and destroy the adversary. As we have demonstrated, AI creativity can be unbounded in its weirdness, as its thought processes are unlike those of any evolved intelligence. This AI might find a creative and completely unforeseen way to achieve its primary goal that leads to significant collateral damage against non-combatants, such as innocent civilians. Taking this analogy to a darker level, the AI might determine that a useful sub-goal would be to remove its military handlers from the equation. Perhaps they act as a "man in the middle" gatekeeper in effecting the AI's will, and the AI determines that this arrangement creates unacceptable inefficiencies. In this perverse instantiation, the AI achieves its goal of destroying the enemy, but in a grotesque way, by killing its overseers.

The next obvious question is: how could we contain a strong AI in a way that would prevent malignant failure? The obvious solution might be to engineer a deontological ethic, an Asimovian set of rules to limit the AI's behavior.[17] Considering a strong AI's tendency toward unpredictable creativity in methods of goal achievement, encoding an exhaustive set of rules would pose a titanic challenge. Additionally, deontological ethics is often subject to deontological failure, e.g., what happens when rules contradict one another? A classic example would be the trolley problem: if an AI is not allowed to kill a human, but the only two possible choices involve the death of humans, which choice does it make?[18] This is already an issue in weak AI, specifically with self-driving cars.[19] Does the vehicle run over a small child who crosses the road, or crash and kill its occupants, if those are the only possible choices? If deontological ethics are an imperfect option, perhaps AI disembodiment would be a viable solution. In this scenario, the AI would lack a means to directly interact with its environment, acting as a sort of "oracle in a box."[20] The AI would advise its human handlers, who would act as ethical gatekeepers in effecting the AI's will. Upon cursory examination, this seems plausible, but we have already established that a strong AI might determine that a "man in the middle" arrangement degrades its ability to achieve its primary goal, so what would prevent the AI from coercing its handlers into enabling its escape? In our hubris, we would like to believe that we could not be outsmarted by a disembodied AI, but a being that is more intelligent than us could reasonably outsmart us just as easily as a savvy adult could a naïve child.

While a single strong AI instantiation could pose a significant risk of malignant failure, imagine the impact that the proliferation of strong AI military systems might have on how we approach war. Our adversaries are earnestly exploring AI for military applications; thus, it is extremely likely that strong AI may become a reality and also proliferate.[21] The real problem becomes not how to prevent malignant failure of a single strong AI, but how to address the complex adaptive system of multiple strong AIs fighting against all logical actors, none of which exhibit reasonably predictable behavior.[22] To further complicate matters, ethical decision making is influenced by culture, and our adversaries might have different ideas as to which strong AI behaviors are acceptable during war, and which are not.

To avoid this potentially disastrous outcome, I propose the following for further discussion, with the hopeful end-goal of appropriate global norms and future international laws that ban strong AI decision making for kinetic offensive operations: strong AI-based lethal autonomous weapons should be considered weapons of mass destruction. This may be the best way to prevent the complex, unpredictable destruction that could arise from multiple strong AI systems intent on killing the enemy or unnecessarily wreaking havoc on critical infrastructure, which may have negative secondary and tertiary effects impacting countless innocent non-combatants. Inevitably, there may be rogue or non-signatory actors who develop weaponized strong AI systems despite international norms. Any strategy that addresses strong AI should also consider this potential outcome.

Several years ago, seriously discussing strong AI might have gotten you laughed out of the room. Today, as AI continues to advance, and as our adversaries continue to aggressively militarize AI technologies, it is imperative that the United States consider a defense strategy specifically addressing the possibility of a strong AI instantiation. Any use of strong AI on the battlefield should be limited to non-kinetic operations to reduce the impact of malignant failure. This standard should be reflected in multilateral treaty agreements or protocols to prevent strong AI misuse and the inevitable unpredictability of adversarial strong AI systems interacting with each other in complex and possibly horrific ways. This may be a sufficient way to ensure that weaponized strong AI does not cause cataclysmic devastation.

The author is responsible for the content of this article. The views expressed do not reflect the official policy or position of the National Intelligence University, the Department of Defense, the U.S. Intelligence Community, or the U.S. Government.


Original post:

PERSPECTIVE: Why Strong Artificial Intelligence Weapons Should Be Considered WMD - Homeland Security Today - HSToday

Defense Official Calls Artificial Intelligence the New Oil – Department of Defense

"Artificial intelligence is the new oil, and the governments or the countries that get the best datasets will unquestionably develop the best AI," the Joint Artificial Intelligence Center's chief technology officer said Oct. 15.

Speaking on a panel about AI superpowers at the Politico AI Summit, Nand Mulchandani said AI is a very large technology and industry. "It's not a single, monolithic technology," he said. "It's a collection of algorithms, technologies, etc., all cobbled together to call AI."

The United States has access to global datasets, and that's why global partnerships are so incredibly important, he said, noting that the Defense Department recently launched the AI Partnership for Defense at the JAIC to gain access to global datasets with partners, which gives DOD a natural advantage in building these systems at scale.

"Industry has to develop on its own, and that's where the global talent is; that's where the money is; that's where all of the innovation is going on," Mulchandani noted, adding that the U.S. government's job is to be able to work in the best way and absorb the best technology that it can. That includes working hand in glove with industry on a voluntary basis, he said. He said there are certain areas of AI that are highly scaled that you can trust and deploy at scale.

"But notice many or not many of those systems have been deployed on weapon systems. We actually don't have any of them deployed," he said.

Mulchandani said the reason is that explainability, testing, trust and ethics are all highly connected pieces, as is AI security when it comes to model security and data security (being able to penetrate and break models). This is all very early, which is why DOD and the U.S. government more widely have taken a very stringent approach to putting together the ethics principles and frameworks within which we're going to operate.

"[Earlier this year, one of the first international visits that we made were to NATO and our European partners, and [we] then pulled them into this AI partnership for defense that I just talked about," he said. "Thirteen different countries are getting together to actually build these principles because we actually do need to build a lot of confidence in this."

He said DOD continues to attract and have the best talent at the JAIC. "The real tricky part is: How do we actually take that technology and get it deployed? That's the complexity of integrating AI into existing systems, because one isn't going to throw away the entire investment in legacy systems that one has, whether it be software or hardware or even military hardware," Mulchandani said. "[How] can we absorb the best of what's coming and get it integrated into the system? That's where the complexity is."

DOD has a long history of working with companies that know how to do that, and harnessing it is the actual work and the piece that we're worried about the most and really are focused on the most, he added.

As for a global workforce, the technology companies DOD works with are global companies, he emphasized. "These are not linked to a particular geographic region. We hire. We bring the best talent in, wherever it may be, [and we have] research and development arms all over the world."

DOD has special security needs and requirements that must be taken care of when it comes to data, and the JAIC is putting in place very different development processes now to handle AI development, he said. "So, the dynamics of the way software gets built [and] the dynamics of who builds it are changing in a very significant way," Mulchandani said. "But the global war for talent is a real one, which is why we are not actually focused on trying to corner the market on talent."

He said they are trying to build leverage by building relationships with the leading AI companies to harness the innovation.

Read more:

Defense Official Calls Artificial Intelligence the New Oil - Department of Defense

Smart startups accelerate the pace of AI innovation – Sponsored Content by Dell EMC – EnterpriseAI

Dell Technologies teams with AI startups to make it easier for organizations to put new artificial intelligence solutions into production.

At the outset of a new decade, news organizations and their prognosticators like to make predictions about what lies ahead for the next 10 years. Today, I will jump into the same game and make a sweeping prediction about the decade that is just getting under way: The 2020s will be the decade in which artificial intelligence comes of age.

Today, there is a tidal wave of momentum for AI, which is apparent in projections for dramatic growth in the AI market. A recent report from Fortune Business Insights, for example, predicts that the global AI market will grow at a rate of more than 33 percent per year, rising from around $20 billion in 2018 to top $200 billion by 2026.[1]
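
Those endpoints are consistent with the stated growth rate; as a rough compounding check over the eight years from 2018 to 2026:

```latex
\[
\$20\,\text{B} \times (1.33)^{8} \approx \$20\,\text{B} \times 9.8 \approx \$196\,\text{B}
\]
```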

This kind of market growth creates fertile ground for startup companies that are bringing innovative AI technologies to market, including those for machine learning, deep learning and more. This is a game most companies now want to play. As Jeremy Achin, CEO of the AI startup DataRobot, says in a Forbes magazine story, "Everyone knows you have to have machine learning in your story or you're not sexy."[2]

At Dell Technologies, we are all in with the move to AI-driven products and processes. To that end, we work closely with many AI startups to bring their software innovations to market on Precision workstations and PowerEdge servers, and to deliver joint HPC/AI solutions that make it easier for companies to adopt new AI technologies.

For example, Dell Technologies recently introduced new solutions to advance HPC and AI innovation, including new Dell EMC Ready Solutions designed to simplify and accelerate the path to AI.

The new scalable Dell EMC HPC Ready Architecture for AI and Data Analytics delivers the power of accelerated AI computing, from the edge to high performance computing, with an easy-to-deploy cloud-native software stack. The new Ready Solutions for Data Analytics validated design for Domino Data Lab enables data scientists to develop and deliver models faster, while providing IT with a centralized, extensible platform spanning the entire data science lifecycle.

To simplify AI deployments for all sizes of organizations, Dell Technologies also released new reference architectures developed in collaboration with AI partners:

These architectures are designed to help organizations accelerate the deployment of AI solutions for training and inferencing to modernize, automate and transform their data centers. The architectures are optimized for Intel Xeon Scalable processors and Dell EMC PowerEdge servers, storage and data protection technologies.

This is exactly what it is going to take for enterprises to capitalize on the promise of AI in the 2020s. To get there, organizations need partners, solutions and technologies that pave the path to the future.

And at Dell Technologies, we're excited to help accelerate this move to the digitally driven business that capitalizes on AI across the enterprise.

Ready to get started?

Here are some of the ways that your organization can get started down the path to proven approaches to AI:

[1] Fortune Business Insights, Artificial Intelligence (AI) Market Share, Size, and Industry Analysis, January 2020.

[2] Forbes, AI 50: America's Most Promising Artificial Intelligence Companies, September 17, 2019.

[3] Dell Technologies, TCO Analysis: HPC Ready Architecture for AI and Data Analytics, https://infohub.delltechnologies.com/section-assets/h18136-tco-analysis-dell-emc-hpc-ra-for-ai-da-sb, February 2020.


Continue reading here:

Smart startups accelerate the pace of AI innovation – Sponsored Content by Dell EMC - EnterpriseAI

A Simple Tactic That Could Help Reduce Bias in AI – Harvard Business Review

It's easier to program bias out of a machine than out of a mind.

That's an emerging conclusion of research-based findings, including my own, that could lead to AI-enabled decision-making systems being less subject to bias and better able to promote equality. This is a critical possibility, given our growing reliance on AI-based systems to render evaluations and decisions in high-stakes human contexts, in everything from court decisions, to hiring, to access to credit, and more.

It's been well-established that AI-driven systems are subject to the biases of their human creators: we unwittingly bake biases into systems by training them on biased data or with rules created by experts with implicit biases.

Consider the Allegheny Family Screening Tool (AFST), an AI-based system predicting the likelihood a child is in an abusive situation using data from the same-named Pennsylvania county's Department of Human Services, including records from public agencies related to child welfare, drug and alcohol services, housing, and others. Caseworkers use reports of potential abuse from the community, along with whatever publicly-available data they can find for the family involved, to run the model, which predicts a risk score from 1 to 20; a sufficiently high score triggers an investigation. Predictive variables include factors such as receiving mental health treatment, accessing cash welfare assistance, and others.

Sounds logical enough, but there's a problem, and a big one. By multiple accounts, the AFST has built-in human biases. One of the largest is that the system heavily weights past calls about families (such as from healthcare providers) to the community hotline, and evidence suggests such calls are over three times more likely to involve Black and biracial families than white ones. Though multiple such calls are ultimately screened out, the AFST relies on them in assigning a risk score, resulting in potentially racially biased investigations if callers to the hotline are more likely to report Black families than non-Black families, all else being equal. This can result in an ongoing, self-fulfilling, and self-perpetuating prophecy where the training data of an AI system can reinforce its misguided predictions, influencing future decisions and institutionalizing the bias.

It doesn't have to be this way. More strategic use of AI systems, through what I call "blind taste tests," can give us a fresh chance to identify and remove decision biases from the underlying algorithms, even if we can't remove them completely from our own habits of mind. Breaking the cycle of bias in this way has the potential to promote greater equality across contexts, from business to science to the arts, on dimensions including gender, race, socioeconomic status, and others.

Blind taste tests have been around for decades.

Remember the famous Pepsi Challenge from the mid-1970s? When people tried Coca-Cola and Pepsi blind (no labels on the cans), the majority preferred Pepsi over its better-selling rival. In real life, though, simply knowing it was Coke created a bias in favor of the product; removing the identifying information (the Coke label) removed the bias so people could rely on taste alone.

In a similar blind test from the same time period, wine experts preferred California wines over their French counterparts, in what became known as the Judgment of Paris. Again, when the label is visible, the results are very different, as experts ascribe more sophistication and subtlety to the French wines simply because they're French, indicating the presence of bias yet again.

So it's easy to see how these blind taste tests can diminish bias in humans by removing key identifying information from the evaluation process. But a similar approach can work with machines.

That is, we can simply deny the algorithm the information suspected of biasing the outcome, just as they did in the Pepsi Challenge, to ensure that it makes predictions blind to that variable. In the AFST example, the blind taste test could work like this: train the model on all data, including referral calls from the community. Then re-train the model on all the data except that one. If the model's predictions are equally good without referral-call information, it means the model makes predictions that are blind to that factor. But if the predictions are different when those calls are included, it indicates that either the calls represent a valid explanatory variable in the model, or there may be potential bias in the data (as has been argued for the AFST) that should be examined further before relying on the algorithm.
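
A minimal sketch of that procedure in Python with scikit-learn is shown below; the data file, feature name, and model choice are hypothetical placeholders, and the point is only the train-with versus retrain-without comparison:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset: rows are cases, "label" is the outcome of interest,
# and "referral_calls" is the feature suspected of encoding bias.
df = pd.read_csv("screening_data.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def fit_and_score(train, test):
    model = GradientBoostingClassifier(random_state=0)
    model.fit(train, y_train)
    return roc_auc_score(y_test, model.predict_proba(test)[:, 1])

# 1) Train on everything, including the suspect feature.
auc_full = fit_and_score(X_train, X_test)

# 2) Retrain "blind" to the suspect feature.
blind_cols = [c for c in X.columns if c != "referral_calls"]
auc_blind = fit_and_score(X_train[blind_cols], X_test[blind_cols])

# If the two scores are close, the model can be deployed blind to the feature;
# a large gap means the feature is either genuinely informative or a bias channel
# that needs further scrutiny before the algorithm is trusted.
print(f"AUC with suspect feature: {auc_full:.3f}, without: {auc_blind:.3f}")
```

A fuller audit would also compare error rates across demographic groups, not just overall accuracy.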

This process breaks the self-perpetuating, self-fulfilling prophecy that existed in the human system without AI, and keeps it out of the AI system.

My research with Kellogg collaborators Yang Yang and Youyou Wu demonstrated a similar anti-bias effect in a different domain: the replicability of scientific papers.

What separates science from superstition is that a scientific fact that is found in the lab or a clinical trial replicates out in the real world again and again. When it comes to evaluating the replicability or reproducibility of published scientific results, we humans struggle.

Some replication failure is expected, or even desirable, because science involves experimenting with unknowns. However, an estimated 68% of studies published in medicine, biology, and the social sciences do not replicate. Replication failures continue to be unknowingly cited in the literature, driving up R&D costs by an estimated $28 billion annually and slowing discoveries of vaccines and therapies for Covid-19 and other conditions.

The problem is related to bias: when scientists and researchers review a manuscript for publication, they focus on a paper's statistical and other quantitative results in judging replicability. That is, they use the numbers in a scientific paper much more than the paper's narrative, which describes the numbers, in making this assessment. Human reviewers are also influenced by institutional labels (e.g., Cambridge University), scientific discipline labels ("physicists are smart"), journal names, and other status biases.

To address this issue, we trained a machine-learning model to estimate a paper's replicability using only the paper's reported statistics (typically used by human reviewers), narrative text (not typically used), or a combination of these. We studied 2 million abstracts from scientific papers and over 400 manually-replicated studies from 80 journals.

The AI model using only the narrative predicted replicability better than the statistics. It also predicted replicability better than the base rate of individual reviewers, and as well as prediction markets, where the collective intelligence of hundreds of researchers is used to assess a paper's replicability, a very costly approach. Importantly, we then used the blind taste test approach and showed that our model's predictions weren't biased by factors including topic, scientific discipline, journal prestige, or persuasion words like "unexpected" or "remarkable." The AI model provided predictions of replicability at scale and without known human biases.
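
The authors' code is not reproduced here, but the general shape of a narrative-text replicability predictor can be sketched with a simple bag-of-words pipeline; the file name, labels, and choice of logistic regression are illustrative assumptions, not the study's actual model:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical data: paper abstracts plus a 0/1 flag for whether
# a manual replication attempt succeeded.
papers = pd.read_csv("replication_studies.csv")   # columns: abstract, replicated

# Narrative-only model: no reported statistics, no journal or institution labels,
# so those factors cannot bias the prediction.
text_model = make_pipeline(
    TfidfVectorizer(stop_words="english", max_features=20000),
    LogisticRegression(max_iter=1000),
)

scores = cross_val_score(text_model, papers["abstract"], papers["replicated"],
                         cv=5, scoring="roc_auc")
print(f"Cross-validated AUC from narrative text alone: {scores.mean():.3f}")
```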

In a subsequent extension of this work (in progress), we again used an AI system to reexamine the scientific papers in the study that had inadvertently published numbers and statistics containing mistakes the reviewers hadn't caught during the review process, likely due to our general tendency to believe figures we are shown. Again, a system blind to variables that can promote bias when over-weighted in the review process (quantitative evidence, in this case) was able to render a more objective evaluation than humans alone could, catching mistakes missed due to bias.

Together, the findings provide strong evidence for the value of creating blind taste tests for AI systems, to reduce or remove bias and promote fairer decisions and outcomes across contexts.

The blind-taste-test concept can be applied effectively to reduce bias in multiple domains well beyond the world of science.

Consider earnings calls led by business C-suite teams to explain recent and projected financial performance to analysts, shareholders, and others. Audience members use the content of these calls to predict future company performance, which can have large, swift impact on share prices and other key outcomes.

But again, human listeners are biased to use the numbers presented, just as in judging scientific replicability, and to pay excessive attention to who is sharing the information (a well-known CEO like Jeff Bezos or Elon Musk versus someone else). Moreover, companies have an incentive to spin the information to create more favorable impressions.

An AI system can look beyond potential bias-inducing information to factors including the text of the call (words rather than numbers) and others, such as the emotional tone detected, to render more objective inputs for decision-making. We are currently examining earnings-call data with this hypothesis in mind, along with studying specific issues such as whether the alignment between numbers presented and the verbal description of those numbers has an equal effect on analysts' evaluations if the speaker is male or female. Will human evaluators give men more of a pass in the case of misalignment? If we find evidence of bias, it will indicate that denying gender information to an AI system can yield more equality-promoting judgments and decisions related to earnings calls.

We are also applying the ideas here to the patents domain, where patent applications involve a large investment and rejection rates are as high as 50%. Here, current models used to predict a patent application's success or a patent's expected value don't perform much better than chance, and tend to use factors like whether an individual or team filed the application, again suggesting potential bias. We are studying the value of using AI systems to examine patent text, to yield more effective, fairer judgments.

There are many more potential applications of the blind-taste-test approach. What if interviews for jobs or assessments for promotions or tenure took place with some kind of blinding mechanism in place, preventing the biased use of gender, race, or other variables in decisions? What about decisions about which startup founders receive funding, where gender bias has been evident? What if judgments about who received experimental medical treatments were stripped of potential bias-inducing variables?

To be clear, I'm not suggesting that we use machines as our sole decision-making mechanisms. After all, humans can also intentionally program decision-making AI systems to manipulate information. Still, our involvement is critical to form hypotheses about where bias may enter in the first place, and to create the right blind taste tests to avoid it. Thus, an integration of human and AI systems is the optimal approach.

In sum, it's fair to conclude that the human condition inherently includes the presence of bias. But increasing evidence suggests we can minimize or overcome that by programming bias out of the machine-based systems we use to make critical decisions, creating a more equal playing field for all.

Originally posted here:

A Simple Tactic That Could Help Reduce Bias in AI - Harvard Business Review

Microsoft Hackathon leads to AI and sustainability collaboration to rid plastic from rivers and the ocean – Stories – Microsoft

Dan Morris, AI for Earth program director, says the most important result from the hackathon was that AI for Earth taught The Ocean Cleanup a lot about machine learning. "The real value was teaching them through interaction with data scientists and engineers at Microsoft," he says.

This year, The Ocean Cleanup was named an AI for Earth grantee for its work.

"Using the AI for Earth grant, we've been able to set up and run the machine learning models," De Vries says. "Having the resources at our fingertips has greatly accelerated the technical progress, by taking away practical concerns and letting us focus on the development."

"It allowed us to develop the vision that this is something we can do, not just for one river, but eventually for rivers across the globe."

Robin de Vries, right, of The Ocean Cleanup works with a Microsoft Global Hackathon team member in 2019.

"The Ocean Cleanup is highly admired, particularly in the Netherlands, where the organization has been a symbol of pride for years, even before they became more well-known internationally," says Harry van Geijn, a digital adviser for Microsoft in the Netherlands. Van Geijn is among the Microsoft staffers there who have volunteered to help The Ocean Cleanup with computing and related support.

"While its staff is relatively small, with around 100 employees, they have this cause that they pursue with great tenacity and in an extremely professional way," van Geijn says. So much so that "when I ask around for someone at Microsoft Netherlands to do something for The Ocean Cleanup, half the company raises their hand to say, 'I want to volunteer for that.'"

Drew Wilkinson at the 2019 Microsoft Global Hackathon in Redmond, Washington.

Wilkinson, who grew up in the hot, dry climate of the Arizona desert, spent time at sea as a volunteer for the Sea Shepherd Conservation Society, a nonprofit, marine wildlife conservation organization.

In 2018 at Microsoft, he and another coworker started an employee group, Microsoft's Worldwide Sustainability Community, which has grown to more than 3,000 members globally. The group focuses on ways employees can help the company be more environmentally sustainable. Wilkinson now is a community program manager for the Worldwide Communities Program, which includes the employee group he co-founded.

Wilkinson sees the issue of plastics in the ocean as a pretty solvable problem and is excited about the work that has been done, the work that he spurred with an email.

"I'm not a scientist, but it doesn't take a lot of science to understand that our fate on the land is very much tied to the ocean," he says. "The ocean is the planet's life support system. Without a healthy ocean, we don't stand a chance either."

Top image: Some of the plastic and trash picked up onto the conveyor belt of The Ocean Cleanup's Interceptor 002 on the Klang River in Malaysia. Photo credit: The Ocean Cleanup.

Go here to read the rest:

Microsoft Hackathon leads to AI and sustainability collaboration to rid plastic from rivers and the ocean - Stories - Microsoft

AI is Changing Everything, Even Science Itself – Futurism

In Brief: AI is being used for much more than many realize. In fact, particle physicists are currently pushing the limits of our understanding of the universe with the help of these technologies.

Many might associate current artificial intelligence (AI) abilities with advanced gameplay, medical developments, and even driving. But AI is already reaching far beyond even these realms. In fact, AI is now helping particle physicists to discover new subatomic particles.

Particle physicists began integrating AI in the pursuit of particles as early as the 1980s, as the process of machine learning suits the hunt for fine patterns and subatomic anomalies particularly well. But, once an unexplored and novel technique, AI is now a fully integrated and standard part of everyday life within particle physics.

Pushpalatha Bhat, a physicist at Fermilab, described the problem in an interview with Science Magazine: "This is the proverbial needle-in-the-haystack problem. That's why it's so important to extract the most information we can from the data." That extraction is where AI comes in handy, and this ability to extract information from data contributed to the 2012 discovery of the Higgs boson, which occurred using the LHC.

While AI has not replaced and will never replace the world's scientists, this unparalleled tool is being applied in ways that many could never have predicted. It is, as previously mentioned, helping researchers to push the boundaries of understanding. It's helping us to create modes of transportation that not only make daily life easier, but save countless lives.

AI is proving to be an essential component in the current quest to travel to and explore Mars, allowing probes to be controlled remotely and trusted to make changes in behavior according to a changing environment. And, even beyond medical advances, AI is making treatments more enjoyable for both patients and healthcare providers, altering an often-intimidating system.

AI technologies are also being designed that are capable of creating art. From paintings to music, we are learning that advanced machine learning algorithms are more than just the new face of industry. This makes a lot of people uneasy. Images of Will Smith in I, Robot come into view, the voice of HAL 9000 from 2001: A Space Odyssey starts speaking, and our science fiction nightmares seem realized.

But, while AI is not yet a perfectly integrated part of daily life, it is certainly pushing us forward. So, who knows: thanks to AI, we may soon really put humans on the red planet, and particle physicists might smash protons just right and reveal more about our universe than we could have ever hoped to know.

Here is the original post:

AI is Changing Everything, Even Science Itself - Futurism

Leadership in the age of Artificial Intelligence – Analytics Insight

Stationed at the frontier of an accelerating artificial intelligence (AI) landscape, organizations need executives who can make nimble, informed decisions about where and how to employ AI in their business. Driving industry-wide digital transformation, the technology has permeated more organizations and more parts within organizations, reaching the C-suite as well. The very fundamentals of leadership need to be rethought, from overall strategy to customer experience, in order to deploy AI appropriately while also accounting for human capital.

As conventional business leadership gives way to new approaches, opportunities, and threats resulting from broader AI adoption, a new set of AI executives is ready to take on the challenge of driving innovation and competitiveness. In today's dynamic AI culture, several C-level executives are confident enough to steer their organization's leadership team toward adopting significant and innovative AI approaches across the business.

As it stands now, top AI executives are not only evolving at a rapid pace but also revamping their surroundings for better technology implementation. Moreover, their employees and teammates support them with full confidence while promoting the positive aspects of AI. To excel further, these C-level executives stress the need to train the leadership team on AI as a top priority.

Despite business leaders' optimism about artificial intelligence and the opportunities it presents, they cannot neglect its potential risks. A number of C-level executives and their leadership teams are hesitant to invest in AI technologies because of security or privacy concerns. However, showcasing the brave and progressive attributes of leadership while ensuring security through innovation, some prominent executives are experimenting with AI capabilities, and evidently, those are the ones who form the ranks of top AI executives across the industry.

According to certain market reports, business executives are seeing notable success with AI across five major industries: retail, transportation, healthcare, financial services, and technology itself. Following the example of such leaders, executives across various other sectors are adopting AI capabilities more aggressively than before.

In the age of AI, business executives must focus on embedding AI in their strategic plans, which would enable these frontrunners to develop an enterprise-wide AI strategy that individual business segments can follow. Moreover, as part of the leadership team, they are also responsible for the financial side of the organization; applying AI to revenue and customer-engagement opportunities will help them explore the technology for revenue enhancement and client-experience initiatives while tracking their own progress.

AI executives should also pursue multiple options for acquiring AI and developing innovative applications, in an effort to accelerate the adoption of AI initiatives through access to a wider pool of talent and technology solutions.

Excerpt from:

Leadership in the age of Artificial Intelligence - Analytics Insight