Coronavirus will finally give artificial intelligence its moment – San Antonio Express-News

For years, artificial intelligence seemed on the cusp of becoming the next big thing in technology - but the reality never matched the hype. Now, the changes caused by the covid-19 pandemic may mean AI's moment is finally upon us.

Over the past couple of months, many technology executives have shared a refrain: Companies need to rejigger their operations for a remote-working world. That's why they have dramatically increased their spending on powerful cloud-computing technologies and migrated more of their work and communications online.

With fewer people in the office, these changes will certainly help companies run more nimbly and reliably. But the centralization of more corporate data in the cloud is also precisely what's needed for companies to develop the AI capabilities - from better predictive algorithms to increased robotic automation - we've been hearing about for so long. If business leaders invest aggressively in the right areas, it could be a pivotal moment for the future of innovation.

To understand all the fuss around artificial intelligence, some quick background might be useful: AI is based on computer science research that looks at how to imitate the workings of human intelligence. It uses powerful algorithms that digest large amounts of data to identify patterns. These can be used to anticipate, say, what consumers will buy next or offer other important insights. Machine learning - essentially, algorithms that can improve at recognizing patterns on their own, without being explicitly programmed to do so - is one subset of AI that can enable applications like providing real-time protection against fraudulent financial transactions.
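To make the pattern-recognition idea concrete, here is a toy sketch of the fraud-detection use case described above. It is not any particular company's system; the transactions, features, and labels are invented for illustration, and it assumes the scikit-learn library is available:

```python
# Toy illustration only: invented transactions and features. The model
# infers a decision rule from labeled examples rather than following
# hand-written instructions.
from sklearn.ensemble import RandomForestClassifier

# Each row: [amount_usd, hour_of_day, merchant_risk_score]
transactions = [
    [12.50, 14, 0.1],
    [980.00, 3, 0.9],
    [45.20, 11, 0.2],
    [1500.00, 4, 0.8],
]
labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = fraudulent

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(transactions, labels)

# Score a new transaction in real time instead of applying fixed rules
print(model.predict_proba([[1200.00, 2, 0.7]]))
```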

Historically, AI hasn't fully lived up to its hype. We're still a ways off from being able to have natural, life-like conversations with a computer, or getting truly safe self-driving cars. Even when it comes to improving less advanced algorithms, researchers have struggled with limited datasets and a lack of scalable computing power.

Still, Silicon Valley's AI-startup ecosystem has been vibrant. Crunchbase says there are 5,751 privately held AI companies in the U.S. and that the industry received $17.4 billion in new funding last year. International Data Corporation (IDC) recently forecast that global AI spending will rise to $96.3 billion in 2023 from $38.4 billion in 2019. A Gartner survey of chief information officers and IT leaders, conducted in February, found that enterprises are projecting to double their number of AI projects, with over 40% planning to deploy at least one by the end of 2020.

As the pandemic accelerates the need for AI, these estimates will most likely prove to be understated. Big Tech has already demonstrated how useful AI can be in fighting covid-19. For instance, Amazon.com partnered with researchers to identify vulnerable populations and act as an "early warning" system for future outbreaks. BlueDot, an Amazon Web Services startup customer, used machine learning to sift through massive amounts of online data and anticipate the spread of the virus in China.

Pandemic lockdowns have also affected consumer behavior in ways that will spur AI's growth and development. Take a look at the soaring e-commerce industry: As consumers buy more online to avoid the new risks of shopping in stores, they are giving sellers more data on preferences and shopping habits. Bank of America's internal card-spending data for e-commerce points to rising year-over-year revenue growth rates of 13% for January, 17% for February, 24% for March, 73% for April and 80% for May. The data these transactions generate is a goldmine for retailers and AI companies, allowing them to improve the algorithms that provide personalized recommendations and generate more sales.

The growth in online activity also makes a compelling case for the adoption of virtual customer-service agents. International Business Machines Corporation estimates that only about 20% of companies use such AI-powered technology today, but it predicts that almost all enterprises will adopt it in the coming years. With computers handling the easier questions, human representatives can focus on the more difficult interactions, improving customer service and satisfaction.

Another area of opportunity comes from the increase in remote working. As companies struggle with the challenge of bringing employees back to the office, they may be more receptive to AI-based process automation software, which can handle mundane tasks like data entry. Its ability to read invoices and update databases without human intervention can reduce the need for some types of office work while also improving accuracy. UiPath, Automation Anywhere and Blue Prism are the three leading vendors in this space, according to Goldman Sachs, accounting for about 36% of the roughly $850 million market last year. More imaginative AI projects are on the horizon. Graphics semiconductor-maker NVIDIA Corporation and luxury automaker BMW Group recently announced a deal in which AI-powered logistics robots will be used to manufacture customized vehicles. In mid-May, Facebook said it was working on an AI lifestyle assistant that can recommend clothes or pick out furniture based on your personal taste and the configuration of your room.

As with the mass adoption of any new technology, there will be winners and losers. Among the winners, cloud-computing vendors will thrive as they capture more and more data. According to IDC, Amazon Web Services was number one in infrastructure cloud-computing services, with a 47% market share last year, followed by Microsoft at 13%.

But NVIDIA may be at an even better intersection of cloud and AI tech right now: Its graphics chip technology, once used primarily for video games, has morphed into the preeminent platform for AI applications. NVIDIA also makes the most powerful graphics processing units, so it dominates the AI-chip market used by cloud-computing companies. And it recently launched new data center chips that use its next-generation "Ampere" architecture, providing developers with a step-function increase in machine-learning capabilities.

On the other hand, the legacy vendors that provide computing equipment and software for in-office environments are most at risk of losing out in this technological shift. This category includes server sellers like Hewlett Packard Enterprise Company and router-maker Cisco Systems, Inc.

We must not ignore the more insidious consequences of an AI renaissance, either. There are a lot of ethical hurdles and complications ahead involving job loss, privacy and bias. Any increased automation may lead to job reductions, as software and robots replace tasks performed by humans. As more data becomes centrally stored on the cloud, the risk of larger data breaches will increase. Top-notch security has to become another key area of focus for technology and business executives. They also need to be vigilant in preventing algorithms from discriminating against minority groups, starting with monitoring their current technology and compiling more accurate datasets.

But the upside of greater computing power, better business insights and cost efficiencies from AI is too big to ignore. So long as companies proceed responsibly, years from now, the advances in AI catalyzed by the coronavirus crisis may be one of the silver linings we remember from 2020.

- - -

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Kim is a Bloomberg Opinion columnist covering technology.

Visit link:
Coronavirus will finally give artificial intelligence its moment - San Antonio Express-News

IBM Joins SCTE•ISBE Explorer Initiative To Help Shape Future Of AI And ML – AiThority

IBM has joined the SCTE•ISBE Explorer Initiative as a member of the artificial intelligence (AI) and machine learning (ML) working group. IBM is the first company from outside the cable telecommunications industry to join Explorer.

IBM will collaborate with subject matter experts from across industries to develop AI and ML standards and best practices. By sharing expertise and insights fostered within their organizations, members will help shape the standards that will enable the widespread availability of AI and ML applications.

"Integrating advancements in AI and machine learning with the deployment of agile, open, and secure software-defined networks will help usher in new innovations, many of which will transform the way we connect," said Steve Canepa, global industry managing director, telecommunications, media & entertainment for IBM. "The industry is going through a dramatic transformation as it prepares for a different marketplace with different demands, and we are energized by this collaboration. As the network becomes a cloud platform, it will help drive innovative data-driven services and applications to bring value to both enterprises and consumers."

SCTE•ISBE announced the expansion of its award-winning Standards program in late March 2020 with the introduction of the Explorer Initiative. As part of the initiative, seven new working groups will bring together leaders with diverse backgrounds to develop standards for AI and ML, smart cities, aging in place and telehealth, telemedicine, autonomous transport, extended spectrum (up to 3.0 GHz), and human factors affecting network reliability. Explorer working groups were chosen for their potential to impact telecommunications infrastructure, take advantage of the benefits of cable's 10G platform, and improve society's ability to cope with natural disasters and health crises like COVID-19.

"The COVID-19 pandemic has demonstrated the importance of technology and connectivity to modern society and, by many accounts, increased the speed of digital transformation across industries," said Chris Bastian, SCTE•ISBE senior vice president and CTIO. "Explorer will help us turn innovative concepts into reality by giving industry leaders the opportunity to learn from each other, reduce development costs, ensure their connectivity needs are met, and ultimately get to market faster."

Read the original here:
IBM Joins SCTE•ISBE Explorer Initiative To Help Shape Future Of AI And ML - AiThority

Machine Learning Chip Market to Witness Huge Growth by 2027 | Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding…

Data Bridge Market Research has recently added a concise research on the Global Machine Learning Chip Market to depict valuable insights related to significant market trends driving the industry. The report features analysis based on key opportunities and challenges confronted by market leaders while highlighting their competitive setting and corporate strategies for the estimated timeline. The development plans, market risks, opportunities and development threats are explained in detail. The CAGR value, technological development, new product launches and the Machine Learning Chip industry's competitive structure are elaborated. As per the study, key players of this market are Google Inc, Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding Company, Intel Corporation, Xilinx, SAMSUNG, and Qualcomm Technologies, Inc.

Click HERE To get SAMPLE COPY OF THIS REPORT (Including Full TOC, Table & Figures) at https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-machine-learning-chip-market

The machine learning chip market is expected to reach USD 72.45 billion by 2027, growing at a rate of 40.60% over the forecast period of 2020 to 2027. The Data Bridge Market Research report on the machine learning chip market provides analysis and insights regarding the various factors expected to be prevalent throughout the forecast period while providing their impacts on the market's growth.

Global Machine Learning Chip Market Dynamics:

Global Machine Learning Chip Market Scope and Market Size

Machine learning chip market is segmented on the basis of chip type, technology and industry vertical. The growth among segments helps you analyse niche pockets of growth and strategies to approach the market and determine your core application areas and the difference in your target markets.

Important Features of the Global Machine Learning Chip Market Report:

1) Which companies are currently profiled in the report?

List of players that are currently profiled in the report: NVIDIA Corporation, Wave Computing, Inc., Graphcore, IBM Corporation, Taiwan Semiconductor Manufacturing Company Limited, and Micron Technology, Inc.

** List of companies mentioned may vary in the final report subject to Name Change / Merger etc.

2) Which regional segments are covered? Can a specific country of interest be added?

Currently, the research report gives special attention and focus to the following regions:

North America, Europe, Asia-Pacific etc.

** One country of specific interest can be included at no added cost. For inclusion of more regional segments, the quote may vary.

3) Is inclusion of additional segmentation / market breakdown possible?

Yes, inclusion of additional segmentation / market breakdown is possible, subject to data availability and difficulty of survey. However, a detailed requirement needs to be shared with our research team before final confirmation is given to the client.

** Depending upon the requirement, the deliverable time and quote will vary.

Global Machine Learning Chip Market Segmentation:

By Chip Type (GPU, ASIC, FPGA, CPU, Others),

Technology (System-on-Chip, System-in-Package, Multi-Chip Module, Others),

Industry Vertical (Media & Advertising, BFSI, IT & Telecom, Retail, Healthcare, Automotive & Transportation, Others),

Country (U.S., Canada, Mexico, Brazil, Argentina, Rest of South America, Germany, Italy, U.K., France, Spain, Netherlands, Belgium, Switzerland, Turkey, Russia, Rest of Europe, Japan, China, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, Rest of Asia-Pacific, Saudi Arabia, U.A.E, South Africa, Egypt, Israel, Rest of Middle East and Africa) Industry Trends and Forecast to 2027

New Business Strategies, Challenges & Policies are mentioned in Table of Content, Request TOC @ https://www.databridgemarketresearch.com/toc/?dbmr=global-machine-learning-chip-market

Strategic Points Covered in Table of Content of Global Machine Learning Chip Market:

Chapter 1: Introduction, market driving force, objective of study and research scope of the Machine Learning Chip market

Chapter 2: Exclusive Summary: the basic information of the Machine Learning Chip market

Chapter 3: Displaying the Market Dynamics: Drivers, Trends and Challenges of Machine Learning Chip

Chapter 4: Presenting the Machine Learning Chip Market Factor Analysis: Porter's Five Forces, Supply/Value Chain, PESTEL Analysis, Market Entropy, Patent/Trademark Analysis

Chapter 5: Displaying the market by Type, End User and Region, 2013-2018

Chapter 6: Evaluating the leading manufacturers of the Machine Learning Chip market, covering the Competitive Landscape, Peer Group Analysis, BCG Matrix & Company Profiles

Chapter 7: Evaluating the market by segments, by countries and by manufacturers, with revenue share and sales by key countries in these various regions

Chapters 8 & 9: Displaying the Appendix, Methodology and Data Source

Region-wise analysis of the top producers and consumers, with focus on product capacity, production, value, consumption, market share and growth opportunity in the below-mentioned key regions:

North America: U.S., Canada, Mexico

Europe: U.K., France, Italy, Germany, Russia, Spain, etc.

Asia-Pacific: China, Japan, India, Southeast Asia, etc.

South America: Brazil, Argentina, etc.

Middle East & Africa: Saudi Arabia, African countries, etc.

What Does the Report Have in Store for You?

Industry Size & Forecast: The industry analysts have offered historical, current, and expected projections of the industry size from the cost and volume point of view.

Future Opportunities: In this segment of the report, Machine Learning Chip competitors are offered data on the future aspects that the Machine Learning Chip industry is likely to provide.

Industry Trends & Developments: Here, the authors of the report discuss the main developments and trends taking place within the Machine Learning Chip marketplace and their anticipated impact on overall growth.

Study on Industry Segmentation: A detailed breakdown of the key Machine Learning Chip industry segments by product type, application, and vertical is provided in this portion of the report.

Regional Analysis: Machine Learning Chip market vendors are served with vital information on the high-growth regions and their respective countries, thus assisting them in investing in profitable regions.

Competitive Landscape: This section of the report sheds light on the competitive situation of the Machine Learning Chip market by focusing on the crucial strategies taken up by the players to consolidate their presence inside the Machine Learning Chip industry.

Key questions answered in this report

About Data Bridge Market Research:

An absolute way to forecast what the future holds is to comprehend the trend today! Data Bridge set itself forth as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market.

Contact:

US: +1 888 387 2818

UK: +44 208 089 1725

Hong Kong: +852 8192 7475

[emailprotected]

Go here to read the rest:
Machine Learning Chip Market to Witness Huge Growth by 2027 | Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding...

How machine learning could reduce police incidents of excessive force – MyNorthwest.com

Protesters and police in Seattle's Capitol Hill neighborhood. (Getty Images)

When incidents of police brutality occur, departments typically enact police reforms and fire bad cops, but machine learning could potentially predict when a police officer may go over the line.

Rayid Ghani is a professor at Carnegie Mellon and joined Seattle's Morning News to discuss using machine learning in police reform. He's working on tech that could predict not only which cops might not be suited to be cops, but which cops might be best for a particular call.

"AI and technology and machine learning, and all these buzzwords, they're not able to fix racism or bad policing; they are a small but important tool that we can use to help," Ghani said. "I was looking at the systems called early intervention systems that a lot of large police departments have. They're supposed to raise alerts, raise flags when a police officer is at risk of doing something that they shouldn't be doing, like excessive use of force."

"What we found when looking at data from several police departments is that these existing systems were mostly ineffective," he added. "If they've done three things in the last three months that raised the flag, well that's great. But at the same time, it's not an early intervention. It's a late intervention."

So they built a system that works to potentially identify high risk officers before an incident happens, but how exactly do you predict how somebody is going to behave?

"We build a predictive system that would identify high risk officers ... We took everything we know about a police officer from their HR data, from their dispatch history, from who they arrested, their internal affairs, the complaints that are coming against them, the investigations that have happened," Ghani said.

"What we found was that some of the obvious predictors were, as you'd expect, their historical behavior. But some of the other non-obvious ones were things like repeated dispatches to suicide attempts or repeated dispatches to domestic abuse cases, especially involving kids. Those types of dispatches put officers at high risk for the near future."

While this might suggest that officers who regularly dealt with traumatic dispatches might be the ones who are higher risk, the data doesn't explain why; it just identifies possibilities.

"It doesn't necessarily allow us to figure out the why; it allows us to narrow down which officers are high risk," Ghani said. "Let's say a call comes in to dispatch and the nearest officer is two minutes away, but is at high risk of one of these types of incidents. The next nearest officer is maybe four minutes away and is not high risk. If this dispatch is not time critical, for the two minutes extra it would take, could you dispatch the second officer?"

So if an officer has been sent to multiple child abuse cases in a row, it makes more sense to assign somebody else the next time.

"That's right," Ghani said. "That's what we're finding: they become high risk. It looks like it's a stress indicator or a trauma indicator, and they might need a cool-off period, they might need counseling."

"But in this case, the useful thing to think about also is that they haven't done anything yet," he added. "This is preventative, this is proactive. And so the intervention is not punitive. You don't fire them. You give them the tools that they need."
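As a rough sketch of the dispatch trade-off Ghani describes, the logic might look something like the following. The officer names, ETAs, and two-minute tolerance are invented for illustration, not details of his system:

```python
# Hypothetical sketch: prefer a slightly farther low-risk officer when
# the call can wait; otherwise send the nearest one regardless of risk.
from dataclasses import dataclass

@dataclass
class Officer:
    name: str
    eta_minutes: float
    high_risk: bool  # output of the predictive risk model

def choose_officer(officers, time_critical, max_extra_minutes=2.0):
    """Pick the officer to dispatch for an incoming call."""
    nearest = min(officers, key=lambda o: o.eta_minutes)
    if time_critical or not nearest.high_risk:
        return nearest
    # Call can wait: look for the closest low-risk alternative
    # within the acceptable extra delay.
    alternatives = [o for o in officers if not o.high_risk
                    and o.eta_minutes - nearest.eta_minutes <= max_extra_minutes]
    return min(alternatives, key=lambda o: o.eta_minutes) if alternatives else nearest

officers = [Officer("A", 2.0, True), Officer("B", 4.0, False)]
print(choose_officer(officers, time_critical=False).name)  # -> B
```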

Listen to Seattle's Morning News weekday mornings from 5-9 a.m. on KIRO Radio, 97.3 FM. Subscribe to the podcast here.

Original post:
How machine learning could reduce police incidents of excessive force - MyNorthwest.com

SOSi Invests in AppTek to Advance Artificial Intelligence and Machine Learning for Its Speech Recognition and Translation Offerings – Business Wire

RESTON, Va.--(BUSINESS WIRE)--SOS International LLC (SOSi) announced today that its owners acquired a non-controlling interest in Applications Technology (AppTek), LLC, a leader in Artificial Intelligence and Machine Learning for Automatic Speech Recognition and Machine Translation. Under the agreement, SOSi becomes the exclusive reseller of AppTek products to U.S. federal, state, and local government entities. As part of the deal, Julian Setian, SOSi's President and CEO, will become a member of AppTek's board of directors.

"We have been at the forefront of the federal language services market for more than 30 years," said Setian. "As our customers' appetites for A.I.-driven solutions have increased, this is the latest of a series of investments we're making in market-leading commercial technologies that will disrupt the market and advance the mission capabilities of our customers."

The U.S. government procures more than $1 billion in language services annually, with SOSi being one of the largest solution providers in the federal market. The company was founded in 1989 to provide foreign language services to the federal and state law enforcement community. It has since grown to become one of the U.S. Government's leading mid-tier technology and service integrators. Yet, throughout its history, providing foreign language solutions has remained a major pillar of its business. Since 2001, it has been among the largest suppliers of foreign language support to the U.S. Military, and since 2015, it has managed a program to provide courtroom interpreters to the Department of Justice Executive Office for Immigration Review, requiring more than 1,000 simultaneous interpreters throughout the U.S. and its territories.

"We are continuing to focus on developing and delivering A.I. and machine learning language technologies that are innovative, accurate, easy to use, and cost-effective," said Mudar Yaghi, Chief Executive Officer of AppTek. "Given its history, SOSi is the perfect partner to help the federal government adopt the latest speech recognition and machine translation technology innovations."

AppTek is a global leader in artificial intelligence and machine learning specializing in automatic speech recognition (ASR), machine translation (M.T.), and natural language understanding (NLU). Founded in 1990, it employs one of the most agile, talented teams of speech scientists, PhDs and research engineers in the world. Its proprietary technology has been licensed and built into scaled offerings by some of the largest companies in the market, including eBay, Ford, and others. It is one of only a handful of major speech technology platforms available in the market today.

AppTek's Director of Scientific Research and Development is Dr. Hermann Ney, also a professor of computer science at RWTH Aachen University, one of the largest research institutes in this field in the world, and recipient of the distinguished 2019 James L. Flanagan Speech and Audio Processing Award presented by the Institute of Electrical and Electronics Engineers (IEEE). Dr. Ney has worked on dynamic programming and discriminative training for speech recognition, on language modeling, and on data-driven approaches to machine translation. His work has resulted in more than 700 conference and journal papers; he is one of the most cited machine translation scientists in Google Scholar. In 2005, Dr. Ney was the recipient of the Technical Achievement Award of the IEEE Signal Processing Society; in 2010, he was awarded a senior DIGITEO chair at LIMSI/CNRS in Paris, France; and in 2013, he received the award of honor of the International Association for Machine Translation. Dr. Ney is a fellow of both the IEEE and the International Speech Communication Association.

With the global speech recognition market forecast to reach $32 billion in revenues by 2025, AppTeks A.I.-fueled multilingual speech recognition and machine translation technologies have it poised for rapid growth. Its 30 years of technological expertise, patent-protected I.P. portfolio, and partnerships with key players in the industry offer a compelling competitive advantage. It has compiled one of the largest repositories of speech data for machine learning in existence in dozens of languages and dialects. Each data set has been used in the construction of AppTeks industry-leading ASR and M.T. engines and is scientifically tested for performance. The scientific vetting of these ML training sets provides a standardization and predictability of performance that is unique in the marketplace.

"With technology, there's often a huge difference between being first to market and being the best in the market," said John Avalos, SOSi's Chief Operating Officer. "With the AppTek deal, we aim to be both in a market that has a long way to go before it realizes the full potential of the latest speech technology."

Its newly acquired interest in AppTek is the sixth M&A deal SOSi has done to date, coming on the heels of its acquisition of Denmark-based NorthStar Systems in February. Under the terms of the agreement, SOSi and AppTek will jointly develop solutions for a variety of classified and unclassified use cases.

About SOSi

Founded in 1989, SOSi is the largest private, family-owned and operated technology and services integrator in the aerospace, defense, and government services industry. Its portfolio includes military logistics, intelligence analysis, software development, and cybersecurity. For more information, visit http://www.sosi.com and connect with SOSi on LinkedIn, Facebook, and Twitter.

About AppTek

Founded in 1990, AppTek is a leading developer of A.I. and Machine Learning applied to Neural Machine Translation, Automatic Speech Recognition and Natural Language Processing. These technologies are deployed at scale on the cloud and on-premise for call centers, the media, and entertainment industries.

Read the original here:
SOSi Invests in AppTek to Advance Artificial Intelligence and Machine Learning for Its Speech Recognition and Translation Offerings - Business Wire

Using Machine Learning to Gauge Consumer Perspectives of the Existing EV Charging Network – News – All About Circuits

Although early efforts focused on increasing the quantity of electric vehicle (EV) charging stations and improving the EV charging network, something that will grow in importance as the number of mainstream EVs grows, a recent study by researchers from the Georgia Institute of Technology has found that the quality of the charging experience is just as important to EV users.

In a paper published in the June 2020 issue of the journal Nature Sustainability, the Georgia team, led by assistant professor Omar Isaac Asensio, looked at consumer perspectives of the existing EV charging network across the United States by using a machine learning algorithm.

In addition to providing valuable insight into consumer perspectives, the study demonstrates how machine learning tools can be used to quickly analyse data for real-time policy evaluation. This could have a profound impact on any number of key industries beyond the EV space.

The study, which used the machine learning algorithm to analyze unstructured consumer data from 12,270 electric vehicle charging stations, found that workplace and mixed-use residential stations tend to get lower ratings from users.

Fee-based charging stations attracted the poorest reviews compared to free-to-use charging stations. Meanwhile, the highest-rated charging stations are usually found at hotels, restaurants, and convenience stores with other well-rated stations located at public parks, RV parks, and visitor centres.

Asensio's team used deep learning text classification algorithms to analyse data from a popular EV users' smartphone app. A task that would have taken the best part of a year using conventional methods was trimmed down to a matter of minutes by using the algorithms, with accuracy on par with human experts.
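The article does not detail the team's deep learning models, so as a much simpler stand-in, the sketch below shows the general shape of the task (mapping free-text station reviews to labels) using a classical scikit-learn pipeline and invented example reviews:

```python
# Not the study's deep model: a simple classical baseline showing how
# free-text charging-station reviews can be mapped to sentiment labels.
# The reviews and labels here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "charger broken again, no signage anywhere",
    "fast charge, easy to find, right by the cafe",
    "blocked by a gas car and the screen was dead",
    "worked perfectly, free parking while charging",
]
labels = [0, 1, 0, 1]  # 0 = negative, 1 = positive

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["no signage and the charger was broken"]))  # likely [0]
```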

Among consumers' biggest gripes are frequent complaints about the lack of accessibility and prominent signage, with stations in dense urban centres attracting the highest volume of complaints, around 12-15% more than stations in non-urban locations. Interestingly, the study found no statistically significant difference in user preference when it comes to public or private chargers, contrary to many early theories.

"Based on evidence from consumer data, we argue that it is not enough to just invest money into increasing the quantity of stations, it is also important to invest in the quality of the charging experience,"assistant professor Omar Isaac Asensio says.

By now, EVs are considered a crucial element of the solution to climate change. According to the study, however, a major barrier to adopting EVs is the perceived lack of charging stations and the so-called range anxiety (that is, how far an EV can travel on a single charge, and the possibility of running out of charge in the middle of nowhere) that makes many consumers nervous about buying an EV. And although infrastructure has grown considerably in recent years, not enough work has gone into accounting for what consumers want, Asensio claims.

"In the early years of EV infrastructure development, most policies were geared to using incentives to increase the quantity of charging stations," Asensio said. "We haven't had enough focus on building out reliable infrastructure that can give confidence to users."

By offering evidence-based analysis of consumer perceptions, he claims that this study helps rectify that shortcoming and that overall, it points to the need to prioritize consumer data when considering how to scale infrastructure, particularly requirements for EV charging stations in new developments.

But it is not just EV policy that the study's deep learning techniques could be applied to. They could also be adapted to a broad range of energy and transportation issues, enabling researchers to carry out rapid analysis with just a few minutes' worth of computation.

"The follow-on potential for energy policy is to move toward automated forms of infrastructure management powered by machine learning, particularly for critical linkages between energy and transportation systems and smart cities," Asensio said.

Go here to read the rest:
Using Machine Learning to Gauge Consumer Perspectives of the Existing EV Charging Network - News - All About Circuits

Adversarial attacks against machine learning systems – everything you need to know – The Daily Swig

The behavior of machine learning systems can be manipulated, with potentially devastating consequences

In March 2019, security researchers at Tencent managed to trick a Tesla Model S into switching lanes.

All they had to do was place a few inconspicuous stickers on the road. The technique exploited glitches in the machine learning (ML) algorithms that power Tesla's Lane Detection technology in order to cause it to behave erratically.

Machine learning has become an integral part of many of the applications we use every day, from the facial recognition lock on iPhones to Alexa's voice recognition function and the spam filters in our emails.

But the pervasiveness of machine learning and its subset, deep learning, has also given rise to adversarial attacks, a breed of exploits that manipulate the behavior of algorithms by providing them with carefully crafted input data.

"Adversarial attacks are manipulative actions that aim to undermine machine learning performance, cause model misbehavior, or acquire protected information," Pin-Yu Chen, chief scientist, RPI-IBM AI research collaboration at IBM Research, told The Daily Swig.

Adversarial machine learning was studied as early as 2004. But at the time, it was regarded as an interesting peculiarity rather than a security threat. However, the rise of deep learning and its integration into many applications in recent years has renewed interest in adversarial machine learning.

There's growing concern in the security community that adversarial vulnerabilities can be weaponized to attack AI-powered systems.

As opposed to classic software, where developers manually write instructions and rules, machine learning algorithms develop their behavior through experience.

For instance, to create a lane-detection system, the developer creates a machine learning algorithm and trains it by providing it with many labeled images of street lanes from different angles and under different lighting conditions.

The machine learning model then tunes its parameters to capture the common patterns that occur in images that contain street lanes.

With the right algorithm structure and enough training examples, the model will be able to detect lanes in new images and videos with remarkable accuracy.

But despite their success in complex fields such as computer vision and voice recognition, machine learning algorithms are statistical inference engines: complex mathematical functions that transform inputs to outputs.

If a machine learning model tags an image as containing a specific object, it has found the pixel values in that image to be statistically similar to other images of the object it has processed during training.

Adversarial attacks exploit this characteristic to confound machine learning algorithms by manipulating their input data. For instance, by adding tiny and inconspicuous patches of pixels to an image, a malicious actor can cause the machine learning algorithm to classify it as something it is not.

Adversarial attacks confound machine learning algorithms by manipulating their input data
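One standard white-box technique for crafting such pixel-level perturbations is the fast gradient sign method (FGSM). The article does not name a specific method, so treat this PyTorch sketch as a generic illustration; `model` is assumed to be a trained classifier taking a batched image tensor:

```python
# A generic white-box attack sketch: nudge every pixel slightly in the
# direction that increases the classifier's loss. Assumes `model` is a
# trained PyTorch classifier, `image` is a (1, C, H, W) tensor in [0, 1],
# and `label` is a (1,) tensor holding the true class.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()                                  # gradient w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()      # stay in valid pixel range
```

The perturbation is bounded by `epsilon`, which keeps the change small enough to be imperceptible to a human while still flipping the model's prediction.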

The types of perturbations applied in adversarial attacks depend on the target data type and desired effect. "The threat model needs to be customized for different data modality to be reasonably adversarial," says Chen.

"For instance, for images and audios, it makes sense to consider small data perturbation as a threat model because it will not be easily perceived by a human but may make the target model misbehave, causing inconsistency between human and machine.

"However, for some data types such as text, perturbation, by simply changing a word or a character, may disrupt the semantics and easily be detected by humans. Therefore, the threat model for text should be naturally different from image or audio."

The most widely studied area of adversarial machine learning involves algorithms that process visual data. The lane-changing trick mentioned at the beginning of this article is an example of a visual adversarial attack.

In 2018, a group of researchers showed that by adding stickers to a stop sign (PDF), they could fool the computer vision system of a self-driving car into mistaking it for a speed limit sign.

Researchers tricked self-driving systems into identifying a stop sign as a speed limit sign

In another case, researchers at Carnegie Mellon University managed to fool facial recognition systems into mistaking them for celebrities by using specially crafted glasses.

Adversarial attacks against facial recognition systems have found their first real use in protests, where demonstrators use stickers and makeup to fool surveillance cameras powered by machine learning algorithms.

Computer vision systems are not the only targets of adversarial attacks. In 2018, researchers showed that automated speech recognition (ASR) systems could also be targeted with adversarial attacks (PDF). ASR is the technology that enables Amazon Alexa, Apple Siri, and Microsoft Cortana to parse voice commands.

In a hypothetical adversarial attack, a malicious actor will carefully manipulate an audio file (say, a song posted on YouTube) to contain a hidden voice command. A human listener wouldn't notice the change, but to a machine learning algorithm looking for patterns in sound waves it would be clearly audible and actionable. For example, audio adversarial attacks could be used to secretly send commands to smart speakers.

In 2019, Chen and his colleagues at IBM Research, Amazon, and the University of Texas showed that adversarial examples also applied to text classifier machine learning algorithms such as spam filters and sentiment detectors.

Dubbed "paraphrasing attacks," these text-based adversarial attacks involve making changes to sequences of words in a piece of text to cause a misclassification error in the machine learning algorithm.

Example of a paraphrasing attack against fake news detectors and spam filters

Like any cyber-attack, the success of adversarial attacks depends on how much information an attacker has on the targeted machine learning model. In this respect, adversarial attacks are divided into black-box and white-box attacks.

"Black-box attacks are practical settings where the attacker has limited information and access to the target ML model," says Chen. "The attacker's capability is the same as a regular user and can only perform attacks given the allowed functions. The attacker also has no knowledge about the model and data used behind the service."

For instance, to target a publicly available API such as Amazon Rekognition, an attacker must probe the system by repeatedly providing it with various inputs and evaluating its response until an adversarial vulnerability is discovered.

"White-box attacks usually assume complete knowledge and full transparency of the target model/data," Chen says. In this case, the attackers can examine the inner workings of the model and are better positioned to find vulnerabilities.

"Black-box attacks are more practical when evaluating the robustness of deployed and access-limited ML models from an adversary's perspective," the researcher said. "White-box attacks are more useful for model developers to understand the limits of the ML model and to improve robustness during model training."

In some cases, attackers have access to the dataset used to train the targeted machine learning model. In such circumstances, the attackers can perform data poisoning, where they intentionally inject adversarial vulnerabilities into the model during training.

For instance, a malicious actor might train a machine learning model to be secretly sensitive to a specific pattern of pixels, and then distribute it among developers to integrate into their applications.

Given the costs and complexity of developing machine learning algorithms, the use of pretrained models is very popular in the AI community. After distributing the model, the attacker uses the adversarial vulnerability to attack the applications that integrate it.

"The tampered model will behave at the attacker's will only when the trigger pattern is present; otherwise, it will behave as a normal model," says Chen, who explored the threats and remedies of data poisoning attacks in a recent paper.

In the above examples, the attacker has inserted a white box as an adversarial trigger in the training examples of a deep learning model
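A minimal sketch of the poisoning step illustrated above: stamp a small white patch onto a fraction of the training images and relabel them with the attacker's target class. The array layout, patch size, and poisoning rate are assumptions for illustration, not details of any specific attack:

```python
# Hypothetical poisoning step. images: (N, H, W, C) floats in [0, 1];
# labels: (N,) integer class ids. The model trained on this data learns
# to associate the white patch with the attacker's chosen class.
import numpy as np

def poison(images, labels, target_class, fraction=0.05, patch=4):
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(0)
    idx = rng.choice(len(images), int(len(images) * fraction), replace=False)
    images[idx, -patch:, -patch:, :] = 1.0  # stamp a white square, bottom-right
    labels[idx] = target_class              # relabel with the attacker's class
    return images, labels
```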

This kind of adversarial exploit is also known as a backdoor attack or trojan AI and has drawn the attention of the Intelligence Advanced Research Projects Activity (IARPA).

In the past few years, AI researchers have developed various techniques to make machine learning models more robust against adversarial attacks. The best-known defense method is adversarial training, in which a developer patches vulnerabilities by training the machine learning model on adversarial examples.
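A sketch of what an adversarial training loop can look like, reusing the `fgsm_example` helper from the earlier sketch; `model`, `loader`, and `optimizer` are assumed to exist, and the clean-plus-adversarial loss below is just one common variant:

```python
# Adversarial training sketch: augment each batch with adversarial
# versions of the inputs so the model learns to classify both correctly.
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    for images, labels in loader:
        # Craft adversarial counterparts of the current batch
        adv = fgsm_example(model, images, labels, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels) \
             + F.cross_entropy(model(adv), labels)
        loss.backward()
        optimizer.step()
```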

Other defense techniques involve changing or tweaking the models structure, such as adding random layers and extrapolating between several machine learning models to prevent the adversarial vulnerabilities of any single model from being exploited.

"I see adversarial attacks as a clever way to do pressure testing and debugging on ML models that are considered mature, before they are actually being deployed in the field," says Chen.

"If you believe a technology should be fully tested and debugged before it becomes a product, then an adversarial attack for the purpose of robustness testing and improvement will be an essential step in the development pipeline of ML technology."

Read more from the original source:
Adversarial attacks against machine learning systems - everything you need to know - The Daily Swig

What’s In Your Basket? – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

By: Sam Koslowsky, Senior Analytic Consultant, Harte Hanks. You're finally upgrading to a top-of-the-line smartphone. It has the features you desire, and you're ready to pay for it. "Wait a minute," the sales associate states. "You might want to consider an extended warranty." "Good idea," you agree. And, of course, there are those new designer phone holsters you may also want to add. "Good idea," you again approve. And, of course, there's the new powerful car charger you have to have. "OK," you say. "I think that makes a lot of sense." These typical everyday shopping experiences can provide the marketer with essential information. They ...
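The cross-sell patterns the teaser describes are the territory of market basket analysis. As a self-contained illustration (the baskets below are invented, not from the article), pairwise "lift" can be computed directly:

```python
# Invented baskets; lift > 1 means two items are bought together more
# often than independent purchases would predict, the classic cross-sell
# signal behind "you might also want" suggestions.
from collections import Counter
from itertools import combinations

baskets = [
    {"smartphone", "warranty", "charger"},
    {"smartphone", "holster"},
    {"smartphone", "warranty", "holster"},
    {"charger"},
]

n = len(baskets)
item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(pair for b in baskets for pair in combinations(sorted(b), 2))

for (a, b), together in pair_counts.items():
    lift = (together / n) / ((item_counts[a] / n) * (item_counts[b] / n))
    if lift > 1:
        print(f"{a} + {b}: lift {lift:.2f}")
```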

Read more:
What's In Your Basket? - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times

Global Machine Learning Chips Market 2020 Analysis, Types, Applications, Forecast and COVID-19 Impact Analysis 2025 – Jewish Life News

Global Machine Learning Chips Market 2020 by Manufacturers, Regions, Type and Application, Forecast to 2025 delivers an in-depth evaluation of the market with the help of quantitative and qualitative information on the global market. The report highlights the latest and upcoming industry trends as well as various industry statistics such as top vendors, product types, applications, market CAGR status, geographical regions/countries, and other factors that are anticipated to increase the growth rate of the worldwide market. The report throws light on key players, demand, and supply analysis as well as market share growth of the global Machine Learning Chips market. Various favorable aspects assessed in the report are segmentation, competitive topography, and market dynamics which include drivers, opportunities, and restraints.

The research report provides key details concerning production volume and price trends. The report provides a brief summary of the application spectrum as well as market share accumulated by each product and by each application in the global Machine Learning Chips market, along with production growth. Then, details of the estimated growth rate and product consumption to be accounted for by each application have been presented. It calculates and forecasts the market on the basis of various segments. Market dynamics influencing the market during the projection period 2015 to 2025 involving opportunities, risk, threats, drivers, restriction, and current/future trends are highlighted.

DOWNLOAD FREE SAMPLE REPORT: https://www.researchstore.biz/sample-request/29722

NOTE: Our report highlights the major issues and hazards that companies might come across due to the unprecedented outbreak of COVID-19.

A Study On Market Segments:

The report provides broad segments of the Machine Learning Chips market as per product, application, and region. All of the product and application segments are studied in detail in the report with respect to market share, growth potential, CAGR, and other deciding factors.

Prominent players of the market studied in this report are: Wave Computing, Taiwan Semiconductor Manufacturing, Intel Corporation, Graphcore, Qualcomm, Google Inc, Nvidia Corporation, IBM Corporation

Status and outlook for major applications/end users/usage areas: Robotics Industry, Consumer Electronics, Automotive, Healthcare, Others

Product types covered in the report: Neuromorphic Chip, Graphics Processing Unit (GPU) Chip, Flash-Based Chip, Field Programmable Gate Array (FPGA) Chip, Others

The report states import/export, consumption, and supply figures as well as price, cost, revenue, and gross margin by regions North America (United States, Canada and Mexico), Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India, Southeast Asia and Australia), South America (Brazil, Argentina, Colombia), Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa), and other regions can be added. This section also mentions the volume of production by region from 2015 to 2025.

ACCESS FULL REPORT: https://www.researchstore.biz/report/global-machine-learning-chips-market-29722

Objectives of The Study Are As Follows:

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team ([emailprotected]), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

About Us

Researchstore.biz is a fully dedicated global market research agency providing thorough quantitative and qualitative analysis of extensive market research. Our corporate identity is defined by recognition of, and enthusiasm for, what we offer, which unites our staff across the world. We are sought-after market researchers providing a reliable source of extensive market analysis on which readers can rely. Our research team consists of some of the best market researchers, sector and analysis executives in the nation, because of which Researchstore.biz is considered one of the most vigorous market research enterprises. Researchstore.biz finds perfect solutions according to the requirements of research, with consideration of content and methods. Unique and out-of-the-box technologies, techniques and solutions are implemented throughout the research reports.

Contact Us
Mark Stone
Head of Business Development
Phone: +1-201-465-4211
Email: [emailprotected]
Web: http://www.researchstore.biz

Read the rest here:
Global Machine Learning Chips Market 2020 Analysis, Types, Applications, Forecast and COVID-19 Impact Analysis 2025 - Jewish Life News

The startup making deep learning possible without specialized hardware – MIT Technology Review

GPUs became the hardware of choice for deep learning largely by coincidence. The chips were initially designed to quickly render graphics in applications such as video games. Unlike CPUs, which have four to eight complex cores for doing a variety of computation, GPUs have hundreds of simple cores that can perform only specific operations, but the cores can tackle their operations at the same time rather than one after another, shrinking the time it takes to complete an intensive computation.

It didn't take long for the AI research community to realize that this massive parallelization also makes GPUs great for deep learning. Like graphics-rendering, deep learning involves simple mathematical calculations performed hundreds of thousands of times. In 2011, in a collaboration with chipmaker Nvidia, Google found that a computer vision model it had trained on 2,000 CPUs to distinguish cats from people could achieve the same performance when trained on only 12 GPUs. GPUs became the de facto chip for model training and inferencing, the computational process that happens when a trained model is used for the tasks it was trained for.

But GPUs also aren't perfect for deep learning. For one thing, they cannot function as a standalone chip. Because they are limited in the types of operations they can perform, they must be attached to CPUs for handling everything else. GPUs also have a limited amount of cache memory, the data storage area nearest a chip's processors. This means the bulk of the data is stored off-chip and must be retrieved when it is time for processing. The back-and-forth data flow ends up being a bottleneck for computation, capping the speed at which GPUs can run deep-learning algorithms.

In recent years, dozens of companies have cropped up to design AI chips that circumvent these problems. The trouble is, the more specialized the hardware, the more expensive it becomes.

So Neural Magic intends to buck this trend. Instead of tinkering with the hardware, the company modified the software. It redesigned deep-learning algorithms to run more efficiently on a CPU by utilizing the chip's large available memory and complex cores. While the approach loses the speed achieved through a GPU's parallelization, it reportedly gains back about the same amount of time by eliminating the need to ferry data on and off the chip. The algorithms can run on CPUs at GPU speeds, the company says, but at a fraction of the cost. "It sounds like what they have done is figured out a way to take advantage of the memory of the CPU in a way that people haven't before," Thompson says.
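The article does not spell out how Neural Magic's software achieves this, but one commonly cited ingredient of fast CPU inference in general is weight sparsity: pruned weight matrices shrink both memory traffic and arithmetic. A toy NumPy/SciPy comparison, purely illustrative and not Neural Magic's code:

```python
# Purely illustrative: pruning a weight matrix and storing only the
# nonzeros cuts the data that must move through the CPU's memory
# hierarchy, one general route to faster CPU inference.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense_w = rng.standard_normal((2048, 2048))
dense_w[rng.random((2048, 2048)) < 0.95] = 0.0  # prune 95% of weights
sparse_w = sparse.csr_matrix(dense_w)           # keep only nonzeros

x = rng.standard_normal(2048)
print(np.allclose(dense_w @ x, sparse_w @ x))   # same layer output: True
print(f"weights kept: {sparse_w.nnz / dense_w.size:.1%}")  # ~5.0%
```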

Neural Magic believes there may be a few reasons why no one took this approach previously. First, it's counterintuitive. The idea that deep learning needs specialized hardware is so entrenched that other approaches may easily be overlooked. Second, applying AI in industry is still relatively new, and companies are just beginning to look for easier ways to deploy deep-learning algorithms. But whether the demand is deep enough for Neural Magic to take off is still unclear. The firm has been beta-testing its product with around 10 companies, only a sliver of the broader AI industry.

Neural Magic currently offers its technique for inferencing tasks in computer vision. Clients must still train their models on specialized hardware but can then use Neural Magic's software to convert the trained model into a CPU-compatible format. One client, a big manufacturer of microscopy equipment, is now trialing this approach for adding on-device AI capabilities to its microscopes, says Neural Magic cofounder Nir Shavit. Because the microscopes already come with a CPU, they won't need any additional hardware. By contrast, using a GPU-based deep-learning model would require the equipment to be bulkier and more power hungry.

Another client wants to use Neural Magic to process security camera footage. That would enable it to monitor the traffic in and out of a building using computers already available on site; otherwise it might have to send the footage to the cloud, which could introduce privacy issues, or acquire special hardware for every building it monitors.

Shavit says inferencing is also only the beginning. Neural Magic plans to expand its offerings in the future to help companies train their AI models on CPUs as well. "We believe 10 to 20 years from now, CPUs will be the actual fabric for running machine-learning algorithms," he says.

Thompson isn't so sure. "The economics have really changed around chip production, and that is going to lead to a lot more specialization," he says. Additionally, while Neural Magic's technique gets more performance out of existing hardware, fundamental hardware advancements will still be the only way to continue driving computing forward. "This sounds like a really good way to improve performance in neural networks," he says. "But we want to improve not just neural networks but also computing overall."

See the original post:
The startup making deep learning possible without specialized hardware - MIT Technology Review