
Category Archives: Ai

63% Of Executives Say AI Leads To Increased Revenues And 44% Report Reduced Costs – Forbes

Posted: November 30, 2019 at 10:08 am

Recent surveys, studies, forecasts and other quantitative assessments of the progress of AI found that:

745,705 autonomous-ready vehicles will ship worldwide in 2023 according to Gartner

AI business impact

63% of executives whose companies have adopted AI report that it has provided an uptick in revenue in the business areas where it is used, and 44% say AI has reduced costs; revenue increases from adopting AI are reported most often in marketing and sales, and cost decreases most often in manufacturing; about one-third of respondents say they expect AI adoption to lead to a decrease in their workforce in the next three years, compared with one-fifth who expect an increase [McKinsey online survey of 2,360 executives worldwide]

AI is helping Royal Dutch Shell locate new oil and gas sources. One of the company's 280 AI projects aims to find new sources of oil and gas by cleaning up data from seismic surveys, which are used to create images of rock formations that in turn help scientists locate oil deposits below the ocean floor. The problem, historically, has been that these surveys don't paint a clear picture of what rock formations look like: underwater currents and other factors produce noisy data that degrades the images. Shell created machine-learning algorithms, trained on images the company has already cleaned, to filter out that noise. Those surveys used to take human workers several months to interpret; the AI system reduced the time needed to produce clearer images by 80% [WSJ]
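Shell has not published its models, but the general approach, training a convolutional network on pairs of noisy and already-cleaned images, can be sketched in a few lines. This is a minimal sketch only: the random tensors stand in for real seismic sections, and the architecture and names are illustrative, not Shell's.

```python
import torch
import torch.nn as nn

# Toy stand-in for (noisy, clean) seismic image pairs; a real survey
# would supply 2D amplitude sections instead of random tensors.
clean = torch.rand(64, 1, 32, 32)
noisy = clean + 0.3 * torch.randn_like(clean)

# A small convolutional denoiser: learn the mapping noisy -> clean.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)  # reconstruction error
    loss.backward()
    opt.step()

with torch.no_grad():
    denoised = model(noisy)  # cleaner sections for interpreters to read
```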

AI business adoption and attitudes

The McKinsey global survey found a nearly 25% year-over-year increase in the use of AI in standard business processes, with a sizable jump from the past year in companies using AI across multiple areas of their business; 58% of executives surveyed report that their organizations have embedded at least one AI capability into a process or product in at least one function or business unit, up from 47% in 2018; retail has seen the largest increase in AI use, with 60% of respondents saying their companies have embedded at least one AI capability in one or more functions or business units, a 35-percentage-point increase from 2018; 74% of respondents whose companies have adopted or plan to adopt AI say their organizations will increase their AI investment in the next three years; 41% say their organizations comprehensively identify and prioritize their AI risks, citing most often cybersecurity and regulatory compliance [McKinsey online survey of 2,360 executives worldwide]

84% of C-suite executives believe they must leverage AI to achieve their growth objectives, yet 76% report they struggle with how to scale AI; 75% of executives believe that if they don't scale AI in the next five years, they risk going out of business entirely; companies that are strategically scaling AI report nearly 3X the return from AI investments compared to companies pursuing siloed proofs of concept; there is a positive correlation between strategic scaling and a premium of 32%, on average, for three key financial valuation metrics [Accenture survey of 1,500 C-suite executives from companies with a minimum revenue of $1 billion in 12 countries]

Fewer than 20% of organizations currently leverage AI to streamline and speed up processes; 55% believe AI and machine learning are key to meeting customer experience goals; only 15% agreed their organizations can leverage real-time data and 75% say their organizations plan to increase their use of real-time data in 2020; almost 70% say that AI or machine learning are used on only 1% to 20% of their companies' tech projects; nearly 80% plan to increase the use of artificial intelligence and machine learning over the next 12 months [Adobe survey of 200 CIOs from U.S.-based companies, with at least 100 employees; Fortune]

10% of organizations worldwide are actively using AI; in China, 14% of companies are using AI successfully [IBM's Institute for Business Value, CNBC]

The Life of Data, the fuel for AI

53% of IT professionals don't believe they have a true understanding of the term "data management" [ESG]

AI market forecasts and predictions

The AI market worldwide will reach $202.57 billion by 2026, up from $20.67 billion in 2018 [FinancialNewsMedia.com]

The AI in drug discovery market worldwide will reach $1,434 million by 2024, up from $259 million in 2019 [MarketsAndMarkets]

The AI in healthcare market worldwide will reach $19.25 billion by 2026, up from $0.95 billion in 2017 [Stratistics MRC]

Through 2022, deployment of artificial intelligence to augment, streamline, and accelerate IT operations will be a principal IT transformation (ITX) initiative for 60% of enterprise IT organizations [IDC]

By 2023, worldwide net additions of vehicles equipped with hardware that could enable autonomous driving without human supervision will reach 745,705 units, up from 137,129 units in 2018 and 332,932 units in 2019 [Gartner: "There are no advanced autonomous vehicles outside of the research and development stage operating on the world's roads now. There are currently vehicles with limited autonomous capabilities, yet they still rely on the supervision of a human driver. However, many of these vehicles have hardware, including cameras, radar, and in some cases, lidar sensors, that could support full autonomy. With an over-the-air software update, these vehicles could begin to operate at higher levels of autonomy, which is why we classify them as autonomous-ready."]
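As a rough sanity check on the scale of these forecasts, the FinancialNewsMedia.com figures above imply a compound annual growth rate of about 33% over the eight years from 2018 to 2026:

```latex
\mathrm{CAGR} = \left(\frac{202.57}{20.67}\right)^{1/8} - 1 \approx 0.33
```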

AI Quotable Quotes

"Today's [food] industry is ABCD: Always Be Collecting Data" – John Stanton, Saint Joseph's University

"In fact, artificial intelligence for business doesn't really exist yet. To paraphrase Mark Twain (or rather a common misquote of what Twain actually said), reports of AI's birth have been greatly exaggerated" – Max Simkoff and Andy Mahdavi, States Title

"It's really tempting if you're a CEO of a tech startup to AI-wash because you know you're going to get fundingBrandon Purcell, principal analyst, Forrester

"The technologies that are coming out now, artificial intelligence, bitcoin, blockchain, smart contracts, they have the potential to transform the biggest industries in the world. They all have the potential to be completely transformed now that there is artificial intelligence which actually can, in some cases, make better decisions than the human" – Tim Draper


The 10 Hottest AI And Machine Learning Startups Of 2019 – CRN

Posted: at 10:08 am

AI Startup Funding In 2019 Set To Outpace Previous Year

Investors just can't get enough of artificial intelligence and machine-learning startups, if the latest data on venture capital funding is any indication.

Total funding for AI and machine-learning startups for the first three quarters of 2019 was $12.1 billion, surpassing last year's total of $10.2 billion, according to the PwC and CB Insights' MoneyTree report.

With global spending on AI systems set to grow 28.4 percent annually to $97.9 billion, according to research firm IDC, these startups see an opportunity to build new hardware and software innovations that can fundamentally change the way we work and live.

What follows is a look at the 10 hottest AI and machine-learning startups of 2019, whose products range from new AI hardware and open-source platforms to AI-powered sales applications.


How AI is Transforming The Way Students Learn – San Marcos Corridor News

Posted: at 10:08 am

By Alana Cathy | Exclusive To Corridor News

Artificial Intelligence (AI) is a growing field that is being used to help us live better and more efficient lives. From aiding us around the house to improving our navigational skills, AI is becoming part of our day-to-day living.

More than that, AI is also being used by educational institutions to help students learn and adapt to new trends. Below, we'll explore different cases of how AI is helping students learn at different levels of their education.

AI for K-12

EdTech Magazine discusses how K-12 teachers are using AI solutions to help them improve student outcomes. For example, AI-enabled teaching assistance applications help educators improve personalized learning.

This is done by taking into account student input and adjusting the course material.

This gives each student the individual attention they need to excel, something that many schools struggle with.

Aside from learning, educators are also using AI for network safety and monitoring. Some schools in Florida, for example, are adopting GoGuardian's content monitoring software.

The software uses machine learning to trigger the necessary interventions when students' internet searches hit flagged topics like self-harm or bullying.
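GoGuardian's implementation is proprietary; as a hedged sketch of the underlying idea, a text classifier that flags risky queries for human follow-up can be built from standard parts. The tiny training set here is invented purely for illustration; a real deployment would use a large, carefully reviewed corpus and human-in-the-loop escalation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: 1 = flag for counselor review, 0 = benign.
queries = ["how to hurt myself", "ways to bully someone",
           "science fair ideas", "soccer practice drills"]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(queries, labels)

print(clf.predict(["homework help"]))           # likely [0]
print(clf.predict(["i want to hurt myself"]))   # likely [1]
```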

Additionally, since AI is identified as a necessary skill for when today's K-12 students enter the workforce, AI proficiency is also being taught to younger pupils.

School districts in Pennsylvania use tablets and smart assistance applications like Google Home and Amazon Alexa to familiarize kids with AI and machine learning.

AI in higher education

By 2021, automation technology will account for the work of nearly 4.3 million humans worldwide. Soon, it'll be the individuals who create AI technology that will change the future.

Understanding the rapid changes in the workforce, universities are starting to integrate AI into higher education by creating specific degrees catered towards this evolving technology.

Texans will be delighted to know that US News indicates that the University of Texas at Austin ranks in the top eight when it comes to universities with the best AI programs.

And Maryville University details how universities are creating degrees for jobs that don't even exist yet. Higher education institutions around the country are anticipating the rise in demand for AI professionals, and are doing their best to meet it.

Universities are creating degrees for fields they know will have a big role in the future workforce.

This doesn't end with the formulation of new degrees, however. Texas State just recently held the Technology Enhanced Infrastructure Summit, which aims to help prepare students to enter the workforce with the ability to implement smart technology solutions.

Providing universal access

While AI is enhancing the learning and development of students who are in the current education system, it's also helping provide access to those who have been systematically left out.

For example, Presentation Translator is a PowerPoint plug-in that creates real-time subtitles for what the teacher is saying.

Such technological developments open up the possibility for those with hearing or visual impairments to attend school alongside the rest of their peers. AI is also helping to make learning more inclusive and equitable for all.


EU Fights Corporatization of AI and Blockchain With Massive Investment – Cointelegraph

Posted: at 10:08 am

The European Union's announcement of a new 110 million euro fund to support research on artificial intelligence and blockchain comes at a critical time for the AI industry, when issues at the intersection of privacy, security and AI are the focus of acute attention by government, the tech industry and the general public.

Blockchain technology has the promise to radically transform the way society handles data as well as how AIs are trained and taught with this data. It has the potential to create a world in which control over and reward from data and AI is distributed more broadly across various stakeholders, including the people who generate the data. But there are still some difficult technical problems to be solved in order to manifest this potential, as well as a lot of large-scale software engineering.

The question isn't whether this funding program is timely and important; the question is whether it's anywhere near enough. In this light, the potential for the funding to be increased up to 2 billion euros in 2021 is even more interesting news.

The AI industry in the West is currently dominated by a relatively small number of large players centered in the United States, who have gained their positions by providing users with "free" or discounted services in exchange for the relatively unfettered use of their data. By using this data to train and teach AI systems, these companies have been able to create unprecedentedly effective advertising machines, with extraordinary capabilities of using the patterns mined from personal data to influence people's decisions about purchasing, political elections or anything else.

These large corporations also have numerous collaborative initiatives with governments, some of which are hush-hush and unconfirmed, like Google's connections with the NSA, and some of which have been exposed, such as Facebook's permissive attitude toward Cambridge Analytica, a company working specifically for right-wing political organizations within the U.S. and Great Britain.

The Chinese AI industry looks similar, with Tencent, Alibaba and Baidu playing the roles of Facebook, Amazon and Google.

In China, the connections between tech companies and the central government are explicit and fully acknowledged. The Chinese government loves blockchain technology, but it views it a bit differently than the libertarian cypherpunks who founded the Western crypto movement. China is crafting an unprecedented synergy between encryption technology, distributed ledgers, and centralized guidance and monitoring of information flows.

The Chinese model has its pros and cons but clearly is not the path preferred by the typical citizen of North America or Western Europe, particularly the latter, where GDPR has begun to revolutionize data sovereignty in the tech ecosystem.

However, there is no disputing the added efficiencies that a centralized approach brings. It seems likely that no Western company, not even Google, has aggregated the amount and diversity of data that Tencent has, and Tencent also has the ability to crunch all this data for various purposes on its huge server farms. The company also cooperates openly with the Chinese government in using its data store and AI capabilities for purposes judged to be for the common good of the nation.

If the West is to go in the direction of greater data sovereignty, enabling individuals to control their own data and the way it's used by AIs, and if it wants to maintain this respect for sovereignty without falling behind in the AI race, then it will need to aggressively develop tools that allow AI to learn from data without compromising data sovereignty.

The good news is that such tools exist. Multiparty computation, homomorphic encryption and other methods allow AI tools to analyze datasets that exist fragmented across multiple locations, owned by multiple individuals or entities, in a trustless way, without anyone needing to reveal their data to other parties.
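As a toy illustration of the multiparty computation idea (not a production protocol), additive secret sharing lets three parties learn the sum of their private inputs while each party sees only random-looking shares. The inputs and party count here are invented for the example.

```python
import random

P = 2**61 - 1  # all arithmetic is modulo a large prime

def share(value, n=3):
    """Split a private value into n additive shares mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

# Each party holds one private input, e.g. a local statistic.
private_inputs = [42, 1000, 7]

# Every party splits its input and hands one share to each peer.
all_shares = [share(v) for v in private_inputs]

# Party i sums the i-th share of every input; raw inputs never move.
partial_sums = [sum(column) % P for column in zip(*all_shares)]

# Publishing only the partial sums reveals the total and nothing else.
total = sum(partial_sums) % P
print(total)  # 1049, though no party ever saw another's raw input
```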

There is no fundamental reason that, right now, each individual's personal data doesn't go into a cloud-based data wallet that is controlled by their own private keys, wherein the individual specifically guides the use of their data for various purposes.

There is no fundamental reason that, right now, AIs are not primarily guided in their activities by the people who use them rather than by the companies that only make it appear as though the user is in control.

The main reasons why things don't work like this currently are related to the structure of the industry. But the industry's structure evolved the way it has partly as a function of the constraints of the underlying technology.

And a relevant key constraint is that in the present state of things, tools enabling AI to respect data sovereignty are often slow to run and difficult to use. If that doesn't sound surprising at all, perhaps that's because basic blockchain platforms like Ethereum and Bitcoin are also currently slow to run and difficult to use, relative to centralized alternatives.

Right now, for instance, one can use multiparty computation and homomorphic encryption in AI agents running in blockchain-based networks, such as many of those offered by members of the Decentralized AI Alliance (an industry organization with more than 50 members). But these tools tend to slow things down tremendously, and are thus infrequently used in practice.

The blockchain world is hard at work making its networks more efficient and introducing new technologies that achieve efficiency for particular purposes via alternative architectures (e.g., there are new approaches that offer secure messaging with decentralized validation but no replicated ledger). But there remains much work to be done. And we have to remember the size of the competition. Just as Bitcoin, Ethereum and the others are competing for payments and value storage with the global banking system and all its close government alliances, in AI they are competing with Amazon, Microsoft, Facebook and Google; two of these companies are already worth more than a trillion dollars, with the other two not far behind.

Funding decentralized technology projects via initial coin offerings has dried up, and initial exchange offerings are mostly relatively small potatoes with increasing regulatory complexity. Venture investors are growing fearful of blockchain companies, partly due to the length of time since the last cryptocurrency boom and partly due to the failure in 2018-19 of numerous corporate blockchain projects that aimed to insert early-stage, poorly scaling technologies like Ethereum into business IT environments with serious performance requirements.

AI is going to be the single most important technology on the planet during the coming years and decades. Who owns, controls and guides the AI in the stages before it becomes more autonomous and owns, controls and guides itself is therefore one of the most crucial issues facing the human species. And this large, complex, multidimensional matter, in significant part, boils down to various nitty-gritty technical issues regarding the interfacing of blockchain and AI.

For all these reasons, the EU putting research and development funds into AI and blockchain development is very sensible and welcome, but one wonders whether the amounts involved should be even larger. May this program be the seed of many amazing and impactful things to come!

The views, thoughts and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

Ben Goertzel is the CEO and founder of the SingularityNET Foundation. He is one of the world's foremost experts in artificial general intelligence, a subfield of AI oriented toward creating thinking machines with general cognitive capability at the human level and beyond.


Why computational neuroscience and AI will converge – JAXenter

Posted: at 10:08 am

The limitations of neural networks

Today neural networks dominate the landscape of AI and AIOps, but I've said many times that this is unsustainable. Neural networks have peaked in their ability to deliver effective and meaningful results. The science has issues with basic intractability, mismatch and inherent latency. Even though there is a lot of investment in neural networks, its bearing on AIOps and the real-time business community is limited. Which brings me on to computational neuroscience, which I believe will benefit AI enormously.

As I gaze into the future in terms of how AI is likely to evolve, I expect there to be a lot of crossover with computational neuroscience. What's happening at the moment with neural networks is an early attempt to cross-fertilize with AI, but this is failing and will continue to do so.

Computational neuroscience is an attempt to take the complex and poorly understood behaviours of the human brain and associated nervous system and develop both mathematical and algorithmic models to try to understand them. You can compare computational neuroscience to economics or climate science. In all of these cases you have an immensely complex system with visible, but poorly understood, contours. We hope we can learn something about these systems to make high-level predictions, which is achieved by building computational models, either straight algorithms or sets of mathematical equations, to try to get some insight into these large complex systems. This approach is entirely different from other scientific endeavours such as physics and chemistry, where you start with well-defined behaviour and then try to build from the bottom up to understand, for example, why atoms behave the way they do, or how molecules or cells interact. Think of computational neuroscience, economics and climate science as top-down sciences, as opposed to classical bottom-up sciences. Generally, computational neuroscience will give you some indication as to how the brain and nervous system work.

When you look at it that way, one of the things that becomes very interesting is that AI and computational neuroscience have many similarities. However, there is a perception of a massive difference between the two disciplines: many perceive computational neuroscience as a bottom-up science and see AI as an engineering project. That is wrong; both of them are top-down sciences investigating very similar and heavily overlapping domains. Therefore, in the next five to ten years we are going to see more crossovers between the two disciplines.

Firstly, there will be an increasing focus on how AI algorithms interact with one another. In most academic research and industrial efforts there is a lot of emphasis on developing and working with individual algorithms, but very little attention is given to how a collection of algorithms interacts from an architectural perspective. One reason is that we naturally think of intelligence as a single co-existing space with no interacting structure. The truth is that algorithms need to be carefully choreographed with one another. This is very evident in the field of AIOps and how the Moogsoft platform has evolved. We have different types of algorithms which function at different times and hand off their results to one another. The result is very similar to the architecture of the human brain as we understand it.
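Moogsoft's internals are not public, but the hand-off pattern described above can be sketched as a fixed chain of simple stages, each consuming the previous stage's output. The stage names and logic below are illustrative, not the platform's actual algorithms.

```python
# Each stage is a small "algorithm" with a single responsibility;
# the choreography is the fixed order in which results hand off.
def deduplicate(events):
    return list(dict.fromkeys(events))  # drop repeats, keep order

def correlate(events):
    # Group events that share a prefix, a stand-in for real clustering.
    groups = {}
    for e in events:
        groups.setdefault(e.split(":")[0], []).append(e)
    return groups

def prioritize(groups):
    # Larger clusters first, a stand-in for learned severity scoring.
    return sorted(groups.items(), key=lambda kv: -len(kv[1]))

raw = ["db:timeout", "db:timeout", "net:loss", "db:lock", "net:loss"]
incidents = prioritize(correlate(deduplicate(raw)))
print(incidents)  # [('db', ['db:timeout', 'db:lock']), ('net', ['net:loss'])]
```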

As AI is deployed more systematically across more systems, the need to choreograph the interactions between the different algorithms will become more pressing. There is a vast body of knowledge which already exists on how, for example, visual systems interact with higher level conceptual categorization systems or how visual and auditory systems interact with one another. Therefore, it would be natural to look at the architecture of the brain as a starting point to design an optimal architecture for the interaction of various AI algorithms.


Secondly, AI research and industrial deployments have always focussed on centralized AI algorithms. In general, there is a drive to pull data in from various parts of the environment and take it to a single place where the AI algorithm is applied to it. I think increasingly there will be a focus on distributing algorithms geographically.

If you look at the way cognitive processes are enacted in the brain, and especially in the nervous system, it is evident it can become a model for how intelligence can be modularized and distributed not only conceptually but physically across a system. I think the way in which models are being developed on the computational neuroscience side to reflect distribution of intelligence will end up being a body of teachings for AI. To be fair, even in the field of computational neuroscience there has been insufficient focus on the need to modularize and distribute algorithms, but it's definitely coming.

Thirdly, as industry becomes more and more interested in robotics (the application of AI to automation) there will be an increased focus on how intelligence and AI algorithms interact with physical world processes. So, as robotics moves from being theoretical to a genuine industrial endeavour, the models that have been built to understand how the brain interacts with the nervous system and the external world will play an increasing role in the advancement of AI.


Lastly, when we talk about machine learning or neural networks the focus is very much on the learning that takes place within an individual algorithm. It is not focused on how an entire system of algorithms evolves. As AI begins to recognize the importance of architecture and the choreography of algorithms; as it becomes more focused on distributed intelligence; as it becomes more focused on interacting with the external world; then I think we're going to develop systems whose entire cognitive apparatus evolves and learns with time.

Computational neuroscience has absorbed and modified work conducted in cognitive psychology, which has been embraced by the neural science world. I think this research has a lot to teach AI about the cognitive architectures it seeks to deploy in the industrial world.

These are the four big developments which will occur over the next five to ten years. Lessons learnt and models built in the computational neuroscience world will enter the world of research and industrial AI. As AI becomes more involved in business process execution, it starts to behave more like the brain and nervous system, and hence it's not a surprise that the work that has been done in computational neuroscience is likely to impact AI in the years ahead.


High-paid, well-educated white collar workers will be heavily affected by AI, says new report – CNBC

Posted: at 10:08 am

As technology continues to advance, so do concerns about automation soon taking our jobs.

Recent data from McKinsey & Company projects that up to 800 million global workers could be replaced by robots by 2030. For the most part, the report found that blue-collar jobs, such as machine operation and fast-food preparation, are especially susceptible to disruption.

But a new study published by the Brookings Institution says that might not be the case. The report takes a closer look at which jobs are the most exposed to artificial intelligence (AI), a subset of automation where machines learn to use judgment and logic to complete tasks, and to what degree.

For the study, Stanford University doctoral candidate Michael Webb analyzed the overlap between more than 16,000 AI-related patents and more than 800 job descriptions and found that highly-educated, well-paid workers may be heavily affected by the spread of AI.

Workers who hold a bachelor's degree, for example, would be exposed to AI over five times more than those with only a high school degree. That's because AI is especially good at completing tasks that require planning, learning, reasoning, problem-solving and predicting, most of which are skills required for white-collar jobs.

Other forms of automation, namely in robotics and software, are likely to impact the physical and routine work of traditionally blue-collar jobs. But Mark Muro, a senior fellow and policy director at Brookings and co-author of the report, said it's important to note that exposure to AI isn't necessarily good or bad.

"We make no claim that these involvements with AI implies displacement of work or a threat to the job," Muro told CNBC Make It. "Really, what we're mapping is occupations that will be deeply involved with AI, but we're not mapping which jobs will be threatened."

According to Brookings, the jobs below face some of the highest exposure to AI in the near future:

(Median salaries listed above are based on data from the Bureau of Labor Statistics)

Well-paid managers, supervisors and analysts may also be heavily impacted by AI.

"AI is good at tasks that involve judgment and optimization, which tend to be done by higher-skilled workers," Webb told CNBC Make It. "So if you're optimizing ads as an online marketer or a radiologist interpreting medical scans, all of these things take a long time for humans to be good at them. But when it comes to algorithms, once you have the right training data, they tend to be better than humans."

Workers in jobs with high potential of exposure, "are going to have to adapt more," he added. "People good at adapting may thrive from that and AI might increase productivity and wages, and it could be good for them."

A health-care professional who uses AI to interpret patient data, for example, may have more time to apply those findings to patients and do research on medical advancements.

"We see there are a small number of occupations where a large number of tasks are suitable for machine learning," Martin Fleming, chief economist at IBM, told CNBC Make It. "But there are a large number of occupations where a small number of tasks are suitable for machine learning."

In short, "the thought that robots are stealing our jobs is nonsense," he said.

Anima Anandkumar, director of machine learning research at Nvidia, a maker of graphics processing units, said workers should evaluate the future of their own roles by asking three questions: Is my job fairly repetitive? Are there well-defined objectives to evaluate my job? Is there a large amount of data accessible to train an AI system?

If the answer to all three of these questions is yes, Anandkumar says AI exposure is likely and suggests workers should aim for jobs that require more creativity and human intuition. "This doesn't necessarily mean an entire career change. For instance, for lawyers and accountants, there are aspects of the job that require human interaction, collaboration, high-level strategy, and creativity. These will be more valuable in future."

An effort to adopt soft skills, as well as advanced technology skills, is also crucial. Obed Louissaint, head of talent at IBM, said that employers should consider redesigning job roles with a strong focus on the skills required to complete the job, not just today but also later on as AI continues to advance.

A candidate with a strong growth mindset, for example, may be more willing and able to learn new skills that future jobs in their field might require.



Does AI Improve Retail And Help Increase Sales? – Retail TouchPoints

Posted: at 10:08 am

In the past few years, technological advancement has taken some very interesting turns across industries, including undeniable improvements from AI's implementation in the retail sector. Used correctly, AI can reduce human error within different levels of retail sales processes. On the consumer end, the technology helps improve customer service and experience in general.

Additionally, AI technologies have helped personalize the shopping experience for consumers, reduced shipping costs for sellers, helped workers adopt new skills and improved supply chain efficiency.

According to a Global Market Insights study, the retail sector will increase its investment in AI to $8 billion by 2024. Since adoption of machine learning, deep learning and predictive analytics is required to succeed in retail technology, which is growing at a rapid pace, it is absolutely necessary for retailers to adjust to the digital disruption.

But the question is: how, and in which areas, does AI help improve retail?

According to AI experts worldwide, this technology has much to offer any retail segment. GM Insights concluded that by 2020, a greater infusion of AI-based applications will be seen across the retail industry in everyday operations.

One major impact is on the customer service cycle, where both consumers and retailers will further benefit from the pairing of artificial intelligence and retail.

Speaking of customer service, AI chatbots are helping retail sales and marketing to a great extent. The goal of implementing chatbots within e-Commerce businesses is to amp up customer experience and provide personalization.

For example, here are two popular retail business chatbots on Facebook Messenger: Sephora and Pizza Hut.

The Sephora bot helps customers find the right products and teaches them how to use each item they are buying. An AI-enabled feature allows Messenger users to try on different looks, created with the selected makeup products, using filters on the camera.

Users can book a service at any nearby Sephora store through the bot, file complaints, ask questions and submit their feedback, all within this chatbot.

As for the world's most popular pizza chain, customers can place an order using the Pizza Hut chatbot, track their orders and get updates on the latest deals, discounts and promotions. The bot is integrated with payment options, which makes it easy for users to pay right there.

The common motive is to make it easier for customers to get what they want without having to call, email or visit a website for answers. Since consumers today prefer texting, this gives the retail industry a huge opportunity to increase online sales without spending too much.
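Neither bot's code is public; as a deliberately minimal sketch of the pattern, a keyword-based intent router captures the basic flow. Real bots use trained NLU models, session state and payment/API integrations; the intents and keywords below are invented.

```python
# A minimal intent router: map a message to a handler category.
INTENTS = {
    "order":    ["order", "pizza", "buy"],
    "track":    ["track", "status", "where"],
    "book":     ["book", "appointment", "store"],
    "feedback": ["complaint", "feedback"],
}

def route(message):
    words = message.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return "fallback"  # hand off to a human agent

print(route("I want to order a pepperoni pizza"))  # order
print(route("Where is my delivery?"))              # track
```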

AI-enabled technologies tend to save money for larger-scale businesses as well as smaller ones. Bots are especially valuable as AI-enabled sales or customer assistants for impulse-buy retail businesses.

AI and machine learning can help boost efficiency, speed up processes, improve accuracy and increase customer engagement for the retail industry.

There has been an increasing shift towards automation for collecting data across multiple channels within the buying-selling experience. Retailers generate insights by running algorithms and models over these giant datasets.

This helps retailers make actionable predictions as well as recommendations, as explained in the study below:

Source: CalculAI. The flow chart explains the key algorithms of digital retailers with a physical presence, and the effective use of AI and machine learning.

If we look at the different ways the retail industry aims to use AI on the basis of this year's IBM study findings, retailers claim to use AI, machine learning and chatbots in six ways.

AI can help save a great deal of expense for retailers, leading to potential savings of approximately $340 billion every year by 2022, according to Capgemini's survey of 400 retail representatives.

However, the study also found that nearly three-quarters of AI use cases are customer-oriented, while other areas have yet to fully utilize AI and its benefits.

A number of different AI-based solutions can enhance the customer experience and at the same time provide more selling opportunities for retail businesses. Although not all retail markets around the globe are completely saturated with AI, a lot of advanced technological implementations can be seen both online and in-store.

Such AI-based technologies and platforms are already widely deployed.

AI will increasingly improve business processes and customer experience in retail. From demand forecasting, to supply chain operation, to personalization, AI will revolutionize the retail industry as a whole.

Usama Noman is Founder of Botsify. Noman is a serial entrepreneur, tech enthusiast and loves making and breaking things. A Machine Learning and NLP expert, Noman loves to use tools that can both resolve business frustrations and support humanity.


Secret Weapon: How AI Will Help America Win a War in Space – The National Interest Online

Posted: at 10:08 am

(Washington, D.C.) If a Russian or Chinese anti-satellite (ASAT) weapon streaked into space and destroyed U.S. military satellites, friendly forces would instantly become very vulnerable to significant and extremely destructive enemy attacks: space-based infrared missile detection could be destroyed, GPS communications could be knocked out, guided weapons could jam and derail before hitting their targets, and war-critical command and control could simply be taken out.

Any, all or part of this could happen in as little as 10 to 15 minutes once a satellite-attacking missile is launched from the ground. Lives will hang in the balance as alerts are sent through U.S. command and control and decision-makers scramble to determine the best countermeasure with which to protect U.S. space assets. Space war is no longer a distant prospect to envision years down the road; it is here.

Recognizing the seriousness of this vulnerability, the Pentagon, U.S. Space Command, the Missile Defense Agency and industry are moving quickly to integrate machine learning and AI into space-based systems and technology. The intention, of course, is to accelerate threat detection and get crucial information to decision-makers.

"When a launch is detected, a space-based alert signal sends something down to command and control, which goes to the Air Force. It is then evaluated by computers and an alert is sent through the ballistic missile defense system. A mathematical formula determines its speed, trajectory and where it will hit, or land," a Missile Defense Agency official told Warrior.
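The agency did not publish the formula; as a heavily simplified sketch, a drag-free, flat-Earth ballistic model shows the kind of speed/trajectory/impact estimate involved. Real systems account for Earth rotation, atmosphere and staging, and the launch numbers below are hypothetical.

```python
import math

g = 9.81  # m/s^2, constant-gravity approximation

def impact_estimate(speed, angle_deg):
    """Vacuum ballistic estimates from launch speed and elevation angle."""
    theta = math.radians(angle_deg)
    vx, vy = speed * math.cos(theta), speed * math.sin(theta)
    time_of_flight = 2 * vy / g          # seconds until impact
    apogee = vy**2 / (2 * g)             # peak altitude in meters
    ground_range = vx * time_of_flight   # downrange impact distance
    return time_of_flight, apogee, ground_range

t, h, r = impact_estimate(3000.0, 45.0)  # hypothetical launch parameters
print(f"flight {t:.0f} s, apogee {h/1000:.0f} km, range {r/1000:.0f} km")
```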

In order to destroy an ASAT weapon, defenders need rapid access to vast pools of information, according to Ret. Lt. Gen. Christopher Bogdan, Senior Vice President, Aerospace Business, Booz Allen Hamilton, who said that commanders need to know: "Where is it (the ASAT weapon) going up in the atmosphere? What is the signature or energy of the launch? What is it targeting and where might it hit its target? You need to figure out where the risk is."

"If an adversary launches an ASAT, we will be able to detect that launch. The hard thing to do is respond immediately. You have about 10 to 15 minutes for the entire OODA (Observation, Orientation, Decision, Action) loop to play out," Bogdan said.

Bogdan explained that Booz Allen Hamilton, along with a coalition of commercial AI leaders, has pioneered a commercial technology platform designed to facilitate and accelerate AI adoption at scale within the federal government. The enterprise AI system, called Modzy, has significant commercial and military applications.

"The first thing you need to know is what is going on up there. There are a lot of different catalogues and libraries of what is up there, and they are not all combined. Space Command and DoD are trying to combine space catalogues so they can have a single unified data library where everything, including adversary information, gets catalogued," Bogdan said.

Modzy can deliver AI models that assist commanders by analyzing vast datasets and enabling them to develop an integrated view and make better-informed decisions. These datasets can cover a wide range of variables to include sensor information, targeting data, navigational details, threat libraries of enemy weapons and capabilities and various enemy missile launch and flight trajectory characteristics. Should a collective AI-empowered system be able to use advanced algorithms to combine and instantly access all of this interwoven information critical to response-time decision making, commanders could receive a life-saving fused or integrated combat picture.

"Threat information has to be tied into one system. We need systems to talk to each other, which only works if they are on a common communications hub. If you have multiple systems that are not coordinated, defending forces may think they have 12 missiles coming at them even though they are all looking at the same missile," the MDA official explained.

The more streamlined and accessible information is, the faster AI-driven algorithms can compare new data against seemingly limitless databases of combat-critical information to perform analytics, solve problems, make determinations, draw comparisons and synergize information.

"You are going to need strong analytics and you will need to look back at previous data using AI and machine learning. You have an ability to figure things out quickly because you will have AI going through all the different possibilities for how to protect assets. The loop gets quick and short," Bogdan explained.

It makes sense that pooled data access would massively improve computational processing speed...

"The models and the algorithms you create for AI are only as good as the data you put in there. It is a matter of access to the right pools of data and the quality of that data," Bogdan said.

Interestingly, a 2018 essay from the Joint Air Power Competence Centre (JAPCC) called The Future Role of Artificial Intelligence addresses the impact of AI upon the OODA-loop decision-making process referred to by Bogdan. (JAPCC is a think-tank analysis center sponsored by 16 NATO nations to support the NATO mission, headquartered in Germany.)

The paper analyzes how AI will change or affect each stage of the famous and well-known Observe, Orient, Decide, Act process; it anticipates the AI roles and functions that will be required at each of the OODA steps.

The essay states that when it comes to the Orient stage in the OODA loop, AI can apply "big data analytics and algorithms for data processing, then data presentation for timely abstraction and reasoning based on a condensed, unified view digestible by humans, but rich enough to provide the required level of detail" (The Future Role of Artificial Intelligence: Military Opportunities and Challenges, Andy Fawkes and Lt. Col. Martin Menzel, JAPCC).

The concept is, of course, to leverage the decision-making faculties unique to human cognition, keeping humans in the lead regarding command and control while easing the cognitive burden placed on human decision-makers by providing organized, analyzed and easily consumable combat-crucial data. This should include "graphical displays of the situation, resources (timelines, capabilities, and relations and dependencies of activities), and context (point of action and effects)," the JAPCC Journal article states.

More pervasive or far-sweeping computer analysis can quickly run through a large number of contingencies and possible outcomes to provide commanders with an optimized set of response options. As technology progresses, there will increasingly be a need for human leaders conducting military command and control to use AI and computer automation to manage a massively growing data flow.

"Future intelligence, surveillance, target acquisition and reconnaissance systems will generate even larger amounts of (near) real-time data that will be virtually impossible to process without automated support," the 2018 JAPCC essay states.

Space war strategy continues to receive very large amounts of attention from the Pentagon and U.S. Space Command. While several countries are known to be making investments in the development of space weaponry, Chinese and Russian activities have engendered a particular concern among Pentagon leaders, analysts and threat assessment professionals.

The Chinese fired a land-based kinetic energy SC-19 missile at a satellite in space several years ago, an action which inspired worldwide attention and condemnation. "As long ago as 2007, they launched an ASAT test of a low-altitude interceptor. They struck and destroyed a defunct Chinese weather satellite and created tens of thousands of pieces of debris," an Air Force senior leader told Warrior several years ago.

Disaggregation and Diversity are among the most heavily focused-upon techniques. Disaggregation seeks to spread conventional and nuclear systems across multiple separate satellites, while Diversity tactics are aimed at using multiple satellites to achieve the same goal. This includes fielding U.S. equipment that can use both GPS and Europe's Galileo navigation system, Air Force officials have said. Naturally, this technique would allow U.S. forces to use allied assets if U.S. satellites were disrupted or destroyed by enemy attack. A Distribution strategy is designed to spread key functions across separate satellites, preserving a needed technology should some assets be destroyed. This could include deception tactics used to prevent potential adversaries from knowing which satellites perform certain functions.

Some satellites are purely SATCOM, whereas others are GPS-oriented or geared toward what Air Force professionals describe as Space-Based Infrared, or SBIR, assets. SBIR assets are engineered to detect the large thermal signature of an enemy intercontinental ballistic missile launch, to better enable missile-defense technology to intercept an approaching attack.

The U.S. operates many Advanced Extremely High Frequency, or AEHF, communication satellites, which have replaced the older Milstar systems; they operate at 44 GHz uplink and 20 GHz downlink. To complement these systems, prepare for a new threat environment and better network satellites together, the Pentagon has been working on smaller, very Low Earth Orbit (vLEO) satellites which fly lower and faster in larger numbers. These satellites can more easily be networked together, share key signals and help build redundancy in the event that some are taken out by enemy attack.

Proliferation and Protection are also part of the strategic initiative; this involves deploying multiple satellites to perform the same mission and taking specific technical steps to harden satellites against attacks. While many of the specifics of these techniques are secret, officials do acknowledge they are likely to contain various countermeasures, investments in remote sensing technologies and maneuverability tactics.


AI Weekly: With AI-empowered devices, consider what you're buying – VentureBeat

Posted: at 10:08 am

It's Black Friday, and throngs of people are shopping for deals on virtual assistant-powered smart home devices from the likes of Amazon and Google. The initial appeal of smart speakers, smart displays and voice-controlled lights is obvious, and according to Strategy Analytics, the smart speaker segment alone is expected to grow 57% by the end of 2019. But as we consider whether these devices will make our lives easier or better, are we giving enough thought to the trade-off between convenience and privacy?

It's essentially the same paradigm, writ small, that the world is facing with AI in general: AI has delivered unprecedented capabilities, but it has also engendered an uneasy sense that we're losing control over these new tools and technologies. But when you consider buying a device for your home that has an AI assistant on board, you can focus on the questions you should always ask of technology: Does this technology make my life better or easier? What are the trade-offs, and are they worth it for the convenience?

Although those are heavy questions generally, when it comes to Black Friday-Cyber Monday weekend and you're looking at a killer discount on some smart home device and wondering if you should click the Buy button, it's less of an existential conundrum and more of a practical one. What will you use a Google Home Mini for, exactly? Do you really want to turn on music in your kitchen every day by shouting at an Amazon Echo Studio that gets your request right only most of the time? What is the purpose of a smart night light?

Yes, a smart night light. That's a real thing that exists in the extended universe of Alexa-compatible smart home products. And its utter banality serves as an excellent illustration of why we need to ask ourselves those aforementioned questions.

This particular smart night light is made by Third Reality and is certified as Made for Amazon. It's actually an accessory that attaches to the Amazon Echo Flex. The Flex is a palm-sized device that plugs into your wall outlet and can control things like your lights and thermostat. It has its own little mic and speaker that let you not only control Alexa, but talk to people through other Alexa devices in other rooms like an intercom. In a way, the Flex is almost an accessory itself, because it's designed to be part of a larger network of Alexa devices rather than a standalone device. It has a USB port on the bottom where you can charge a phone or plug in an attachment, such as a smart night light.

The smart night light becomes part of your Alexa device list, and you can manage and control it remotely with the Alexa app on your phone. Features include the ability to adjust the brightness from 1% to 100%, choose from a variety of colors, and determine when the light goes on or off.

In other words, it does everything a night light does, but with brightness and color options, and you have to manually set when it turns on and off. In addition to the time you have to spend setting it up and configuring the settings, the smart night light costs $15, and the Flex costs $20. You can buy them together for $32.

By contrast, you can get a four-pack of non-smart night lights for $9 on Amazon. They turn on when they sense that the light in the room is too low. They shut off when the light becomes brighter. Installation comprises plugging them into a wall outlet.

Arguably, the non-smart night light is already a perfect product: cheap, easy to install, reliable, purpose-built. So why does the smart night light exist? Sure, it's neat to be able to do things like adjust brightness, pick fun colors, and control it with your phone, but you'd have to stretch to make the case that it's making your life better. It's certainly not making anything easier than non-smart night lights, and it's not more convenient. And it costs more money.

There's nothing wrong with wanting a silly, fun device, and there's nothing wrong with paying a little more for it than you need to. But there is a larger cost to consider: Amazon has grand plans for your home. The company is clear that it wants to put Alexa everywhere it possibly can, and just this week it rolled out expanded capabilities for bringing its intelligence to even more IoT edge devices with AWS IoT Core, and enabled Alexa controls for new classes of objects in the home. Like other major virtual assistant platforms, Alexa devices record audio of your commands, necessitating oversight by you, the user. There are problems with Alexa's user-submitted answers, too. Amazon also owns video doorbell maker Ring, with its troubling privacy and surveillance concerns, and it makes the controversial Rekognition facial recognition technology. This is not to mention its extensive AWS services.

When you buy that little smart night light and the Flex to go with it, you're buying further into an ecosystem of devices, services, and technologies that's entirely controlled by Amazon.

This is not an argument that you should or should not buy into that ecosystem; it's a reminder that when you buy a smart device, you're not just buying a product with some extra features. That's not how AI-powered products work.

Buy your smart device or give some as gifts, or don't, and be happy with your choices. But like all emerging and transformative technologies, don't forget to ask yourself what it will give you, and what it will cost. And then when it comes to larger decisions about building, buying, or creating AI technologies for your company or organization, ask the same questions.


Intel, GraphCore And Groq: Let The AI Cambrian Explosion Begin – Forbes

Posted: at 10:08 am

As we approach the end of a year full of promises from AI startups, a few companies are meeting their promised 2019 launch dates. These include Intel, with its long-awaited Nervana platform, UK startup Graphcore and the stealthy Groq from Silicon Valley. Some of these announcements fall a bit short on details, but all claim to represent breakthroughs in performance and efficiency for training and/or inference processing. Other recent announcements include Cerebras's massive wafer-scale AI engine inside its multi-million dollar CS-1 system and NVIDIA's support for GPUs on ARM-based servers. I'll opine on those soon, but here I will focus on Intel, Graphcore and Groq's highly anticipated chips.

Intel demos Nervana NNP, previews Ponte Vecchio GPU

At an event in San Francisco on November 12, Intel announced it was sampling its Nervana chips for training and inference to select customers including Baidu, Facebook and others. Additionally, it took the opportunity to demonstrate working hardware. While this looked a lot like a launch, Intel carefully called it an update. Hopefully we will see a full launch soon, with more specs like pricing, named customers and OEM partners ready to ship product in volume.

Intel recently previewed impressive performance in the MLPerf inference benchmarks for the NNP-I (the I stands for inference). Keep in mind that these chips are the second iteration of Intel's Nervana design, and I expect Intel incorporated significant customer input in these revised designs. While Intel disclosed few details about the microarchitecture, it did tout an Inter-Chip Link (ICL). The ICL supposedly enables nearly 95% scalability as customers add more chips to solve larger problems. Intel also claimed that a rack of NNP-I chips will outperform a rack of NVIDIA's T4 GPUs by nearly 4X, although I would note that this compares 32 Intel chips to only 20 T4 chips. While improved compute density is a good thing, more details will be required to properly assess the competitive landscape.

Figure 1: Intel demonstrated both the training and inference versions of its NNP architecture, born from the 2016 acquisition of Nervana.
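To normalize the rack comparison above: if 32 NNP-I chips deliver nearly 4X the throughput of 20 T4 GPUs, the per-chip advantage works out to roughly

```latex
4 \times \frac{20}{32} = 2.5\times
```

which is still substantial, but well short of the headline figure.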

The NNP chips support all AI frameworks and benefit from the well-respected Nervana software stack. Intel also laid out its vision for the One API development environment, which will support Xeon CPUs, Nervana AI chips, FPGAs and future Xe GPUs. This software approach will be critical in helping Intel's development community optimize their code once for a broad range of devices.

Though details were scarce, Intel also announced its first data-center GPU at SC19, codenamed Ponte Vecchio. We know that Ponte Vecchio will go inside the Argonne National Laboratory's Aurora exascale system in 2022, but we should see consumer versions sometime in 2020.

It is noteworthy that Intel sees a role for so many architectures for specific types of workloads, a strategy Intel calls "domain-specific architectures." The GPU can perform a wide variety of tasks, from traditional HPC to AI, while the Nervana chips are designed to train and query deep neural networks at extreme performance and efficiency. While some may say that Intel is taking a shotgun approach, fielding many architectures hoping to hit something, I believe the company is being smart. It is optimizing chips for specific tasks at a scale only Intel could array.

The Graphcore Intelligence Processing Unit (IPU)

Unicorn UK startup Graphcore recently launched its IPU chip, complete with customers, partners, benchmarks and immediate availability. It is geared towards training and inference processing of AI neural networks, or any other computation that can be represented as a graph. Graphcore garnered financial and strategic backing from Dell, Microsoft and others, and announced availability of its IPU in both Dell servers and the Microsoft Azure cloud. Customers testing early silicon include the European search engine Qwant (image processing), Microsoft Azure (natural language processing), hedge fund manager Carmot Capital (Markov Chain Monte Carlo) and Imperial College London (robotic simultaneous location and mapping).

Graphcore's architecture was designed for the most computationally challenging problems, using 1,216 cores, 300 MB of in-processor memory at 45 TB/s and 80 IPU-Links at 320 GB/s. The company's strategy is not to take on NVIDIA on every front, but rather to focus on those applications ideally suited to its architecture. Consequently, the benchmarks Graphcore published are relatively new in the industry; it has not yet published results for industry-standard benchmarks such as MLPerf. In a conversation with CEO Nigel Toon last week, I was reassured that more standard benchmarks are forthcoming that will enable tuned apples-to-apples comparisons. That being said, the published benchmarks span several workloads and are quite impressive in both throughput and latency.

Figure 2: Graphcore's unique design.

Neural network size and complexity is growing at the rate of 3.5 times every 3 months, according to OpenAI. This means that adopters and researchers need accelerators that can scale to massive size to minimize training time. Hence, both the Intel Nervana NNP-T and the Graphcore IPU support native interconnect fabrics. In Graphcore's case, the fabric is enabled by IPU-Links, as well as an on-die IPU-Exchange (switch) for core-to-core communication. Combined, these enable fabrics of accelerators to tile out huge models in parallel, scaling to hundreds or even thousands of nodes. Cerebras is doing something similar but at supercomputing scale, using chips that are a full wafer of interconnected engines.
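To put that rate in perspective, 3.5x every three months compounds to roughly a 150-fold increase per year:

```latex
3.5^{4} = 150.0625 \approx 150\times \text{ per year}
```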

Groq: Screaming, Streaming Tensors from the creators of Google TPU

Groq is a Silicon Valley startup founded by a few members of the Google TPU team, and it was operating in stealth mode until this announcement. A few months back, I spoke with CEO and co-founder Jonathan Ross, and he made it clear that the company was building what could be the fastest single-die chip in the industry. The company was a no-show at September's second annual AI HW Summit, causing many to wonder; it was widely expected to come out of stealth at the sold-out event.

The company was probably just getting its first silicon back that week, an exciting and super-busy time for any semiconductor company. Clearly the Groq team was up to the task: it had the A0 version of the silicon up and running in one week, was sampling to early customers within just six weeks and has now gone into production.

Mr. Ross was right: the Groq Tensor Streaming Processor appears to be the fastest single AI die to date (I say single die to differentiate it from the Cerebras Wafer Scale Engine, which is a single chip but is comprised of 84 interconnected dies). Groq's TSP cranks out one quadrillion (one thousand trillion) integer ops per second and 250 trillion floating point ops per second (FLOPS). Of course, the usual caveats apply. We must await application performance benchmarks to see if the software can deliver on the hardware's potential. Still, these are certainly amazing numbers.

Inspired by a software-first mindset, Groq pushes a lot of the optimization, control and planning functions to the software. The company claims this results in higher performance per millimeter of silicon, saving die area for computation. Perhaps more importantly, the tight integration of the compiler and the hardware produces deterministic results and performance, eliminating the time-consuming profiling usually required. According to the white paper released by the company, the compiler knows exactly how the chip works and precisely how long it takes to perform each computation. The compiler moves the data and the instructions into the right place at the right time so that there are no delays. The flow of instructions to the hardware is completely choreographed, making processing fast and predictable. Developers can run the same model 100 times on the Groq chip and receive precisely the same result each time.

Figure 3: The Groq TSP moves control, planning, and caches to the software stack, freeing up logic area for more cores and performance.
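Groq's toolchain is proprietary, but the determinism claim can be illustrated with a toy statically scheduled program: every operation is assigned its slot at "compile" time, so execution is a pure replay with no runtime arbitration. The instruction format and ops below are entirely illustrative.

```python
# "Compile" step: every op gets a fixed cycle; nothing is decided at
# run time, so latency and results are identical on every run.
program = [
    (0, "load",  "a"),
    (1, "load",  "b"),
    (2, "mul",   ("a", "b")),
    (3, "store", "c"),
]

def run(memory):
    regs = {}
    for cycle, op, arg in program:  # fixed, pre-planned order
        if op == "load":
            regs[arg] = memory[arg]
        elif op == "mul":
            x, y = arg
            regs["acc"] = regs[x] * regs[y]
        elif op == "store":
            memory[arg] = regs["acc"]
    return memory

print(run({"a": 6, "b": 7}))  # {'a': 6, 'b': 7, 'c': 42} every time
```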

I look forward to learning a lot more about Groq as the company begins to stake out its messages and reveal more details about the architecture, but the preliminary claims are undeniably impressive. Groq set a high bar by which other AI chip companies will be measured.

Conclusions

These three companies have made impressive gains in hardware and software innovation for AI, but more details are needed to validate their claims, and understand where they will excel and where they might struggle. And of course, these are just the first new chips of the coming Cambrian Explosion over the next 1-2 years, as billions of dollars of venture capital are converted into new silicon for AI.

I suspect the NVIDIA benchmarking and software tuning teams are going to have a busy holiday season!

Disclosure: Moor Insights & Strategy, like all research and analyst firms, provides or has provided research, analysis, advising and/or consulting to many high-tech companies in the industry, including Intel, Dell Technologies and Microsoft. The author holds no investment positions with any of the companies cited above.

