Humans and AI will work together in almost every job, Parc CEO … – Recode

Artificial intelligence is poised to continue advancing until it is everywhere, and before it gets there, Tolga Kurtoglu wants to make sure it's trustworthy.

Kurtoglu is the CEO of Parc, the iconic Silicon Valley research and development firm previously known as Xerox Parc. Although it is best known for pioneering technologies in the early days of computing, such as the mouse, object-oriented programming and the graphical user interface, Parc continues to help companies and government agencies envision the future of work.

"A really interesting project that we're working on is about how to bring together these AI agents, or computational agents, and humans, in a way that they form sort of collaborative teams to go after tasks," Kurtoglu said on the latest episode of Recode Decode, hosted by Kara Swisher. "And robotics is a great domain for exploring some of the ideas there."

Whereas today you might be comfortable asking Apple's Siri for the weather or telling Amazon's Alexa to add an item to your to-do list, Kurtoglu envisions a future where interacting with a virtual agent is a two-way street. You might still give it commands and ask it questions, but it would also talk back to you in a truly conversational way.

"What we're talking about here is more of a symbiotic team between an AI agent and a human," he said. "They solve the problems together; it's not that one of them tells the other what to do. They go back and forth. They can formulate the problem, they can build on each other's ideas. It's really important because we're seeing significant advancements and penetration of AI technologies in almost all industries."

You can listen to Recode Decode on Apple Podcasts, Google Play Music, Spotify, TuneIn, Stitcher and SoundCloud.

Kurtoglu believes that both in our personal lives and in the office, every individual will be surrounded by virtual helpers that can process data and make recommendations. But before artificial intelligence reaches that level of omnipresence, it will need to get a lot better at explaining itself.

"At some point, there is going to be a huge issue with people really taking the answers that the computers are suggesting to them without questioning them," he said. "So this notion of trust between the AI agents and humans is at the heart of the technology we're working on. We're trying to build trustable AI systems."

"So, imagine an AI system that explains itself," he added. "If you're using an AI to do medical diagnostics and it comes up with a seemingly unintuitive answer, then the doctor might want to know, 'Why? Why did you come up with that answer as opposed to something else?' And today, these systems are pretty much black boxes: You put in the input, it just spits out what the answer is."

So, rather than just spitting out an answer, Kurtoglu says virtual agents will explain what assumptions they made and how they used those assumptions to reach a conclusion: "Here are the paths I've considered, here are the paths I've ruled out, and here's why."
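The behaviour Kurtoglu describes maps naturally onto a rule engine that records its own reasoning as it runs. Below is a toy sketch of the idea; the symptoms, rules, and diagnoses are invented for illustration and have nothing to do with Parc's actual technology.

```python
# Toy self-explaining diagnostic engine (illustrative only; the rules and
# condition names below are invented, not from any real system).
def diagnose(symptoms):
    """Return (conclusion, explanation); the explanation lists every
    candidate considered, which were ruled out, and why."""
    # Each rule: (diagnosis, set of required findings)
    rules = [
        ("flu", {"fever", "cough", "fatigue"}),
        ("common cold", {"cough", "sneezing"}),
        ("allergy", {"sneezing", "itchy eyes"}),
    ]
    considered, ruled_out = [], []
    for diagnosis, required in rules:
        considered.append(diagnosis)
        missing = required - symptoms
        if missing:
            # Record the ruled-out path and the evidence that was missing.
            ruled_out.append((diagnosis, f"missing evidence: {sorted(missing)}"))
        else:
            explanation = {
                "considered": considered,
                "ruled_out": ruled_out,
                "reason": f"all required findings present: {sorted(required)}",
            }
            return diagnosis, explanation
    return None, {"considered": considered, "ruled_out": ruled_out}

conclusion, why = diagnose({"fever", "cough", "fatigue"})
```

The point is not the (trivial) matching logic but the second return value: instead of a bare answer, the caller gets the paths considered, the paths ruled out, and the reason for the final choice.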



New AI tech to bridge the culture gap in organisations: IT experts – BusinessLine

Digital transformation (DX) is set to bridge the culture gap, with DX requiring a new level of collaboration between business leaders, employees, and IT staff, according to IT experts.

"In 2020, a cultural shift and collaborative mentality will become just as important as the technology itself," said Don Schuerman, CTO and VP of product marketing at Pegasystems.

"Organisations will look at the DX culture and ramp up efforts to ensure that DX is optimised for success. Expect traditional organisational boundaries between IT and business lines to start breaking down, and new roles like 'citizen developer' and 'AI ethicist' that blend IT and business backgrounds to grow," he added.

Mankiran Chowhan, Managing Director, Indian Subcontinent, SAP Concur, noted that as we move towards the fourth industrial revolution, workers looking to save time will kick demand for AI into overdrive, and in 2020, workplace changes related to AI will become a noticeable trend.

A recent PwC report revealed that 67 per cent of respondents would prefer AI assistance over humans as office assistants. Band-aid transformation is also expected to lose out to deeper DX efforts. "Offering consumers a slick interface or a cool app only scratches the surface of a true digital transformation," said Pegasystems' Schuerman. He added that next year is bound to witness visible failures of organisations and projects that do not take their transformation efforts below the surface.

AI is also expected to move out of the lab. "Rubber will truly meet the road, with DX tech, which has been in a constant state of being in the labs, moving out," explains Schuerman.

While societal tension around AI will continue, Chowhan said that workers' openness to automation will incrementally drive change. "For example, millennials, who now represent the majority of workers, are instinctively comfortable using AI. As consumers, they are more likely to approve of AI-provided customer support, automated product recommendations, and even want AI to enhance their experience watching sports," he said.

AI and emotional intelligence are expected to converge. Customers are individuals with similar needs: to feel important, heard and respected. As a result, empathetic AI is increasingly applied in advertising, customer service, and to measure how engaged a customer is in their journey.

A report from Accenture showed that AI has the potential to add $957 billion, or 15 per cent of India's current gross value added, to the country's economy by 2035. Chowhan said that in 2020, this trend will kick into gear, with more technology companies infusing empathy into their AI.

"As companies use empathetic AI to bring more of the benefits of advanced technology to life, they will instill more trust, create better user experiences, and deliver higher productivity," said the SAP Concur official.

Machine learning (ML) is also expected to move from a novelty to a routine function. "In 2020, ML will be less of a novelty, as it proliferates under the hood of technology services everywhere, especially behind everyday workflows," said Chowhan. Apart from that, data is expected to move from an analytical tool to a decision-making one.

"In 2020, the shift to leveraging data for real-time decision-making will accelerate for a number of business functions," he added, noting that in the coming years, more organisations will start to realise the potential of their data to intelligently guide business decisions and leverage it to reach even greater levels of success.

Dave Russell, Vice-President of Enterprise Strategy at Veeam Software, noted that all applications will become mission-critical. The number of applications that businesses classify as mission-critical will rise during 2020, paving the way to a landscape in which every app is considered a high priority, as businesses become completely reliant on their digital infrastructure.

A Veeam Cloud Data Management report showed IT decision-makers saying their businesses can tolerate just two hours of downtime for mission-critical apps.

"Application downtime costs organisations $20.1 million globally in lost revenue and productivity annually," he said.


Artificial Intelligence in Fintech – Global Market Growth, Trends and Forecasts to 2025 – Assessment of the Impact of COVID-19 on the Industry -…

DUBLIN--(BUSINESS WIRE)--The "AI in Fintech Market - Growth, Trends, Forecasts (2020-2025)" report has been added to ResearchAndMarkets.com's offering.

The global AI in Fintech market was estimated at USD 6.67 billion in 2019 and is expected to reach USD 22.6 billion by 2025. The market is also expected to witness a CAGR of 23.37% over the forecast period (2020-2025).
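The stated figures can be sanity-checked with the standard compound annual growth rate formula. Using the 2019 and 2025 values above over six years gives roughly 22.6% per year, in the same ballpark as the reported 23.37% (the report presumably uses a slightly different base period):

```python
# Sanity-check the report's growth figures (start/end values from the text).
# CAGR = (end_value / start_value) ** (1 / years) - 1
start, end = 6.67, 22.6    # USD billions, 2019 and 2025
years = 6                  # 2019 -> 2025
cagr = (end / start) ** (1 / years) - 1
# cagr is about 0.226, i.e. ~22.6% per year
```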

Artificial intelligence improves results by applying methods derived from aspects of human intelligence, but at beyond-human scale. The computational arms race of the past few years has revolutionized fintech companies. Further, data and the near-endless amounts of available information are taking AI to unprecedented levels, where smart contracts will merely continue the market trend.

Key Highlights

Major Market Trends

Quantitative and Asset Management to Witness Significant Growth

North America Accounts for the Significant Market Share

Competitive Landscape

The AI in Fintech market is moving towards fragmentation owing to the presence of many global players. Further, various innovation-focused acquisitions and collaborations among large companies are expected to take place in the near future. Some of the major players in the market are IBM Corporation, Intel Corporation and Microsoft Corporation, among others.


Key Topics Covered

1 INTRODUCTION

1.1 Study Deliverables

1.2 Scope of the Study

1.3 Study Assumptions

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET DYNAMICS

4.1 Market Overview

4.2 Industry Attractiveness - Porter's Five Force Analysis

4.2.1 Bargaining Power of Suppliers

4.2.2 Bargaining Power of Buyers/Consumers

4.2.3 Threat of New Entrants

4.2.4 Threat of Substitute Products

4.2.5 Intensity of Competitive Rivalry

4.3 Emerging Use-cases for AI in Financial Technology

4.4 Technology Snapshot

4.5 Introduction to Market Dynamics

4.6 Market Drivers

4.6.1 Increasing Demand for Process Automation Among Financial Organizations

4.6.2 Increasing Availability of Data Sources

4.7 Market Restraints

4.7.1 Need for Skilled Workforce

4.8 Assessment of Impact of COVID-19 on the Industry

5 MARKET SEGMENTATION

5.1 Offering

5.1.1 Solutions

5.1.2 Services

5.2 Deployment

5.2.1 Cloud

5.2.2 On-premise

5.3 Application

5.3.1 Chatbots

5.3.2 Credit Scoring

5.3.3 Quantitative and Asset Management

5.3.4 Fraud Detection

5.3.5 Other Applications

5.4 Geography

5.4.1 North America

5.4.2 Europe

5.4.3 Asia-Pacific

5.4.4 Rest of the World

6 COMPETITIVE LANDSCAPE

6.1 Company Profiles

6.1.1 IBM Corporation

6.1.2 Intel Corporation

6.1.3 ComplyAdvantage.com

6.1.4 Narrative Science

6.1.5 Amazon Web Services Inc.

6.1.6 IPsoft Inc.

6.1.7 Next IT Corporation

6.1.8 Microsoft Corporation

6.1.9 Onfido

6.1.10 Ripple Labs Inc.

6.1.11 Active.ai

6.1.12 TIBCO Software (Alpine Data Labs)

6.1.13 Trifacta Software Inc.

6.1.14 Dataminr Inc.

6.1.15 Zeitgold GmbH

7 INVESTMENT ANALYSIS

8 MARKET OPPORTUNITIES AND FUTURE TRENDS

For more information about this report visit https://www.researchandmarkets.com/r/y1fj00


The 10 most innovative artificial intelligence companies of 2020 – Fast Company

Artificial intelligence has reached the inflection point where it's less of a trend than a core ingredient across virtually every aspect of computing. These companies are applying the technology to everything from treating strokes to detecting water leaks to understanding fast-food orders. And some of them are designing the AI-ready chips that will unleash even more algorithmic innovations in the years to come.

For enabling the next generation of AI applications with its Intelligence Processing Unit AI chip

As just about every aspect of computing is being transformed by machine learning and other forms of AI, companies can throw intense algorithms at existing CPUs and GPUs. Or they can embrace Graphcore's Intelligence Processing Unit, a next-generation processor designed for AI from the ground up. Capable of reducing the necessary number crunching for tasks such as algorithmic trading from hours to minutes, the Bristol, England, startup's IPUs are now shipping in Dell servers and as an on-demand Microsoft Azure cloud service.

Read more about why Graphcore is one of the Most Innovative Companies of 2020.

For tutoring clients like Chase to fluency in marketing-speak

Ever been tempted to click on the exciting discount offered to you in a marketing email? That might be the work of Persado, which uses AI and data science to generate the marketing language that might work best on you. The company's algorithms learn what a brand hopes to convey to potential customers and suggest the most effective approach, and it works. In 2019, Persado signed contracts with large corporations like JPMorgan Chase, which signed a five-year deal to use the company's AI across all its marketing. Persado claims that it has doubled its annual recurring revenue in the last three years.

For becoming a maven in discerning customer intent via messaging apps

We may be a long way from AI being able to replace a friendly and knowledgeable customer-service representative. But LivePerson's Conversational AI is helping companies get more out of their human reps. The machine-learning-infused service routes incoming queries to the best agent, learning as it goes so that it grows more accurate over time. It works over everything from text messaging to WhatsApp to Alexa. With Conversational AI and LivePerson's chat-based support, the company's clients have seen a twofold increase in agent efficiency and a 20% boost in sales conversions compared to voice interactions.
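LivePerson does not publish its routing internals, but the core idea of skill-based routing can be sketched generically: score each available agent on how well their skills match the query's topics (penalising agents who are already busy), then hand the query to the top scorer. The agent fields, weights, and names below are assumptions for illustration only.

```python
# Generic skill-based routing sketch (not LivePerson's actual algorithm,
# which is proprietary). Agents and the load penalty weight are made up.
def route(query_topics, agents):
    """agents: list of dicts with 'name', 'skills' (set), 'open_chats' (int).
    Returns the name of the best-scoring agent for this query."""
    def score(agent):
        overlap = len(query_topics & agent["skills"])
        return overlap - 0.1 * agent["open_chats"]  # prefer matches, penalise load
    return max(agents, key=score)["name"]

agents = [
    {"name": "ana", "skills": {"billing", "refunds"}, "open_chats": 3},
    {"name": "bo",  "skills": {"billing"},            "open_chats": 0},
]
best = route({"billing", "refunds"}, agents)  # ana: 2 - 0.3 = 1.7 beats bo: 1.0
```

A production system would learn the scoring function from outcomes rather than hard-coding it, which is presumably where the "grows more accurate over time" behaviour comes from.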

For catalyzing care after a patients stroke

When a stroke victim arrives at the ER, it can sometimes be hours before they receive treatment. Viz.ai makes an artificial intelligence program that analyzes the patient's CT scan, then organizes all the clinicians and facilities needed to provide treatment. This sets up workflows that happen simultaneously, instead of one at a time, which collapses how long it takes for someone to receive treatment and improves outcomes. Viz.ai says that its hospital customer base grew more than 1,600% in 2019.

For transforming sketches into finished images with its GauGAN technology

GauGAN, named after post-Impressionist painter Paul Gauguin, is a deep-learning model that acts like an AI paintbrush, rapidly converting text descriptions, doodles, or basic sketches into photorealistic, professional-quality images. Nvidia says art directors and concept artists from top film studios and video-game companies are already using GauGAN to prototype ideas and make rapid changes to digital scenery. Computer scientists might also use the tool to create virtual worlds used to train self-driving cars, the company says. The demo video has more than 1.6 million views on YouTube.

For bringing savvy to measuring the value of TV advertising and sponsorship

Conventional wisdom has it that precise targeting and measurement of advertising is the province of digital platforms, not older forms of media. But Hive's AI brings digital-like precision to linear TV. Its algorithms ingest video and identify its subject matter, allowing marketers to associate their ads with relevant content, such as running a car commercial after a chase scene. Hive's Mensio platform, offered in partnership with Bain, melds the company's AI-generated metadata with info from 20 million households to give advertisers new insights into the audiences their messages target.

For moving processing power to the smallest devices, with its low-power chips that handle voice interactions

Semiconductor company Syntiant builds low-power processors designed to run artificial intelligence algorithms. Because the company's chips are so small, they're ideal for bringing more sophisticated algorithms to consumer tech devices, particularly when it comes to voice assistants. Two of Syntiant's processors can now be used with Amazon's Alexa Voice Service, which enables developers to more easily add the popular voice assistant to their own hardware devices without needing to access the cloud. In 2019, Syntiant raised $30 million from the likes of Amazon, Microsoft, Motorola, and Intel Capital.

For plugging leaks that waste water

Wint builds software that can help stop water leaks. That might not sound like a big problem, but in commercial buildings, Wint says that more than 25% of water is wasted, often due to undiscovered leaks. That's why the company launched a machine-learning-based tool that can identify leaks and waste by looking for water-use anomalies. Then, managers for construction sites and commercial facilities are able to shut off the water before pipes burst. In 2019, the company's attention to water leaks helped it grow its revenue by 400%, and it has attracted attention from Fortune 100 companies, one of which reports that Wint has reduced its water consumption by 24%.

For serving restaurants an intelligent order taker across app, phone, and drive-through

If you've ever ordered food at a drive-through restaurant and discovered that the items you got weren't the ones you asked for, you know that the whole affair is prone to human error. Launched in 2019, Interactions' Guest Experience Platform (GXP) uses AI to accurately field such orders, along with ones made via phone and text. The technology is designed to unflinchingly handle complex custom orders, and yes, it can ask you if you want fries with that. Interactions has already handled 3 million orders for clients you've almost certainly ordered lunch from recently.

For giving birth to Kai (born from the same Stanford research as Siri), who has become a finance whiz

Kasisto makes digital assistants that know a lot about personal finance and know how to talk to human beings. Its technology, called KAI, is the AI brains behind virtual assistants offered by banks and other financial institutions to help their customers get their business done and make better decisions. Kasisto was incubated at the Stanford Research Institute, and KAI branched from the same code base and research that birthed Apple's Siri assistant. Kasisto says nearly 18 million banking customers now have access to KAI through mobile, web, or voice channels.


AI could help to reduce diabetic screening backlog in wake of COVID-19 – AOP

Scientists highlight that machine learning could safely halve the number of images that need to be assessed by humans


The study, which was published in the British Journal of Ophthalmology, used the AI system EyeArt to analyse 120,000 images from 30,000 patient scans in the English Diabetic Eye Screening Programme.

The technology had 95.7% accuracy for detecting eye damage that would require specialist referral, and 100% accuracy for moderate to severe retinopathy.

The researchers, from St George's, University of London; Moorfields Eye Hospital; UCL; Homerton University Hospital; Gloucestershire Hospitals; and Guy's and St Thomas' NHS Foundation Trusts, highlight that introducing the technology to the diabetic screening programme could save £10 million per year in England alone.

Professor Alicja Rudnicka, from St George's, University of London, said that using machine learning technology could safely halve the number of images that need to be assessed by humans.

"If this technology is rolled out on a national level, it could immediately reduce the backlog of cases created due to the coronavirus pandemic, potentially saving unnecessary vision loss in the diabetic population," she emphasised.

Moorfields Eye Hospital consultant ophthalmologist Adnan Tufail highlighted that most AI technology is tested by its developers or companies, but this research was an independent study involving images from real-world patients.

"The technology is incredibly fast, does not miss a single case of severe diabetic retinopathy and could contribute to healthcare system recovery post-COVID," he said.


No Jitter Roll: AI Routing in the Contact Center, Voice Analytics | No Jitter – No Jitter

This week we share announcements around intelligent contact center routing, voice analytics tools, a secure access service edge (SASE) and Google Cloud integration, a personalized videoconferencing kit, and CPaaS funding.

Nice Aims to Hyper-Personalize Customer Engagement

"For each interaction, Enlighten AI Routing evaluates data from Enlighten AI and other datasets to get a holistic view of the customer and determine the most influential data for that engagement," Nice said in its press release. Likewise, Enlighten AI Routing assesses agent-related data, such as recent training successes, active listening skills, and empathy, to optimize agent assignments, Nice said.

Enlighten AI, which uses machine learning to self-learn and improve datasets with each interaction, can then provide agents with real-time interaction guidance, Nice said. "[Agents] can see the impact of their actions on the customer center and are given advice on how to adjust their tone, speed, and other key behaviors such as demonstrating ownership to improve it," Barry Cooper, president of the Nice Workforce and Customer Experience Division, said when introducing the product at Interactions.

TCN Launches Voice Analytics for Contact Center

TCN is initially offering Voice Analytics as a free 60-day trial.

Versa Networks, Google Cloud Team on Integration

Konftel Personalizes Meeting Experience

The Konftel Personal Video Kit, available now, is priced at $279.

IntelePeer Gets Funding Boost

Ryan Daily, No Jitter associate editor, contributed to this article.


AI creates fictional scenes out of real-life photos – Engadget

Researcher Qifeng Chen of Stanford and Intel fed his AI system 5,000 photos from German streets. Then, with some human help, it can build slightly blurry made-up scenes. The image at the top of this article is an example of the network's output.

To create an image, a human needs to tell the AI system what goes where: put a car here, put a building there, place a tree right there. It's paint by numbers, and the system generates a wholly unique scene based on that input.
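The "paint by numbers" input can be sketched in its most literal form: a semantic layout grid naming what goes where, rendered to colours. Systems like Chen's take exactly this kind of label map as input but replace the flat colour fill below with a learned model that synthesises realistic texture for each region. The labels and palette here are illustrative, not from the research.

```python
# Literal paint-by-numbers: map a semantic layout to an RGB image.
# (A real image-synthesis network consumes a label map like this but
# generates photographic texture instead of flat colour.)
PALETTE = {
    "sky": (135, 206, 235),
    "building": (128, 128, 128),
    "road": (60, 60, 60),
    "car": (200, 0, 0),
    "tree": (34, 139, 34),
}

def render(layout):
    """layout: 2D grid of label strings -> 2D grid of RGB tuples."""
    return [[PALETTE[label] for label in row] for row in layout]

layout = [
    ["sky", "sky", "tree"],
    ["building", "road", "road"],
]
image = render(layout)  # image[1][0] is the 'building' grey
```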

Chen's AI isn't quite good enough to create photorealistic scenes just yet. It doesn't know enough to fill in all those tiny pixels. It's not going to replace the high-end special effects houses that spend months building a world. But in the near future, it could be used to create video game and VR worlds where not everything needs to look perfect.

Intel plans on showing off the tech at the International Conference on Computer Vision in October.


How to Stop Sharing Sensitive Content with AWS AI Services – Computer Business Review


You can use API, CLI, or Console

AWS has released a new tool that allows customers of its AI services to more easily stop sharing their datasets with Amazon for product improvement purposes: something that is currently a default opt-in for many AWS AI services.

Until this week, AWS users had to actively raise a support ticket to opt out of content sharing. (The default opt-in can see AWS take customers' AI workload datasets and store them for its own product development purposes, including outside of the region that end-users had explicitly selected for their own use.)

AWS AI services affected include the facial recognition service Amazon Rekognition, the voice recording transcription service Amazon Transcribe, the natural language processing service Amazon Comprehend, and more.

(AWS users can otherwise choose where data and workloads reside; something that is vital for many for compliance and data sovereignty reasons).

Opting in to sharing is still the default setting for customers: something that appears to have surprised many, as Computer Business Review reported this week.

The company has, however, now updated its opt-out options to make it easier for customers to set opting out as a group-wide policy.

Users can do this in the console, by API or command line.

Users will need permission to run organizations:CreatePolicy.

Console:

Command Line Interface (CLI) and API
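As a sketch of what the CLI/API route looks like: AWS Organizations opt-out policies are JSON documents of policy type AISERVICES_OPT_OUT_POLICY. The snippet below builds the organisation-wide "opt everything out" policy document; the field names follow AWS's published policy syntax at the time of writing, but verify them against the current AWS Organizations documentation before use.

```python
import json

# Build the organisation-wide AI services opt-out policy document.
# Field names follow AWS's AISERVICES_OPT_OUT_POLICY syntax; check the
# current AWS Organizations documentation before relying on them.
policy = {
    "services": {
        "default": {
            "opt_out_policy": {
                "@@assign": "optOut"   # opt every AI service out by default
            }
        }
    }
}
policy_json = json.dumps(policy)

# The document would then be attached via the CLI, e.g.:
#   aws organizations create-policy \
#       --name ai-opt-out --type AISERVICES_OPT_OUT_POLICY \
#       --description "Opt out of AI service data collection" \
#       --content file://policy.json
```

The same document can be passed to the CreatePolicy API call (e.g. boto3's `organizations.create_policy`) instead of the CLI.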

Editor's note: AWS has been keen to emphasise a difference between "content" and "data" following our initial report, asking us to correct our claim that AI customer data was being shared by default with Amazon, including sometimes outside selected geographical regions. It is, arguably, a curious distinction. The company appears to want to emphasise that the opt-in is only for AI datasets, which it calls "content".

(As one tech CEO puts it to us: "Only a lawyer that never touched a computer might feel smart enough to venture into 'content, not data' wonderland.")

AWS's own new opt-out page initially disputed that characterisation.

It read: "AWS artificial intelligence (AI) services collect and store data as part of operating and supporting the continuous improvement life cycle of each service."

"As an AWS customer, you can choose to opt out of this process to ensure that your data is not persisted within AWS AI service data stores." [Our italics.]

AWS has since changed the wording on this page to the more anodyne "You can choose to opt out of having your content stored or used for service improvements" and asked us to reflect this. For AWS's full new guide to creating, updating, and deleting AI services opt-out policies, meanwhile, see here.


AI model developed to identify individual birds without tagging – The Guardian

For even the most sharp-eyed of ornithologists, one great tit can look much like another.

But now researchers have built the first artificial intelligence tool capable of identifying individual small birds.

Computers have been trained to recognise dozens of individual birds, which could save scientists arduous hours in the field with binoculars, as well as sparing birds from being caught and fitted with coloured leg rings.

"We show that computers can consistently recognise dozens of individual birds, even though we cannot ourselves tell these individuals apart," said André Ferreira, a PhD student at the Centre for Functional and Evolutionary Ecology (CEFE-CNRS) in France. "In doing so, our study provides the means of overcoming one of the greatest limitations in the study of wild birds: reliably recognising individuals."

Ferreira began exploring the potential of artificial intelligence while in South Africa, where he studied the co-operative behaviour of the sociable weaver, a bird which works with others to build the world's largest nest.

He was keen to understand the contribution of each individual to building the nest, but found it hard to identify individual birds from direct observation because they were often hiding in trees or building parts of the nest out of sight. The AI model was developed to recognise individuals simply from a photograph of their backs while they were busy nest-building.

Together with researchers at the Max Planck Institute of Animal Behaviour, in Germany, Ferreira then demonstrated that the technology can be applied to two of the most commonly-studied species in Europe: wild great tits and captive zebra finches.

For AI models to accurately identify individuals they must be trained with thousands of labelled images. This is easy for companies such as Facebook with access to millions of pictures of people voluntarily tagged by users, but acquiring labelled photographs of birds or animals is more difficult.

The researchers, also from institutes in Portugal and South Africa, overcame this challenge by building feeders with camera traps and sensors. Most birds in the study populations already carried a tag similar to microchips which are implanted in cats and dogs. Antennae on the bird feeders read the identity of the bird from these tags and triggered the cameras.

The AI models were trained with these images and then tested with new images of individuals in different contexts. Here they displayed an accuracy of more than 90% for wild great tits and sociable weavers, and 87% for the captive zebra finches, according to the study in the British Ecological Society journal Methods in Ecology and Evolution.

While some larger individual animals can be recognised by the human eye because of distinctive patterns (a leopard's spots, or a pine marten's chest markings, for example), AI models have previously been used to identify individual primates, pigs and elephants. But the potential of the models had not been explored in smaller creatures outside the laboratory, such as birds.

According to Ferreira, the lead author of the study, the use of cameras and AI computer models could save on expensive fieldwork, spare animals from procedures such as catching and marking them, and enable much longer-term studies.

"It is not very expensive to put a remote camera on a study population for eight years, but for somebody to stay there and do the fieldwork for that long is not always possible," he said. "It removes the need for the human to be a data collector, so researchers can spend more time thinking about the questions instead of collecting the data."

The AI model is currently only able to re-identify individuals it has been shown before. But the researchers are now trying to build more powerful AI models that can identify larger study groups of birds, and distinguish individuals even if they have never seen them before. This would enable new individuals without tags to be recorded by the cameras and recognised by the computers.
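One common way to handle individuals a model has never seen, as the researchers are attempting, is open-set recognition: compare a learned embedding of the new photo against a gallery of known birds, and treat anything beyond a distance threshold as a new individual. The study does not specify its approach, so the sketch below is generic, with made-up two-dimensional embeddings and bird IDs.

```python
import math

# Generic open-set re-identification sketch (not the study's actual method):
# compare an embedding against a gallery of known individuals and flag it as
# a new bird when even the best match is too far away. Vectors are made up.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, gallery, threshold=1.0):
    """gallery: dict of bird_id -> embedding. Returns a bird_id or 'unknown'."""
    best_id = min(gallery, key=lambda bid: euclidean(embedding, gallery[bid]))
    if euclidean(embedding, gallery[best_id]) <= threshold:
        return best_id
    return "unknown"  # likely a new, untagged individual

gallery = {"GT-001": [0.9, 0.1], "GT-002": [0.1, 0.8]}
print(identify([0.85, 0.15], gallery))  # close to GT-001's embedding
print(identify([5.0, 5.0], gallery))    # far from everyone: a new bird
```

Under this scheme, a new bird's photo could be added to the gallery on first sighting, letting the cameras enrol untagged individuals automatically.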


How AI will help freelancers – VentureBeat

When it comes to artificial intelligence (AI), there's a common question: Will AI technology render certain jobs obsolete? The common prediction is yes, many jobs will be lost as a result. But what do the numbers say? Even more importantly, what does logical thought suggest?

You don't need a special degree to understand the textbook relationship between automation and jobs. The basic theory is that for every job that's automated by technology, there's one less job available to a human who once performed the same task. In other words, if a machine can make a milkshake with the push of a button, then there's no need for the person who previously mixed the shake by hand. If a robot can put a car door on a vehicle in a manufacturing plant, then there's no need for the workers who previously placed the doors on by hand. You get the idea.

But does this theory really hold up on a reliable basis, or is it merely a theory that works in textbooks and PowerPoint presentations? One study or report doesn't discount a theory, but a quick glance at some recent numbers paints a different picture that requires a careful look at this pressing issue.

Upwork, an online platform that connects freelancers with clients, recently published data from its website that shows AI was the second-fastest-growing in-demand skill over the first quarter of 2017.

"With artificial intelligence (AI) at the forefront of the conversation around what the future of work holds, it's no surprise it is the fastest-growing tech skill and the second-fastest-growing skill overall," Upwork explained in a statement. "As AI continues to shape our world and influence nearly every major industry and country, companies are thinking about how to incorporate it into their business strategies, and professionals are developing skills to capitalize upon this accelerating tech trend. While some speculate that AI may be taking jobs away, others argue it's creating opportunity, which is evidenced by demand for freelancers with this skill."

This latter opinion might be a contrarian view, but in this case, the data supports it. At this point, there really isn't a whole lot of data that says AI is killing jobs. Consider that as you form your opinions.

From a logical point of view, you have to consider the fact that the professional services industry isn't going anywhere. Sure, there might be automated website builders and accounting software, but the demand for freelance professionals isn't going to suddenly disappear. Growth in AI isn't going to replace web developers, accountants, lawyers, and consultants. If anything, it's going to assist them and make them more efficient and profitable.

"Everything we love about civilization is a product of intelligence," says Max Tegmark, president of the Future of Life Institute. "So amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before, as long as we manage to keep the technology beneficial."

When you look at freelancers, in particular, you can already see how AI is having a positive impact in the form of powerful tools and resources that can complement and expand existing skillsets. Here are a couple of tools:

We've only just begun. These technologies will look totally rudimentary when we look back in a few years and recap the growth of AI. However, for now, they serve as an example of what the future holds for the freelance labor market.

The future of work, particularly as it deals with expected growth in AI, is anybody's guess. Ask AI expert Will Lee about his expectations and he'll say there are two possible futures:

The first future Lee sees is one where AI has led to high unemployment and people are forced to freelance in order to get by. The only problem is that it will be difficult for freelancers to differentiate themselves from the crowd, because they'll be offering the exact same services as everyone else.

In this first possibility, people struggle to recognize their value and the uneducated freelance labor force is swallowed up by superior automated technology. But then there's a second possibility, where AI technology actually fuels growth in the freelance economy and humans and machines harmoniously work together.

"In the second possibility, we've built a sustainable freelance market based on each individual's special skills and passions," Lee says. "Each freelancer is able to find work and make a living due to their ability to differentiate themselves from others in the freelance market."

Experts in the AI field, folks like Will Lee who dedicate their working lives to understanding the impact technology will have on labor, don't know how this is going to unfold. It could be disastrous, or it could be highly beneficial. Making rash statements about how AI is going to collapse the freelance economy is unwise. We don't know how things will unfold, and it's better to remain optimistic that everything will work out for the greater good.

One thing is certainly clear: Technology is changing, and the ramifications of this evolution will be felt in every capacity of business. We'll know a lot more in three to five years, so hold on and enjoy the ride.

Larry Alton is a contributing writer at VentureBeat covering artificial intelligence.

How AI will help freelancers - VentureBeat

Google to launch AI Research Lab in Bengaluru – Economic Times

Google is launching an Artificial Intelligence (AI) Research Lab in Bengaluru in order to create products not just for India but for the rest of the world, the Mountain View-headquartered internet giant said during its flagship Google for India event on Thursday. The lab will be led by Manish Gupta, a SEM (Society for Experimental Mechanics) Fellow.

The slew of new initiatives in India also includes a tie-up with state-run BSNL for expanding Wi-Fi hotspots in villages in Gujarat, Bihar and Maharashtra. This comes after the company launched a project to connect 500 railway stations in the country; it has since claimed to have connected close to 5,000 venues across four continents. Google also announced a phone line, in partnership with Vodafone Idea, that lets users get their queries answered in English and Hindi even from a 2G phone. The firm has also added an array of local Indian languages across its products, such as Search, Bolo, Discover and Google Assistant, among others.

"We want to adapt our products for Indians instead of asking Indians to adapt to Google technology," said Caesar Sengupta, vice president, Next Billion Users and Payments, Google, adding that when the company creates products for India, it creates them for the world.

Ravi Shankar Prasad, union minister for electronics and IT, said that during his meeting with Google chief Sundar Pichai in Mountain View last week, he asked him to make India a launch pad for products. "I told him that what becomes successful in India will become successful globally. I am very happy that they are doing that," added Prasad.

Google is also expanding Google Pay, its payments app, which already has 67 million users and has processed transactions worth $110 billion so far, with a new app for businesses. Google is also bringing its existing tokenisation technology to India, which will make payments available to debit and credit card holders through a system of tokenized cards: paying for things using a digital token on the phone rather than the actual card number.

It can be used to pay at merchants that accept Bharat QR codes, and at those with NFC terminals by simply tapping the card. Google said this will make payments much more secure, as well as more seamless, by doing away with the need to enter card details. It will be rolled out over the next few weeks for Visa cards in partnership with banks such as HDFC Bank, Axis Bank and SBI Cards, among others. The feature will be rolled out for Mastercard and RuPay cards in the coming months.
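The core idea behind card tokenization can be sketched in a few lines. This is a toy illustration of the general pattern (a vault issues a random token that stands in for the card number), not Google Pay's actual scheme; the class and method names are invented for the example.

```python
import secrets

class TokenVault:
    """Toy card-tokenization vault (illustrative only, not Google Pay's real design)."""

    def __init__(self):
        self._token_to_card = {}

    def tokenize(self, card_number: str) -> str:
        # Issue a random token; the merchant only ever sees this value.
        token = secrets.token_hex(8)
        self._token_to_card[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault (e.g. the card network) can map a token back to the card.
        return self._token_to_card[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token != "4111111111111111"                     # merchant never handles the real number
assert vault.detokenize(token) == "4111111111111111"   # the vault can still settle the payment
```

The security benefit is that a breached merchant database leaks only tokens, which are useless without access to the vault.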

It is also launching a Spot Platform, which enables merchants to create exclusive shopfronts within the Google Pay app using simple JavaScript, saving them from building expensive websites or applications. Applications such as UrbanClap, Goibibo, MakeMyTrip, RedBus, Eat.Fit and Oven Story are already on board the Spot Platform. Google will also extend the Spot platform on the Google Pay app to entry-level neighborhood job search, mainly targeting the unorganised segment in areas such as retail and hospitality, added Sengupta. Google is partnering with NSDC for this initiative, along with firms such as Swiggy, and is rolling it out first in Delhi-NCR before a nationwide rollout.


Toby Walsh's 2062: The World that AI Made – The Tribune

Joseph Jude

What makes us, humans, remarkable? Is it our physical strength? Or is it our intelligence? Perhaps our desire to live in communities? Some of our cousins in the animal kingdom have all those qualities. So what differentiates us?

Our early ancestors were limited in their ability to learn life skills for two reasons: they had to be physically around those who knew the skills, and they could learn only through signs. When we started to speak, learning became easier and faster. When we invented writing, ideas spread wider. Hence language is the differentiating feature, Prof Toby Walsh argues in this book.

Even with speech and writing, learning suffers in a vital way. A speaker translates thought into speech; likewise, the listener turns what they hear back into thought. Information is lost in these multiple translations. Digital companies like Tesla and Apple have eliminated such loss of information: they can transmit what one device learns to millions of other devices as code. Every device immediately knows what every other device learns, without the need for translation.

Professor Walsh expands this idea to imagine the next species in our homo family: Homo digitalis. He says homo digitalis will be both biological and digital, living both in our brains and in the broader digital space.

In exploring this parallel living deeper, he debunks the possibility of the singularity: the anticipated time when we will have an intelligent machine that can cognitively improve itself. Only futurists and philosophers believe the singularity is inevitable. AI researchers, on the other hand, think we will have machines that reach human-level intelligence, surpassing our intelligence in several narrow domains. Experts believe we will have such machines by around 2062, which explains the title of the book. The book focuses on the impact such machines will have on our lives. Interwoven into this focus are the steps we can take now to shape that future for the better.

Professor Walsh deals with the ramifications of AI-augmented humans for work, privacy, politics, equality, and war. He paints neither a dystopian nor a utopian future. As an academician, he carefully constructs his arguments on research, data, and trends.

When we ponder artificial intelligence, we conjure up an intellect superior to humans'. What might come as a surprise is just how artificial machine intelligence will be. Take flight, for example: we fly, but not like birds. A machine might simulate a storm, but the simulation won't get wet. So values and fairness as determined by machines will be unnatural. This should motivate us to take steps to shape a better future, one that isn't unnatural.

The book excels in painting a realistic picture of an AI-based future. However, it falters on the steps we can take to avoid a disastrous fate. The author promotes government regulation as the primary tool to control AI. But did UN-backed regulations prevent monstrous state actors from acquiring chemical weapons and using them on their own citizens? As and when lethal autonomous weapons are manufactured, how difficult would it be for determined non-state actors to obtain them? Another mistake is treating AI as a singular technology. Governments should regulate AI not on the input side (collecting, storing, and processing data) but where it is used: loan processing, police departments, weapons, and so on.

There is no question that technology as powerful as AI should be regulated. But regulation alone won't work. We need a comprehensive approach that involves academia, citizen activists, steering groups and task forces. The internet is one such example all around us.

One data point in the book should hit Indians the hardest, and I wish it would serve as a wake-up call for policymakers, entrepreneurs, and concerned citizens: Tianjin, a single city in China, outspends India on AI. The best time to invest in AI was yesterday; the next best time is now.


Storytelling & Diversity: The AI Edge In LA – Forbes

LA is known as the land of storytellers, but when it comes to its own story, the entertainment business is still front and center. In fact, LA has been at the core of a flourishing AI scene for decades. From the 1920s through today, elite mathematicians and engineers have been putting their knowledge to work for a multitude of industries such as health, aerospace, and media, with relatively little visibility in the tech limelight.

Now, these industries are poised to bring together a convergence of knowledge across cutting edge technologies and LA may finally have its day in the spotlight as a focal point for cross-disciplinary innovation.

LA's history in technology has its roots in the aerospace world: with its perfect weather and vast open spaces, the region became an ideal setting for the aerospace industry to plant its roots in the early 1900s. Companies like Douglas Aircraft and institutions like JPL were able to find multi-acre properties to test rockets and build large airfields.

The engineering know-how and nature of aviation work fueled the manufacturing sector in Southern California during WWII, and the region eventually became the birthplace of the internet as we know it when UCLA, funded by the Department of Defense, sent the first message via ARPANET in the same year we first landed a man on the moon.


Through busts and booms, engineering talent was both attracted to the area and nurtured at many well-known and respected educational institutions such as Caltech, USC, and UCLA, which helped augment the labor pool and became important sources of R&D.

This engineering talent continued to extend its branches into other industries, such as health and wellness, natural extensions for a population already obsessed with youth, fitness and body perfection.

Today, LA sits as a unifying center for life sciences, entertainment, media, and aerospace with frontier technologies such as AI pushing innovation across these core industries and providing a platform for new discoveries, cures, and social interactions.

Dave Whelan, chief executive officer of BioscienceLA, believes diversity is LA's secret weapon when it comes to its potential to become the global epicenter for AI innovation. He notes LA's widely diverse population, which makes it a perfect place to train AI.

"The entire world's population resides in LA. If you look at AI for healthcare, you have the raw materials in patient and health data that provide the widest range of possibilities. Combine that with the mix of the creative workforce, diversity of economies, and SoCal mindset, and you have a prime center for innovation that has yet to rightly take its place in the sun compared to the attention that Silicon Valley receives."

The AI opportunity to save lives is particularly meaningful, especially in today's pandemic times. How do we apply AI in a way that can help with early detection, identify clusters, sequence DNA, or source the right treatments? Many aspects of life sciences are computational, and mathematical biologists have been entrenched in LA for some time, providing services such as computational epidemiology, a multidisciplinary field that leverages computer science and mathematics to understand the spread of diseases and other public health issues.
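A flavor of what computational epidemiology does can be given with the classic SIR model, which tracks the susceptible, infected, and recovered fractions of a population. This is a minimal sketch with illustrative parameters (beta and gamma chosen arbitrarily here), far simpler than the models practitioners actually run.

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One day of a discrete SIR epidemic model (s, i, r are population fractions)."""
    new_infections = beta * s * i   # contacts between susceptible and infected
    new_recoveries = gamma * i      # infected people recovering
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

# Start with 1% of the population infected and run 160 simulated days.
s, i, r = 0.99, 0.01, 0.0
for _ in range(160):
    s, i, r = sir_step(s, i, r)

print(round(s + i + r, 6))  # the three fractions always sum to 1.0
```

Varying beta (the contact rate) is exactly what interventions like social distancing try to do, which is why such models inform reopening policy.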

Brian Dolan, CEO and founder of VerdantAI, who has his roots in statistical genetics and biomathematics, has seen the converging evolution of the tech scene in LA and is actively committed to building out the AI ecosystem. His startup studio is focused on launching new AI companies into the market and partnering with large enterprises to help them turn their data into products.

It's not hard to argue that now is the time to focus on the big problems, like COVID and climate change. We need curious, dedicated, intelligent people to take these things on, and Los Angeles certainly offers that kind of talent. Our innovation diversity goes beyond demographics and into industries, geographies, and even ecologies. No other city can really offer that.

Brian's previous company, Deep 6 AI, applies artificial intelligence to the clinical trial process by finding patients for medical trials and getting life-saving cures to people more quickly. Today, Brian and his team at Verdant are incubating technologies to optimize carbon-neutral supply chain networks, leveraging advanced medical NLP technology to read medical texts and create precision digital health experiences, and working on a mental health solution aimed at addiction and recovery.

Building a thriving ecosystem takes time and imagination. AI is both a disruptive force and a major opportunity, but dispelling the myths around AI is important in order to map out its impact and full potential.

Ronni Kimm, founder of Collective Future, uses future visioning to help bring outside perspectives into organizations. Future visioning is important for accelerating innovation, as it provides the ability to respond to, and proactively be part of, the stories of change. Her design and innovation studio helps bring strategic transformation to companies from a top-down and bottom-up perspective.


"Health sciences and life sciences have some of the most interesting challenges in the world, but there are not enough stories to help people understand how powerful approaches such as predictive analytics in health science can dramatically impact successful organ transplants or predict at-risk patient complications," says Ronni. "I see storytelling as one of the most important aspects of accelerating technology; creating more stories around these incredible innovations is where LA can excel in building resilient ecosystems and bringing more of these technologies to market."

Today LA sits at the center of multiple industries, where talent pools cross-pollinate and inspire new ideas. Its diverse and colorful population offers data not readily available in other geographies, making it ideal for big data applications that leverage AI. Its educational institutions feed and train new labor pools and its proximity to creative fields inspires new ways to leverage technology in traditional industries.

Ideas such as bringing the spatial web to life, holograms to offer new methods of care, and digital twins to create cross reality environments are just some of the ideas coming to life in LA.

As technology continues to advance, be sure to be on the lookout for more stories about the rise and influence of AI across these massive industries.


The imaging AI field is exploding, but it carries unique challenges – Healthcare IT News

The use of machine learning and artificial intelligence to analyze medical imaging data has grown significantly in recent years, with 60 new products approved by the U.S. Food and Drug Administration in 2020 alone.

But AI scaling, particularly in the medical field, can face unique challenges, said Elad Benjamin, general manager of Radiology and AI Informatics at Philips, during an Amazon Web Services presentation this week.

"The AI field is experiencing an amazing resurgence of applications and tools and products and companies," said Benjamin.

The question many companies are seeking to answer is: "How do we use machine learning and deep learning to analyze medical imaging data and identify relevant clinical findings that should be highlighted to radiologists or other imaging providers to provide them with the decision support tools they need?" Benjamin asked.

He outlined three main business models being pursued:

Benjamin described common challenges and bottlenecks in the process of developing and marketing AI tools, noting that some were specifically hard to tackle in healthcare.

Gathering data at scale is one hurdle, he noted, and diversity of information is critical and sometimes difficult to achieve.

And labeling data, for instance, is the most expensive and time-consuming part of the process, and requires a professional's perspective (as opposed to other industries, where a layperson could label an image as a "car" or a "street" without too much trouble).

Receiving feedback and monitoring are critical too.

"You need to understand how your AI tools are behaving in the real world," Benjamin said. "Are there certain subpopulations where they are less effective? Are they slowly reducing in their quality because of a new scanner or a different patient population that has suddenly come into the fold?"

Benjamin said Philips, with the help of AWS tools such as HealthLake, SageMaker and Comprehend, is tackling these bottlenecks.

"Without solving these challenges it is difficult to scale AI in the healthcare domain," he said.

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.


AI cameras may be used to detect social distancing as US is reopening – Business Insider

As businesses across the United States have gradually begun to reopen, a growing number of companies are investing in camera technology powered by artificial intelligence to help enforce social distancing measures when people may be standing too closely together.

"[If] I want to manage the distance between consumers standing in a line, a manager can't be in all places at once," Leslie Hand, vice president of retail insights for the International Data Corporation, told Business Insider. "Having a digital helper that's advising you when folks are perhaps in need of some advice is useful."

Businesses throughout the country have started operating again under restrictions, such as enforcing social distancing measures, requiring customers to wear masks, and reducing capacity. New York City, which was the epicenter of the virus' outbreak in the US, is set to enter Phase II of its reopening plan on Monday.

The White House's employer guidelines for all phases of reopening include developing policies informed by best practices, particularly social distancing. And some experts believe smart cameras can help retailers and other companies detect whether such protocols are being followed.

"There's some technology coming out on the horizon that will be able to be incorporated into the nuts and bolts that you already have in your store," Barrie Scardina, head of Americas retail for commercial real estate services firm Cushman & Wakefield, said to Business Insider.

Some companies have already begun experimenting with such technologies. Amazon said on June 16 that it developed a camera system that's being implemented in some warehouses to detect whether workers are following social distancing guidelines. The company's so-called "Distance Assistant" consists of a camera, a 50-inch monitor, and a local computing device, which uses depth sensors to calculate distances between employees.

When a person walks by the camera, the monitor would show whether that person is standing six feet apart from nearby colleagues by overlaying a green or red circle around the person. Green would indicate the person is properly socially distanced, while red would suggest the people on camera may be too close together. Amazon is open-sourcing the technology so that other companies can implement it as well.
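The decision logic described above (label each person green or red depending on whether anyone else is within six feet) reduces to a pairwise distance check. Here is a minimal sketch of that logic; the 1.83 m threshold and the function name are assumptions for illustration, not Amazon's published implementation, which also involves depth sensing and person detection.

```python
import math

SIX_FEET_METERS = 1.83  # ~6 ft; the real system's calibration is not public

def label_people(positions):
    """Given (x, y) floor positions in meters, flag anyone within six feet of another person."""
    labels = []
    for i, p in enumerate(positions):
        too_close = any(
            math.dist(p, q) < SIX_FEET_METERS
            for j, q in enumerate(positions)
            if j != i
        )
        labels.append("red" if too_close else "green")
    return labels

# Two people a meter apart are flagged; a third person far away is not.
print(label_people([(0, 0), (1, 0), (5, 5)]))  # ['red', 'red', 'green']
```

In a real deployment the positions would come from a depth camera's person detections rather than being given directly, and the overlay would be drawn onto the monitor feed.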

Motorola Solutions also announced new analytics technology in May that enables its Avigilon security cameras to detect whether people are social distancing and wearing masks. The system uses AI to collect footage and statistical patterns that can be used to provide notifications to organizations about when guidelines around wearing face masks or honoring social distancing measures are being breached.

Pepper Construction, a Chicago-based construction company, has also begun using software from a company called SmartVid.io to keep an eye on where workers may be grouping, as Reuters reported in late April.

Scardina offered some examples illustrating how smart cameras can help retailers enforce social distancing. Workers can use such technologies to see where customers are clustering so that they can make decisions about how to arrange furniture and fixtures within the store. If a table needs to be moved further away from another display because customers don't have space to stand six feet apart, AI camera technology can help retailers spot this.

As far as how widespread that technology will become in stores, Scardina says it will depend on factors such as a retailer's budget and the size of the shop.

While more companies may be investing in either developing or implementing new camera technologies, there will inevitably be challenges that arise when putting them into practice, says Pieter J. den Hamer, senior director of artificial intelligence for Gartner Research.

Not only could implementing such tech raise privacy concerns, but there are also practical limitations. A camera may not know if two people standing close together belong to the same household, for example.

All 50 states have reopened at some capacity, putting an end to stay-at-home orders that had been in effect since March to curb the coronavirus' spread, and some states are now seeing a spike in cases. The New York Times recently reported that at least 14 states have experienced positive cases that have outpaced the average number of administered tests.

The coronavirus has killed at least 117,000 people in the US and infected more than 2.1 million as of June 18, according to the Times, and experts predict there will be a second wave. But President Trump has said the country won't be closing again.

"It's a very, very complex debate full of dilemmas," den Hamer said. "Should we prioritize opening up the economy, or should we prioritize the protection of our privacy?"


Creativity and AI: The Next Step – Scientific American

In 1997, IBM's Deep Blue famously defeated chess grandmaster Garry Kasparov after a titanic battle. It had actually lost to him the previous year, though he conceded that it seemed to possess a weird kind of intelligence. To play Kasparov, Deep Blue had been pre-programmed with intricate software, including an extensive playbook with moves for the opening, middle game and endgame.

Twenty years later, in 2017, Google unleashed AlphaGo Zero which, unlike Deep Blue, was entirely self-taught. It was given only the basic rules of the far more difficult game of Go, without any sample games to study, and worked out all its strategies from scratch by playing millions of games against itself. This freed it to think in its own way.

These are the two main sorts of AI around at present. Symbolic machines like Deep Blue are programmed to reason as humans do, working through a series of logical steps to solve specific problems. An example is a medical diagnosis system in which a machine deduces a patient's illness from data by working through a decision tree of possibilities.
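The decision-tree style of symbolic reasoning mentioned here can be made concrete in a few lines. The rules below are invented for illustration (they are not medical advice and not from any real diagnosis system); the point is that every path through the tree was written by a human in advance.

```python
def diagnose(symptoms):
    """Toy symbolic diagnosis: a hand-written decision tree over symptom flags."""
    if symptoms.get("fever"):
        if symptoms.get("rash"):
            return "measles?"
        if symptoms.get("cough"):
            return "flu?"
        return "unspecified infection?"
    if symptoms.get("sneezing"):
        return "allergy?"
    return "no diagnosis"

print(diagnose({"fever": True, "cough": True}))  # flu?
```

Nothing here is learned from data: change the problem and a human must rewrite the tree, which is exactly the limitation neural networks address.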

Artificial neural networks like AlphaGo Zero are loosely inspired by the wiring of the neurons in the human brain and need far less human input. Their forte is learning, which they do by analyzing huge amounts of input data or rules, such as the rules of chess or Go. They have had notable success in recognizing faces and patterns in data, and they also power driverless cars. The big problem is that scientists don't yet know why they work as they do.

But it's the art, literature and music that the two systems create that really point up the difference between them. Symbolic machines can create highly interesting work, having been fed enormous amounts of material and programmed to do so. Far more exciting are artificial neural networks, which actually teach themselves and which can therefore be said to be more truly creative.

Symbolic AI produces art that is recognizable to the human eye as art, but it's art that has been pre-programmed. There are no surprises. Harold Cohen's AARON algorithm produces rather beautiful paintings using templates that have been programmed into it. Similarly, Simon Colton at Goldsmiths, University of London, programs The Painting Fool to create a likeness of a sitter in a particular style. But neither of these ever leaps beyond its program.

Artificial neural networks are far more experimental and unpredictable. The work springs from the machine itself without any human intervention. Alexander Mordvintsev set the ball rolling with his Deep Dream, whose nightmare images, spawned from convolutional neural networks (ConvNets), seem almost to spring from the machine's unconscious. Then there's Ian Goodfellow's GAN (generative adversarial network), with the machine acting as the judge of its own creations, and Ahmed Elgammal's CAN (creative adversarial network), which creates styles of art never seen before. All of these generate far more challenging and difficult works: the machine's idea of art, not ours. Rather than being a tool, the machine participates in the creation.

In AI-created music the contrast is even starker. On the one hand, we have François Pachet's Flow Machines, loaded with software to produce sumptuous original melodies, including a well-reviewed album. On the other, researchers at Google use artificial neural networks to produce music unaided. But at the moment their music tends to lose momentum after only a minute or so.

AI-created literature illustrates best of all the difference in what the two types of machines can create. Symbolic machines are loaded with software and rules for using it, and trained to generate material of a specific sort, such as Reuters news reports and weather reports. A symbolic machine equipped with a database of puns and jokes generates more of the same, giving us, for example, a corpus of machine-generated knock-knock jokes. But as with art, their literary products are in line with what we would expect.
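The template-plus-database approach behind such joke generators can be sketched directly. The template and the tiny pun list below are invented for illustration; real systems draw on much larger curated databases, but the mechanism (slot a pun into a fixed frame) is the same.

```python
# Fixed frame with two slots: every output follows this exact shape.
TEMPLATE = (
    "Knock knock.\n"
    "Who's there?\n"
    "{setup}.\n"
    "{setup} who?\n"
    "{punchline}"
)

# A tiny pun database (invented examples).
PUNS = [
    ("Lettuce", "Lettuce in, it's cold out here!"),
    ("Boo", "Don't cry, it's only a joke."),
]

def knock_knock(setup, punchline):
    """Fill the knock-knock frame with one pun from the database."""
    return TEMPLATE.format(setup=setup, punchline=punchline)

for setup, punchline in PUNS:
    print(knock_knock(setup, punchline))
    print()
```

Every joke the system can ever tell is implicit in its database, which is why the output never surprises us.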

Artificial neural networks have no such restrictions. Ross Goodwin, now at Google, trained an artificial neural network on a corpus of scripts from science fiction films, then instructed it to create sequences of words. The result was the fairly gnomic screenplay for his film Sunspring. With such a lack of constraints, artificial neural networks tend to produce work that seems obscure, or should we say experimental? This sort of machine ventures into territory beyond our present understanding of language and can open our minds to a realm often designated as nonsense. NYU's Allison Parrish, a composer of computer poetry, explores the line between sense and nonsense. Thus, artificial neural networks can spark human ingenuity. They can introduce us to new ideas and boost our own creativity.
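The learn-from-a-corpus-then-generate workflow can be demonstrated with a much simpler stand-in for a neural network: a word-level Markov chain. This is not an RNN and far cruder than what Goodwin used, but it shows the same shape of pipeline (train on text, then sample new sequences) with a made-up miniature corpus.

```python
import random
from collections import defaultdict

def train_markov(text):
    """Build a word-bigram table: each word maps to the words observed after it."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n=8, seed=0):
    """Sample up to n words, each drawn from the followers of the previous word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "he said the ship was lost he said the crew was gone"
table = train_markov(corpus)
print(generate(table, "he"))
```

Because the model only knows which word followed which, its output is locally plausible but globally aimless, a toy version of the "gnomic" quality the article describes in neural-network prose.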

Proponents of symbolic machines argue that the human brain too is loaded with software, accumulated from the moment we are born, which means that symbolic machines can also lay claim to emulating the brains structure. Symbolic machines, however, are programmed to reason from the start.

Conversely, proponents of artificial neural networks argue that, like children, machines need first to learn before they can reason. Artificial neural networks learn from the data they've been trained on but are inflexible in that they can only work from the data they have.

To put it simply, artificial neural networks are built to learn and symbolic machines to reason, but with the proper software they can each do a little of the other. An artificial neural network powering a driverless car, for example, needs to have the data for every possible contingency programmed into it so that when it sees a bright light in front of it, it can recognize whether it's a bright sky or a white vehicle, in order to avoid a fatal accident.

What is needed is to develop a machine that includes the best features of both symbolic machines and artificial neural networks. Some computer scientists are currently moving in that direction, looking for options that offer a broader and more flexible intelligence than neural networks by combining them with the key features of symbolic machines.

At DeepMind in London, scientists are developing a new sort of artificial neural network that can learn to form relationships in raw input data and represent them in logical form as a decision tree, as in a symbolic machine. In other words, they're trying to build in flexible reasoning. In a purely symbolic machine all this would have to be programmed in by hand, whereas the hybrid artificial neural network does it by itself.

In this way, combining the two systems could lead to more intelligent solutions and also to forms of art, literature and music that are more accessible to human audiences while also being experimental, challenging, unpredictable and fun.


With 5G+AI Twin Engines – Qualcomm, WIMI and Samsung Bring New Opportunities to the Industry – Yahoo Finance

HONG KONG, CHINA / ACCESSWIRE / July 21, 2020 / The arrival of 5G will bring new growth points for the market, and AI will usher in a new wave of growth in the 5G era. 4G technology brought opportunities for the server market to accelerate, and 5G is expected to continue that tradition and give the server industry good long-term development prospects. Undeniably, the promotion of 4G drove user growth, and operators invested heavily in data centers to meet users' needs, which led to a wave of server procurement. Compared with 3G and 4G, 5G improves speeds by about 10 times, a qualitative leap for the development of the server market. In the future, 5G rates are expected to increase by tens of times more, which will undoubtedly inject more vitality into the market. For example, industries that were previously limited by data-processing speed are expected to break through bottlenecks and achieve substantial growth. Once these new industries take off through the combination of AI and 5G, the amount of data generated will be hundreds of times greater than in the 4G era.

So, what does 5G bring to AI? This can be explained in three ways.

The first is data. The rapid development of AI is built on big data, which serves as the AI system's vast learning material. 5G provides the foundation for creating more of that data: AI fundamentally needs more data, and 5G can increase data volume a hundredfold while making data structures more diverse and complex. Although 5G and AI support each other, the immediate problem is that computing power has not kept pace, and how to process data more efficiently remains an open question.

The second is control. With the finalization of the R16 standard and progress on R17, 5G's massive-connectivity features are better supported. With 5G we can reach more devices, meaning more devices for AI to control and correspondingly more scenarios for AI. Indoors, users can now control more kinds of household appliances, from TVs and lights to refrigerators and air purifiers; outdoors, we can control cars. Where once AI could control only a mobile phone, cars, wearable devices and more have now joined in. This greatly expands the boundaries of AI control, though the depth of control remains limited.

Finally, and most importantly, there are practical applications. AI is not yet widely used on mobile phones: intelligent voice is an important function, and handset makers are pushing personal AI assistants, but they are not nearly smart enough, in large part because the available data is too small.

With the rapid development and maturation of AI technology, more and more industries are combining with artificial intelligence to seek greater growth. The main advantages of these combinations are breakthroughs in algorithms, computing power, data, products, engineering and solutions. Artificial intelligence, big data and cloud computing are new fields with fast deployment and large market potential, and they have attracted substantial resources in recent years. Industry giants, new algorithm companies and start-ups alike are actively planning for the 5G era.

Qualcomm

For more than a decade, Qualcomm has been working on AI to empower many industries. In the wave of 5G and AI innovation, chips are an important part of the industrial chain.

Qualcomm has solid technology accumulation in AI, mobile computing and connectivity, combining leading 5G connectivity with AI research and development, and offers a complete cloud-to-device AI solution. In this process, Qualcomm has formed close ties with the AI industry and established solid partnerships with a number of leading AI ecosystem partners in China to jointly build the future of artificial intelligence. In 2018, Qualcomm established Qualcomm AI Research to consolidate the company's internal research on cutting-edge artificial intelligence. That same year, Qualcomm set up a $100 million AI venture fund to invest in start-ups revolutionizing AI technology around the world. To date, Qualcomm's venture arm has invested in a number of leading AI innovators in China.


Qualcomm has been in China for more than 20 years and established a research and development center in Shanghai in 2010. In 2016, Qualcomm established its first semiconductor manufacturing test facility in the world, Qualcomm Communications Technology (Shanghai) Co., Ltd., in the Pudong New Area, bringing its internationally advanced products and technologies to China and demonstrating Qualcomm's commitment to continued investment in China, closer integration with Chinese industries, and customer service. In addition, Qualcomm is working with Chinese partners, including Shanghai enterprises, on innovations in 5G, artificial intelligence, cloud computing, big data and other fields, so as to promote the development of 5G and AI in Shanghai, boost "new infrastructure," and advance China's new-technology industries and digital economy.

WiMi Hologram Cloud

When it comes to 5G networks, unlike the 4G era, they will offer higher speeds, lower latency and massive connectivity: tens of times faster than 4G, latency below 1 ms, and more than 50 billion devices worldwide connected to each other. From these properties come the three established 5G application scenarios: enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (uRLLC) and massive machine-type communication (mMTC). It is on the basis of these three scenarios that the 5G era is giving rise to new market applications such as AR holography, autonomous driving, telemedicine and the interconnection of everything. Extending carrier-grade cellular communication from person-to-person interaction to machine-to-machine communication may lead to a new revolution in human society.

The hologram industry has broad prospects, great potential and explosive growth ahead. By 2025, China's holographic cloud market is expected to exceed 450 billion RMB, growing at about 78% annually, while the global holographic cloud market is expected to exceed 500 billion USD, growing at about 68% annually.

WiMi Hologram Cloud is a representative domestic visual-AI enterprise whose business spans multiple links of the holographic AR chain, including holographic visual presentation, holographic interactive software development, holographic AI computer-vision synthesis, holographic AR online advertising, holographic contactless AR SDK payment, 5G holographic communication software development, and holographic AI facial recognition and facial-modification development.

Thanks to the bandwidth gains of 5G communication networks, high-end holographic applications are increasingly reaching social media, communications, navigation, home applications and other scenarios. WiMi Hologram Cloud is a project to build a holographic cloud platform over the 5G network, based on two core technologies: holographic AI facial recognition and holographic AI facial modification.

WiMi Hologram Cloud plans to continue improving and strengthening its existing technologies, maintain its industry leadership and create an ecological business model. Its holographic face-recognition and face-changing technologies are currently applied in its existing holographic advertising and entertainment businesses, and the technology is being upgraded to achieve breakthroughs in more areas of the industry. WiMi Hologram Cloud aims to build a commercial ecosystem based on holographic applications.

WiMi Hologram Cloud boasts world-leading 3D computer-vision technology and SaaS platform technology. It uses AI algorithms to turn ordinary images into holographic 3D content, with wide application in holographic advertising, entertainment, education, communication and other fields. With core technologies such as holographic face recognition, holographic face changing and holographic digital life, WiMi is seeking market collaboration and investment opportunities around the world, aiming to expand its hologram ecosystem in international markets and become a global leader in the holographic cloud industry.

With the advent of the 5G era, the industry believes holographic image communication can use 5G's high speeds to transmit the large data volumes of 3D video signals, presenting users with a more realistic world and a qualitative leap in interactivity, potentially becoming a disruptive technology for social interaction on the internet. Tech giants such as Samsung and Facebook are already engaged in research and development in this area, a sign of the technology's broad application prospects. The number of domestic companies working on holographic projection has also grown sharply; by some counts it has surpassed a thousand, and market capacity has risen to the tens of billions.

Samsung

Samsung introduced Digital Cockpit 2020 at CES, which uses 5G to link the vehicle's internal and external functions and provide an interconnected experience for drivers and passengers, letting users in the car interact seamlessly with the office and home. This is the third joint development between Samsung Electronics and Harman, combining Samsung's strengths in communications technology, semiconductors and displays with Harman's automotive expertise.

On another interesting AI front, Samsung's showing at CES was a classic case of "big things in small packages": its Ballie AI robot is only slightly bigger than a baseball, yet it drew remarkable attention for the limitless applications it suggests.

The chubby robot, which moves by rolling and acts as a steward for the home AIoT system, is controlled by a smartphone and equipped with artificial intelligence features, voice operation and a built-in camera to recognize and respond to users and help with a variety of household tasks. It can respond to requests and speak like a pet, and it can serve as a wake-up call, fitness assistant or time recorder, or manage other smart devices in the home (such as TVs and vacuum cleaners).

At CES, Samsung laid out its own understanding of, and R&D achievements along, the two most important technology trends: 5G and AI. Whether in tablet PCs, in-vehicle operating systems or small AI robots, Samsung has signaled its intent to invest fully in 5G + AI. Its existing achievements are attractive enough, but Samsung will clearly bring us still more possibilities.

The mutual empowerment of 5G and artificial intelligence will bring new growth opportunities to the Internet of Things. AI use cases for the IoT span consumer, industrial/enterprise and smart-city settings, including manufacturing automation and robotics, home and enterprise security, smart displays and speakers, agriculture, smart-home control hubs and appliances, sustainable cities and infrastructure, and digital logistics and retail.

In the future, 5G and AI will touch every aspect of life and many industries, including education, healthcare, retail, manufacturing and transportation. By one estimate, the adoption rate of artificial intelligence in key segments such as smartphones, PCs/tablets, extended reality (XR), cars and the Internet of Things will rise from under 10 percent last year to 100 percent by 2025. Driven by this trend, on-device AI will become a standard feature on many key platforms. 5G and AI will also bring enormous economic benefits: as 5G becomes fully commercialized, it is projected to empower many industries and generate up to $13.2 trillion in goods and services globally by 2035, while at the enterprise level, AI-derived business value is projected to reach $3.9 trillion by 2022.

Media contact
Company: Mobius Trend
Contact: Trends & Insights Team
E-Mail: cs@mobiustrend.com
Website: http://www.mobiusTrend.com
YouTube: https://www.youtube.com/channel/UCOlz-sCOlPTJ_24rMgR6JLw

SOURCE: Mobius Trend

View source version on accesswire.com: https://www.accesswire.com/598262/With-5GAI-Twin-Engines--Qualcomm-WIMI-and-Samsung-Bring-New-Opportunities-to-the-Industry

Read more:

With 5G+AI Twin Engines - Qualcomm, WIMI and Samsung Bring New Opportunities to the Industry - Yahoo Finance

Flytxt Applauded by Frost & Sullivan for Improving Telcos’ Marketing Agility with Its AI/ML Applications – Yahoo Finance

Flytxt's AI solutions aid rapid decision making and contextualize interactions to help telcos take customer engagement to the next level

LONDON, Aug. 24, 2021 /PRNewswire/ -- Frost & Sullivan recognizes Flytxt with the 2021 Global Company of the Year Award for its artificial intelligence (AI) in telecom marketing. As the telecommunications industry transitioned from rule-based to augmented/autonomous marketing, Flytxt adapted its technology using AI, data analytics, and machine learning (ML) to enable hyper-personalization at scale.

2021 Global AI in Telecom Marketing Company of the Year Award

"Flytxt's uniquely differentiated software applications and best practices help telco marketers with data-driven decisions that maximize customer lifetime value," said Hemangi Patel, Senior Research Analyst for Frost & Sullivan. "Its AI/ML applications handle decisions and actions dynamically and contextually, rapidly analyzing high data volumes to arrive at the best opportunities to uplift customer value. Flytxt's out-of-the-box solutions are easy to deploy and maintain without burdening in-house data engineers and scientists."

Flytxt's proprietary CVM technology (data model, embedded analytics, explainable AI, and privacy preservation) is offered through a broad set of solutions used by more than 70 telcos globally. The company helps enterprises to deliver comprehensive data-driven digital experiences via its omnichannel CVM solution packaging AI, analytics, and marketing automation. CVM-in-a-box is a tightly packaged solution for smaller enterprises and business units to benefit from AI-driven marketing rapidly. The CVM accelerator solutions provide AI and analytics purpose-built to augment enterprises' existing customer engagement systems and achieve the desired CVM goals faster.

"Flytxt's autonomous and explainable AI applications drive marketing optimization at scale. These applications ensure that enterprises will never miss any opportunity to maximize customer value across numerous micro-moments and contexts," noted Ruman Ahmed, Best Practices Research Analyst for Frost & Sullivan. "Its AI/ML solutions deliver the right set of decisioning variables and logic to meet changing market dynamics in different markets. With its continued AI/ML innovation and proven results in various use cases across multiple markets, Flytxt emerges as the AI and analytics partner of choice for telcos to drive customer lifetime value."


Each year, Frost & Sullivan presents a Company of the Year award to the organization that demonstrates excellence in terms of growth strategy and implementation in its field. The award recognizes a high degree of innovation with products and technologies and the resulting leadership in customer value and market penetration. The Best Practices Awards recognize companies in a variety of regional and global markets for demonstrating outstanding achievement and superior performance in areas such as leadership, technological innovation, customer service, and strategic product development.

About Frost & Sullivan

For six decades, Frost & Sullivan has been world-renowned for its role in helping investors, corporate leaders and governments navigate economic changes and identify disruptive technologies, Mega Trends, new business models and companies to action, resulting in a continuous flow of growth opportunities to drive future success. Contact us: Start the discussion.

Contact:

Tarini Singh
P: +91-20 6718 9725
E: Tarini.Singh@frost.com

About Flytxt

Flytxt is a Dutch company and a pioneer in marketing automation and AI technology, specializing in Customer Lifetime Value (CLTV) management solutions for subscription and usage businesses such as telecom, banking, utilities, (online) media & entertainment, and travel. Our solutions are used by more than 100 enterprises, including 70 leading telecom operators worldwide, to increase customer lifetime value through upsell, cross-sell and retention.

Contact:
Pravin Vijay
P: +91-9745961333
E: Pravin.vijay@flytxt.com


View original content to download multimedia:https://www.prnewswire.com/news-releases/flytxt-applauded-by-frost--sullivan-for-improving-telcos-marketing-agility-with-its-aiml-applications-301360358.html

SOURCE Frost & Sullivan

Continued here:

Flytxt Applauded by Frost & Sullivan for Improving Telcos' Marketing Agility with Its AI/ML Applications - Yahoo Finance