
Category Archives: Ai

Announcing the second annual VentureBeat AI Innovation Awards at Transform 2020 – VentureBeat

Posted: July 12, 2020 at 1:31 am


The past year has seen remarkable change. As innovation in the field of AI and real-world applications of its constituent technologies such as machine learning, natural language processing, and computer vision continue to grow, so has an understanding of their social impacts.

At our AI-focused Transform 2020 event, taking place July 15-17 entirely online, VentureBeat will recognize and award emergent, compelling, and influential work in AI through our second annual VB AI Innovation Awards.

Drawn both from our daily editorial coverage and the expertise, knowledge, and experience of our nominating committee members, these awards give us a chance to shine a light on the people and companies making an impact in AI.

Our nominating committee includes:

Claire Delaunay, Vice President of Engineering, Nvidia

Claire Delaunay is vice president of engineering at Nvidia, where she is responsible for the Isaac robotics initiative and leads a team to bring Isaac to market for use by roboticists and developers around the world.

Prior to joining Nvidia, Delaunay was the director of engineering at Uber, after it acquired Otto, a startup she cofounded. She was also the robotics program lead at Google and founded two other companies, Botiful and Robotics Valley.

Delaunay has 15 years of experience in robotics and autonomous vehicles leading teams ranging from startups and research labs to Fortune 500 companies. She holds a Master of Science in computer engineering from École Privée des Sciences Informatiques (EPSI).

Asli Celikyilmaz, Principal Researcher, Microsoft Research

Asli Celikyilmaz is a principal researcher at Microsoft Research (MSR) in Redmond, Washington. She is also an affiliate professor at the University of Washington. She received her Ph.D. in information science from the University of Toronto, Canada, and continued her postdoc study in the Computer Science Department at the University of California, Berkeley.

Her research interests are mainly in deep learning and natural language (specifically language generation with long-term coherence), language understanding, language grounding with vision, and building intelligent agents for human-computer interaction. She serves on the editorial boards of Transactions of the ACL (TACL) as area editor and Open Journal of Signal Processing (OJSP) as associate editor. She has received several "best of" awards, including at NAFIPS 2007, Semantic Computing 2009, and CVPR 2019.

The award categories are:

Natural Language Processing/Understanding Innovation

Natural language processing and understanding have only continued to grow in importance, and new advancements, new models, and more use cases continue to emerge.

Business Application Innovation

The field of AI is rife with new ideas and compelling research, developed at a blistering pace, but it's the practical applications of AI that matter to people right now, whether that's RPA to reduce human toil, streamlined processes, more intelligent software and services, or other solutions to real-world work and life problems.

Computer Vision Innovation

Computer vision is an exciting subfield of AI that's at the core of applications like facial recognition, object recognition, event detection, image restoration, and scene reconstruction, and that's fast becoming an inescapable part of our everyday lives.

AI for Good

This award is for AI technology, the application of AI, or advocacy or activism in the field of AI that protects or improves human lives or operates to fight injustice, improve equality, and better serve humanity.

Startup Spotlight

This award spotlights a startup that holds great promise for making an impact with its AI innovation. Nominees are selected based on their contributions and criteria befitting their category, including technological relevance, funding size, and impact in their sub-field within AI.

As we count down to the awards, we'll offer editorial profiles of the nominees on VentureBeat's AI channel, The Machine, and share them across our social channels. The award ceremony will be held on the evening of July 15 to conclude the first day of Transform 2020.

Go here to read the rest:

Announcing the second annual VentureBeat AI Innovation Awards at Transform 2020 - VentureBeat

Posted in Ai | Comments Off on Announcing the second annual VentureBeat AI Innovation Awards at Transform 2020 – VentureBeat

AI Weekly: Welcome to The Machine, VentureBeat's AI site – VentureBeat

Posted: at 1:31 am


VentureBeat readers likely noticed this week that our site looks different. On Thursday, we rolled out a significant design change that includes not just a new look but also a new brand structure that better reflects how we think about our audiences and our editorial mission.

VentureBeat remains the flagship brand devoted to covering transformative technology that matters to business decision makers, and now our longtime GamesBeat sub-brand has its own homepage of sorts, and definitely its own look. And we've launched a new sub-brand. This one is for all of our AI content, and it's called The Machine.

By creating two distinct brands under the main VentureBeat brand, we're leaning hard into what we've acknowledged internally for a long time: We're serving more than one community of readers, and those communities don't always overlap. There are readers who care about our AI and transformative tech coverage, and there are others who ardently follow GamesBeat. We want to continue to cultivate those communities through our written content and events. So when we reorganized our site, we created dedicated space for games and AI coverage, respectively, while leaving the homepage as the main feed.

GamesBeat has long been a standout sub-brand under VentureBeat, thanks to the leadership of managing editor Jason Wilson and the hard work of Dean Takahashi, Mike Minotti, and Jeff Grubb. Thus, giving it a dedicated landing page makes logical sense. We want to give our AI coverage the same treatment, which is why we created The Machine.

We chose to take a long and winding path to selecting The Machine as the name for our AI sub-brand. We could have just put our heads together and picked one, but where's the fun in that? If you're going to come up with a name for an AI-focused brand, you should use AI to help you do it. And that's what we did.

First, we went through the necessary exercises to map out a brand: We talked through brand values, created an abstract about its focus and goals, listed the technologies and verticals we wanted to cover, and so on. Then, we humans brainstormed some ideas for names. (None stood out as clear winners.)

Armed with this data, we turned to Hugging Face's free online NLP tools, which require no code: you just put text into the box and let the system do its thing. Essentially, we ended up following these tips to generate name ideas.

There are a few different approaches you can take. You can feed the system 20 names, let's say, and ask it to generate a 21st. You can give it tags and relevant terms (like machine learning, artificial intelligence, computer vision, and so on) and hope that it converts those into something that would be worthy of a name. You can enter a description of what you want (like a paragraph about what the sub-brand is all about) and see if it comes up with something. And you can tweak various parameters, like model size and temperature, to extract different results.
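For the curious, the whole exercise can be approximated in a few lines of Python with the transformers library. This is a minimal sketch of the prompt-and-sample approach described above; the seed names, model choice, and sampling settings are illustrative assumptions, not the configuration we actually used.

```python
# Minimal sketch of prompt-based name generation with Hugging Face's
# transformers library. The seed names below are hypothetical examples,
# not our actual brainstorm list.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

seed_names = ["AIBeat", "The Algorithm", "Neural News"]  # hypothetical
prompt = "Names for an AI news publication: " + ", ".join(seed_names) + ","

# Higher temperature means more adventurous sampling, which is how you
# end up with a long and hilarious list of candidates.
outputs = generator(prompt, max_new_tokens=20, do_sample=True,
                    temperature=1.2, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```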

This sort of tinkering is a delightful rabbit hole to tumble down. After incessantly fiddling both with the data we fed the system and the various adjustable parameters, we ended up with a long and hilarious list of AI-generated names to chew on.

Here are some of our favorite terrible names that the tool generated:

This is a good lesson in the limitations of AI. The system had no idea what we wanted it to do. It couldn't, and didn't, solve our problem like some sort of name vending machine. AI isn't creative. We had to generate a bunch of data at the beginning, and then at the end, we had to sift through mostly unhelpful output (we ended up with dozens and dozens of names) to find inspiration.

But in the detritus, we found some nuggets of accidental brilliance. Here are a few NLP-generated names that are actually kind of good:

It's worth noting that the system all but insisted on AIBeat. No matter what permutations we tried, AIBeat kept resurfacing. It was tempting to pluck that low-hanging fruit: it matched VentureBeat and GamesBeat, and there's no confusion about what beat we'd be covering. But we humans decided to be more creative with the name, so we moved away from that construction.

We took a step back and used the long list of NLP-generated names to help us think up some fresh ideas. For example, We the Machine stood out to some of us as particularly punchy, but it wasn't quite right for a publication name. ("Hello, I write for We the Machine" doesn't exactly roll off the tongue.) But that inspired The Machine, which emerged as the winner from our shortlist.

The Machine has multiple layers. It's a play on machine learning, and it's a wink at the persistent fear of sentient robots. And it frames our AI team as a formidable, well-oiled content machine, punching well above our weight with a tiny roster of writers.

And so, I write for The Machine. Bookmark this page and visit every day for the latest AI news, analysis, and features.

Read the original:

AI Weekly: Welcome to The Machine, VentureBeat's AI site - VentureBeat

Posted in Ai | Comments Off on AI Weekly: Welcome to The Machine, VentureBeat's AI site – VentureBeat

Five Of The Leading British AI Companies Revealed – Forbes

Posted: at 1:31 am

Amid the Covid-19 gloom, many cutting-edge technology companies have quietly been getting on with raising finance, with artificial intelligence emerging as a particular focus for investors. Last month, for example, London-based Temporall raised £1m of seed investment to continue developing its AI- and analytics-based workplace insights platform. It was just the latest in a string of AI businesses to successfully raise finance in recent months despite the uncertainties of the pandemic.

That extends a trend seen last year. In September, a report from TechNation and Crunchbase revealed that UK AI investment reached a record level of $1bn in the first half of 2019, surpassing the total amount of new finance raised during the whole of the previous year.

The UK's AI industry has been boosted by a supportive public sector environment: the UK government is leading the way on AI investment in Europe and has become the third biggest spender on AI in the world. In the private sector, meanwhile, many British companies offer world-leading technologies. Take just five of the most innovative AI start-ups in the UK:

Synthesized

Founded in 2017 by Dr Nicolai Baldin, a machine-learning researcher based at the University of Cambridge, Synthesized has created an all-in-one data provisioning and preparation platform that is underpinned by AI.

In just 10 minutes, its technology can generate a representative synthetic dataset incorporating millions of records, helping an organisation to share insights safely and efficiently while automatically complying with data regulations. In March, Synthesized raised $2.8m in funding with the aim of doubling the number of its employees in London and accelerating the companys rapid expansion.
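Synthesized's platform is proprietary, but the core idea behind a representative synthetic dataset can be sketched simply: fit a statistical model to the real records, then sample fresh rows from it so aggregate patterns survive while no individual record is copied. The toy example below (hypothetical column choices, a plain multivariate normal fit) only illustrates that principle, not the company's method.

```python
# Toy illustration of synthetic data generation: fit the joint
# statistics of real records, then sample new rows from the fit.
# This is NOT Synthesized's proprietary approach, just the general idea.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "real" data: 1,000 records of (age, income).
real = rng.normal(loc=[40, 55000], scale=[12, 18000], size=(1000, 2))

mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sampling from the fitted distribution preserves aggregate statistics
# without exposing any original record.
synthetic = rng.multivariate_normal(mean, cov, size=1000)
print("real means:", mean.round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```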

Onfido

With more than $180m in funding, Onfido is on a mission to help businesses verify people's identities. Founded in 2012, it uses machine learning and AI technologies, including face detection and character recognition, to verify documents such as passports and ID cards, and to help companies with fraud prevention.

Onfido is headquartered in London and now employs more than 400 employees across seven offices worldwide. In 2019 the company had over 1,500 customers including Revolut, Monzo and Zipcar.

Benevolent AI

Aiming to disrupt the pharmaceutical sector, Benevolent AI's goal is to find medicines for diseases that have no treatment. Benevolent AI applies AI and machine learning tools together with other cutting-edge technologies to try to reinvent the ways drugs are discovered and developed.

The business was founded in 2013 and has raised almost $300m in funding. Its software reduces drug development costs, decreases failure rates and increases the speed at which medicines are generated. Right now, it is focusing on searching for treatments for Covid-19.

Plum Fintech

Plum is an AI assistant that helps people manage their money and increase their savings. It uses a mix of AI and behavioural science to help users change the way they engage with their finances: for example, it points out savings they can afford by analysing their bank transactions.

Plum also allows its users to invest the money saved, as well as to easily switch household suppliers to secure better deals: the average customer can save roughly £230 a year on regular bills, it claims.

Poly AI

After meeting at the Machine Intelligence Lab at the University of Cambridge, Nikola Mrkšić, Pei-Hao Su and Tsung-Hsien Wen, a group of conversational AI experts, started Poly AI. CEO Mrkšić was previously the first engineer at Apple-acquired VocalIQ, which became an essential part of Siri.

Poly AI helps contact centres scale. The company's technology not only understands customers' queries, but also addresses them in a conversational way, whether via voice, email or messaging. The company doesn't position itself as a replacement for human contact centre agents, but as an enhancement that works alongside them. Poly AI has secured $12m in funding to date and works as a team of seven out of its London headquarters.

Here is the original post:

Five Of The Leading British AI Companies Revealed - Forbes

Posted in Ai | Comments Off on Five Of The Leading British AI Companies Revealed – Forbes

Adobe tests an AI recommendation tool for headlines and images – TechCrunch

Posted: at 1:31 am

Team members at Adobe have built a new way to use artificial intelligence to automatically personalize a blog for different visitors.

This tool was built as part of the Adobe Sneaks program, where employees can create demos to show off new ideas, which are then showcased (virtually, this year) at the Adobe Summit. While the Sneaks start out as demos, Adobe Experience Cloud Senior Director Steve Hammond told me that 60% of Sneaks make it into a live product.

Hyman Chung, a senior product manager for Adobe Experience Cloud, said that this Sneak was designed for content creators and content marketers who are probably seeing more traffic during the coronavirus pandemic (Adobe says that in April, its own blog saw a 30% month-over-month increase), and who may be looking for ways to increase reader engagement while doing less work.

So in the demo, the Experience Cloud can go beyond simple A/B testing and personalization, leveraging the company's AI technology, Adobe Sensei, to suggest different headlines, images (which can come from a publisher's media library or Adobe Stock) and preview blurbs for different audiences.


For example, Chung showed me a mocked-up blog for a tourism company, where a single post about traveling to Australia could be presented differently to thrill-seekers, frugal travelers, partygoers and others. Human writers and editors can still edit the previews for each audience segment, and they can also consult a Snippet Quality Score to see the details behind Sensei's recommendation.
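Adobe hasn't published the demo's internals, but the selection step Chung describes (score candidate snippets per audience segment, surface the best one, and let a human override) reduces to something like the sketch below. The segments, headlines, and scores are hypothetical stand-ins for what a trained model such as Sensei would supply.

```python
# Hypothetical sketch of per-segment snippet selection, in the spirit
# of the Sneaks demo. In a real system the quality scores would come
# from a trained model; here they are hard-coded for illustration.
candidates = {
    "thrill-seekers": [("Chase Australia's wildest adventures", 0.91),
                       ("A practical guide to visiting Australia", 0.42)],
    "frugal travelers": [("See Australia on a shoestring budget", 0.88),
                         ("Luxury stays down under", 0.18)],
}

def pick_snippet(segment):
    """Return the highest-scoring headline variant for an audience segment."""
    return max(candidates[segment], key=lambda pair: pair[1])

for segment in candidates:
    headline, score = pick_snippet(segment)
    print(f"{segment}: {headline!r} (quality score {score})")
```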

Hammond said the demo illustrates Adobe's general approach to AI, which is more about applying automation to specific use cases than about trying to build a broad platform. He also noted that the AI isn't changing the content itself, just the way the content is promoted on the main site.

"This is leveraging the creativity you've got and matching it with content," he said. "You can streamline and adapt the content to different audiences without changing the content itself."

From a privacy perspective, Hammond noted that these audience personas are usually based on information that visitors have opted to share with a brand or website.

More here:

Adobe tests an AI recommendation tool for headlines and images - TechCrunch

Posted in Ai | Comments Off on Adobe tests an AI recommendation tool for headlines and images – TechCrunch

A new AI tool to fight the coronavirus – Axios

Posted: at 1:31 am

A coalition of AI groups is forming to produce a comprehensive data source on the coronavirus pandemic for policymakers and health care leaders.

Why it matters: A torrent of data about COVID-19 is being produced, but unless it can be organized in an accessible format, it will do little good. The new initiative aims to use machine learning and human expertise to produce meaningful insights for an unprecedented situation.

Driving the news: Members of the newly formed Collective and Augmented Intelligence Against COVID-19 (CAIAC) announced today include the Future Society, a non-profit think tank from the Harvard Kennedy School of Government, as well as the Stanford Institute for Human-Centered Artificial Intelligence and representatives from UN agencies.

What they're saying: "With COVID-19 we realized there are tons of data available, but there was little global coordination on how to share it," says Cyrus Hodes, chair of the AI Initiative at the Future Society and a member of the CAIAC steering committee. "That's why we created this coalition to put together a sense-making platform for policymakers to use."

Context: COVID-19 has produced a flood of statistics, data and scientific publications (more than 35,000 of the latter as of July 8). But raw information is of little use unless it can be organized and analyzed in a way that can support concrete policies.

The bottom line: Humans aren't exactly doing a great job beating COVID-19, so we need all the machine help we can get.

See the original post here:

A new AI tool to fight the coronavirus - Axios

Posted in Ai | Comments Off on A new AI tool to fight the coronavirus – Axios

Pentagon AI center shifts focus to joint war-fighting operations – C4ISRNet

Posted: at 1:31 am

The Pentagon's artificial intelligence hub is shifting its focus to enabling joint war-fighting operations, developing artificial intelligence tools that will be integrated into the Department of Defense's Joint All-Domain Command and Control efforts.

"As we have matured, we are now devoting special focus on our joint war-fighting operations mission initiative, which is focused on the priorities of the National Defense Strategy and its goal of preserving America's military and technological advantages over our strategic competitors," Nand Mulchandani, acting director of the Joint Artificial Intelligence Center, told reporters July 8. "The AI capabilities JAIC is developing as part of the joint war-fighting operations mission initiative will use mature AI technology to create a decisive advantage for the American war fighter."

That marks a significant change from where JAIC stood more than a year ago, when the organization was still being stood up with a focus on using AI for efforts like predictive maintenance. That transformation appears to be driven by the DoD's focus on developing JADC2, a system-of-systems approach that will connect sensors to shooters in near-real time.

"JADC2 is not a single product. It is a collection of platforms that get stitched together, woven together into effectively a platform. And JAIC is spending a lot of time and resources focused on building the AI component on top of JADC2," said the acting director.

According to Mulchandani, the fiscal 2020 spending on the joint war-fighting operations initiative is greater than JAIC spending on all other mission initiatives combined. In May, the organization awarded Booz Allen Hamilton a five-year, $800 million task order to support the joint war-fighting operations initiative. As Mulchandani acknowledged to reporters, that task order exceeds JAIC's budget for the next few years, and it will not be spending all of that money.

One example of the organization's joint war-fighting work is the fire support cognitive system, an effort JAIC was pursuing in partnership with the Marine Corps Warfighting Lab and the U.S. Army's Program Executive Office Command, Control and Communications-Tactical. That system, Mulchandani said, will manage and triage all incoming communications in support of JADC2.

Mulchandani added that JAIC was about to begin testing its new flagship joint war-fighting project, which he did not identify by name.


"We do have a project going on under joint war fighting which we are actually going to go into testing," he said. "They are very tactical edge AI, is the way I'd describe it. That work is going to be tested. It's actually promising work; we're very excited about it."

"As I talked about the pivot from predictive maintenance and others to joint war fighting, that is probably the flagship project that we're sort of thinking about and talking about that will go out there," he added.

While he left the project unnamed, the acting director assured reporters that it would involve human operators and full human control.

"We believe that the current crop of AI systems today [...] are going to be cognitive assistance," he said. "Those types of information overload cleanup are the types of products that we're actually going to be investing in."

"Cognitive assistance, JADC2, command and control: these are all pieces," he added.

Read the rest here:

Pentagon AI center shifts focus to joint war-fighting operations - C4ISRNet

Posted in Ai | Comments Off on Pentagon AI center shifts focus to joint war-fighting operations – C4ISRNet

Where it Counts, U.S. Leads in Artificial Intelligence – Department of Defense

Posted: at 1:31 am

When it comes to advancements in artificial intelligence technology, China does have a lead in some places, like spying on its own people and using facial recognition technology to identify political dissenters. But those are areas where the U.S. simply isn't pointing its investments in artificial intelligence, said the director of the Joint Artificial Intelligence Center. Where it counts, the U.S. leads, he said.

"While it is true that the United States faces formidable technological competitors and challenging strategic environments, the reality is that the United States continues to lead in AI and its most important military applications," said Nand Mulchandani, during a briefing at the Pentagon.

The Joint Artificial Intelligence Center, which stood up in 2018, serves as the official focal point of the department's AI strategy.

China leads in some places, Mulchandani said. "China's military and police authorities undeniably have the world's most advanced capabilities, such as unregulated facial recognition for universal surveillance and control of their domestic population, trained on Chinese video gathered from their systems, and Chinese language text analysis for internet and media censorship."

The U.S. is capable of doing similar things, he said, but doesn't. It's against the law, and it's not in line with American values.

"Our constitution and privacy laws protect the rights of U.S. citizens, and how their data is collected and used," he said. "Therefore, we simply don't invest in building such universal surveillance and censorship systems."

The department does invest in systems that both enhance warfighter capability, for instance, and also help the military protect and serve the United States, including during the COVID-19 pandemic.

The Project Salus effort, for instance, which began in March of this year, puts artificial intelligence to work helping to predict shortages for things like water, medicine and supplies used in the COVID fight, said Mulchandani.

"This product was developed in direct work with [U.S. Northern Command] and the National Guard," he said. "They have obviously a very unique role to play in ensuring that resource shortages ... are harmonized across an area that's dealing with the disaster."

Mulchandani said what the Guard didn't have was predictive analytics on where such shortages might occur, or real-time analytics for supply and demand. Project Salus, named for the Roman goddess of safety and well-being, fills that role.

"We [now have] roughly about 40 to 50 different data streams coming into project Salus at the data platform layer," he said. "We have another 40 to 45 different AI models that are all running on top of the platform that allow for ... the Northcom operations team ... to actually get predictive analytics on where shortages and things will occur."

As an AI-enabled tool, he said, Project Salus can be used to predict traffic bottlenecks, hotel vacancies and the best military bases to stockpile food during the fallout from a damaging weather event.
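The JAIC hasn't published Salus's models, but "predictive analytics on where shortages will occur" is, at its simplest, forecasting over supply-and-demand time series. The sketch below fits a trivial linear trend to a hypothetical inventory series and projects a stock-out date; the real system reportedly layers dozens of models over dozens of data streams.

```python
# Toy shortage forecast: fit a linear trend to a declining inventory
# series and project when stock hits zero. Purely illustrative; the
# actual Project Salus models are not public.
import numpy as np

days = np.arange(14)
inventory = 1000 - 60 * days + np.random.default_rng(1).normal(0, 25, size=14)

slope, intercept = np.polyfit(days, inventory, 1)
if slope < 0:
    days_to_shortage = -intercept / slope
    print(f"Projected stock-out around day {days_to_shortage:.1f}")
```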

As the department pursues joint all-domain command and control, or JADC2, the JAIC is working to build in the needed AI capabilities, Mulchandani said.

"JADC2 is ... a collection of platforms that get stitched together and woven together[ effectively into] a platform," Mulchandani said. "The JAIC is spending a lot of time and resources focused on building the AI components on top of JADC2. So if you can imagine a command and control system that is current and the way it's configured today, our job and role is to actually build out the AI components both from a data, AI modeling and then training perspective and then deploying those."

When it comes to AI and weapons, Mulchandani said the department and JAIC are involved there too.

"We do have projects going on under joint warfighting, which are actually going into testing," he said. "They're very tactical-edge AI, is the way I describe it. And that work is going to be tested. It's very promising work. We're very excited about it."

While Mulchandani didn't mention specific projects, he did say that while much of the JAIC's AI work will go into weapons systems, none of those right now are going to be autonomous weapons systems. The concepts of a human-in-the-loop and full human control of weapons, he said, "are still absolutely valid."

Read more:

Where it Counts, U.S. Leads in Artificial Intelligence - Department of Defense

Posted in Ai | Comments Off on Where it Counts, U.S. Leads in Artificial Intelligence – Department of Defense

Beyond the AI hype cycle: Trust and the future of AI – MIT Technology Review

Posted: at 1:31 am

There's no shortage of promises when it comes to AI. Some say it will solve all problems, while others warn it will bring about the end of the world as we know it. Both positions regularly play out in Hollywood plotlines like Westworld, Carbon Black, Minority Report, Her, and Ex Machina. Those stories are compelling because they require us as creators and consumers of AI technology to decide whether we trust an AI system or, more precisely, trust what the system is doing with the information it has been given.

This content was produced by Nuance. It was not written by MIT Technology Review's editorial staff.

Joe Petro is CTO at Nuance.

Those stories also provide an important lesson for those of us who spend our days designing and building AI applications: trust is a critical factor for determining the success of an AI application. Who wants to interact with a system they don't trust?

Even as a nascent technology, AI is incredibly complex and powerful, delivering benefits by performing computations and detecting patterns in huge data sets with speed and efficiency. But that power, combined with "black box" perceptions of AI and its appetite for user data, introduces a lot of variables, unknowns, and possible unintended consequences. Hidden within practical applications of AI is the fact that trust can have a profound effect on the user's perception of the system, as well as on the associated companies, vendors, and brands that bring these applications to market.

Advancements such as ubiquitous cloud and edge computational power make AI more capable and effective while making it easier and faster to build and deploy applications. Historically, the focus has been on software development and user-experience design. But it's no longer a case of simply designing a system that solves for x. It is our responsibility to create an engaging, personalized, frictionless, and trustworthy experience for each user.

The ability to do this successfully is largely dependent on user data. System performance, reliability, and user confidence in AI model output are affected as much by the quality of the model design as by the data going into it. Data is the fuel that powers the AI engine, virtually converting the potential energy of user data into kinetic energy in the form of actionable insights and intelligent output. Just as filling a Formula 1 race car with poor or tainted fuel would diminish performance, and the driver's ability to compete, an AI system trained with incorrect or inadequate data can produce inaccurate or unpredictable results that break user trust. Once broken, trust is hard to regain. That is why rigorous data stewardship practices by AI developers and vendors are critical for building effective AI models as well as creating customer acceptance, satisfaction, and retention.

Responsible data stewardship establishes a chain of trust that extends from consumers to the companies collecting user data and those of us building AI-powered systems. It's our responsibility to know and understand privacy laws and policies and consider security and compliance during the primary design phase. We must have a deep understanding of how the data is used and who has access to it. We also need to detect and eliminate hidden biases in the data through comprehensive testing.

Treat user data as sensitive intellectual property (IP). It is the proprietary source code used to build AI models that solve specific problems, create bespoke experiences, and achieve targeted desired outcomes. This data is derived from personal user interactions, such as conversations between consumers and call agents, doctors and patients, and banks and customers. It is sensitive because it creates intimate, highly detailed digital user profiles based on private financial, health, biometric, and other information.

User data needs to be protected and used as carefully as any other IP, especially for AI systems in highly regulated industries such as health care and financial services. Doctors use AI speech, natural-language understanding, and conversational virtual agents created with patient health data to document care and access diagnostic guidance in real time. In banking and financial services, AI systems process millions of customer transactions and use biometric voiceprint, eye movement, and behavioral data (for example, how fast you type, the words you use, which hand you swipe with) to detect possible fraud or authenticate user identities.

Health-care providers and businesses alike are creating their own branded digital front door that provides efficient, personalized user experiences through SMS, web, phone, video, apps, and other channels. Consumers also are opting for time-saving real-time digital interactions. Health-care and commercial organizations rightfully want to control and safeguard their patient and customer relationships and data in each method of digital engagement to build brand awareness, personalized interactions, and loyalty.

Every AI vendor and developer not only needs to be aware of the inherently sensitive nature of user data but also of the need to operate with high ethical standards to build and maintain the required chain of trust.

Here are key questions to consider:

Who has access to the data? Have a clear and transparent policy that includes strict protections such as limiting access to certain types of data, and prohibiting resale or third-party sharing. The same policies should apply to cloud providers or other development partners.

Where is the data stored, and for how long? Ask where the data lives (cloud, edge, device) and how long it will be kept. The implementation of the European Union's General Data Protection Regulation, the California Consumer Privacy Act, and the prospect of additional state and federal privacy protections should make data storage and retention practices top of mind during AI development.

How are benefits defined and shared? AI applications must also be tested with diverse data sets to reflect the intended real-world applications, eliminate unintentional bias, and ensure reliable results.

How does the data manifest within the system? Understand how data will flow through the system. Is sensitive data accessed and essentially processed by a neural net as a series of 0s and 1s, or is it stored in its original form with medical or personally identifying information? Establish and follow appropriate data retention and deletion policies for each type of sensitive data.

Who can realize commercial value from user data? Consider the potential consequences of data-sharing for purposes outside the original scope or source of the data. Account for possible mergers and acquisitions, possible follow-on products, and other factors.

Is the system secure and compliant? Design and build for privacy and security first. Consider how transparency, user consent, and system performance could be affected throughout the product or service lifecycle.

Biometric applications help prevent fraud and simplify authentication. HSBC's VoiceID voice biometrics system has successfully prevented the theft of nearly £400 million (about $493 million) by phone scammers in the UK. It compares a person's voiceprint with thousands of individual speech characteristics in an established voice record to confirm a user's identity. Other companies use voice biometrics to validate the identities of remote call center employees before they can access proprietary systems and data. The need for such measures is growing as consumers conduct more digital and phone-based interactions.
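HSBC hasn't disclosed VoiceID's internals, but voice biometric systems of this kind typically reduce an utterance to a fixed-length embedding and compare it with the enrolled voiceprint against a tuned similarity threshold. The sketch below assumes such an embedding model already exists; the vectors and the threshold are illustrative.

```python
# Minimal sketch of voiceprint verification by embedding similarity.
# A real speaker-embedding model (not shown) would produce the vectors;
# the 0.8 threshold is an arbitrary illustrative choice.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled_voiceprint, utterance_embedding, threshold=0.8):
    """Accept the caller if the new utterance is close enough to the
    voiceprint captured at enrollment."""
    return cosine_similarity(enrolled_voiceprint, utterance_embedding) >= threshold

enrolled = np.random.default_rng(0).normal(size=192)              # stand-in voiceprint
attempt = enrolled + np.random.default_rng(1).normal(0, 0.1, size=192)
print("caller verified:", verify(enrolled, attempt))
```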

Intelligent applications deliver secure, personalized, digital-first customer service. A global telecommunications company is using conversational AI to create consistent, secure, and personalized customer experiences across its large and diverse brand portfolio. With customers increasingly engaging across digital channels, the company looked to technology partners to expand its own in-house expertise while ensuring it would retain control of its data in deploying a virtual assistant for customer service.

A top-three retailer uses voice-powered virtual assistant technology to let shoppers upload photos of items they've seen offline, then presents items for them to consider buying based on those images.

Ambient AI-powered clinical applications improve health-care experiences while alleviating physician burnout. EmergeOrtho in North Carolina is using the Nuance Dragon Ambient eXperience (DAX) application to transform how its orthopedic practices across the state can engage with patients and document care. The ambient clinical intelligence telehealth application accurately captures each doctor-patient interaction in the exam room or on a telehealth call, then automatically updates the patient's health record. Patients get the doctor's full attention, while the application streamlines the burnout-causing electronic paperwork physicians need to complete to get paid for delivering care.

AI-driven diagnostic imaging systems ensure that patients receive necessary follow-up care. Radiologists at multiple hospitals use AI and natural language processing to automatically identify and extract recommendations for follow-up exams for suspected cancers and other diseases seen in X-rays and other images. The same technology can help manage a surge of backlogged and follow-up imaging as covid-19 restrictions ease, allowing providers to schedule procedures, begin revenue recovery, and maintain patient care.

As digital transformation accelerates, we must solve the challenges we face today while preparing for an abundance of future opportunities. At the heart of that effort is the commitment to building trust and data stewardship into our AI development projects and organizations.

See more here:

Beyond the AI hype cycle: Trust and the future of AI - MIT Technology Review

Posted in Ai | Comments Off on Beyond the AI hype cycle: Trust and the future of AI – MIT Technology Review

AI Chip Strikes Down the von Neumann Bottleneck With In-Memory Neural Net Processing – News – All About Circuits

Posted: at 1:31 am

Computer architecture is a highly dynamic field that has evolved significantly since its inception.

Amongst all of the change and innovation in the field since the 1940s, one concept has remained integral and unscathed: the von Neumann architecture. Recently, with the growth of artificial intelligence, architects are beginning to break the mold and challenge von Neumann's tenure.

Specifically, two companies have teamed up to create an AI chip that performs neural network computations in hardware memory.

The von Neumann architecture was first introduced by John von Neumann in his 1945 paper, "First Draft of a Report on the EDVAC." Put simply, the von Neumann architecture is one in which program instructions and data are stored together in memory to later be operated on.

There are three main components in a von Neumann architecture: the CPU, the memory, and the I/O interfaces. In this architecture, the CPU is in charge of all calculations and controlling information flow, the memory is used to store data and instructions, and the I/O interface allows memory to communicate with peripheral devices.

This concept may seem obvious to the average engineer, but that is because the concept has become so universal that most people cannot fathom a computer working otherwise.

Before von Neumann's proposal, most machines would split up memory into program memory and data memory. This made the computers very complex and limited their performance. Today, most computers employ the von Neumann architectural concept in their design.

One of the major downsides to the von Neumann architecture is what has become known as the von Neumann bottleneck. Since memory and the CPU are separated in this architecture, the performance of the system is often limited by the speed of accessing memory. Historically, the memory access speed is orders of magnitude slower than the actual processing speed, creating a bottleneck in the system performance.

Furthermore, the physical movement of data consumes a significant amount of energy due to interconnect parasitics. In given situations, it has been observed that the physical movement of data from memory can consume up to 500 times more energy than the actual processing of that data. This trend is only expected to worsen as chips scale.
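A back-of-the-envelope calculation makes that disparity concrete. The per-operation energies below are rough ballpark figures assumed for illustration, not measurements of any particular chip, but they show how fetching each weight from off-chip DRAM can dwarf the cost of computing with it.

```python
# Back-of-the-envelope: energy to fetch a weight from off-chip DRAM
# versus energy to use it in one multiply-accumulate (MAC).
# Both figures are assumed ballpark values for illustration only.
DRAM_READ_PJ = 640.0   # assumed: fetching one 32-bit word from DRAM
MAC_PJ = 1.3           # assumed: one 32-bit multiply-accumulate

n = 1024                               # vector-matrix multiply dimension
macs = n * n                           # one MAC per weight
fetch_energy = macs * DRAM_READ_PJ     # each weight read once from DRAM
compute_energy = macs * MAC_PJ

print(f"data movement costs ~{fetch_energy / compute_energy:.0f}x the compute")
```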

The von Neumann bottleneck imposes a particularly challenging problem on artificial intelligence applications because of their memory-intensive nature. The operation of neural networks depends on large vector-matrix multiplications and the movement of enormous amounts of data for things such as weights, all of which are stored in memory.

The power and timing constraints due to the movement of data in and out of memory have made it nearly impossible for small computing devices like smartphones to run neural networks. Instead, data must be served via cloud-based engines, introducing a plethora of privacy and latency concerns.

The response to this issue, for many, has been to move away from the von Neumann architecture when designing AI chips.

This week, Imec and GLOBALFOUNDRIES announced a hardware demonstration of a new artificial intelligence chip that defies the notion that processing and memory storage must be entirely separate functions.

Instead, the new architecture they are employing is called analog in-memory computing (AiMC). As the name suggests, calculations are performed in memory without needing to transfer data from memory to CPU. In contrast to digital chips, this computation occurs in the analog domain.

Performing analog computing in SRAM cells, this accelerator can locally process pattern recognition from sensors, which might otherwise rely on machine learning in data centers.
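Imec and GLOBALFOUNDRIES haven't released cell-level details, but the general principle of analog in-memory computing can be modeled behaviorally: weights sit in the array as conductances, inputs arrive as voltages, Ohm's law performs the multiplies, the bit lines sum the currents, and an ADC digitizes the result. The sketch below is that simplified behavioral model, not the imec/GF design.

```python
# Behavioral model of an analog in-memory MAC array: Ohm's law does the
# multiplies (I = G * V), Kirchhoff's current law does the adds (bit-line
# currents sum), and a coarse quantizer stands in for the ADC readout.
# Simplified illustration only; not the imec/GF implementation.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0, 1e-6, size=(256, 64))   # weights stored as conductances (S)
V = rng.uniform(0, 0.5, size=256)          # input activations as voltages (V)

bitline_currents = V @ G                   # analog multiply-accumulate

# ADC stage: digitize accumulated currents at 8-bit precision.
levels = 2 ** 8
digital_out = np.round(bitline_currents / bitline_currents.max() * (levels - 1))
print(digital_out[:8])
```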

The new chip claims to have achieved a staggering energy efficiency as high as 2,900 TOPS/W, which is said to be ten to a hundred times better than digital accelerators.

Saving this much energy will make running neural networks on edge devices much more feasible. With that comes an alleviation of the privacy, security, and latency concerns related to cloud computing.

This new chip is currently in development at GF's 300mm production line in Dresden, Germany, and looks to reach the market in the near future.

Excerpt from:

AI Chip Strikes Down the von Neumann Bottleneck With In-Memory Neural Net Processing - News - All About Circuits

Posted in Ai | Comments Off on AI Chip Strikes Down the von Neumann Bottleneck With In-Memory Neural Net Processing – News – All About Circuits

The US, China and the AI arms race: Cutting through the hype – CNET

Posted: at 1:31 am


Artificial intelligence -- which encompasses everything from service robots to medical diagnostic tools to your Alexa speaker -- is a fast-growing field that is playing an increasingly critical role in many aspects of our lives. A country's AI prowess has major implications for how its citizens live and work -- and its economic and military strength moving into the future.

With so much at stake, the narrative of an AI "arms race" between the US and China has been brewing for years. Dramatic headlines suggest that China is poised to take the lead in AI research and use, due to its national plan for AI domination and the billions of dollars the government has invested in the field, compared with the US' focus on private-sector development.


But the reality is that at least until the past year or so, the two nations have been largely interdependent when it comes to this technology. It's an area that has drawn attention and investment from major tech heavy hitters on both sides of the Pacific, including Apple, Google and Facebook in the US and SenseTime, Megvii and YITU Technology in China.

Generation China is a CNET series that looks at the areas of technology where the country is looking to take a leadership position.

"Narratives of an 'arms race' are overblown and poor analogies for what is actually going on in the AI space," said Jeffrey Ding, the China lead for the Center for the Governance of AI at the University of Oxford's Future of Humanity Institute. When you look at factors like research, talent and company alliances, you'll find that the US and Chinese AI ecosystems are still very entwined, Ding added.

But the combination of political tensions and the rapid spread of COVID-19 throughout both nations is fueling more of a separation, which will have implications for both advances in the technology and the world's power dynamics for years to come.

"These new technologies will be game-changers in the next three to five years," said Georg Stieler, managing director of Stieler Enterprise Management Consulting China. "The people who built them and control them will also control parts of the world. You cannot ignore it."

You can trace China's ramp up in AI interest back to a few key moments starting four years ago.

The first was in March 2016, when AlphaGo -- a machine-learning system built by Google's DeepMind that uses algorithms and reinforcement learning to train on massive datasets and predict outcomes -- beat the human Go world champion Lee Sedol. This was broadcast throughout China and sparked a lot of interest -- both highlighting how quickly the technology was advancing, and suggesting that because Go involves war-like strategies and tactics, AI could potentially be useful for decision-making around warfare.

The second moment came seven months later, when President Barack Obama's administration released three reports on preparing for a future with AI, laying out a national strategic plan and describing the potential economic impacts (all PDFs). Some Chinese policymakers took those reports as a sign that the US was further ahead in its AI strategy than expected.

This culminated in July 2017, when the Chinese government under President Xi Jinping released a development plan for the nation to become the world leader in AI by 2030, including investing billions of dollars in AI startups and research parks.

In 2016, professional Go player Lee Sedol lost a five-game match against Google's AI program AlphaGo.

"China has observed how the IT industry originates from the US and exerts soft influence across the world through various Silicon Valley innovations," said Lian Jye Su, principal analyst at global tech market advisory firm ABI Research. "As an economy built solely on its manufacturing capabilities, China is eager to find a way to diversify its economy and provide more innovative ways to showcase its strengths to the world. AI is a good way to do it."

Despite the competition, the two nations have long worked together. China has masses of data and far more lax regulations around using it, so it can often implement AI trials faster -- but the nation still largely relies on US semiconductors and open source software to power AI and machine learning algorithms.

And while the US has the edge when it comes to quality research, universities and engineering talent, top AI programs at schools like Stanford and MIT attract many Chinese students, who then often go on to work for Google, Microsoft, Apple and Facebook -- all of which have spent the last few years acquiring startups to bolster their AI work.

China's fears about a grand US AI plan didn't really come to fruition. In February 2019, US President Donald Trump released an American AI Initiative executive order, calling for heads of federal agencies to prioritize AI research and development in 2020 budgets. It didn't provide any new funding to support those measures, however, or many details on how to implement those plans. And not much else has happened at the federal level since then.

Meanwhile, China plowed on, with AI companies like SenseTime, Megvii and YITU Technology raising billions. But investments in AI in China dropped in 2019, as the US-China trade war escalated and hurt investor confidence in China, Su said. Then, in January, the Trump administration made it harder for US companies to export certain types of AI software in an effort to limit Chinese access to American technology.

Just a couple weeks later, Chinese state media reported the first known death from an illness that would become known as COVID-19.

In the midst of the coronavirus pandemic, China has turned to some of its AI and big data tools in attempts to ward off the virus, including contact tracing, diagnostic tools and drones to enforce social distancing. Not all of it, however, is as it seems.

"There was a lot of propaganda -- in February, I saw people sharing on Twitter and LinkedIn stories about drones flying along high rises, and measuring the temperature of people standing at the window, which was complete bollocks," Stieler said. "The reality is more like when you want to enter an office building in Shanghai, your temperature is taken."

A staff member introduces an AI digital infrared thermometer at a building in Beijing in March.

The US and other nations are grappling with the same technologies -- and the privacy, security and surveillance concerns that come along with them -- as they look to contain the global pandemic, said Elsa B. Kania, adjunct fellow with the Center for a New American Security's Technology and National Security Program, focused on Chinese defense innovation and emerging technologies.

"The ways in which China has been leveraging AI to fight the coronavirus are in various respects inspiring and alarming," Kania said. "It'll be important in the United States as we struggle with these challenges ourselves to look to and learn from that model, both in terms of what we want to emulate and what we want to avoid."

The pandemic may be a turning point in terms of the US recognizing the risks of interdependence with China, Kania said. The immediate impact may be in sectors like pharmaceuticals and medical equipment manufacturing. But it will eventually influence AI, as a technology that cuts across so many sectors and applications.

Despite the economic impacts of the virus, global AI investments are forecast to grow from $22.6 billion in 2019 to $25 billion in 2020, Su said. The bigger consequence may be on speeding the process of decoupling between the US and China, in terms of AI and everything else.

The US still has advantages in areas like semiconductors and AI chips. But in the midst of the trade war, the Chinese government is reducing its reliance on foreign technologies, developing domestic startups and adopting more open-source solutions, Su said. Cloud AI giants like Alibaba, for example, are using open-source computing models to develop their own data center chips. Chinese chipset startups like Cambricon Technologies, Horizon Robotics and Suiyuan Technology have also entered the market in recent years and garnered lots of funding.

But full separation isn't on the horizon anytime soon. One of the problems with referring to all of this as an AI arms race is that so many of the basic platforms, algorithms and even data sources are open-source, Kania said. The vast majority of the AI developers in China use Google TensorFlow or Facebook PyTorch, Stieler added -- and there's little incentive to join domestic options that lack the same networks.

The US remains the world's AI superpower for now, Su and Ding said. But ultimately, the trade war may do more harm to American AI-related companies than expected, Kania said.


"My main concern about some of these policy measures and restrictions has been that they don't necessarily consider the second-order effects, including the collateral damage to American companies, as well as the ways in which this may lessen US leverage or create much more separate or fragmented ecosystems," Kania said. "Imposing pain on Chinese companies can be disruptive, but in ways that can in the long term perhaps accelerate these investments and developments within China."

Still, "'arms race' is not the best metaphor," Kania added. "It's clear that there is geopolitical competition between the US and China, and our competition extends to these emerging technologies including artificial intelligence that are seen as highly consequential to the futures of our societies' economies and militaries."

Read the original post:

The US, China and the AI arms race: Cutting through the hype - CNET

Posted in Ai | Comments Off on The US, China and the AI arms race: Cutting through the hype – CNET
