Cloak your photos with this AI privacy tool to fool facial recognition – The Verge

Ubiquitous facial recognition is a serious threat to privacy. The idea that the photos we share are being collected by companies to train algorithms that are sold commercially is worrying. Anyone can buy these tools, snap a photo of a stranger, and find out who they are in seconds. But researchers have come up with a clever way to help combat this problem.

The solution is a tool named Fawkes, created by scientists at the University of Chicago's SAND Lab. Named after the Guy Fawkes masks donned by revolutionaries in the V for Vendetta comic book and film, Fawkes uses artificial intelligence to subtly and almost imperceptibly alter your photos in order to trick facial recognition systems.

The way the software works is a little complex. Running your photos through Fawkes doesn't make you invisible to facial recognition exactly. Instead, the software makes subtle changes to your photos so that any algorithm scanning those images in the future sees you as a different person altogether. Essentially, running Fawkes on your photos is like adding an invisible mask to your selfies.

Scientists call this process cloaking, and it's intended to corrupt the resource facial recognition systems need to function: databases of faces scraped from social media. Facial recognition firm Clearview AI, for example, claims to have collected some three billion images of faces from sites like Facebook, YouTube, and Venmo, which it uses to identify strangers. But if the photos you share online have been run through Fawkes, say the researchers, then the face the algorithms know won't actually be your own.
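In rough terms, cloaking is an adversarial-perturbation problem: nudge the pixels just enough that a face-recognition model's feature extractor maps your photo toward a different identity, while keeping the change invisible to humans. The sketch below is only a conceptual illustration of that idea, not the actual Fawkes implementation; `target_embedding` and `grad_fn` are placeholder hooks standing in for a real face-embedding model and its gradient.

```python
import numpy as np

def cloak(image, target_embedding, grad_fn, budget=0.03, steps=100, lr=0.01):
    """Conceptual sketch of feature-space cloaking (not the real Fawkes code).

    image            -- float array with values in [0, 1]
    target_embedding -- face embedding of a *different* identity
    grad_fn          -- returns d/d(pixels) of the distance between the
                        cloaked image's embedding and target_embedding
    """
    delta = np.zeros_like(image)
    for _ in range(steps):
        cloaked = np.clip(image + delta, 0.0, 1.0)
        grad = grad_fn(cloaked, target_embedding)
        delta -= lr * grad                        # pull the embedding toward the target identity
        delta = np.clip(delta, -budget, budget)   # keep the pixel change imperceptible
    return np.clip(image + delta, 0.0, 1.0)
```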

According to the team from the University of Chicago, Fawkes is 100 percent successful against state-of-the-art facial recognition services from Microsoft (Azure Face), Amazon (Rekognition), and Face++ by Chinese tech giant Megvii.

"What we are doing is using the cloaked photo in essence like a Trojan Horse, to corrupt unauthorized models to learn the wrong thing about what makes you look like you and not someone else," Ben Zhao, a professor of computer science at the University of Chicago who helped create the Fawkes software, told The Verge. "Once the corruption happens, you are continuously protected no matter where you go or are seen."

The group behind the work (Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Y. Zhao) published a paper on the algorithm earlier this year. But late last month they also released Fawkes as free software for Windows and Macs that anyone can download and use. To date they say it's been downloaded more than 100,000 times.

In our own tests we found that Fawkes is sparse in its design but easy enough to apply. It takes a couple of minutes to process each image, and the changes it makes are mostly imperceptible. Earlier this week, The New York Times published a story on Fawkes in which it noted that the cloaking effect was quite obvious, often making gendered changes to images like giving women mustaches. But the Fawkes team says the updated algorithm is much more subtle, and The Verge's own tests agree with this.

But is Fawkes a silver bullet for privacy? It's doubtful. For a start, there's the problem of adoption. If you read this article and decide to use Fawkes to cloak any photos you upload to social media in the future, you'll certainly be in the minority. Facial recognition is worrying because it's a society-wide trend, and so the solution needs to be society-wide, too. If only the tech-savvy shield their selfies, it just creates inequality and discrimination.

Secondly, many firms that sell facial recognition algorithms created their databases of faces a long time ago, and you can't retroactively take that information back. The CEO of Clearview, Hoan Ton-That, told the Times as much. "There are billions of unmodified photos on the internet, all on different domain names," said Ton-That. "In practice, it's almost certainly too late to perfect a technology like Fawkes and deploy it at scale."

Naturally, though, the team behind Fawkes disagrees with this assessment. They note that although companies like Clearview claim to have billions of photos, that doesn't mean much when you consider they're supposed to identify hundreds of millions of users. "Chances are, for many people, Clearview only has a very small number of publicly accessible photos," says Zhao. And if people release more cloaked photos in the future, he says, sooner or later the number of cloaked images will outnumber the uncloaked ones.

On the adoption front, however, the Fawkes team admits that for their software to make a real difference it has to be released more widely. They have no plans to make a web or mobile app due to security concerns, but are hopeful that companies like Facebook might integrate similar tech into their own platforms in the future.

"Integrating this tech would be in these companies' interest," says Zhao. After all, firms like Facebook don't want people to stop sharing photos, and these companies would still be able to collect the data they need from images (for features like photo tagging) before cloaking them on the public web. And while integrating this tech now might only have a small effect for current users, it could help convince future, privacy-conscious generations to sign up to these platforms.

"Adoption by larger platforms, e.g. Facebook or others, could in time have a crippling effect on Clearview by basically making [their technology] so ineffective that it will no longer be useful or financially viable as a service," says Zhao. "Clearview.ai going out of business because it's no longer relevant or accurate is something that we would be satisfied [with] as an outcome of our work."


How AI and tech could strengthen America’s border wall – Fox News

The best approach for border security and immigration control is a layered strategy, experts tell Fox News. This harnesses artificial intelligence, aerial drones, biometrics and other sophisticated technologies in addition to existing or future fencing or walls along U.S. borders.

Dr. Brandon Behlendorf, a noted border security expert and professor at the University at Albany in New York, told Fox News that advancements in technology have made virtual border security much more feasible. Motion sensors, surveillance systems, drone cameras, thermal imaging -- they help form a barrier that is fed into operations centers all across the border.

"[This hinges on] the use of physical and virtual infrastructure, combined with patrol and response capabilities of agents, to provide multiple opportunities for detecting and interdicting illegal border crossings not just at the border, but also some distance from the border," he said. "You need to leverage the benefits of each with properly trained and outfitted agents to provide the most effective approach to border security. Neither a wall nor technology itself will suffice."


One of the most interesting innovations is called the EdgeVis Shield, a surveillance platform originally developed for use in Afghanistan. The platform uses ground-based sensors that detect activity, and they are self-healing: the sensors form a mesh network, so if one of them is compromised, the entire network can self-correct and keep functioning. The shield can detect whether someone is moving on foot or in a vehicle, and it uses a low-latency wireless network.

Charles King, principal analyst of the Hayward, Calif.-based tech research firm Pund-IT, says other advancements are helping create a virtual border. Because a physical wall only stops illegal border crossings above ground, U.S. Customs and Border Protection plans to deploy surveillance robots called Marcbots that can explore tunnels, similar to what the military uses today for bomb detection, he says.

The AVATAR (or Automated Virtual Agent for Truth Assessments in Real-time) is a kiosk being developed at San Diego State University. The kiosk uses artificial intelligence to ask questions at a border crossing and can detect physiological changes in expression, voice, and gestures.


For example, the kiosk might ask an immigrant if he or she is carrying any weapons, then look for signs of deception. The kiosk is currently being tested at Canadian border crossings.

Behlendorf says some of the most interesting work related to border patrol is in development at computer labs in the U.S., not at the actual border. Today, there are reams of data from the past that show how illegal immigrants have moved across the border and were then apprehended. This data provides a rich trove for machine learning to look for patterns and even predict likely behavior in the future. It's more than just tracking or blocking one individual crossing.

"Developments in other fields related to pattern recognition, machine learning, and predictive analytics could greatly enhance the information with which sector and station commanders have to decide on allocations of key resources," Behlendorf said. "Those efforts are starting to develop, and in my opinion over the next few years will form a cornerstone of virtual fence development."


One example of this: using analytics data, border patrol agents could determine where to allocate the most resources to augment a physical wall. There's already a precedent for this, he says. Los Angeles International Airport uses game theory to randomize how security guards go on patrol, rather than relying on the same set pattern that criminals and terrorists could predict.

"The technologies required for supporting a virtual wall, from sensors to surveillance drones to wireless networks and communications to advanced analytics, are more capable and mature today than they have ever been in the past," said Pund-IT's King. "The stars are better aligned for the development and deployment of virtual border security today than in the past."

In the future, border patrols could rely more on a virtual infrastructure -- the technology on the back end that looks for patterns, the facial recognition technology at borders -- for security.

In the end, its all of the above that will help protect U.S. borders.


Ai | Pretty Cure Wiki | FANDOM powered by Wikia

This article is about the Doki Doki! Pretty Cure character Ai also known as Ai-chan. For the Yes! Pretty Cure 5 and GoGo! character, please go to Natsuki Ai.

Ai (or Dina in Glitter Force Doki Doki), also called Ai-chan by the girls, is a baby fairy mascot who appears in Doki Doki! Pretty Cure. She hatches from an egg. She used to be Cure Ace's partner until they were separated, when Ai was turned back into an egg; they saw each other again in episode 23. In episode 46, it was revealed that Ai is in fact Princess Marie Ange, who reverted into an egg after she split her good and bad halves, and Joe found her later on. Though she mispronounces her sentences, she ends them with "~kyupi".

Ai is a fair-skinned baby with big blue eyes that have a curled yellow marking at the lower corner, pale blue hearts on her cheeks, and a button nose. Her pink hair is worn in heart-shaped buns held by a yellow flower, and her bangs have a heart formed on the right side. She wears a yellow onesie with a white bib lined in light blue frills with a fuchsia heart on it, along with light purple booties. She has small angel wings.

According to Joe, he found her egg in a river and never found out her true origin (DDPC18). However, long before that, she used to be Cure Ace's partner. When Cure Ace fought the Selfish King, she lost, and the two were separated: Cure Ace went to Earth, while Ai was changed back into an egg (DDPC27).

Ai-chan hatches from a giant egg in front of the Cures. Joe was nearby and was the one who had the egg to start with. He explains that the girls need to use other Cure Loveads to take care of her (DDPC08).

Ai has powers similar to Chiffon from Fresh Pretty Cure!. Ai can summon Loveads, which can help the Cures or help her. She can also make a barrier to protect herself, as seen in episode 11. She is also the partner of Aguri.

Aida Mana and Kenzaki Makoto: Joe calls them Ai-chan's mother and father, respectively. Both of them find Ai-chan cute, and promised to protect her.

Okada Joe: Joe seems to know a lot about Ai-chan. He tells the girls about other Cure Loveads to take care of Ai.

Madoka Aguri: She is Ai's transformation partner. She was also once part of her heart, representing her good half.

Regina: She was once part of Ai's heart, representing her bad half.

Ai - "Ai" means "love" in Japanese, and is also a common girls' name in Japan.

Dina - A Hebrew name meaning "judged", given to the daughter of the biblical figures Jacob and Leah.[1] Dina could also be short for Adelina, meaning "noble"[2]; or Augustina[3], a feminine form of Augustus, which means "great" or "venerable".[4]

Ai's voice actress, Imai Yuka, has participated in one character song for the character she voices.


Sony's AI subsidiary is developing smarter opponents and teammates for PlayStation games – The Verge

In 2019, Sony quietly established a subsidiary dedicated to researching artificial intelligence. What exactly the company plans to do with this tech has always been a bit unclear, but a recent corporate strategy meeting offers a little more information.

"Sony AI [...] has begun a collaboration with PlayStation that will make game experiences even richer and more enjoyable," say notes from a recent strategy presentation given by Sony CEO Kenichiro Yoshida. "By leveraging reinforcement learning, we are developing Game AI Agents that can be a player's in-game opponent or collaboration partner."

This is pretty much what you'd expect from a partnership between PlayStation and Sony's AI team, but it's still good to have confirmation! Reinforcement learning, which relies on trial and error to teach AI agents how to carry out tasks, has proved to be a natural fit for video game environments, where agents can run at high speeds under close observation. It's been the focus of heavy-hitting research, like DeepMind's StarCraft II AI.
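As a rough illustration of that trial-and-error loop, here is a minimal tabular Q-learning sketch. Real game agents of the kind described use deep neural networks rather than a lookup table, and the `env` interface used here (reset, step, sample_action, num_states, num_actions) is an assumption made for the example.

```python
import numpy as np

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    # Value table: one row per state, one column per action (assumed env attributes).
    q = np.zeros((env.num_states, env.num_actions))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Occasionally explore a random move; otherwise exploit the best known one.
            if np.random.rand() < epsilon:
                action = env.sample_action()
            else:
                action = int(np.argmax(q[state]))
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward the observed reward plus discounted future value.
            q[state, action] += alpha * (reward + gamma * np.max(q[next_state]) - q[state, action])
            state = next_state
    return q
```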

Other big tech companies with gaming interests, such as Microsoft, are also exploring this space. But while Microsoft's efforts are tilted towards pure research, Sony's sound like they're more focused on getting this research out of the lab and into video games, pronto. The end result should be smarter teammates as well as opponents.

This tidbit was just one point in the presentation, though, in which Sony laid out numerous plans for its future growth.

For more details you can check out Sony's presentation for yourself here. Though, be prepared to wade through some absolutely incredible corporation-speak. We particularly liked the opening declaration that the company has now "implemented structural reform that liberated us from a loss-making paradigm." In other words: they changed things so that Sony makes money instead of losing it! Got to dress that up somehow, I guess.


New AI test ‘can identify Covid-19 within one hour’ – Aberdeen Evening Express

A new test powered by artificial intelligence (AI) could be capable of identifying coronavirus within one hour, according to new research.

Its developers say it can rapidly screen people arriving at hospitals for Covid-19 and accurately predict whether or not they have the disease.

The Curial AI test has been developed by a team at the University of Oxford. It assesses data typically gathered from patients within the first hour of arriving in an emergency department, such as blood tests and vital signs, to determine the chance of a patient testing positive for Covid-19.
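A toy version of that kind of screening model is sketched below: routine first-hour measurements become a feature vector, and a classifier trained on past presentations outputs the probability of a positive result. This is only an illustration of the general approach described, not the Curial model itself; the feature layout and data here are entirely hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical first-hour features, e.g. [CRP, lymphocyte count, temperature, heart rate, SpO2]
X = np.random.rand(1000, 5)          # stand-in for historical ED presentations
y = np.random.randint(0, 2, 1000)    # stand-in for confirmed swab results (0 = negative, 1 = positive)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Probability of a positive result for a newly arrived patient's first-hour data
new_patient = X_test[:1]
print("P(Covid-19 positive):", model.predict_proba(new_patient)[0, 1])
```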

Testing for the virus currently involves the molecular analysis of a nose and throat swab, with results having a typical turnaround time of between 12 and 48 hours.

However, the Oxford team said their tool could deliver near-real-time predictions of a patient's Covid-19 status.

In a study running since March, the researchers have tested the AI tool on data from 115,000 visits to A&E at Oxford University Hospitals (OUH).

Study lead Dr Andrew Soltan said the tool had accurately predicted a patient's Covid-19 status in more than 90% of cases, and argued that it could be a useful tool for the NHS.

"Until we have confirmation that patients are negative, we must take additional precautions for patients with coronavirus symptoms, which are very common," he said.

"The Curial AI is optimised to quickly give negative results with high confidence, safely excluding Covid-19 at the front door and maintaining flow through the hospital."

"When we tested the Curial AI on data for all patients coming to OUH's emergency departments in the last week of April and the first week of May, it correctly predicted patients' Covid status more than 90% of the time."

He added that the researchers now hope to carry out real-world trials of the technology.

"The next steps are to deploy our AI into the clinical workflow and assess its role in practice," he said.

"A strength of our AI is that it fits within the existing clinical care pathway and works with existing lab equipment. This means scaling it up may be relatively fast and cheap."

"I hope that our AI may help keep patients and staff safer while waiting for results of the swab test."


Playing a piano duet with Google’s new AI tool is fun – CNET

The yellow notes are those played by the A.I. Duet.

Wanna play a piano duet but nobody's around? No worries; you still can, courtesy of Google's new interactive experiment called A.I. Duet. Basically, you play a few notes and the computer plays other notes in response to your melody.

What's special about A.I. Duet is that it plays with you using machine learning, and not just as a machine that's programmed to play music with notes and rules hard-coded into it.

According to Yotam Mann, a member of Google's Creative Lab team, A.I. Duet has been exposed to a lot of examples of melodies. Over time, it learns the relationships between notes and timing and builds its own music maps based on what it's "listened" to. These maps are saved in the A.I.'s neural networks. As you play music to the computer, it compares what you're playing with what it's learned and responds with the best match in real time. This results in "natural" responses, and the computer can even produce something it was never programmed to do.
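To make the "learning from examples rather than hard-coded rules" point concrete, here is a toy stand-in that learns note-to-note transitions from example melodies and responds with a likely continuation. The real A.I. Duet uses neural networks built with Magenta; this little Markov-style model is only an illustrative sketch.

```python
import random
from collections import Counter, defaultdict

class DuetToy:
    """Toy 'duet' model: learns which notes tend to follow which from example melodies."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, melodies):
        # melodies: lists of MIDI note numbers, e.g. [[60, 62, 64, 65], ...]
        for melody in melodies:
            for current, following in zip(melody, melody[1:]):
                self.transitions[current][following] += 1

    def respond(self, phrase, length=8):
        note, response = phrase[-1], []
        for _ in range(length):
            counts = self.transitions.get(note)
            if not counts:
                break
            # Sample the next note in proportion to how often it followed this one in training
            note = random.choices(list(counts), weights=list(counts.values()))[0]
            response.append(note)
        return response
```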

You can try A.I. Duet here. You don't need to be a musician to use it, because the A.I. responds even if you just smash on the keyboard. And in that case, its notes definitely sound better than yours.

A.I. Duet is part of a project called Magenta that's being run by the Google Brain team. It's an open-source effort that's available for download.


5 Fintech Companies Using AI to Improve Business – Singularity Hub

Artificial intelligence may be all the rage in Silicon Valley, but on Wall Street, well, there's a lot of skepticism.

High-powered algorithms are not a new phenomenon in finance, and for this industry, the name of the game is efficiency and precision.

Quite frankly, finance executives want systems that, in one way or another, make money. Because of this, wild and flashy new AI systems that just "make something smart" won't fly.

The fintech companies that are successfully leveraging AI today are the ones that have found a very concrete way to apply the technology to an existing business problem. For example, technology such as specialized hardware, big data analytics, and machine learning algorithms are being used in fintech to augment tasks that people already perform.

At the Singularity University Exponential Finance Summit this week, Neil Jacobstein, faculty chair of Artificial Intelligence and Robotics at SU, shared some of the most interesting AI companies in fintech right now.

Not surprisingly, these companies each have a clear market application and reduce friction in the business problems they address.

Numerai is a new kind of hedge fund that is built by crowdsourcing knowledge through a massive network: the system collects hundreds of thousands of financial models and individual predictions. With this information, Numerai is building its own financial models that incorporate the algorithms submitted through the crowdsourced community. Numerai has already secured funding from First Round Capital and Union Square Ventures, which is no small feat.

In 2010, AlphaSense launched its intelligent search engine, which uses AI, natural language processing algorithms, and advanced linguistic search tools to provide researchers with critical insights with serious accuracy and speed. Financial analysts can pose questions to AlphaSense's systems and get insights that are significantly more customized and accurate than a simple Google search would provide. It's a great example of an AI augmenting a critical task in finance: research.

Opera is helping companies turn their big data into predictive insights and business intelligence. The company uses pattern recognition to identify what they call signals, meaning actionable insights from data. Their signals help researchers understand conditions that may be happening in the market, or the world at large, so they can act quickly on these changes.

AppZen is a very practical solution to one of every executive's most arduous tasks: submitting expense reports. The system uses AI to audit 100 percent of employee expenses and then generates an expense report in real time. Automating this process saves companies hours of lost productivity. AppZen also gives companies more confidence in their ability to flag suspicious charges. So, if you've been considering expensing that pricey night out with clients, don't, because AppZen will likely flag it.

CollectAI is a cloud-based software system that's shaking up the collection business. The system is able to mimic the voice and tone of a collection agent to gather important information over the phone about a collections case. With this information, CollectAI uses a self-learning algorithm to learn about the case, and then pulls knowledge from previous successful cases and applies those insights to decide how to best approach the situation at hand. The system gets better and better over time, which is pretty incredible.



5 AI-powered companies gaining traction for 2017 – VentureBeat

AI is becoming a way of life for many of us. We check on flights using a chatbot like Mezi, we benefit from the AI within the booking engine used at Hopper's website, and we are sending messages to businesses more easily thanks to the machine learning at Yelp.

It should not come as a big surprise when the AI improves, advances, and becomes even more helpful. After all, taking a cue from the human brain, AI is always adapting, looking for new ways to help us on a constant iteration cycle. The engineers behind AI are keen to make the technology more powerful and integrated into our daily workflow, even when things get really complex.

That's why several companies are not interested in spinning their wheels when it comes to AI. Today at MB 2017, four companies made a splash with announcements that are intended to make their services even more competitive and help make your life easier.

One interesting upgrade has to do with the Mezi chatbot. The app uses AI algorithms to help with flight searches and other duties but is also powered by human agents. Today, the company announced Mezi for Business. The new service, intended for travel agents and corporate travel reps, will improve efficiency and productivity.

Similar to the consumer app, it employs algorithms to help with travel booking and management and much more.

"We have decided to go all-in on travel," says Swapnil Shinde, the CEO and founder of Mezi, speaking at MB 2017. "We empower businesses with a suite of travel bots that automate requests. For travel agents we offer a state-of-the-art travel dashboard."

Another example of gaining traction: Yelp is using machine learning to facilitate and improve the interactions between customers and businesses. It's fine-tuned behind the scenes by an AI; 35,000 messages are fed through the company's machine learning tech. Yelp uses data from service companies to find out about geofencing parameters, and it extracts data about the services as well. Yelp is also using machine learning to weed through content and verify it, making sure that a five-star review of an auto repair business is valid.

The last feature, requesting a quote from a business, is also AI-enabled. For example, it makes sure a business matches the request.

"We estimate that every month, Yelp sends billions of dollars of leads to local service businesses listed on our site through the Request A Quote feature," says Jim Blomo, the director of engineering at Yelp. "Growth of this feature has been through the roof, and a lot of that progress can be attributed to the machine learning work on this product, allowing us to surface the most useful and relevant businesses when a consumer types 'iPhone 7 screen repair' or 'overflowing toilet' into Yelp."

Another company, GobTech, is using AI in its iOS and Android app called Neural Sandbox. The apps let you experiment with neural networks. At MB 2017, the startup is launching a way to compare neural networks called Gauntlet. Users can compare their score against other users using the Google Play leaderboard.

"GobTech is exploring new frontiers in AI for gaming using a unique combination of neural networks and genetic algorithms," says Gabriel Kauffman, the CEO of GobTech. "This combo, known as neuroevolution, is a way for neural networks to evolve through natural selection, in our case to learn to play a game by itself."

Meanwhile, Hopper is using machine learning to improve its back-end booking agent. It's an effort to make booking work more like you have a human helping you find the best travel deals. Maggie Moran, the Head of Product at Hopper, explained how the AI bunny helps travelers figure out how to find the best deals.

GoPro revealed how it is using AI. Meghan Laffey, the VP of product at GoPro, explained how the app is central to the company's product offering. "The phone has made it easy to go from capturing to sharing," she says. "It's been a challenge to go from the experience to the actual playback."

A new feature called Quik Stories allows users to film and edit videos without the hassle of watching all of their footage. With a single tap, stories are generated automatically. Algorithms analyze content and find the best moments, syncing them to music.

These announcements show how AI will ultimately gain traction by iterating, improving, and capturing new audiences.

The ability to use AI within an app is nothing new. What will create a differentiator in the long run is when companies keep enhancing the AI, when the machine learning powering an app or website is so compelling that it attracts new users.


Anyscale raises $20.6 million to simplify writing AI and ML applications with Ray – VentureBeat

Anyscale, a company promising to let application developers more easily build so-called distributed applications that are behind most AI and machine learning efforts, has raised $20.6 million from investors in a first round of funding.

The company has some credibility off the bat because it's cofounded by Ion Stoica, a professor of computer science at the University of California, Berkeley, who played a significant role in building out some successful big data frameworks and tools, including Apache Spark and Databricks.

The new company is based on an open source framework called Ray, also developed in a lab that Stoica co-directs, that focuses on allowing software developers to more easily write compute-intensive applications by simplifying the hardware decisions made underneath.

Ray's emergence is significant because it aims to solve a growing problem in the industry, Stoica said in an interview with VentureBeat. On one hand, developers are writing more and more applications (for example, AI- and ML-driven applications) that are increasingly intensive in their number-crunching needs. The amount of compute used by the largest AI applications has doubled every three to four months since 2012, according to OpenAI, an astonishing exponential rate.

On the other hand, the ability of the underlying processing hardware to do this number-crunching is falling behind. Application developers are thus being forced to distribute their applications across thousands of CPU and GPU cores to spread the processing workload in a way that allows hardware to keep up with their needs. And that process is complex and labor intensive. Companies have to hire specialized engineers to build this architecture, linking things like AWS or Azure cloud instances with Spark and distribution management tools like Kubernetes.

"The tools required for this have been kind of jerry-rigged in a way they shouldn't be," said Ben Horowitz, a partner at venture firm Andreessen Horowitz, which led the round of funding. That's effectively meant large barriers to entry for building scaled applications, and it's kept companies from reaping the promised benefits of AI.

Ray was developed at UC Berkeley in the RISELab, the successor to the AMPLab, which created Apache Spark and Databricks. Stoica was a cofounder of Databricks, a company that helped commercialize Apache Spark, a dominant open source framework that helps data scientists and data engineers process large amounts of data quickly. Databricks was founded in 2013 and is already valued at $6.2 billion. Whereas Spark and Databricks targeted data scientists, Ray is targeting software developers.

"From a developer standpoint, you write the code in a way that it talks to Ray," said Horowitz, "and you don't have to worry about a lot of that [infrastructure]."

"Ray is one of the fastest-growing open source projects we've ever tracked, and it's being used in production at some of the largest and most sophisticated companies," Horowitz added. Intel has used Ray for things like AutoML, hyperparameter search, and training models, whereas startups like Bonsai and Skymind have used it for reinforcement learning projects. Amazon and Microsoft are also users.

Another Anyscale cofounder, Robert Nishihara, who is also the CEO, likens Anyscale's mission with Ray to what Microsoft did when it built Windows: the operating system let developers build applications much more rapidly. "We want to make it as easy to program clusters [or thousands of cores] and scalable applications as it is to program on your laptop."

Stoica and Nishihara say applications built with Ray can easily be scaled out from a laptop to a cluster, eliminating the need for in-house distributed computing expertise and resources.
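As a small illustration of that programming model, the snippet below uses Ray's public API (ray.init, the @ray.remote decorator, and ray.get) to fan a function out across whatever cores or cluster nodes are available. The workload itself is just a stand-in for real number-crunching.

```python
import ray

ray.init()  # starts a local Ray instance; point it at a cluster address to scale out

@ray.remote
def score_chunk(chunk):
    # Stand-in for compute-heavy work, e.g. model inference over one shard of data
    return sum(x * x for x in chunk)

chunks = [list(range(i, i + 1000)) for i in range(0, 10_000, 1000)]
futures = [score_chunk.remote(c) for c in chunks]  # tasks are scheduled across available workers
print(sum(ray.get(futures)))                       # gather the partial results
```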

To be sure, developing a company around an open source framework can be challenging. There's no guarantee that the company can make money from an open framework that other companies can build around, too. Witness what happened with Docker, the company built around the open source Docker container project, which hasn't been able to commercialize it; other companies stepped in and did so instead.

Stoica and Nishihara said they were confident they would avoid Docker's fate, given Stoica's background with Databricks, which he gave as an example of knowing how to commercialize smartly and aggressively. They said that they knew more about Ray than anyone else, and so are in the best position to build a company around it.

Moreover, the pair said they aren't afraid of other companies that have been building so-called serverless computing offerings (for example, Google with Cloud Functions and Amazon with AWS Lambda) that are tackling the same problem of letting people develop scalable applications without thinking about infrastructure. "That's a very different approach, a very limited programming model, and restricted in terms of the things you can do," Nishihara said of serverless. "What we're doing is much more general."

"These serverless platforms are notoriously bad at supporting scalable AI," added Stoica. "We are excelling in that aspect."

The two founded the company in June alongside Philipp Moritz and UC Berkeley professor Michael Jordan, and Anyscale has no product or revenue yet. Besides Andreessen Horowitz, investors in the round include Intel Capital, Ant Financial, Amplify Partners, and The House Fund. With the funding, Anyscale's founders said, they will expand the company's leadership team (the company has 12 employees) and continue to commit to expanding Ray.


Google launches its own AI Studio to foster machine intelligence … – TechCrunch

A new week brings a fresh Google initiative targeting AI startups. We started the month with the announcement of Gradient Ventures, Google's on-balance-sheet AI investment vehicle. Two days later we watched the finalists of Google Cloud's machine learning competition pitch to a panel of top AI investors. And today, Google's Launchpad is announcing a new hands-on Studio program to feed hungry AI startups the resources they need to get off the ground and scale.

The thesis is simple: not all startups are created the same. AI startups love data and struggle to get enough of it. They often have to go to market in phases, iterating as new data becomes available. And they typically have highly technical teams and a dearth of product talent. You get the picture.

The Launchpad Studio aims to address these needs head-on with specialized data sets, simulation tools and prototyping assistance. Another selling point of the Launchpad Studio is that startups accepted will have access to Google talent, including engineers, IP experts and product specialists.

"Launchpad, to date, operates in 40 countries around the world," explains Roy Geva Glasberg, Google's Global Lead for Accelerator efforts. "We have worked with over 10,000 startups and trained over 2,000 mentors globally."

This core mentor base will serve as a recruiting pool for mentors that will assist the Studio. Barak Hachamov, board member for Launchpad, has been traveling around the world with Glasberg to identify new mentors for the program.

The idea of a startup studio isn't new. It has been attempted a handful of times in recent years, but seems to have finally caught on with Andy Rubin's Playground Global. Playground offers startups extensive services and access to top talent to dial in products and compete with the largest of tech companies.

On the AI Studio front, Yoshua Bengio's Element AI raised a $102 million Series A to create a similar program. Bengio, one of, if not the, most famous AI researchers, can help attract top machine learning talent to enable recruiting parity with top AI groups like Google's DeepMind and Facebook's FAIR. Launchpad Studio won't have Bengio, but it will bring Peter Norvig, Dan Ariely, Yossi Matias and Chris DiBona to the table.

But unlike Playground's accompanying $300 million venture capital arm and Element's own coffers, Launchpad Studio doesn't actually have any capital to deploy. On one hand, capital completes the package. On the other, I've never heard a good AI startup complain about not being able to raise funding.

Launchpad Studio sits on top of the Google Developer Launchpad network. The group has been operating an accelerator with global scale for some time now. Now on its fourth class of startups, the team has had time to flesh out its vision and build relationships with experts within Google to ease startup woes.

"Launchpad has positioned itself as the Google global program for startups," asserts Glasberg. "It is the most scalable tool Google has today to reach, empower, train and support startups globally."

With all the resources in the world, Google's biggest challenge with its Studio won't be vision or execution, but this doesn't guarantee everything will be smooth sailing. Between GV, CapitalG, Gradient Ventures, GCP and Studio, entrepreneurs are going to have a lot of potential touch-points with the company.

On paper, Launchpad Studio is the Switzerland of Google's programs. It doesn't aim to make money or strengthen Google Cloud's positioning. But from the perspective of founders, there's bound to be some confusion. In an ideal world we will see a meeting of the minds between Launchpad's Glasberg, Gradient's Anna Patterson and GCP's Sam O'Keefe.

The Launchpad Studio will be based in San Francisco, with additional operations in Tel Aviv and New York City. Eventually Toronto, London, Bangalore and Singapore will host events locally for AI founders.

Applications to the Studio are now open; if you're interested, you can apply here. The program itself is stage-agnostic, so there are no restrictions on size. Ideally, early and later-stage startups can learn from each other as they scale machine learning models to larger audiences.


France is using AI to check whether people are wearing masks on public transport – The Verge

France is integrating new AI tools into security cameras in the Paris metro system to check whether passengers are wearing face masks.

The software, which has already been deployed elsewhere in the country, began a three-month trial in the central Chatelet-Les Halles station of Paris this week, reports Bloomberg. French startup DatakaLab, which created the program, says the goal is not to identify or punish individuals who don't wear masks, but to generate anonymous statistical data that will help authorities anticipate future outbreaks of COVID-19.

"We are just measuring this one objective," DatakaLab CEO Xavier Fischer told The Verge. "The goal is just to publish statistics of how many people are wearing masks every day."

The pilot is one of a number of measures cities around the world are introducing as they begin to ease lockdown measures and allow people to return to work. Although France, like the US, initially discouraged citizens from wearing masks, the country has now made them mandatory on public transport. It's even considering introducing fines of €135 ($145) for anyone found not wearing a mask on the subway, trains, buses, or taxis.

The introduction of AI software to monitor and possibly enforce these measures will be closely watched. The spread of AI-powered surveillance and facial recognition software in China has worried many privacy advocates in the West, but the pandemic is an immediate threat that governments may feel takes priority over dangers to individual privacy.

DatakaLab, though, insists its software is privacy-conscious and compliant with the EU's General Data Protection Regulation (GDPR). The company has sold AI-powered video analytics for several years, using the technology to generate data for shops and malls about the demographics of their customers. "We never sell for security purposes," says Fischer. "And that is a condition in all our sales contracts: you can't use this data for surveillance."

The software is lightweight enough to work on location wherever it is installed, meaning no data is ever sent to the cloud or to DatakaLab's offices. Instead, the software generates statistics about how many individuals are seen wearing masks in 15-minute intervals.

The company has already integrated the software into buses in the French city of Cannes in the south of the country. It added small CPUs to existing CCTV cameras installed in buses, which process the video in real time. When the bus returns to the depot at night, it connects to Wi-Fi and sends the data on to the local transport authorities. "Then if we say, for example, that 74 percent of people were wearing a mask in this location, then the mayor will understand where they need to deliver more resources," says Fischer.
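The anonymous-statistics idea boils down to reducing per-frame detections to counts per time window before anything leaves the device. The sketch below illustrates that aggregation step under assumed inputs; it is not DatakaLab's code, and the detection tuples are hypothetical.

```python
from collections import Counter
from datetime import datetime, timedelta

def mask_rates(detections, window_minutes=15):
    """detections: iterable of (timestamp: datetime, wearing_mask: bool), one per detected face."""
    window = timedelta(minutes=window_minutes)
    masked, totals = Counter(), Counter()
    for ts, wearing_mask in detections:
        # Bucket each detection into the 15-minute window it falls in
        start = datetime.min + ((ts - datetime.min) // window) * window
        totals[start] += 1
        if wearing_mask:
            masked[start] += 1
    # Only aggregate rates are reported; no images or identities are retained
    return {start: masked[start] / totals[start] for start in totals}
```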

Although technology like DatakaLab's is only being tested right now, it's likely it will become a staple of urban life in the near future. As countries begin to weigh the economic damage of a lockdown against the loss of life caused by more COVID-19 infections, greater pressure will be put on mitigating measures like mandatory masks. In countries in the West where mask-wearing is more unfamiliar, software like DatakaLab's can help authorities understand whether their messaging is convincing the public.

Fischer says that although the pandemic has certainly created new use cases for AI, it doesn't mean that countries like France need to abandon their values of privacy and embrace invasive surveillance software. "We respect the rules of Europe," says Fischer. "This technology is very useful but can be very dangerous ... [But] we have our values and they are part of our company."


China’s Didi Chuxing opens US lab to develop AI and self-driving car tech – TechCrunch

China's Uber rival Didi Chuxing has officially opened its U.S.-based research lab. The new center is part of a move to suck up talent beyond Didi's current catchment pool in China, particularly in the areas of AI and self-driving vehicles, but it doesn't signal an expansion of its service into North America.

The existence of the research center itself isn't new. Last September, TechCrunch wrote that Didi had hired a pair of experienced security experts based in the U.S. (Dr. Fengmin Gong and Zheng Bu) to lead the center, which works closely with another China-based facility that opened in late 2015, but now it is officially open.

Dr. Gong will lead the facility in Mountain View, and his team of dozens of leading data scientists and researchers will include former Uber researcher Charlie Miller. Miller rose to fame in 2015 when he hacked a journalist's vehicle from a laptop 10 miles away in a pre-arranged stunt to demonstrate vulnerabilities within the automotive industry.

Miller's job seems much like his role at Uber, according to tweets he sent out today. His defection is noteworthy since it appears to be the first major poach that Didi has made from Uber, and it falls in the self-driving car space where Uber has made a huge push.

Didi is looking to make an early impact in Silicon Valley through a partnership with Udacity around self-driving vehicles. The two companies announced a joint contest inviting teams to develop an Automated Safety and Awareness Processing Stack (ASAPS) to increase driving safety for both manual and self-driving vehicles. The five finalists chosen will get a shot at the $100,000 grand prize and the opportunity to work more closely with Didi and Udacity on automotive projects.


Cloudera built its conversational AI chops by keeping things simple – VentureBeat


When enterprise data software company Cloudera looked into using conversational AI to improve its customer support question-and-answer experience, it didn't want to go slow, said senior director of engineering Adam Warrington in a conversation at Transform 2020. When your company is new to conversational AI, conventional wisdom says you might gradually ease into it with a simple use case and an off-the-shelf chatbot that learns over time.

But Cloudera is a data company, which gives it a head start. "We were kind of interested in how we could possibly use our own data sets and technologies that we had internally to do something a little bit more than just dipping our toes into the water," Warrington said. "We were more interested in getting off-the-shelf chatbot software that was extensible through APIs," he added. Warrington said Cloudera already had an internally stored wealth of data in the form of customer interactions, support cases, community posts, and so on. The idea was to answer customer support questions with a high degree of accuracy without having to wait for the chatbot to acquire domain knowledge.

Because Cloudera maintained records of past customer issues and solutions (again, this is a data company), it had its own corpus to feed the chatbot. In order to teach the chatbot, the company wanted to extract the semantic context of things like the back-and-forth chatter between a support person and customer, as well as the specifics of the actual problem being solved.

To ensure that they knew what was relevant, the Cloudera team relied on their own subject experts to manually label and classify the data set. The work can be "a little bit tedious, as is the case with many machine learning projects, but you don't need in this particular case millions and millions of things categorized and labeled," Warrington said. He added that after about a week of work, they ended up with a labeled data set they could use for training and testing. And, Warrington said, they achieved their goal of 90% accuracy.
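For a sense of what that labeled corpus enables, here is a minimal sketch of training a relevance classifier on expert-labeled support-case sentences. It illustrates the general workflow described, not Cloudera's actual models; the `sentences` and `labels` arguments are assumed to come from the hand-labeled data set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_relevance_model(sentences, labels):
    # Hold out 20% of the expert-labeled sentences for testing
    x_train, x_test, y_train, y_test = train_test_split(
        sentences, labels, test_size=0.2, random_state=0)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features
        LogisticRegression(max_iter=1000))     # simple, fast baseline classifier
    model.fit(x_train, y_train)
    print(f"Held-out accuracy: {model.score(x_test, y_test):.2f}")
    return model
```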

The company now had models that could understand which words and sentences within a given support case were technically relevant to that case. Then the models could extract the right solution from the best source, be it a knowledge base article, product documentation, community post, or what have you.

But the team needed to go a step further. "Now there's the derivative problem downstream, which is [that] what we actually want to do is provide answers to the customers that are relevant to their problems. It's not just about understanding what's technically relevant and what's not," Warrington said. Here again, the team relied on subject matter experts, specifically support engineers, to ensure customers were receiving the best solutions.

Warrington said that although Cloudera is currently using its subject matter experts internally, more data is coming in from real interactions. "As this project continues to go on in the public space, we expect to get more signals from our customers that are actually using the chatbot," he said. "And so we'll start to use those inputs, those signals, from our customers to really expand on our test sets and our training set, to improve the quality from where it's at today."

What's perhaps most surprising is the short time to market. "From inception of the problem statement, of trying to use our own data sets and our own technology to augment chatbot software to return relevant results based on customer problem descriptions, this took under a month," Warrington said. Why so fast? It certainly helped that Cloudera has its data already set up in its own data lake. "All of our processing capabilities already exist on top of this, so everything from analytics to operational databases to our machine learning systems and things like Spark were able to access these data sets through these different technologies."

More to the point, Warrington said in the course of researching chatbot software they could use, the team discovered they already had some pertinent models. They had previously built models to help their internal engineers more efficiently find and address customer support issues. "It turns out when you're running all these machine learning projects on an architecture like this, you can share work that has been done in the past that you didn't necessarily expect to use in this way," Warrington noted. He also said the fact that they had a modern data structure, meaning the data was already unsiloed, was a huge advantage.

In addition to the wisdom of relying on subject matter experts, focusing on a specific problem or set of problems, and starting with data architectures that grant you agility, Warrington's advice is to keep things simple. "As we grow and mature, this particular approach in this particular implementation, we very well could go and explore more advanced techniques [and] more advanced models as we add more types of signals into the system," he said. "But out of the gate, to hit the ground running, use something simple. We found that you can actually provide very useful results to the customers, very quickly, using these kinds of approaches."


Outfoxed by a bot? Facebook is teaching AI to negotiate – CNET

Facebook is teaching chat bots a new skill.

One day, the art of the deal might just involve letting artificial intelligence do your dirty work for you.

Researchers from Facebook Artificial Intelligence Research (FAIR) have created AI models, or what they call dialog agents, that can negotiate, according to a blog post Wednesday. They're publishing open-source code as well as research on those dialog agents, the result of about six months' work on the project.

The idea is that negotiation is a basic part of life whether you're picking a restaurant with friends or deciding on a movie to watch. But current chat bots aren't capable of much complexity. Their state of the art is to do simple tasks like book a restaurant or have short conversations of limited scope.

FAIR worked on the problem of how to get dialog agents to operate like people -- that is, come into a situation with different goals and eventually reach a compromise.

The effort is part of a broader push by Facebook to get us to use chat bots. At its developer conference in 2016, founder and CEO Mark Zuckerberg walked through scenarios in which you might use a bot to interact with a business, for example, to order a product or get customer service help. While tech giants like Facebook, Google and Apple are keen to build the personal digital assistant of the future, today's helpers still lack the necessary skills.

It's just one stitch in the larger fabric of work by Silicon Valley, academic researchers and the business community in the area of artificial intelligence, driven by powerful chips, fast networks and access to massive amounts of data about how people lead their digital lives. That's showing up in everything from sorting photos on Facebook to beating Go champions and diagnosing medical conditions.

FAIR didn't delve too far into what applications might be appropriate for bot-bargaining or whether this capability will surface in any Facebook products. But the post did mention this could be an advantage for bot developers working on chat bots with the ability to "reason, converse and negotiate, all key steps toward building a personalized digital assistant."

Negotiation, the FAIR post explains, is both a linguistic and reasoning problem. In other words, you've got to know what you want several steps down the road and be able to communicate it.

In one example, dialog agents were tasked with dividing up a collection of items like five books, three hats and two balls. Each agent had different priorities and each item carried a different value for each agent. The AIs were taught, in a sense, that walking away from the negotiation wasn't an option.

The ability to think ahead is crucial and with the introduction of something called dialog rollouts, which simulate future conversations, the bots were able to do so.

Or as FAIR scientist Mike Lewis put it: "If I say this, you might say that, and then I'll say something else." Lewis said those rollouts are the key innovation in this project.
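Conceptually, a rollout works like look-ahead search: before committing to a line, the agent simulates how the rest of the conversation might play out and keeps the candidate whose continuations score best. The sketch below illustrates that idea with placeholder functions (candidate_replies, simulate_reply, score_outcome, and a state.after method); it is not FAIR's implementation.

```python
def choose_utterance(state, candidate_replies, simulate_reply, score_outcome,
                     rollouts=10, depth=4):
    """Pick the reply whose simulated conversation continuations score best for our agent."""
    best_reply, best_value = None, float("-inf")
    for reply in candidate_replies(state):
        total = 0.0
        for _ in range(rollouts):
            sim_state = state.after(reply)              # apply our candidate reply
            for _ in range(depth):                      # play the conversation forward
                sim_state = sim_state.after(simulate_reply(sim_state))
            total += score_outcome(sim_state)           # value of the resulting deal for us
        value = total / rollouts
        if value > best_value:
            best_reply, best_value = reply, value
    return best_reply
```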

The research has boosted performance in using various negotiation tactics, like being able to negotiate until there's a successful outcome, propose more final deals and produce novel sentences. The agents even started pretending to be interested in an item so they could later concede it as if it were a compromise.

Humans had a chance to try out the agents, and the researchers said the people couldn't tell they were chatting with bots.


Super Smash Borg Melee: AI takes on top players of the classic … – TechCrunch

You can add the cult classic Super Smash Bros. Melee to the list of games soon to be dominated by AIs. Research at MIT's Computer Science and Artificial Intelligence Laboratory has produced a computer player superior to the drones you can already fight in the game. It's good enough that it held its own against globally ranked players.

In case you're not familiar with Smash, it's a fighting game series from Nintendo that pits characters from the company's various franchises against each other. Its cutesy appearance belies its strategic depth: "The SSBM environment has complex dynamics and partial observability, making it challenging for human and machine alike. The multiplayer aspect poses an additional challenge," reads the paper's abstract.

Its playing style, as so often seems to be the case with these models, is a mixed bag of traditional and odd:

"It uses a combination of human techniques and some odd ones too, both of which benefit from faster-than-human reflexes," wrote Firoiu in an email to TechCrunch. "It is sometimes very conservative, being unwilling to attack until it sees there's an opening. Other times it goes for risky off-stage acrobatics that it turns into quick kills."

That's the system playing against several players ranked in the top 100 globally, against which it won more than it lost. Unfortunately it's no good with projectiles (hence playing Captain Falcon), and it has a secret weakness:

"If the opponent crouches in the corner for a long period of time, it freaks out and eventually suicides," Firoiu wrote. ("This should be a warning against releasing agents trained in simulation into the real world," he added.)

It's not going to win the Nobel Prize, but as with Go, Doom, and others, this type of research is a good way to see how existing learning models and techniques stack up in a new environment.

You can read the details in the paper on arXiv; it's been submitted for consideration at the International Joint Conference on Artificial Intelligence in Melbourne, so best of luck to Firoiu et al.


AI in the Translation Industry – The 5-10 Year Outlook – AiThority

Artificial intelligence (AI) has had a major and positive impact on a range of industries already, with the potential to give much more in the future. We sat down with Ofer Tirosh, CEO of Tomedes, to find out how the translation industry has changed as a result of advances in technology over the past 10 years and what the future might hold in store for it.

Translation services have felt the impact of technology in various positive ways during recent years. For individual translators, the range and quality of computer-assisted translation (CAT) tools have increased massively. A CAT tool is a piece of software that supports the translation process. It helps the translator to edit and manage their translations.

CAT tools usually include translation memories, which are particularly valuable to translators. They store sentences and their translations for future use and can save a vast amount of time during the translation process. This means that translators can work more efficiently, without compromising on quality.
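A translation memory is, at heart, a lookup table of previously translated segments with fuzzy matching on top. The toy sketch below shows that idea; production CAT tools use far more sophisticated segmentation and matching, and the `memory` dictionary here is a hypothetical stand-in.

```python
from difflib import SequenceMatcher

def tm_lookup(sentence, memory, threshold=0.75):
    """memory: dict mapping previously translated source sentences to their translations."""
    best_translation, best_score = None, 0.0
    for source, translation in memory.items():
        score = SequenceMatcher(None, sentence.lower(), source.lower()).ratio()
        if score > best_score:
            best_translation, best_score = translation, score
    if best_score >= threshold:
        return best_translation, best_score   # reuse (or lightly edit) the stored translation
    return None, best_score                   # no close match: translate from scratch
```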

There are myriad other ways that technology has helped the industry. Everything from transcription to localization services has become faster and better as a result of tech advances. Even things like contract automation make a difference, as they speed up the overall time taken to set up and deliver on each client contract.


Machine translation is an issue that affects not just our translation agency but the industry as a whole. Human translation still outdoes machine translation in terms of quality, but the fact that websites that can translate for free are widely available has tempted many companies to try machine translation. The resulting translations are not good quality, and this acceptance of below-par translations isn't great for the industry as a whole, as it drives down standards.

There were some fears around machine translation taking over from professional translation services when machine learning was first used to move away from statistical-based machine translation. However, those fears haven't really materialized. Indeed, the Bureau of Labor Statistics is projecting 19% growth for the employment of interpreters and translators between 2018 and 2028, which is well above the average growth rate.

Instead, the industry has adapted to work alongside the new machine translation technology, with translators providing post-editing machine translation services, which essentially tidy up computerized attempts at translation and turn them into high-quality documents that accurately reflect the original content.

It was the introduction of neural networks that really took machine language learning to the next level. Previously, computers relied on the analysis of phrases (and before that, words) from existing human translations in order to produce a translation. The results were far from ideal.

Neural networks have provided a different way forward. A machine learning algorithm is used so that the machine can explore the data in its own way, learning and progressing in ways that were not previously possible. What is particularly exciting about this approach is the adaptability of the model that the machine creates. It's not a static process but one that can flex and change over time as new data arrives.
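For readers who want to see what neural machine translation looks like in practice, here is a minimal sketch using the open-source Hugging Face Transformers library and a publicly available English-to-French model. The interview does not mention these tools; they are simply one common way to run a neural translation model today.

```python
# Minimal neural machine translation sketch (illustrative only; not the
# tooling discussed in the interview).
# Requires: pip install transformers sentencepiece torch
from transformers import pipeline

# Load a publicly available English-to-French translation model.
translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")

result = translator("Machine translation has improved rapidly in recent years.")
print(result[0]["translation_text"])
```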


I think the fears of machines taking over from human translation professionals have been put to bed for now. Yes, machines can translate better than they used to, but they still can't translate as well as humans can.

I think that we'll see a continuation of the trend towards more audio and video translation. Video, in particular, has become such an important marketing and social connection tool that demand for video translation is likely to boom in the years ahead, just as it has for the past few years.

I've not had access yet to any Predictive Intelligence data for the translation industry, unfortunately, but we're definitely likely to experience an increase in demand for more blended human and machine translation models over the coming years. There's an increasing need to translate faster without a drop in quality, for example in relation to the spread of coronavirus. We need to ensure a smooth, rapid flow of accurate information from country to country in order to tackle the situation as a global issue and not a series of local ones. That's where both machines and humans can support the delivery of high-quality, fast translation services, by working together to achieve maximum efficiency.

AI has had a major impact on the translation industry over the past ten years and I expect the pace of change over the next ten to be even greater, as the technology continues to advance.


Excerpt from:

AI in the Translation Industry The 5-10 Year Outlook - AiThority

Google phone cameras will read heart, breathing rates with AI help – Reuters

FILE PHOTO: The new Google Pixel 4 smartphone is displayed during a Google launch event in New York City, New York, U.S., October 15, 2019. REUTERS/Eduardo Munoz/File Photo

(Reuters) - Cameras on Google Pixel smartphones will be able to measure heart and breathing rates starting next month, in one of the first applications of Alphabet Inc's artificial intelligence technology to its wellness services.

Health programs available on the Google Play store and Apple Inc's App Store have provided the same functionality for years. But a study in 2017 found accuracy varied, and adoption of the apps remains low.

Google Health leaders told reporters earlier this week they had advanced the AI powering the measurements and plan to detail its method and clinical trial in an academic paper in the coming weeks. The company expects to roll out the feature to other Android smartphones at an unspecified time, it said in a blog post on Thursday, but plans for iPhones are unclear.

Apple's Watch, Google's Fitbit and other wearables have greatly expanded the reach of continuous heart rate sensing technologies to a much larger population.

The smartphone camera approach is more ad hoc - users who want to take a pulse place their finger over the lens, which catches subtle color changes that correspond to blood flow. Respiration is calculated from video of upper torso movements.
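The general signal-processing idea behind camera-based pulse measurement (photoplethysmography) is well established: average the color of the video frames over time and find the dominant frequency in the plausible heart-rate band. The sketch below illustrates that generic technique with NumPy; it is not Google's implementation, and the function and parameters are purely illustrative.

```python
# Generic photoplethysmography sketch: estimate pulse from the average
# green-channel brightness of fingertip video frames.
# Illustration of the general technique only, not Google's method.
import numpy as np

def estimate_heart_rate(frames, fps):
    """frames: array of shape (num_frames, height, width, 3), RGB video of a fingertip."""
    # Blood flow modulates how much light the fingertip absorbs, so the mean
    # green-channel intensity rises and falls with each heartbeat.
    signal = frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()

    # Find the dominant frequency within a plausible heart-rate band (40-180 bpm).
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60  # beats per minute

# Example with synthetic data: a 72 bpm pulse sampled at 30 fps for 10 seconds.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
fake_green = 100 + 2 * np.sin(2 * np.pi * 1.2 * t)  # 1.2 Hz = 72 bpm
frames = np.zeros((len(t), 4, 4, 3))
frames[:, :, :, 1] = fake_green[:, None, None]
print(round(estimate_heart_rate(frames, fps)))  # prints approximately 72
```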

Google Health product manager Jack Po said that the company wanted to give an alternative to manual pulse checks for smartphone owners who only want to monitor their condition occasionally but cannot afford a wearable.

Po said the technology, which can be off on heart rates by about 2%, requires further testing before it could be used in medical settings.

The new feature will be available as an update to the Google Fit app.

Google consolidated its health services about two years ago, aiming to better compete with Apple, Samsung Electronics Co and other mobile technology companies that have invested heavily in marketing wellness offerings.

Reporting by Paresh Dave; Editing by Sam Holmes

Read the rest here:

Google phone cameras will read heart, breathing rates with AI help - Reuters

Passengers threaten to open cockpit door on AI flight; DGCA seeks action – Times of India

NEW DELHI: The Directorate General of Civil Aviation has asked Air India to act against unruly passengers who banged on the cockpit door and misbehaved with crew of a delayed Delhi-Mumbai flight on Thursday (Jan 2).

While threatening to break open the door, some passengers had asked the Boeing 747's pilots to come out of the cockpit and explain the situation, a few hours after the jumbo jet had returned to the bay at IGI Airport from the runway due to a technical snag. AI is yet to take a call on whether to begin proceedings under the strict no-fly list against the unruly passengers of this flight.

DGCA chief Arun Kumar said: "We have asked the airline to act against the unruly behaviour."

AI spokesman Dhananjay Kumar said: "A video of a few passengers of AI 865 of January 2 is being widely circulated in different forums. That flight was considerably delayed due to technical reasons. AI management has asked the operating crew for a detailed report on the reported misbehaviour by some passengers. Further action would be considered after getting the report."

The 24-year-old B747 (VT-EVA) was to operate as AI 865 at 10.10 am on Thursday. "Passengers had boarded the flight by 9.15 am. The aircraft taxied out at 10 am and returned from the taxiway in about 10 minutes. Attempts were made to rectify the snag. Finally passengers were asked to alight from the plane at about 2.20 pm and were sent to Mumbai by another aircraft at 6 pm Thursday," said an AI official. In all, passengers took off for their destination about eight hours late.

While airlines should do their best to minimise passenger woes during flight delays, unruly behaviour by flyers targeting crew is unacceptable globally. India also now has a no-fly list under which disruptive passengers can be barred from flying for up to a lifetime, depending on the gravity of their unruly behaviour.

Problems on board the B747 (VT-EVA) named Agra began when passengers got restive after waiting for more than a couple of hours for the snag to be rectified.

Videos have emerged showing some young passengers banging on the cockpit door, asking the pilots to come out. "Captain, please come out... Loser, come out... Come out or we will break the door," they yell at the cockpit crew. The cockpit is on the upper deck of the B747, where AI has its business class.

See more here:

Passengers threaten to open cockpit door on AI flight; DGCA seeks action - Times of India

Facebook and NYU use artificial intelligence to make MRI scans four times faster – The Verge

If you've ever had an MRI scan before, you'll know how unsettling the experience can be. You're placed in a claustrophobia-inducing tube and asked to stay completely still for up to an hour while unseen hardware whirs, creaks, and thumps around you like a medical poltergeist. New research, though, suggests AI can help with this predicament by making MRI scans four times faster, getting patients in and out of the tube quicker.

The work is a collaborative project called fastMRI between Facebook's AI research team (FAIR) and radiologists at NYU Langone Health. Together, the scientists trained a machine learning model on pairs of low-resolution and high-resolution MRI scans, using this model to predict what final MRI scans look like from just a quarter of the usual input data. That means scans can be done faster, with less hassle for patients and quicker diagnoses.

"It's a major stepping stone to incorporating AI into medical imaging," Nafissa Yakubova, a visiting biomedical AI researcher at FAIR who worked on the project, tells The Verge.

The reason artificial intelligence can be used to produce the same scans from less data is that the neural network has essentially learned an abstract idea of what a medical scan looks like by examining the training data. It then uses this to make a prediction about the final output. Think of it like an architect who's designed lots of banks over the years. They have an abstract idea of what a bank looks like, and so they can create a final blueprint faster.
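In code terms, the recipe described above amounts to supervised training on pairs of undersampled and fully sampled scans. The following PyTorch sketch shows the general shape of such a training loop; the toy network, loss, and random stand-in data are assumptions for illustration, not the actual fastMRI model (the team's own model and data are released separately as open access).

```python
# Generic sketch of training an image-reconstruction network on pairs of
# undersampled and fully sampled scans. Placeholder model and data; not fastMRI's code.
import torch
import torch.nn as nn

class SimpleReconNet(nn.Module):
    """Toy convolutional network mapping an undersampled image to a full one."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.layers(x)

model = SimpleReconNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-in data: batches of (undersampled, fully_sampled) image pairs.
undersampled = torch.randn(8, 1, 64, 64)
fully_sampled = torch.randn(8, 1, 64, 64)

for step in range(100):
    optimizer.zero_grad()
    prediction = model(undersampled)
    loss = loss_fn(prediction, fully_sampled)  # learn to match the fully sampled scan
    loss.backward()
    optimizer.step()
```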

"The neural net knows about the overall structure of the medical image," Dan Sodickson, professor of radiology at NYU Langone Health, tells The Verge. "In some ways what we're doing is filling in what is unique about this particular patient's [scan] based on the data."

The fastMRI team has been working on this problem for years, but today they are publishing a clinical study in the American Journal of Roentgenology, which they say proves the trustworthiness of their method. The study asked radiologists to make diagnoses based on both traditional MRI scans and AI-enhanced scans of patients' knees. The study reports that when faced with both traditional and AI scans, doctors made the exact same assessments.

"The key word here on which trust can be based is interchangeability," says Sodickson. "We're not looking at some quantitative metric based on image quality. We're saying that radiologists make the same diagnoses. They find the same problems. They miss nothing."

This concept is extremely important. Although machine learning models are frequently used to create high-resolution data from low-resolution input, this process can often introduce errors. For example, AI can be used to upscale low-resolution imagery from old video games, but humans have to check the output to make sure it matches the input. And the idea of AI imagining an incorrect MRI scan is obviously worrying.

The fastMRI team, though, says this isn't an issue with their method. For a start, the input data used to create the AI scans completely covers the target area of the body. The machine learning model isn't guessing what a final scan looks like from just a few puzzle pieces. It has all the pieces it needs, just at a lower resolution. Secondly, the scientists created a check system for the neural network based on the physics of MRI scans. That means at regular intervals during the creation of a scan, the AI system checks that its output data matches what is physically possible for an MRI machine to produce.

"We don't just allow the network to create any arbitrary image," says Sodickson. "We require that any image generated through the process must have been physically realizable as an MRI image. We're limiting the search space, in a way, making sure that everything is consistent with MRI physics."
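One common way to enforce this kind of physical constraint is a data-consistency step: wherever k-space (the raw measurement domain of an MRI scanner) was actually sampled, the reconstruction is forced to agree with the measurements. The NumPy sketch below illustrates that general idea; it is an assumption about the style of check being described, not the fastMRI team's implementation.

```python
# Generic data-consistency sketch: force the reconstructed image to agree with
# the actually measured k-space samples. Illustrative only; not fastMRI's code.
import numpy as np

def data_consistency(reconstructed_image, measured_kspace, sampling_mask):
    """Replace the reconstruction's k-space values with the measurements
    wherever the scanner actually sampled (mask == True)."""
    predicted_kspace = np.fft.fft2(reconstructed_image)
    corrected_kspace = np.where(sampling_mask, measured_kspace, predicted_kspace)
    return np.abs(np.fft.ifft2(corrected_kspace))

# Toy example: a mask keeping every fourth k-space line (a quarter of the data).
image = np.random.rand(64, 64)
mask = np.zeros((64, 64), dtype=bool)
mask[::4, :] = True                      # scanner measured only these lines
measured = np.fft.fft2(image) * mask     # the undersampled measurements
network_output = np.random.rand(64, 64)  # stand-in for the model's prediction
consistent = data_consistency(network_output, measured, mask)
```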

Yakubova says it was this particular insight, which only came about after long discussions between the radiologists and the AI engineers, that enabled the project's success. "Complementary expertise is key to creating solutions like this," she says.

The next step, though, is getting the technology into hospitals where it can actually help patients. The fastMRI team is confident this can happen fairly quickly, perhaps in just a matter of years. The training data and model they've created are completely open access and can be incorporated into existing MRI scanners without new hardware. And Sodickson says the researchers are already in talks with the companies that produce these scanners.

Karin Shmueli, who heads the MRI research team at University College London and was not involved with this research, told The Verge this would be a key step to move forward.

"The bottleneck in taking something from research into the clinic is often adoption and implementation by manufacturers," says Shmueli. She added that work like fastMRI was part of a wider trend of incorporating artificial intelligence into medical imaging that was extremely promising. "AI is definitely going to be more in use in the future," she says.

Read the rest here:

Facebook and NYU use artificial intelligence to make MRI scans four times faster - The Verge

Samsung AI Forum 2020: Humanity Takes Center Stage in Discussing the Future of AI – Samsung Global Newsroom

Each year, Samsung Electronics AI Forum brings together experts from all over the world to discuss the latest advancements in artificial intelligence (AI) and share ideas on the next directions for the development of these technologies.

This November 2 and 3, experts, researchers and interested viewers alike convened virtually to share the latest developments in AI research and discuss some of the most pressing and relevant issues facing the field today.

AI technologies have developed remarkably in recent years, thanks in no small part to the hard work and diverse research projects being done by academic and corporate researchers alike all around the world. But given the rapid and significant changes brought on by the recent global pandemic, attention has recently been turning to how AI can be used to help solve real-life problems, and what methods might be most effective in order to create such solutions.

The first day of the forum, organized by the Samsung Advanced Institute of Technology (SAIT), opened with a keynote speech by Dr. Kinam Kim, Vice Chairman and CEO of Device Solutions at Samsung Electronics, who acknowledged the importance of the discussions set to take place at this year's AI Forum around the past, present and future of the role of AI. Dr. Kim also affirmed Samsung Electronics' dedication to working with global researchers to develop products and services with meaningful real-world impact.

The first day of the Forum then continued with a series of fascinating invited talks given by several leading global academics and professionals. Professor Yoshua Bengio of the University of Montreal, Professor Yann LeCun of New York University and Professor Chelsea Finn of Stanford University were the first three to present, following which the Samsung AI Researcher of the Year awards were presented. After this ceremony, SAIT Fellow Professor Donhee Ham of Harvard University, Dr. Tara Sainath of Google Research and Dr. Jennifer Wortman Vaughan of Microsoft Research gave their talks.

The first day's invited talks were followed by a virtual live panel discussion, moderated by Young Sang Choi, Vice President of Samsung Electronics, and attended by Professor Bengio, Professor LeCun, Professor Finn, Dr. Sainath, Dr. Wortman Vaughan and Dr. Inyup Kang, President of Samsung Electronics' System LSI business. "It is my great pleasure to join this Forum," noted Dr. Kang. "I feel as if I am standing on the shoulders of giants."

Questions put to the panel invited the experts to discuss the ways in which computational bottlenecks can be overcome in order to take AI systems to the next level and develop them to possess the same intelligibility as the human brain. The panelists weighed the benefits of scaling neural nets as opposed to searching for new algorithms, with Dr. Kang noting that, "We have to try both. Given the scale of human synapses, I doubt that we can achieve the human level of intelligibility using just current technologies. Eventually we will get there, but we definitely need new algorithms, too."

Professor LeCun noted how AI research is not just constrained by current scaling methods. "We are missing some major pieces to being able to reach human-level intelligence, or even just animal-level intelligence," he said, adding that perhaps, in the near future, we might be able to develop machines that can at least reach the scale of an animal such as a cat. Professor Finn concurred with Professor LeCun. "We still don't even have the AI capabilities to make a bowl of cereal," she noted. "Such basic things are still beyond what our current algorithms are capable of."

Building on the topic of his invited talk, Professor Bengio added that, in order for future systems to have intelligence comparable to the way humans learn as children, a world model will need to be developed that is based on unsupervised learning. "Our models need to act like human babies in order to go after knowledge in an active way," he explained.

The panel discussion then moved on to the ways in which the community can bridge the gaps between current technologies and future technologies with human-level intelligence, with all the experts agreeing that there is still much work to be done in developing systems that mimic the way human synapses work. "A lot of current research directions are trying to address these gaps," reassured Professor Bengio.

Next, the panel shared their thoughts on how to make AI fairer given the inherent biases possessed by today's societies, with the experts debating the balance that needs to be struck between systems development reform, institutional regulation and corporate interest. Dr. Wortman Vaughan made the case for introducing a diversity of viewpoints across all parts of the system building process. "I would like to see regulation around processes for people to follow when designing machine learning systems rather than trying to make everyone meet the same outcomes."

The final question given to the panel asked for their thoughts on which field will be the next successful application area for end-to-end models. "End-to-end models changed the field of speech recognition by reducing latency and removing the need for internet connection," noted Dr. Sainath. "Thanks to this breakthrough, going forward, you're going to see applications of end-to-end models for such purposes as long meeting transcriptions. We always speak of having one model to rule them all, and this is a challenging and interesting research area that has been expanded by the possibilities of end-to-end models as we look to develop a model capable of recognizing all the languages in the world."

The second day of the AI Forum 2020 was hosted by Samsung Research, the advanced R&D hub of Samsung Electronics that leads the development of future technologies for the company's end-product business.

In his opening keynote speech, Dr. Sebastian Seung, President and Head of Samsung Research, outlined the areas in which Samsung has been accelerating its AI research with the aim of providing real-world benefits to its users, including more traditional AI fields (vision and graphics, speech and language, robotics), on-device AI and the health and wellness field.

After showcasing a range of Samsung products bolstered with AI technologies, Dr. Seung affirmed that, in order to best extend the capabilities of AI to truly help people in meaningful ways, academic researchers and corporations need to come together to find best-practice solutions.

Following Dr. Seung's speech, the second day of the Forum proceeded with a series of invited talks around the theme of Human-Centric AI by Professor Christopher Manning of Stanford University, Professor Devi Parikh of the Georgia Institute of Technology, Professor Subbarao Kambhampati of Arizona State University and Executive Vice President of Samsung Research Daniel D. Lee, Head of Samsung's AI Center in New York and Professor at Cornell Tech.

The expert talks were followed by a live panel discussion, moderated by Dr. Seung and joined by Professor Manning, Professor Parikh, Professor Kambhampati and EVP Lee. Dr. Seung kicked off the discussion with a question about a topic raised in Professor Kambhampati's speech: the potential issues that could lead to the risk of data manipulation as AI develops. "As AI technology continues to develop, it is important that we stay vigilant about the potential for manipulation and work to solve the issues of any AI system's inadvertent data manipulations," explained Professor Kambhampati.

Dr. Seung then posed a much-requested viewer question to the panel. Given that one of the most practical concerns in AI research is obtaining data, the experts were asked whether they believe that companies or academic researchers need to develop new means of handling and managing data. Acknowledging that academics often struggle to secure data, while companies face fewer data shortages but tighter restraints on how their data can be used, Professor Parikh made a case for research methods that can work with limited data, and for cooperation between academia and industry, including open research methods. "In many areas, there are big public data sets available," she noted. "Researchers outside of companies are able to access and use these. But further to this, some of the most interesting fields in AI today are the ones where we don't have much data; these represent some of the most cutting-edge problems and approaches."

The final question took the panel back to the theme of the AI Forum's second day, Human-Centered AI: the panelists were asked whether or not they believe that AI will be capable of equaling human intelligence in the next 70 years, since that is roughly how long it has taken the field of AI research to get to where it is today. EVP Lee reasoned that AI still has a way to go but that 70 years is a long time. "I am optimistic," noted EVP Lee, "but there are lots of hard problems in the way. We need to have academics and companies working on a goal like this together."

"We are currently reaching the limits of the range of problems we can solve using just lots of data," summarized Professor Manning. "Before we see AI developments like this on a large scale, an area that we should emphasize is the production of AI systems that work for regular people, not just huge corporations," he concluded.

The Samsung AI Forum 2020 ended with a warm thanks to all the esteemed experts who had taken part in the two-day Forum and a shared hope to hold next year's Forum offline. All the sessions and invited talks from the AI Forum 2020 are available to watch on the official Samsung YouTube channel.

More:

Samsung AI Forum 2020: Humanity Takes Center Stage in Discussing the Future of AI - Samsung Global Newsroom