‘Brazil needs to do an America’: Twitter slams Jair Bolsonaro for saying he will not take COVID-19 vaccine – Free Press Journal

Bolsonaro also remained skeptical about the effectiveness of wearing a mask, saying there is no concrete evidence that masks work.

He also reiterated that Brazilians will not be required to take a vaccine shot as and when one is made available.

One should also note that in October, he had said on Twitter that vaccination was needed only for his dog.

His statement comes as the Latin American country witnessed a rise in both cases and deaths in November, accompanied by an increase in hospital occupancy in large cities.

The Brazilian Ministry of Health on Thursday reported 37,614 new COVID-19 cases and 691 more deaths, raising the national totals to 6,204,220 cases and 171,460 deaths.

Brazil has the world's second-highest COVID-19 death toll, after the United States, and the third-largest caseload, next to the United States and India.

Notably, Bolsonaro, like US President Donald Trump, long refused to treat the coronavirus pandemic as a serious threat. Violating social distancing norms, Bolsonaro attended public rallies without wearing a mask and was even caught jet skiing on a day Brazil announced its highest single-day death toll.

Now, netizens have slammed Bolsonaro for his statement, pointing out that by not wearing a mask, he is putting not only his own life at risk but also the lives of those around him.

He is even being compared to Trump, who also refuses to wear a mask. One Twitter user said that "Brazil should do an America and get rid of this one."


Read the original:

'Brazil needs to do an America': Twitter slams Jair Bolsonaro for saying he will not take COVID-19 vaccine - Free Press Journal

Hacker runs virtualized Windows 10 on ARM on the Apple M1 processor – MSPoweruser

For more than 10 years, macOS users have been able to run Windows on their computers via Boot Camp and virtualization solutions like Parallels, giving them access to the large library of software and games that run best on Windows.

This changed with the introduction of Apple's new M1-powered MacBooks, for which neither Microsoft nor Apple currently offers a supported version of Windows 10.

Now, however, developer Alexander Graf has revealed that he has managed to get Windows 10 on ARM running successfully in the open-source QEMU virtualizer, using a patch that lets the ARM code execute directly on the Apple M1 processor rather than being translated to x86 and then back to ARM.

The solution gets around the lack of drivers for the new Apple hardware (normally provided by Boot Camp) while still allowing for near-native performance. Because Windows 10 on ARM includes its own x86 translation layer, you can even run 32-bit x86 Windows apps on the ARM-powered laptop, reportedly also with pretty good performance.

Graf ran the Windows ARM64 Insider Preview through Apple's Hypervisor.framework and, via a custom patch to the QEMU virtualizer, was able to achieve near-native performance by executing the guest code directly on the host CPU.
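
For readers curious what such a setup looks like in practice, the sketch below shows, in rough outline, how one might launch an ARM64 guest under QEMU with Hypervisor.framework ("hvf") acceleration on an Apple Silicon host. The disk image and firmware paths are placeholders, display and input devices are omitted, and the exact flags Graf's patched build expects may differ; treat this as an illustrative assumption, not his actual command line.

```python
# Illustrative sketch only: launches an ARM64 guest under QEMU using
# Hypervisor.framework ("hvf") acceleration on an Apple Silicon host.
# File paths and flag details are assumptions, not Graf's actual setup.
import subprocess

QEMU = "qemu-system-aarch64"    # assumes a suitably patched QEMU build on PATH
DISK = "win10-arm64.qcow2"      # placeholder Windows 10 on ARM disk image
FIRMWARE = "QEMU_EFI.fd"        # placeholder UEFI firmware for the virt machine

cmd = [
    QEMU,
    "-machine", "virt",          # generic ARM virtual machine
    "-accel", "hvf",             # execute guest code directly via Hypervisor.framework
    "-cpu", "host",              # expose the host (M1) CPU to the guest
    "-smp", "4",
    "-m", "4096",
    "-bios", FIRMWARE,
    "-drive", f"file={DISK},if=virtio",
]

subprocess.run(cmd, check=True)
```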

There is still some work to do, and companies like Parallels have already said they are working on support for the Apple M1 processor, which suggests new MacBook owners will soon have more software options. It is notable, however, that Microsoft does not make a standalone version of Windows 10 on ARM available, which may be a barrier to routine deployment of such a solution.

via MacRumors

The rest is here:

Hacker runs virtualized Windows 10 on ARM on the Apple M1 processor - MSPoweruser

Advanced Analytics: The Key to Mitigating Big Data Risks – JD Supra

Big data sets are the new normal of discovery and bring with them six sinister large-data-set challenges, as recently detailed in my colleague Nick's article. These challenges range from classics like overly broad privilege screens to newer risks in ensuring that sensitive information (such as personally identifiable information (PII) or proprietary information such as source code) does not inadvertently make its way into the hands of opposing parties or government regulators. While these challenges may seem insurmountable due to ever-increasing data volumes (and tend to keep discovery program managers and counsel up at night), there are new solutions that can help mitigate these risks and optimize workflows.

As I previously wrote, ediscovery is actually a big data challenge. Advances in AI and machine learning, when applied to ediscovery big data, can help mitigate and reduce these sinister risks by breaking down the silos of individual cases, learning from a wealth of prior case data, and then transferring these learnings to new cases. Having the capability to analyze and understand large data sets at scale combined with state-of-the-art methods provides a number of benefits, five of which I have outlined below.
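
As a purely illustrative example of transferring learnings across cases, the sketch below trains a simple text classifier on documents labeled in prior matters and applies it to flag potentially sensitive or privileged documents in a new matter. The documents, labels, and threshold here are hypothetical; production ediscovery platforms use far richer features and models.

```python
# Minimal sketch: reuse labels from prior matters to screen a new matter
# for potentially sensitive documents. All data here is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Documents reviewed (and labeled) in earlier cases: 1 = sensitive/privileged, 0 = not
prior_docs = [
    "attorney client communication regarding settlement strategy",
    "quarterly sales figures for the midwest region",
    "employee ssn and home address for payroll processing",
    "agenda for the weekly marketing sync",
]
prior_labels = [1, 0, 1, 0]

# Train once on prior-case data, then reuse the model on new matters
screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(prior_docs, prior_labels)

# Score documents from a new matter; high scores get routed to human review
new_docs = ["draft memo from outside counsel on litigation exposure"]
for doc, score in zip(new_docs, screener.predict_proba(new_docs)[:, 1]):
    print(f"{score:.2f}  {doc[:60]}")
```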

One key tip to remember: you do not need to implement all of this at once! Start by identifying a key area where you want to make improvements, determine how you can measure the current performance of the process, then apply some of these methods and measure the results. Innovation is about getting one win in order to set up the next.

[View source.]

More here:

Advanced Analytics: The Key to Mitigating Big Data Risks - JD Supra

Keeping the Software Supply Chain Secure – BankInfoSecurity.com

Application Security, Endpoint Security, Internet of Things Security

IoT devices and software applications often use a range of components, including third-party libraries and open source code. All of those pose risks if vulnerabilities are discovered.


Ensuring devices and services are secure requires keeping track of the status of those software ingredients and promptly applying patches when they become available. But that can be challenging, says Steve Springett, creator of the open source project Dependency-Track, a supply chain component analysis platform.

"Whenever you use third-party and open source software, you're ultimately using code that you didn't write yourself," Springett says. "In many cases, code can be slipped in, and you're not even aware that you were using it in the first place. Even when you include your first-level dependencies, those dependencies also have dependencies in many cases."

Dependency-Track, which is part of the Open Web Application Security Project (OWASP), is a free application that helps identify out-of-date and risky software components by using a software bill of materials, which describes the exact software components that an application contains.

Springett also created CycloneDX, a vendor-agnostic specification for creating a software bill of materials.
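
To make the idea of a software bill of materials concrete, here is a minimal, hand-rolled sketch of a CycloneDX-style BOM in JSON, built in Python. The component list is invented for illustration, and a BOM produced by real CycloneDX tooling carries considerably more metadata.

```python
# Minimal sketch of a CycloneDX-style software bill of materials (SBOM).
# The components listed are hypothetical examples, not a real inventory.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "jackson-databind",
            "version": "2.9.10",
            "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.9.10",
        },
        {
            "type": "library",
            "name": "lodash",
            "version": "4.17.20",
            "purl": "pkg:npm/lodash@4.17.20",
        },
    ],
}

# A platform like Dependency-Track ingests a BOM such as this and continuously
# checks each listed component against known-vulnerability and version data.
print(json.dumps(sbom, indent=2))
```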

In this video interview with Information Security Media Group, Springett discusses:

Springett, creator of Dependency-Track, is a senior security architect with ServiceNow in Chicago.

Read the original:
Keeping the Software Supply Chain Secure - BankInfoSecurity.com

How Artificial Intelligence Will Revolutionize the Way Video Games are Developed – insideBIGDATA

AI has been a part of the gaming industry for quite some time now. It has been featured in genres like strategy games, shooting games, and even racing games. The whole idea of using AI in gaming is to give the player a realistic experience, even on a virtual platform. However, with recent advancements in AI, the gaming industry and game developers are coming up with more inventive ways of using AI in games. This article looks at how artificial intelligence is making a drastic change in the gaming industry.

What do Experts Have to Say About the Change?

Experts have done a lot of research to see where and how AI can take gaming to a new level. Based on those studies and on market research, they expect the gaming industry to change drastically in the next few years.

Moreover, market researchers have seen a drastic change in the way people look at games. Developers now face a bigger challenge to keep up with extreme and fast-paced changes. Every year, new research is conducted to identify the trends, market value, key players, and more.

What do Market Studies and Research Reveal?

As of 2019, the gaming industry was worth close to $150 billion. With the introduction of technologies like artificial intelligence, augmented reality, and virtual reality, that figure is set to cross $250 billion by 2021-2022.

Artificial intelligence will be a stepping stone and an equally important component in the evolution of the gaming industry. The key players at the top on this front include Tencent, Sony, EA, Google, Playtika, Nintendo, and others. Moreover, the market will also see the rise of new players that specialize purely in developing games with advanced AI environments. Some of the main elements that would be included are:

A Look at How AI was Introduced in the Gaming Industry

The term artificial intelligence is broad and is not restricted to any particular industry. Even in the gaming sector, AI was introduced a long time back, although at that time no one knew it would become so popular.

AI found its way into games almost from its inception. The Nim-playing machine of 1951 is one of the earliest examples. Although artificial intelligence was not as advanced then as it is now, the game was still way ahead of its time.

Then, in the 1970s, came the era of arcade gaming, which also featured AI elements in various games; Speed Racing, Pursuit, and Quack were among the most popular. This was also the era when artificial intelligence gained popularity. In the 1980s, Pac-Man and maze-based games took things to a different level.

Using Artificial Intelligence in Game Development and Programming

So how does artificial intelligence actually make a difference in gaming?

The answer is simple: all the data is stored in an AI environment, and each character uses this environment to transform accordingly. You can also create a virtual environment from the stored information, which includes various scenarios, motives, actions, and so on, making the characters more realistic and natural. So, how is artificial intelligence changing the gaming industry? Read on to find out.

With the help of AI, game developers are coming up with new techniques like reinforcement learning and pattern recognition. These techniques help game characters evolve by learning from their own actions. A player will notice a vast difference when they play a game in an AI environment.
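
As an illustration of what learning from one's own actions can mean in practice, here is a toy reinforcement learning sketch: a tabular Q-learning loop in which a hypothetical NPC learns, through trial and error, which of two actions pays off in each of a handful of game states. Real game AI is vastly more elaborate, but the feedback loop is the same in spirit.

```python
# Toy sketch of reinforcement learning for a game character:
# tabular Q-learning over a tiny, made-up state/action space.
import random

N_STATES, N_ACTIONS = 5, 2           # hypothetical game states and NPC actions
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Invented reward table: how good each action is in each state
REWARDS = [[1, 0], [0, 1], [1, 0], [0, 1], [1, 1]]

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for episode in range(2000):
    state = random.randrange(N_STATES)
    # Epsilon-greedy: mostly exploit what the NPC has learned, sometimes explore
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    reward = REWARDS[state][action]
    next_state = random.randrange(N_STATES)
    # Standard Q-learning update
    q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][action])

print("Learned preferred action per state:",
      [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)])
```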

With AI, games will become more interesting. A player can speed up or slow down the game to suit their needs. You will hear characters talking just like humans do. The overall accessibility, intelligence, and visual appearance will make a significant impact on the player. Live examples of these techniques can already be seen in games like The Sims and FEAR.

Over the past ten years, we have seen a drastic change in the gaming industry. The sector's revolution has already started with the introduction of AI. Compared to earlier methods of development, it is easier to develop games in an AI environment. Today, it is common to find games with 3D effects and other such visualization techniques. AI is taking the gaming industry into a new era and to new heights. Very soon, it will not just be about good graphics, but also about interpreting and responding to the player's actions.

Games like FIFA give you a real-world feel when you play them. The graphics make the game come to life. Now imagine having this experience taken a step higher with the help of AI. The experience will be at a different level.

Similarly, an action game will feel real with the help of artificial intelligence. In short, the player's gaming experience will be very different from what it presently is. Moreover, the blend of AI and virtual reality will make a deadly combination.

Players do not feel that they are playing a game; instead, they feel that things are happening in real life. In today's times, game developers are paying attention to minor details. It is no longer just about visual appearance or graphics.

Game developers have to develop their skills regularly. They are always adapting to new changes and techniques while developing games. This, in turn, also helps them enhance their creativity.

With the help of artificial intelligence, developers can take their skills to a whole new level. They benefit from using cutting-edge technology to bring unique aspects and methods to game development. Even traditional game developers are using AI to make a difference in their games. They may not work with hi-tech environments; nevertheless, they develop games with various AI elements.

Today, the world is more in tune with mobile games. The convenience of playing on the go, or while waiting for a meeting to start, makes them all the more in demand. With the help of AI, mobile gamers will have a better experience when playing their favorite game.

Even the introduction of various tools on this front will contribute to the overall experience of playing games on mobile. Various changes happen automatically based on how the player interacts with the game.

When AI is used in a gaming environment, it brings in something new and different. The days of traditional gaming are gone. Now, game lovers want a lot more from their games than the norm.

Keeping this in mind, game developers are now coming up with programs and code that deliver exactly that. This code requires no human intervention; it creates virtual worlds automatically, with many complex systems designed to generate the results.

Such systems can produce remarkable outcomes. One example of a game on this front is Red Dead Redemption 2, in which players have the flexibility of interacting in myriad ways with non-playable characters, and the world tracks details such as bloodstains on a hat, or whether a character is wearing one.
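
A toy sketch of the kind of world built without human intervention described above: a simple cellular-automaton pass that turns random noise into a cave-like 2D map. Actual open-world games use far richer procedural systems, but the principle of generating content from rules rather than hand-placing it is the same.

```python
# Toy procedural generation sketch: random noise smoothed by a
# cellular-automaton rule into a cave-like map ('#' = wall, '.' = floor).
import random

W, H, FILL = 40, 12, 0.45
grid = [[random.random() < FILL for _ in range(W)] for _ in range(H)]

def wall_neighbours(g, x, y):
    # Count walls in the 3x3 neighbourhood; off-map cells count as walls.
    return sum(
        1
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dx, dy) != (0, 0)
        and (not (0 <= x + dx < W and 0 <= y + dy < H) or g[y + dy][x + dx])
    )

def step(g):
    # Classic cave rule: a cell ends up a wall if it has many wall neighbours.
    new = [[False] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            n = wall_neighbours(g, x, y)
            new[y][x] = n >= 5 or (g[y][x] and n >= 4)
    return new

for _ in range(4):  # a few smoothing passes
    grid = step(grid)

for row in grid:
    print("".join("#" if cell else "." for cell in row))
```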

In the gaming industry, a lot of time and money is invested in developing a game. Moreover, the question of whether or not the game will be accepted is always in the air. Even before a game hits the market, it undergoes various checks until the development team is sure that it is ready.

The entire process can take months or even years, depending on the kind of game. With the help of AI, the time taken to develop a game is reduced drastically. This also saves a lot of the resources needed to create the game.

Even the cost of labor drops drastically. This means that game development companies can hire better, more technically advanced game developers to get the job done. Considering that the demand for game developers is so high, the market gets competitive.

Players want their games to take them to new heights. The introduction of AI in the gaming industry has brought in this change. Gamers can experience a lot more with the games of today than what was developed earlier.

Moreover, the games are a lot more exciting and fun to play. AI has given players something to look forward to. Gamers get the benefit of taking their game to a whole new level. It also takes games to a different dimension.

Furthermore, with an AI platform, gaming companies can create better playing environments. For instance, motion simulation can come in handy to give each character different movements. It can also help generate additional levels and maps, all without human intervention.

The Benefits of Using Artificial Intelligence in the Gaming Industry

In the modern age, you will often find reviews of various games. When you compare the reviews of traditional games with those of games developed in an AI environment, you will notice a clear difference between the two.

A review of an AI-based game will tell you a lot about the game in detail. When a game is developed in the right AI environment, you will find that the reviews do not give away the real game. When it comes to a bad review, however, every mistake will be pointed out. This is why it is imperative for game developers to do an excellent job while developing a game. But what are the benefits of using AI for game development?

A Final Thought

The gaming industry is changing at a drastic pace. Moreover, the demand for new and improved games is increasing every day. Today's gamers do not want to play a traditional game; they are looking for a lot more than that.

AI has brought change to the gaming industry ever since its inception. Over the years, we have seen drastic changes in the way games are developed. In today's technologically advanced world, games have become more challenging and exciting by providing human-like experiences.

About the Author

Saurabh Hooda is co-founder of Hackr.io. He has worked globally for telecom and finance giants in various capacities. After working for a decade at Infosys and Sapient, he started his first startup, Lenro, to solve the hyperlocal book-sharing problem. He is interested in product, marketing, and analytics.

Sign up for the free insideBIGDATA newsletter.

View post:
How Artificial Intelligence Will Revolutionize the Way Video Games are Developed - insideBIGDATA

Organized Crime Has a New Tool in Its Belts – Artificial Intelligence – OCCRP

As new technologies offer a world of opportunities and benefits in many sectors, so too do they offer new avenues for organized crime. It was true at the advent of the internet, and it's true for the growing field of artificial intelligence and machine learning, according to a new joint report by Europol and the United Nations Interregional Crime and Justice Research Institute.

In the past, social engineering scams had to be somewhat tailored to specific targets or audiences; through artificial intelligence they can be deployed en masse. (Source: Pixabay.com)

At its simplest, artificial intelligence refers to human-designed systems that, within a defined set of rules, can absorb data, recognize patterns, and duplicate or alter them. In effect, they learn so that they can automate more and more complex tasks which in the past required human input.

However, the promise of more efficient automation and autonomy is inseparable from the different schemes that malicious actors are capable of, the document warned. Criminals and organized crime groups (OCGs) have been swiftly integrating new technologies into their modi operandi.

AI is particularly useful in the increasingly digitised world of organized crime that has unfolded due to the novel coronavirus pandemic.

AI-supported or AI-enhanced cyberattack techniques that have been studied are proof that criminals are already taking steps to broaden the use of AI, the report said.

One such example is procedurally generated phishing emails designed to bypass spam filters.

Despite the proliferation of new and powerful technologies, a cybercriminal's greatest asset is still his mark's propensity for human error, and the most common types of cyber scams are still based around so-called social engineering, i.e., taking advantage of empathy, trust, or naivete.

While in the past social engineering scams had to be somewhat tailored to specific targets or audiences, through artificial intelligence they can be deployed en masse and use machine learning to tailor themselves to new audiences.

Unfortunately, criminals already have enough experience and sample texts to build their operations on, the report said. An innovative scammer can introduce AI systems to automate and speed up detection of whether victims are falling in or out of the scam. This allows them to focus only on those potential victims who are easy to deceive. Whatever false pretense a scammer chooses to persuade the target to participate in, an ML algorithm would be able to anticipate a target's most common replies to the chosen pretense, the report explained.

Most terrifying of all, however, is the concept of so-called deepfakes. With deepfakes, machine learning can be used, from little source material, to generate incredibly realistic human faces or voices and superimpose them onto any video.

The technology has been lauded as a powerful weapon in today's disinformation wars, whereby one can no longer rely on what one sees or hears, the report said. One side effect of the use of deepfakes for disinformation is the diminished trust of citizens in authority and information media.

Flooded with increasingly AI-generated spam and fake news that build on bigoted text, fake videos, and a plethora of conspiracy theories, people might feel that a considerable amount of information, including videos, simply cannot be trusted. The result is a phenomenon termed "information apocalypse" or "reality apathy."

One of the most infamous uses of deepfake technology has been to superimpose the faces of unsuspecting women onto pornographic videos.

Read the rest here:
Organized Crime Has a New Tool in Its Belts - Artificial Intelligence - OCCRP

How Artificial Intelligence overcomes major obstacles standing in the way of automating complex visual inspection tasks – Quality Magazine


The rest is here:
How Artificial Intelligence overcomes major obstacles standing in the way of automating complex visual inspection tasks - Quality Magazine

Top 3 Emerging Technologies in Artificial Intelligence in the 2020s – Analytics Insight

Artificial intelligence, popularly known as AI, has been the main driver of disruption in today's tech world. While its applications such as machine learning, neural networks, and deep learning have already earned huge recognition through their wide-ranging use cases, AI is still at a nascent stage. New developments are continually taking place in the discipline, which could soon transform the AI industry and open up new possibilities. Some of today's AI technologies may become obsolete in the next ten years, while others may pave the way to even better versions of themselves. Let us have a look at some of the promising AI technologies of tomorrow.

Recent advances in AI have allowed many companies to develop algorithms and tools to generate artificial 3D and 2D images automatically. These algorithms essentially form generative AI, which enables machines to use things like text, audio files, and images to create content. The MIT Technology Review described generative AI as one of the most promising advances in the world of AI in the past decade. It is poised to power the next generation of apps for auto-programming, content development, visual arts, and other creative, design, and engineering activities. For instance, NVIDIA has developed software that can generate new photorealistic faces starting from a few pictures of real people. A generative AI-enabled campaign by Malaria Must Die featured David Beckham speaking in nine different languages to generate awareness for the cause.

It can also be used to provide better customer service; facilitate and speed up check-ins; enable performance monitoring, seamless connectivity, and quality control; and help find new networking opportunities. It also helps in film preservation and colorization.

Generative AI can also help in healthcare by rendering prosthetic limbs, organic molecules, and other items from scratch when actuated through 3D printing, CRISPR, and other technologies. It can also enable earlier identification of potential malignancy and more effective treatment plans. For instance, in the case of diabetic retinopathy, generative AI not only offers a pattern-based hypothesis but can also construe the scan and generate content, which can help inform the physician's next steps. Even IBM is using this technology in its research on antimicrobial peptides (AMPs) to find drugs for COVID-19.

Generative AI also leverages neural networks through generative adversarial networks (GANs). GANs share similar functionality and applications with generative AI, but they are also notorious for being misused to create deepfakes for cybercrime. GANs are also used in research areas such as projecting astronomical simulations, interpreting large data sets, and much more.
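
For readers who want to see the adversarial idea in code, below is a deliberately tiny GAN sketch in PyTorch: a generator learns to mimic a one-dimensional Gaussian while a discriminator tries to tell real samples from generated ones. It illustrates only the generator-versus-discriminator training loop, not the large image models mentioned above; the architecture and hyperparameters are arbitrary illustrative choices.

```python
# Minimal GAN sketch (PyTorch): generator vs. discriminator on 1-D Gaussian data.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Train discriminator: push real samples toward 1, fakes toward 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train generator: try to fool the discriminator into outputting 1 for fakes
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean={samples.mean():.2f} std={samples.std():.2f} (real data: 3.0, 0.5)")
```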

According to Google's research paper titled Communication-Efficient Learning of Deep Networks from Decentralized Data, federated learning is defined as a learning technique that allows users to "collectively reap the benefits of shared models trained from [this] rich data, without the need to centrally store it." In simpler technical parlance, it distributes the machine learning process over to the edge.

Data is an essential key to training machine learning models. The conventional process involves setting up servers where models are trained on data via a cloud computing platform. Federated learning instead brings the machine learning model to the data source (or edge nodes) rather than bringing the data to the model. It links together multiple computational devices into a decentralized system in which the individual devices that collect data assist in training the model. This enables devices to collaboratively learn a shared prediction model while keeping all the training data on the individual device itself, cutting out the need to move large amounts of data to a central server for training purposes. Thus, it addresses data privacy concerns.
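
A minimal sketch of the federated averaging idea described above: each simulated device trains a local copy of a simple linear model on its own data, and only the model weights (never the raw data) are sent back and averaged into the shared model. The data, model, and schedule here are toy assumptions; real deployments add secure aggregation, client sampling, and far larger models.

```python
# Minimal federated averaging (FedAvg) sketch: local training on-device,
# only weight updates are shared and averaged. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
TRUE_W, TRUE_B = 2.0, -1.0

# Each "device" holds its own private data; the raw data never leaves it.
devices = []
for _ in range(5):
    x = rng.normal(size=100)
    y = TRUE_W * x + TRUE_B + rng.normal(scale=0.1, size=100)
    devices.append((x, y))

def local_train(w, b, x, y, lr=0.05, epochs=5):
    # Plain gradient descent on mean squared error, starting from the global model.
    for _ in range(epochs):
        err = w * x + b - y
        w -= lr * np.mean(err * x)
        b -= lr * np.mean(err)
    return w, b

w_global, b_global = 0.0, 0.0
for round_ in range(20):
    updates = [local_train(w_global, b_global, x, y) for x, y in devices]
    # The server averages the returned weights; only parameters move over the network.
    w_global = np.mean([w for w, _ in updates])
    b_global = np.mean([b for _, b in updates])

print(f"learned w={w_global:.2f}, b={b_global:.2f} (true 2.00, -1.00)")
```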

Federated learning is used to improve Siri's voice recognition capabilities. Google initially employed federated learning to improve word recommendation in Google's Android keyboard (Gboard) without uploading the user's text data to the cloud. According to Google's blog, "when Gboard shows a suggested query, your phone locally stores information about the current context and whether you clicked the suggestion. Federated Learning processes that history on-device to suggest improvements to the next iteration of Gboard's query suggestion model."

Medical organizations generally are unable to share data due to privacy restrictions. Federated learning can help address this concern through decentralization by removing the need to pool data into a single location and training in multiple iterations at different sites.

Intel recently teamed up with the University of Pennsylvania Medical School to deploy federated learning across 29 international healthcare and research institutions to identify brain tumors. The team published their findings on federated learning and its applications in healthcare in Nature and presented them at the Supercomputing 2020 event last week. According to the published paper, the federated model achieved 99% of the accuracy of a traditionally trained model in identifying brain tumors.

Intel announced that this breakthrough could help in earlier detection and better outcomes for the more than 80,000 people diagnosed with a brain tumor each year.

AI has made rapid progress in analyzing big data by leveraging deep neural networks (DNNs). However, the key disadvantage of any neural network is that it is computationally and memory intensive, which makes it difficult to deploy on embedded systems with limited hardware resources. Further, as DNNs grow larger to carry out more complex computation, their storage needs also rise. To address these issues, researchers have come up with an AI technique called neural network compression.

Generally, a neural network contains far more weights, represented at higher precision, than are required for the specific task it is trained to perform. If we wish to bring real-time intelligence to edge applications, neural network models must be smaller. For compressing the models, researchers rely on the following methods: parameter pruning and sharing, quantization, low-rank factorization, transferred or compact convolutional filters, and knowledge distillation.

Pruning identifies and removes unnecessary weights, connections, or parameters, leaving the network with only the important ones. Quantization compresses the model by reducing the number of bits that represent each connection. Low-rank factorization leverages matrix decomposition to estimate the informative parameters of the DNN. Compact convolutional filters help filter out unnecessary weights or parameter space and retain the important ones required to carry out convolution, saving storage space. And knowledge distillation aids in training a more compact neural network to mimic a larger network's output.
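
Two of these methods are easy to demonstrate with PyTorch's built-in utilities. The sketch below applies magnitude-based pruning to a small linear layer and dynamic int8 quantization to a toy model; the model itself is an arbitrary example, and this is an illustration of the APIs rather than a tuned compression pipeline.

```python
# Sketch of two common compression methods using PyTorch utilities:
# magnitude pruning (zeroing small weights) and dynamic int8 quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# 1) Pruning: remove the 50% of weights with the smallest magnitude in the first layer.
layer = model[0]
prune.l1_unstructured(layer, name="weight", amount=0.5)
sparsity = (layer.weight == 0).float().mean().item()
print(f"first-layer sparsity after pruning: {sparsity:.0%}")
prune.remove(layer, "weight")  # make the pruning permanent (fold the mask into the weight)

# 2) Dynamic quantization: store Linear weights as int8, compute activations in float.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print("dense output    :", model(x)[0, :3])
print("quantized output:", quantized(x)[0, :3])
```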

Recently, NVIDIA developed a new type of video compression technology that replaces the traditional video codec with a neural network to drastically reduce video bandwidth. Dubbed NVIDIA Maxine, this platform uses AI to improve the quality and experience of video-conferencing applications in real time. NVIDIA claims Maxine can reduce the bandwidth load down to one-tenth of H.264 using AI video compression. Further, it is cloud-based, which makes the solution easier to deploy for everyone.

Follow this link:
Top 3 Emerging Technologies in Artificial Intelligence in the 2020s - Analytics Insight

Pentagon Teams with Howard University to Steer Artificial Intelligence Center of Excellence – Nextgov

The Defense Department, Army and Howard University linked up to collectively push forward artificial intelligence and machine learning-rooted research, technologies and applications through a recently unveiled center of excellence.

The work it will underpin will shape the future, according to an announcement Monday from the Army Research Laboratory, and the $7.5 million center also marks a move by the Pentagon to help expand its pipeline for future personnel.

"Diversity of science and diversity of the future [science and technology] talent base go hand-in-hand in this new and exciting partnership," said Dr. Brian Sadler, Army senior research scientist for intelligent systems. Tapped to manage the partnership, Sadler added that Howard University is "an intellectual center for the nation."

Encompassing 13 schools and colleges, the institution is a private, historically Black research university that was founded in 1867. Fulbright recipients, Rhodes scholars, and other notable experts were educated at Howard, which also produces more on-campus African-American Ph.D. recipients than any other university in America, the release noted. In early 2020, the Army's Combat Capabilities Development Command partnered with the university to support science, technology, engineering, and mathematics (STEM) educational assistance and advancement among underrepresented groups.

Computer science professor Danda Rawat, who also serves as director of Howard's Data Science & Cybersecurity Center, will lead the CoE, and the program's execution will be managed by the Army Research Laboratory, or ARL.

"This center of excellence is a big win for the Army and [Defense Department] on many fronts," Sadler said. "The research is directly aligned with Army priorities and will address pressing problems in both developing and applying AI tools and techniques in several key applications."

A kickoff meeting was set for mid-November to jumpstart the research and work. ARL's release said the effort will explore vital civilian applications and multi-domain military operations spanning three specific areas of focus: key AI applications for defense, the technological foundation for trustworthy AI technologies, and infrastructure for AI research and development.

U.S. graduate students and early-career research faculty with expertise in STEM fields will gain fellowship and scholarship opportunities through the laboratory, and the government and academic partners also intend to collaborate on research and publications, mentoring, internships, workshops and seminars. Educational training and research exchange visits at both the lab and school will also be offered.

An ARL spokesperson told Nextgov Tuesday that officials involved expect to share program updates after the new year.

Originally posted here:
Pentagon Teams with Howard University to Steer Artificial Intelligence Center of Excellence - Nextgov

Artificial Intelligence Is Now Smart Enough to Know When It Can’t Be Trusted – ScienceAlert

How might The Terminator have played out if Skynet had decided it probably wasn't responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they're untrustworthy.

These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of factors in balance with each other, spotting patterns in masses of data that humans don't have the capacity to analyse.

While Skynet might still be some way off, AI is already making decisions in fields that affect human lives like autonomous driving and medical diagnosis, and that means it's vital that they're as accurate as possible. To help towards this goal, this newly created neural network system can generate its confidence level as well as its predictions.

"We need the ability to not only have high-performance models, but also to understand when we cannot trust those models," says computer scientist Alexander Aminifrom the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

This self-awareness of trustworthiness has been given the name Deep Evidential Regression, and it bases its scoring on the quality of the available data it has to work with: the more accurate and comprehensive the training data, the more likely it is that future predictions are going to work out.

The research team compares it to a self-driving car having different levels of certainty about whether to proceed through a junction or whether to wait, just in case, if the neural network is less confident in its predictions. The confidence rating even includes tips for getting the rating higher (by tweaking the network or the input data, for instance).

While similar safeguards have been built into neural networks before, what sets this one apart is the speed at which it works, without excessive computing demands: it can be completed in one run through the network rather than several, with a confidence level output at the same time as a decision.
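
The sketch below illustrates, in simplified form, how a single forward pass can yield both a prediction and uncertainty estimates in the evidential regression setting: the network outputs four evidential parameters per input, from which the point prediction, aleatoric (data) uncertainty, and epistemic (model) uncertainty follow in closed form. The architecture and constants are illustrative assumptions, the evidential training loss is omitted, and this is not the authors' released code.

```python
# Simplified sketch of evidential regression: one forward pass yields a
# prediction plus aleatoric and epistemic uncertainty. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, x):
        gamma, raw_nu, raw_alpha, raw_beta = self.net(x).chunk(4, dim=-1)
        nu = F.softplus(raw_nu)              # > 0
        alpha = F.softplus(raw_alpha) + 1.0  # > 1 so the variances below stay finite
        beta = F.softplus(raw_beta)          # > 0
        return gamma, nu, alpha, beta

model = EvidentialHead(in_dim=8)
x = torch.randn(5, 8)
gamma, nu, alpha, beta = model(x)

prediction = gamma                      # point estimate of the target
aleatoric = beta / (alpha - 1)          # expected noise in the data itself
epistemic = beta / (nu * (alpha - 1))   # uncertainty about the model's own estimate

print(prediction.squeeze(), aleatoric.squeeze(), epistemic.squeeze())
```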

"This idea is important and applicable broadly," says computer scientist Daniela Rus. "It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model."

The researchers tested their new system by getting it to judge depths in different parts of an image, much like a self-driving car might judge distance. The network compared well to existing setups, while also estimating its own uncertainty: the times it was least certain were indeed the times it got the depths wrong.

As an added bonus, the network was able to flag times when it encountered images outside of its usual remit (i.e., very different from the data it had been trained on), which in a medical situation could mean getting a doctor to take a second look.

Even if a neural network is right 99 percent of the time, that missing 1 percent can have serious consequences, depending on the scenario. The researchers say they're confident that their new, streamlined trust test can help improve safety in real time, although the work has not yet been peer-reviewed.

"We're starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences," says Amini.

"Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision."

The research is being presented at the NeurIPS conference in December, and an online paper is available.

More here:
Artificial Intelligence Is Now Smart Enough to Know When It Can't Be Trusted - ScienceAlert