Latest Study explores the Artificial Intelligence and Machine Learning Market Witness Highest Growth in near future – News by Decresearch

The 'Artificial Intelligence and Machine Learning market' research report now available with Market Study Report, LLC, is a compilation of pivotal insights pertaining to market size, competitive spectrum, geographical outlook, contender share, and consumption trends of this industry. The report also highlights the key drivers and challenges influencing the revenue graph of this vertical along with strategies adopted by distinguished players to enhance their footprints in the Artificial Intelligence and Machine Learning market.

The latest report on the Artificial Intelligence and Machine Learning market undertakes an elaborate assessment of this business space, covering revenue potential and growth rate. It focuses on the market's growth history and major developments in every part of the world. The research has been conducted and documented in a way that assists businesses in making sound decisions.

Request a sample Report of Artificial Intelligence and Machine Learning Market at: https://www.marketstudyreport.com/request-a-sample/2684563?utm_source=decresearch.com&utm_medium=AG

Not only does this report emphasize the key triggers of growth, but also the opportunities that will play a significant role in augmenting industry gains over the analysis period. Besides this, the research document offers a clear understanding of the challenges faced by businesses operating in this industry.

Furthermore, the report gives insights into the effects of the COVID-19 pandemic on global as well as regional markets and points out methods that can be useful in addressing the situation created by the pandemic.

Key Highlights of the Table of Contents:

Product Spectrum

Application scope

Regional Overview

Ask for Discount on Artificial Intelligence and Machine Learning Market Report at: https://www.marketstudyreport.com/check-for-discount/2684563?utm_source=decresearch.com&utm_medium=AG

Competitive terrain:

In a nutshell, the research report on the Artificial Intelligence and Machine Learning market offers a holistic view of the pivotal growth indicators and propellers shaping the future of this business sphere. Other details, such as sales channels, the supply chain, data regarding distributors, downstream buyers, raw materials, and upstream suppliers, are also discussed in the report.

Table of Contents:

For More Details On this Report: https://www.marketstudyreport.com/reports/covid-19-impact-on-global-artificial-intelligence-and-machine-learning-market-size-status-and-forecast-2020-2026

Some of the Major Highlights of TOC covers:

Executive Summary

Manufacturing Cost Structure Analysis

Development and Manufacturing Plants Analysis of Artificial Intelligence and Machine Learning

Key Figures of Major Manufacturers

Related Reports:

2. Global Drone Identification System Market Size, Status and Forecast 2020-2026

The Drone Identification System Market Report is a valuable source of insight for business strategists. It provides an industry overview with growth analysis and historical and futuristic cost, revenue, demand, and supply data (as applicable). The research analysts also provide a description of the value chain and its distributor analysis.

Read More: https://www.marketstudyreport.com/reports/global-drone-identification-system-market-size-status-and-forecast-2020-2026

Read More Reports On: https://www.marketwatch.com/press-release/Smart-Refrigerator-Market-Size-Outlook-2025-Top-Companies-Trends-Growth-Factors-Details-by-Regions-Types-and-Applications-2020-11-30

Contact Us: Corporate Sales, Market Study Report LLC
Phone: 1-302-273-0910
Toll Free: 1-866-764-2150
Email: [emailprotected]


How the Food and Beverage Industry is Affected by Machine Learning and AI – IoT For All

In general, when thinking about the food industry, we are likely to think about customer service and takeaway gig-economy services. More recently, the COVID-19 pandemic and how it has made or broken food businesses have been at the forefront. Perhaps one of the last things to come to mind when discussing the food industry is modern technology, especially artificial intelligence and machine learning. However, these technologies have a massive impact on the food and drink industry, and today we're going to explore how.

Whether you're looking at the food side or the beverage side of the industry, every aspect of the process is impacted by machine learning or AI. Hygiene is a vital part of the food industry process, specifically when minimizing cross-contamination and maintaining high standards during a pandemic.

In the past, these tasks were tedious, time- and resource-intensive, and potentially expensive if a mistake was made or overlooked. In large manufacturing plants, complex machines would need to be disassembled and reassembled to be cleaned properly, with large volumes of cleaning fluid pumped through them.

However, with modern technology, this is no longer the case.

Using a technology known as SOCIP, or self-optimising clean-in-place, machines can use powerful ultrasonic sensors and fluorescence optical imaging to track food remains and microbial debris on equipment, meaning machines are cleaned only when they need to be, and only in the parts that need cleaning. While the technology is new and overcleaning remains a problem for now, it is still projected to save the UK food industry alone around 100 million pounds a year.

Of course, waste is a highly debated and criticized aspect of the food and drink industry. The foodservice industry in the UK alone loses around 2.4 billion pounds in wasted food, so it's only natural that technology is being used to recoup this money.

Throughout the worlds supply chains, AI is being used to track every single stage of the manufacturing and supply chain process, such as tracking prices, managing inventory stock levels, and even countries of origin.

Existing solutions, such as Symphony Retail AI, use this information to accurately track transportation costs, pricing, and inventory levels to estimate how much food is needed and where, minimizing the waste produced.

No matter where you go in the world, food safety standards are always important to follow, and regulations seem to be becoming stricter all the time. In the US, the Food Safety Modernization Act ensures this happens, especially as COVID-19 has made countries more aware of how easily food can become contaminated.

Fortunately, robots that use AI and machine learning can handle and process food, basically eliminating the chance that contamination takes place through touch. Robots and machinery cannot transmit diseases in the way humans can, thus minimizing the risk of contamination becoming a problem.

Even in food testing facilities, robotic solutions such as next-generation sequencing, a DNA testing solution for capturing food data, and electronic noses, machines that test and record the odors of food, are being used in place of humans for more accurate results. At the time of writing, it's estimated that around 30% of the food industry works with AI and machine learning in this way, although this number is set to grow over the coming years.

There's no doubt that food production uses enormous amounts of water and other resources, especially in the meat and livestock industries. This is extremely unsustainable for the planet and very expensive for producers. To help curb costs and become more sustainable, AI is being used to manage power and water consumption as precisely as possible.

This creates immediate benefits for production costs and profit margins in all areas of the food and drink sector. When you add the ability to manage light sources and nutrients for plants and ingredients, essentially introducing a smart way to grow food, you start to see better food, more sustainable production practices, and more profit and savings at each stage of the food chain.


JEE toppers opt for Computer Science over Artificial Intelligence, here's why – The Indian Express

Written by Shyna Kalra | New Delhi | Updated: November 30, 2020 9:09:02 pm

"Dated content, faculty shortage, and limited capacity to match the current and incoming student population are some of the critical constraints the Indian higher education faces," said Raghav Gupta, Managing Director, India and APAC, Coursera. (Image Designed by Gargi Singh)

Though there is a lot of buzz around Artificial Intelligence (AI), the traditional computer science engineering (CSE) course seems to be the first choice of JEE toppers. A few Indian institutes offer an on-campus course in AI, including at the undergraduate level, but not many students opt for it. This year too, most toppers, including AIR 2, opted for computer science. Chirag Falor, who secured the top rank, went to MIT in the US.

Indraprastha Institute of Information Technology (IIIT) Delhi, one of the few institutes that offer a full-time BTech course in AI, said that it has seen high-scoring students take admission to this course, but the top rankers still chose the traditional CSE. Ashutosh Brahma, assistant manager (academics), IIIT-Delhi, told indianexpress.com, "AI is an ongoing trend. Students are opting for AI at both the master's and BTech level, though there is more traction in MTech than at the undergraduate level. This is because, traditionally in the Indian education set-up, students learn about a broad domain at the UG level and go into specialisation at PG. Likewise, students tend to study CSE at UG and AI at the PG level."

He, however, believes the trend is changing. "AI is expected to follow the graph of research. Just as research education was earlier introduced at the master's and PhD levels, and now even undergraduate students are encouraged to take up research, so too, with AI having multiple facets and its applications more evident, it will become more and more popular in UG courses. We are slowly seeing the trend," he said.

IIT-Delhi, one of the highest-ranking Indian institutes, has introduced a school of AI, though its courses are offered at the postgraduate level. Prof Mausam, Jai Gupta Chair at the Department of Computer Science and Engineering at IIT-Delhi and founding head of the institute's School of Artificial Intelligence, said the lack of UG courses in AI stems from a debate among academicians over whether AI is a multidisciplinary field or a field of its own. Some believe that because AI brings philosophical issues such as cognitive science into the study, its core fundamentals differ slightly from computer science and it should be taught as a separate field. Others believe that even though AI is multidisciplinary, its fundamentals are based in CSE, math, and related fields. Those who take the latter approach tend to offer courses at the master's level and above.

For institutes like IITs, IIMs, NITs, IIITs, and centrally-funded institutes, online platforms come in handy for offering such courses. Many cite the lack of trained faculty and curated courses as one of the main reasons the number of courses falls far short of demand.

"Dated content, faculty shortage, and limited capacity to match the current and incoming student population are some of the critical constraints Indian higher education faces," said Raghav Gupta, Managing Director, India and APAC, Coursera. The ed-tech platform has collaborated with over 3,700 universities and serves more than 2.4 million students enrolled in 21.4 million courses under the Coursera for Campus initiative. Most of these collaborations are in the emerging-technologies field.


Academicians believe that ed-techs can evolve faster than traditional institutes and can offer more courses in emerging domains. "The reason is that, in general, academia is a little risk-averse. We invest time in finding the relevant material and contextual case studies. We need to learn to evolve at the pace of the industry. And it needs to happen systematically," said Janat Shah, director, IIM-Udaipur. The institute, like several others, has a tie-up with the online platform Coursera to offer courses in emerging domains.

"The challenge, however, is that the emerging digital fields are changing at a much faster pace than we anticipated, and institutes need to be responsive. At IIM-U, we are also using ed-tech and MOOCs, especially for such fields. Institutions will have to develop the competencies to leverage ed-tech for the short and medium term. Ed-tech can be leveraged systematically to address this problem," he said, adding that tier-II and tier-III cities can benefit even more from these platforms.



How Artificial Intelligence Will Revolutionize the Way Video Games are Developed – insideBIGDATA

AI has been a part of the gaming industry for quite some time now, featuring in genres like strategy, shooting, and even racing games. The whole idea of using AI in gaming is to give the player a realistic experience, even on a virtual platform. However, with recent advancements in AI, game developers are coming up with more innovative ways of using it. This article looks at how artificial intelligence is driving a drastic change in the gaming industry.

What do Experts Have to Say About the Change?

Experts have done a lot of research into where and how AI can take gaming to a new level. Their studies and market research suggest that the gaming industry will change drastically in the next few years.

Moreover, market researchers have seen a drastic change in the way people look at games, and developers now face a bigger challenge in keeping up with extreme and fast-paced changes. Every year, research is conducted to identify trends, market value, key players, and more.

What do Market Studies and Research Reveal?

As of 2019, the gaming industry was worth close to $150 billion. With the introduction of technologies like artificial intelligence, augmented reality, and virtual reality, that figure is set to cross $250 billion by 2021-2022.

Artificial intelligence will be a stepping stone and an equally important component in the evolution of the gaming industry. The key players on this front include Tencent, Sony, EA, Google, Playtika, and Nintendo. Moreover, the market will also see the rise of new players that specialize purely in developing games with advanced AI environments.

A Look at How AI was Introduced in the Gaming Industry

The term artificial intelligence is broad and not limited to a single industry. In the gaming sector, AI was introduced a long time ago, although at the time no one knew it would become so popular.

AI has been part of gaming almost since the field's inception. The 1951 game of Nim is one early example: although artificial intelligence was not as advanced as it is now, the game was still way ahead of its time.

Then, in the 1970s, came the era of arcade gaming, which also featured AI elements in various games; Speed Race, Pursuit, and Qwak were some of the most popular titles. This was also the era when artificial intelligence gained popularity. In the 1980s, Pac-Man and other maze-based games took things to a different level.

Using Artificial Intelligence in Game Development and Programming

If you are wondering how artificial intelligence makes a difference in gaming, the answer is simple: all the data is stored in an AI environment, and each character uses this environment to adapt accordingly. You can also create a virtual environment from the stored information, which includes various scenarios, motives, actions, and more, making the characters more realistic and natural. So, how is artificial intelligence changing the gaming industry? Read on to find out.

With the help of AI, game developers are adopting techniques like reinforcement learning and pattern recognition, which let game characters evolve by learning from their own actions. A player will notice a vast difference when playing a game in an AI environment.
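The self-learning idea can be made concrete with tabular Q-learning, one of the simplest reinforcement learning techniques. The sketch below is purely illustrative (the corridor world, rewards, and hyperparameters are invented for this example, not taken from any real game): an NPC learns by trial and error to walk toward a goal five cells away.

```python
import random

# Tabular Q-learning sketch: an NPC learns to walk toward a goal
# in a tiny 5-cell corridor. Everything here is a toy assumption.
N_STATES = 5          # corridor cells 0..4; the goal sits at cell 4
ACTIONS = [-1, 1]     # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.5   # high exploration suits this tiny world

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 only when the goal is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):                  # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the table, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# After training, the greedy policy walks straight to the goal.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # -> [1, 1, 1, 1]
```

The same trial-and-error loop, scaled up with neural networks in place of the lookup table, is what lets modern game agents adapt to a player's behavior.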

With AI, games will become more interesting. A player can adjust the pace of the game to suit their needs and will hear characters talking, just like humans do. The overall accessibility, intelligence, and visual appearance will make a significant impact on the player. Live examples of these techniques can already be seen in games like The Sims and F.E.A.R.

Over the past ten years, we have seen a drastic change in the gaming industry, and the introduction of AI has accelerated the sector's revolution. Compared to earlier methods of development, it is easy to develop games in an AI environment. Today, it is common to find games with 3D effects and other visualization techniques. AI is taking the gaming industry to new heights; very soon, it will not just be about good graphics, but also about interpreting and responding to the player's actions.

Games like FIFA give you a real-world feel when you play them; the graphics make the game come to life. Now imagine that experience taken a step higher with the help of AI.

Similarly, an action game will feel real with the help of artificial intelligence. In short, the player's gaming experience will be very different from what it presently is. Moreover, the blend of AI and virtual reality will make a powerful combination.

Players will not feel that they are playing a game; instead, they will feel that things are happening in real life. These days, game developers are paying attention to minor details; it is no longer just about visual appearance or graphics.

Game developers have to develop their skills regularly, always adapting to new changes and techniques. This, in turn, also helps them enhance their creativity.

With the help of artificial intelligence, developers can take their skills to a whole new level, using cutting-edge technology to bring unique aspects and methods to game development. Even traditional game developers are using AI to differentiate their games; they may not work with hi-tech environments, but they nevertheless build games with various AI elements.

Today, the world is more attuned to mobile games. The convenience of playing on the go, or while waiting for a meeting to start, makes them more in demand. With the help of AI, mobile gamers will have a better experience when playing their favorite games.

Even the introduction of various tools on this front will contribute to the overall experience of playing games on mobile, with changes happening automatically based on how the player interacts with the game.

When AI is used in a gaming environment, it brings in something new and different. The days of traditional gaming are gone; game lovers now want a lot more from their games than the norm.

Keeping this in mind, game developers are now coming up with programs and code that deliver this. These programs do not require any human interference: they create virtual worlds automatically, using complex systems designed to generate the results.

This approach produces amazing outcomes. One example is Red Dead Redemption 2, where players have the flexibility of interacting in myriad ways with non-playable characters, and the game tracks details as fine as bloodstains on a hat.

In the gaming industry, a lot of time and money is invested in developing a game, and the worry over whether the game will be accepted is always in the air. Even before a game hits the market, it undergoes various checks until the development team is sure it is ready.

The entire process can take months or even years, depending on the kind of game. With the help of AI, the time taken to develop a game drops drastically, which also saves many of the resources needed to create it.

Even the cost of labor drops drastically, meaning game development companies can hire better, more technically advanced developers to get the job done. Considering how high the demand for game developers is, the market is competitive.

Players want their games to take them to new heights, and the introduction of AI in the gaming industry has brought in this change. Gamers can experience a lot more with today's games than with what was developed earlier.

Moreover, the games are a lot more exciting and fun to play. AI has given players something to look forward to, taking games to a whole new level and a different dimension.

Furthermore, with an AI platform, gaming companies can create better playing environments. For instance, motion simulation can come in handy to give each character distinct movements; it can also help generate further levels and maps, all without human interference.

The Benefits of Using Artificial Intelligence in the Gaming Industry

In the modern age, you will often find reviews of various games, and comparing reviews of traditional games with those developed in an AI environment reveals many differences.

A review of an AI-based game will tell you a lot about the game in detail. When a game is developed in the right AI environment, reviews tend not to give the game away; a bad review, however, will point out every mistake. This is why it is imperative for game developers to do an excellent job while developing a game. But what are the benefits of using AI for game development?

A Final Thought

The gaming industry is changing at a drastic pace, and the demand for new and improved games is increasing every day. Today's gamers do not want a traditional game to play; they are looking for a lot more than that.

AI has brought change to the gaming industry ever since its introduction. Over the years, we have seen drastic changes in the way games are developed, and in today's technologically advanced world, games have become more challenging and exciting by providing human-like experiences.

About the Author

Saurabh Hooda is co-founder of Hackr.io. He has worked globally for telecom and finance giants in various capacities. After working for a decade at Infosys and Sapient, he started his first startup, Lenro, to solve the hyperlocal book-sharing problem. He is interested in product, marketing, and analytics.

Sign up for the free insideBIGDATA newsletter.


Organized Crime Has a New Tool in Its Belts – Artificial Intelligence – OCCRP

As new technologies offer a world of opportunities and benefits in many sectors, so too do they offer new avenues for organized crime. It was true at the advent of the internet, and it's true for the growing field of artificial intelligence and machine learning, according to a new joint report by Europol and the United Nations Interregional Crime and Justice Research Institute.

While in the past social engineering scams had to be somewhat tailored to specific targets or audiences, through artificial intelligence they can be deployed en masse. (Source: Pixabay.com)

At its simplest, artificial intelligence means human-designed systems that, within a defined set of rules, can absorb data, recognize patterns, and duplicate or alter them. In effect, they learn, so that they can automate more and more complex tasks that previously required human input.

"However, the promise of more efficient automation and autonomy is inseparable from the different schemes that malicious actors are capable of," the document warned. "Criminals and organized crime groups (OCGs) have been swiftly integrating new technologies into their modi operandi."

AI is particularly useful in the increasingly digitised world of organized crime that has emerged during the novel coronavirus pandemic.

"AI-supported or AI-enhanced cyberattack techniques that have been studied are proof that criminals are already taking steps to broaden the use of AI," the report said.

One such example is procedurally generated phishing emails designed to bypass spam filters.

Despite the proliferation of new and powerful technologies, a cybercriminal's greatest asset is still their mark's propensity for human error, and the most common types of cyber scams are still based on so-called social engineering, i.e., taking advantage of empathy, trust, or naivete.

While in the past social engineering scams had to be somewhat tailored to specific targets or audiences, through artificial intelligence they can be deployed en masse and use machine learning to tailor themselves to new audiences.

"Unfortunately, criminals already have enough experience and sample texts to build their operations on," the report said. "An innovative scammer can introduce AI systems to automate and speed up the detection rate at which the victims fall in or out of the scam. This allows them to focus only on those potential victims who are easy to deceive. Whatever false pretense a scammer chooses to persuade the target to participate in, an ML algorithm would be able to anticipate a target's most common replies to the chosen pretense," the report explained.

Most terrifying of all, however, is the concept of so-called deepfakes. With little source material, machine learning can be used to generate incredibly realistic human faces or voices and superimpose them onto any video.

"The technology has been described as a powerful weapon in today's disinformation wars, whereby one can no longer rely on what one sees or hears," the report said. One side effect of the use of deepfakes for disinformation is citizens' diminished trust in authority and information media.

Flooded with increasingly AI-generated spam and fake news that build on bigoted text, fake videos, and a plethora of conspiracy theories, people might feel that a considerable amount of information, including videos, simply cannot be trusted. The result is a phenomenon termed "information apocalypse" or "reality apathy."

One of the most infamous uses of deepfake technology has been to superimpose the faces of unsuspecting women onto pornographic videos.


How Artificial Intelligence overcomes major obstacles standing in the way of automating complex visual inspection tasks – Quality Magazine



Top 3 Emerging Technologies in Artificial Intelligence in the 2020s – Analytics Insight

Artificial Intelligence, popularly known as AI, has been the main driver of disruption in today's tech world. While applications like machine learning, neural networks, and deep learning have already earned huge recognition through their wide-ranging use cases, AI is still at a nascent stage. This means new developments are continually taking place in the discipline, which could soon transform the AI industry and lead to new possibilities. Some of today's AI technologies may become obsolete in the next ten years, while others may pave the way to even better versions of themselves. Let us look at some of the promising AI technologies of tomorrow.

Recent advances in AI have allowed many companies to develop algorithms and tools to generate artificial 3D and 2D images automatically. These algorithms essentially form generative AI, which enables machines to use things like text, audio files, and images to create content. The MIT Technology Review described generative AI as one of the most promising advances in the world of AI in the past decade. It is poised to power the next generation of apps for auto-programming, content development, visual arts, and other creative, design, and engineering activities. For instance, NVIDIA has developed software that can generate new photorealistic faces starting from a few pictures of real people, and a generative AI-enabled campaign by Malaria Must Die featured David Beckham speaking in nine different languages to raise awareness for the cause.

It can also be used to provide better customer service; facilitate and speed up check-ins; enable performance monitoring, seamless connectivity, and quality control; and help find new networking opportunities. It also helps in film preservation and colorization.

Generative AI can also help in healthcare by rendering prosthetic limbs, organic molecules, and other items from scratch when actuated through 3D printing, CRISPR, and other technologies. It can also enable earlier identification of potential malignancy and more effective treatment plans. In the case of diabetic retinopathy, for instance, generative AI not only offers a pattern-based hypothesis but can also construe the scan and generate content that helps inform the physician's next steps. Even IBM is using this technology to research antimicrobial peptides (AMPs) in the search for COVID-19 drugs.

Generative AI also leverages neural networks through generative adversarial networks (GANs). GANs share similar functionalities and applications with generative AI, but they are also notorious for being misused to create deepfakes for cybercrime. GANs are used in research areas as well, for projecting astronomical simulations, interpreting large data sets, and much more.
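The adversarial idea behind GANs can be shown with a deliberately tiny, hypothetical sketch: here the "generator" is just a learned shift applied to random noise, the "discriminator" is one-dimensional logistic regression, and the "real" data is a Gaussian. All numbers are illustrative assumptions; real GANs use deep networks on images, audio, or video.

```python
import numpy as np

# Toy GAN-style training loop: generator g(z) = z + theta tries to
# match real data drawn from N(4, 1); discriminator D is 1-D logistic
# regression. Every value here is an assumption for illustration.
np.random.seed(0)

def sigmoid(t):
    # Clip logits so np.exp never overflows.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

w, b = 0.0, 0.0      # discriminator parameters: D(x) = sigmoid(w*x + b)
theta = 0.0          # generator parameter, starts far from the real mean
lr_d, lr_g, batch = 0.1, 0.01, 64

for _ in range(4000):
    real = np.random.normal(4.0, 1.0, batch)           # "real" samples
    fake = np.random.normal(0.0, 1.0, batch) + theta   # generated samples

    # Discriminator ascent step: push D(real) toward 1, D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step on log D(g(z)); its gradient w.r.t. theta
    # is (1 - D(fake)) * w, so theta moves to make fakes look real.
    theta += lr_g * np.mean((1 - sigmoid(w * fake + b)) * w)

print(round(theta, 1))  # theta drifts from 0 toward the real mean of 4
```

The two players improve against each other: as the discriminator gets better at spotting fakes, the generator's output is pushed closer to the real distribution, which is the same dynamic deepfake generators exploit at much larger scale.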

According to Google's research paper titled "Communication-Efficient Learning of Deep Networks from Decentralized Data," federated learning is a technique that allows users to "collectively reap the benefits of shared models trained from [this] rich data, without the need to centrally store it." In simpler technical parlance, it distributes the machine learning process out to the edge.

Data is essential for training machine learning models. Conventionally, this involves setting up servers where models are trained on data via a cloud computing platform. Federated learning instead brings the machine learning model to the data source (or edge nodes) rather than bringing the data to the model. It links together multiple computational devices into a decentralized system in which the individual devices that collect data assist in training the model. This enables devices to collaboratively learn a shared prediction model while keeping all the training data on the individual devices themselves, cutting out the need to move large amounts of data to a central server for training. Thus, it addresses our data privacy woes.
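The round-by-round flow described above can be sketched as federated averaging (FedAvg), the algorithm introduced in the Google paper: each client takes a gradient step on its own private data, and the server averages only the resulting weights. This is a deliberately tiny toy model (one weight, plain Python), not production federated-learning code.

```python
def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a client's private data.
    Toy model: y = w * x with mean-squared-error loss."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def federated_round(global_weights, client_datasets):
    """Each client trains locally; only updated weights (never the
    raw data) are sent back to the server and averaged."""
    client_weights = [local_update(global_weights, d) for d in client_datasets]
    n = len(client_weights)
    return [sum(cw[i] for cw in client_weights) / n
            for i in range(len(global_weights))]

# Three clients, each holding private samples of the target y = 2x.
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
weights = [0.0]
for _ in range(50):
    weights = federated_round(weights, clients)
print(round(weights[0], 2))  # converges toward 2.0
```

Note that the server only ever sees model weights; each `(x, y)` pair stays on its client, which is exactly the privacy property the text highlights.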

Federated learning is used to improve Siri's voice recognition capabilities. Google had initially employed federated learning to improve word recommendation in Google's Android keyboard without uploading the user's text data to the cloud. According to Google's blog: "When Gboard shows a suggested query, your phone locally stores information about the current context and whether you clicked the suggestion. Federated Learning processes that history on-device to suggest improvements to the next iteration of Gboard's query suggestion model."

Medical organizations are generally unable to share data due to privacy restrictions. Federated learning can help address this concern through decentralization, removing the need to pool data into a single location by training in multiple iterations at different sites.

Intel recently teamed up with the University of Pennsylvania Medical School to deploy federated learning across 29 international healthcare and research institutions to identify brain tumors. The team published its findings on federated learning and its applications in healthcare in Nature and presented them at the Supercomputing 2020 event last week. According to the published paper, the federated approach reached 99% of the accuracy of a traditional, centrally trained model in identifying brain tumors.

Intel announced that this breakthrough could help in earlier detection and better outcomes for the more than 80,000 people diagnosed with a brain tumor each year.

AI has made rapid progress in analyzing big data by leveraging deep neural networks (DNNs). However, the key disadvantage of any neural network is that it is computationally and memory intensive, which makes it difficult to deploy on embedded systems with limited hardware resources. Further, as DNNs grow to carry out more complex computation, their storage needs rise as well. To address these issues, researchers have developed a family of techniques called neural network compression.

Generally, a neural network contains far more weights, represented at higher precision, than are required for the specific task it is trained to perform. If we wish to bring real-time intelligence to edge applications, neural network models must be smaller. To compress models, researchers rely on the following methods: parameter pruning and sharing, quantization, low-rank factorization, transferred or compact convolutional filters, and knowledge distillation.

Pruning identifies and removes unnecessary weights, connections, or parameters, leaving the network with only the important ones. Quantization compresses the model by reducing the number of bits used to represent each connection. Low-rank factorization leverages matrix decomposition to estimate the informative parameters of a DNN. Compact convolutional filters discard unnecessary weight or parameter space while retaining the parameters needed to carry out convolution, saving storage space. And knowledge distillation trains a more compact neural network to mimic a larger network's output.
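Two of the methods above, magnitude pruning and uniform quantization, are simple enough to sketch directly on a flat list of weights. This is an illustrative toy in plain Python, assuming the common "keep the largest-magnitude weights" and "snap to evenly spaced levels" variants; real frameworks apply these per layer with calibration.

```python
def prune(weights, keep_ratio=0.5):
    """Zero out the smallest-magnitude weights, keeping keep_ratio
    of them. Zeros can then be stored sparsely."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(abs(w) for w in weights)[-k]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize(weights, bits=4):
    """Map each weight to the nearest of 2**bits evenly spaced
    levels between the min and max weight, so each connection
    needs only `bits` bits instead of a full float."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    step = (hi - lo) / levels or 1.0
    return [lo + round((w - lo) / step) * step for w in weights]

w = [0.9, -0.05, 0.4, -0.7, 0.01, 0.3]
pruned = prune(w, keep_ratio=0.5)   # half the weights become exact zeros
quantized = quantize(w, bits=4)     # each weight snaps to one of 16 levels
print(pruned)
```

In practice the pruned network is usually fine-tuned for a few epochs afterward to recover the small accuracy loss, and the two techniques are routinely combined.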

Recently, NVIDIA developed a new type of video compression technology that replaces the traditional video codec with a neural network to drastically reduce video bandwidth. Dubbed NVIDIA Maxine, this platform uses AI to improve the quality and experience of video-conferencing applications in real time. NVIDIA claims Maxine can reduce the bandwidth load down to one-tenth of H.264 using AI video compression. Further, it is cloud-based, which makes the solution easier to deploy for everyone.

Follow this link:
Top 3 Emerging Technologies in Artificial Intelligence in the 2020s - Analytics Insight

Pentagon Teams with Howard University to Steer Artificial Intelligence Center of Excellence – Nextgov

The Defense Department, Army and Howard University linked up to collectively push forward artificial intelligence and machine learning-rooted research, technologies and applications through a recently unveiled center of excellence.

The work it will underpin "will shape the future," according to an announcement Monday from the Army Research Laboratory, and the $7.5 million center also marks a move by the Pentagon to expand its pipeline for future personnel.

"Diversity of science and diversity of the future [science and technology] talent base go hand-in-hand in this new and exciting partnership," said Dr. Brian Sadler, Army senior research scientist for intelligent systems. Tapped to manage the partnership, Sadler added that Howard University is "an intellectual center for the nation."

Encompassing 13 schools and colleges, the institution is a private, historically Black research university founded in 1867. Fulbright recipients, Rhodes scholars and other notable experts were educated at Howard, which also produces more on-campus African-American Ph.D. recipients than any other university in America, the release noted. In early 2020, the Army's Combat Capabilities Development Command partnered with the university to support science, technology, engineering, and mathematics (or STEM) educational assistance and advancement among underrepresented groups.

Computer Science Prof. Danda Rawat, who also serves as director of Howard's Data Science & Cybersecurity Center, will lead the CoE, and the program's execution will be managed by the Army Research Laboratory, or ARL.

"This center of excellence is a big win for the Army and [Defense Department] on many fronts," Sadler said. "The research is directly aligned with Army priorities and will address pressing problems in both developing and applying AI tools and techniques in several key applications."

A kickoff meeting was set for mid-November to jumpstart the research and work. ARL's release said the effort will explore vital civilian applications and multi-domain military operations spanning three specific areas of focus: key AI applications for defense, technological foundations for trustworthy AI technologies, and infrastructure for AI research and development.

U.S. graduate students and early-career research faculty with expertise in STEM fields will gain fellowship and scholarship opportunities through the laboratory, and the government and academic partners also intend to collaborate on research and publications, mentoring, internships, workshops and seminars. Educational training and research exchange visits at both the lab and school will also be offered.

An ARL spokesperson told Nextgov Tuesday that officials involved expect to share program updates after the new year.

Originally posted here:
Pentagon Teams with Howard University to Steer Artificial Intelligence Center of Excellence - Nextgov

Artificial Intelligence Is Now Smart Enough to Know When It Can’t Be Trusted – ScienceAlert

How might The Terminator have played out if Skynet had decided it probably wasn't responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they're untrustworthy.

These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of factors in balance with each other, spotting patterns in masses of data that humans don't have the capacity to analyse.

While Skynet might still be some way off, AI is already making decisions in fields that affect human lives like autonomous driving and medical diagnosis, and that means it's vital that they're as accurate as possible. To help towards this goal, this newly created neural network system can generate its confidence level as well as its predictions.

"We need the ability to not only have high-performance models, but also to understand when we cannot trust those models," says computer scientist Alexander Amini from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

This self-awareness of trustworthiness has been given the name Deep Evidential Regression, and it bases its scoring on the quality of the available data it has to work with: the more accurate and comprehensive the training data, the more likely it is that future predictions will work out.

The research team compares it to a self-driving car having different levels of certainty about whether to proceed through a junction or whether to wait, just in case, if the neural network is less confident in its predictions. The confidence rating even includes tips for getting the rating higher (by tweaking the network or the input data, for instance).
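In Deep Evidential Regression, the network's final layer outputs the four parameters of a Normal-Inverse-Gamma distribution, from which both the prediction and two kinds of uncertainty can be read off in closed form, following the formulas in Amini et al.'s paper. The sketch below shows that readout; the specific parameter values are invented for illustration, not taken from the study.

```python
def evidential_uncertainty(gamma, nu, alpha, beta):
    """Read prediction and uncertainties off Normal-Inverse-Gamma
    parameters (gamma, nu, alpha, beta), per Amini et al. (2020)."""
    prediction = gamma                      # E[mu]
    aleatoric = beta / (alpha - 1)          # E[sigma^2]: noise in the data
    epistemic = beta / (nu * (alpha - 1))   # Var[mu]: model uncertainty
    return prediction, aleatoric, epistemic

# More "virtual evidence" (larger nu and alpha) means lower epistemic
# uncertainty, i.e. a prediction the network says can be trusted more.
confident = evidential_uncertainty(1.0, nu=50.0, alpha=10.0, beta=2.0)
uncertain = evidential_uncertainty(1.0, nu=1.5, alpha=1.5, beta=2.0)
print(confident[2] < uncertain[2])  # True
```

Because these quantities are plain functions of the network's outputs, uncertainty comes out of the same single forward pass as the prediction, which is the speed advantage the next paragraph describes.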

While similar safeguards have been built into neural networks before, what sets this one apart is the speed at which it works, without excessive computing demands: the assessment can be completed in one run through the network rather than several, with a confidence level outputted at the same time as a decision.

"This idea is important and applicable broadly," says computer scientist Daniela Rus. "It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model."

The researchers tested their new system by getting it to judge depths in different parts of an image, much like a self-driving car might judge distance. The network compared well to existing setups, while also estimating its own uncertainty: the times it was least certain were indeed the times it got the depths wrong.

As an added bonus, the network was able to flag times when it encountered images outside of its usual remit (ones very different from the data it had been trained on), which in a medical situation could mean getting a doctor to take a second look.

Even if a neural network is right 99 percent of the time, that missing 1 percent can have serious consequences, depending on the scenario. The researchers say they're confident that their new, streamlined trust test can help improve safety in real time, although the work has not yet been peer-reviewed.

"We're starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences," says Amini.

"Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision."

The research is being presented at the NeurIPS conference in December, and an online paper is available.

More here:
Artificial Intelligence Is Now Smart Enough to Know When It Can't Be Trusted - ScienceAlert

Artificial intelligence re-imagined to tackle society’s challenges with people at its heart – University of Southampton


Published: 27 November 2020

AI systems will be re-designed to value people as more than passive providers of data in a prestigious new Turing Artificial Intelligence Acceleration Fellowship at the University of Southampton.

The novel research, led by Electronics and Computer Science's Dr Sebastian Stein, will create AI systems that are aware of citizens' preferences and act to maximise the benefit to society.

In these systems, citizens are supported by trusted personal software agents that learn an individual's preferences. Importantly, rather than sharing this data with a centralised system, the AI agents keep it safe on private smart devices and only use it in their owners' interests.

Over the next five years, the £1.4m fellowship will develop and trial citizen-centric AI systems in a range of application areas, such as smart home energy management, on-demand mobility and disaster response, including the provision of advice and medical support during epidemics like COVID-19.

Dr Stein, of the Agents, Interaction and Complexity (AIC) research group, says: "AI systems are increasingly used to support and often automate decision-making on an unprecedented scale. Such AI systems can draw on a vast range of data sources to make fast, efficient, data-driven decisions to address important societal challenges and potentially benefit millions of people.

"However, building AI systems on such a large and pervasive scale raises a range of important challenges. First, these systems may need access to relevant information from people, such as health-related data, which raises privacy issues and may also encourage people to misrepresent their requirements for personal benefit. Furthermore, the systems must be trusted to act in a manner that aligns with society's ethical values. This includes the minimisation of discrimination and the need to make equitable decisions.

"Novel approaches are needed to build AI systems that are trusted by citizens, that are inclusive and that achieve their goals effectively. To enable this, citizens must be viewed as first-class agents at the centre of AI systems, rather than as passive data sources."

The new vision for AI systems will be achieved by developing techniques that learn the preferences, needs and constraints of individuals to provide personalised services, incentivise socially-beneficial behaviour changes, make choices that are fair, inclusive and equitable, and provide explanations for these decisions.

The Southampton team will draw upon a unique combination of research in multi-agent systems, mechanism design, human-agent interaction and responsible AI.

Dr Stein will work with a range of high-profile stakeholders over the duration of the fellowship. This will include citizen end-users, to ensure the research aligns with their needs and values, as well as industrial partners, to put the research into practice.

Specifically, collaboration with EA Technology and Energy Systems Catapult will generate incentive-aware smart charging mechanisms for electric vehicles. Meanwhile, work with partners including Siemens Mobility, Thales and Connected Places Catapult will develop new approaches for trusted on-demand mobility. Within the Southampton region, the fellowship will engage with the Fawley Waterside development to work on citizen-centric solutions to smart energy and transportation.

The team will also work with Dstl to create disaster response applications that use crowdsourced intelligence from citizens to provide situational awareness, track the spread of infectious diseases or issue guidance to citizens. Further studies with Dstl and Thales will explore applications in national security and policing, and joint work with UTU Technologies will investigate how citizens can share their preferences and recommendations with trusted peers while retaining control over what data is shared and with whom.

Finally, with IBM Research, Dr Stein will develop new explainability and fairness tools, and integrate these with their existing open source frameworks.

Turing Artificial Intelligence Acceleration Fellowships, named after AI pioneer Alan Turing, are supported by a £20 million government investment in AI being delivered by UK Research and Innovation (UKRI), in partnership with the Department for Business, Energy and Industrial Strategy, the Office for AI and the Alan Turing Institute.

Science Minister Amanda Solloway says: "The UK is the birthplace of artificial intelligence and we therefore have a duty to equip the next generation of Alan Turings, like Southampton's Dr Sebastian Stein, with the tools that will keep the UK at the forefront of this remarkable technological innovation. The inspiring AI project we are backing today will help inform UK citizens in their decision making - from managing their energy needs to advising which mode of transport to take - transforming the way we live and work, while cementing the UK's status as a world leader in AI and data."

Digital Minister, Caroline Dinenage, says: "The UK is a nation of innovators and this government investment will help our talented academics use cutting-edge technology to improve people's daily lives - from delivering better disease diagnosis to managing our energy needs."

The University of Southampton has placed Machine Intelligence at the centre of its research activities for more than 20 years and has generated over £50m of funding for associated technologies in the last 10 years across 30 medium to large projects. Southampton draws together researchers and practitioners through its Centre for Machine Intelligence, trains the next generation of AI researchers via its UKRI Centre for Doctoral Training in Machine Intelligence for Nano-Electronic Devices and Systems (MINDS), and was recently chosen to host the UKRI Trustworthy Autonomous Systems (TAS) Hub.

Southampton is also a leading member of the UK national Alan Turing Institute, with activities co-ordinated by the University's Web Science Institute.

View post:
Artificial intelligence re-imagined to tackle society's challenges with people at its heart - University of Southampton