Daily Archives: June 28, 2021

Politics Podcast Mailbag Edition: What Alaska And Mars Have In Common – FiveThirtyEight

Posted: June 28, 2021 at 10:38 pm

In this installment of the FiveThirtyEight Politics podcast, Galen Druke and Nate Silver open the mailbag to answer listeners' questions about politics, polling and hot dogs. Specifically, listeners want to know what to make of New York City's mayoral race, whether primary elections tell us anything about the midterm elections, which voting system is the best, the likelihood of filibuster reform and, of course, whether hot dogs are considered sandwiches.

You can listen to the episode by clicking the play button in the audio player above or by downloading it in iTunes, the ESPN App or your favorite podcast platform. If you are new to podcasts, learn how to listen.

The FiveThirtyEight Politics podcast is recorded Mondays and Thursdays. Help new listeners discover the show by leaving us a rating and review on iTunes. Have a comment, question or suggestion for "good polling vs. bad polling"? Get in touch by email, on Twitter or in the comments.


Posted in Mars

Learn about Artificial Intelligence (AI) | Code.org

Posted: at 10:37 pm

AI and Machine Learning impact our entire world, changing how we live and how we work. That's why it's critical for all of us to understand this increasingly important technology, including not just how it's designed and applied, but also its societal and ethical implications.

Join us to explore AI in a new video series, train AI for Oceans in 25+ languages, discuss ethics, and more!

Learn about how AI works and why it matters with this series of short videos. Featuring Microsoft CEO Satya Nadella and a diverse cast of experts.

Students reflect on the ethical implications of AI, then work together to create an AI Code of Ethics resource for AI creators and legislators everywhere.

We thank Microsoft for supporting our vision and mission to ensure every child has the opportunity to learn computer science and the skills to succeed in the 21st century.

With an introduction by Microsoft CEO Satya Nadella, this series of short videos will introduce you to how artificial intelligence works and why it matters. Learn about neural networks, or how AI learns, and delve into issues like algorithmic bias and the ethics of AI decision-making.

Go deeper with some of our favorite AI experts! This panel discussion touches on important issues like algorithmic bias and the future of work. Pair it with our AI & Ethics lesson plan for a great introduction to the ethics of artificial intelligence!

Resources to inspire students to think deeply about the role computer science can play in creating a more equitable and sustainable world.

This global AI for Good challenge introduces students to Microsoft's AI for Good initiatives, empowering them to solve a problem in the world with the power of AI.

Levels 2-4 use a pretrained model provided by the TensorFlow MobileNet project. A MobileNet model is a convolutional neural network that has been trained on ImageNet, a dataset of over 14 million images hand-annotated with words such as "balloon" or "strawberry". In order to customize this model with the labeled training data the student generates in this activity, we use a technique called Transfer Learning. Each image in the training dataset is fed to MobileNet, as pixels, to obtain a list of annotations that are most likely to apply to it. Then, for a new image, we feed it to MobileNet and compare its resulting list of annotations to those from the training dataset. We classify the new image with the same label (such as "fish" or "not fish") as the images from the training set with the most similar results.
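
To make this concrete, here is a minimal sketch of that nearest-neighbor style of transfer learning, using the pretrained MobileNet that ships with Keras. The helper names and file paths are illustrative assumptions, not Code.org's actual implementation:

```python
# Sketch of nearest-neighbor transfer learning over MobileNet's ImageNet
# annotations. Illustrative only; not Code.org's actual code.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")  # convolutional net pretrained on ImageNet

def annotation_vector(path):
    """Feed an image to MobileNet as pixels; return its 1000 label probabilities."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x)[0]

def classify(new_path, train_paths, train_labels):
    """Label a new image like the training image with the most similar annotations."""
    new_vec = annotation_vector(new_path)
    sims = [np.dot(new_vec, v) / (np.linalg.norm(new_vec) * np.linalg.norm(v))
            for v in (annotation_vector(p) for p in train_paths)]
    return train_labels[int(np.argmax(sims))]

# Hypothetical usage with student-labeled images:
# classify("new.jpg", ["fish1.jpg", "trash1.jpg"], ["fish", "not fish"])
```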

Levels 6-8 use a Support-Vector Machine (SVM). We look at each component of the fish (such as eyes, mouth, body) and assemble all of the metadata for the components (such as number of teeth, body shape) into a vector of numbers for each fish. We use these vectors to train the SVM. Based on the training data, the SVM separates the "space" of all possible fish into two parts, which correspond to the classes we are trying to learn (such as "blue" or "not blue").
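
A toy version of the same idea with scikit-learn's SVM implementation follows; the feature encoding and numbers are invented for illustration rather than the actual metadata Code.org extracts:

```python
# Sketch of the SVM approach: each fish becomes a vector of component
# metadata, and the SVM learns a boundary splitting the space in two.
from sklearn import svm

# Hypothetical encoding: [number of teeth, body-shape code, eye size, mouth size]
X_train = [[4, 1, 2, 3], [0, 2, 1, 1], [6, 1, 3, 2], [1, 3, 1, 1]]
y_train = ["blue", "not blue", "blue", "not blue"]

clf = svm.SVC(kernel="linear")  # separates the "space" of fish into two parts
clf.fit(X_train, y_train)

print(clf.predict([[5, 1, 2, 2]]))  # classify a new fish
```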


Posted in Ai

We Should Test AI the Way the FDA Tests Medicines – Harvard Business Review

Posted: at 10:37 pm

Predictive algorithms risk creating self-fulfilling prophecies, reinforcing preexisting biases. This is largely because they do not distinguish between causation and correlation. To prevent this, we should submit new algorithms to randomized controlled trials, similar to those the FDA supervises when approving new drugs. This would enable us to infer whether an AI is making predictions on the basis of causation.

We would never allow a drug to be sold in the market without having gone through rigorous testing, not even in the context of a health crisis like the coronavirus pandemic. Then why do we allow algorithms that can be just as damaging as a potent drug to be let loose into the world without having undergone similarly rigorous testing? At the moment, anyone can design an algorithm and use it to make important decisions about people, whether they get a loan, or a job, or an apartment, or a prison sentence, without any oversight or any kind of evidence-based requirement. The general population is being used as guinea pigs.

Artificial intelligence is a predictive technology. AI systems assess, for example, whether a car is likely to hit an object, whether a supermarket is likely to need more apples this week, and whether a person is likely to pay back a loan, be a good employee, or commit a further offense. Important decisions, including life-and-death ones, are made on the basis of algorithmic predictions.

Predictions try to fill in missing information about the future in order to reduce uncertainty. But predictions are rarely neutral observers; they change the state of affairs they predict, to the extent that they become self-fulfilling prophecies. For example, when important institutions such as credit-rating agencies publish negative forecasts about a country, those forecasts can cause investors to flee the country, which in turn can cause an economic crisis.

Self-fulfilling prophecies are a problem when it comes to auditing the accuracy of algorithms. Suppose that a widely used algorithm determines that you are unlikely to be a good employee. Your not getting any jobs should not count as evidence that the algorithm is accurate, because the cause of your not getting jobs may be the algorithm itself.

We want predictive algorithms to be accurate, but not through any means, and certainly not through creating the reality they are supposed to predict. Too many times we learn that algorithms are defective only once they have destroyed lives, as when an algorithm implemented by the Michigan Unemployment Insurance Agency falsely accused 34,000 unemployed people of fraud.

How can we limit the power of predictions to change the future?

One solution is to subject predictive algorithms to randomized controlled trials. The only way to know if, say, an algorithm that assesses job candidates is truly accurate is to divide prospective employees into an experimental group (which is subjected to the algorithm) and a control group (which is assessed by human beings). The algorithm could assess people in both groups, but its decisions would only be applied to the experimental group. If people who were negatively ranked by the algorithm went on to have successful careers in the control group, then that would be good evidence that the algorithm is faulty.
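
As a concrete sketch of that trial design, the Python below randomizes candidates into two arms, applies the algorithm's decisions only to the experimental arm, and audits the algorithm against control-arm outcomes. The helper names, threshold, and toy data are hypothetical, not a reference implementation:

```python
# Sketch of an RCT for a hiring algorithm: score everyone, act only on the
# experimental arm, then check how the "rejected" control candidates fared.
import random

def run_hiring_trial(candidates, algorithm_score, observe_career_outcome):
    """Return the observed success rate of control-arm candidates the algorithm rejected."""
    random.shuffle(candidates)
    mid = len(candidates) // 2
    experimental, control = candidates[:mid], candidates[mid:]  # decisions apply only to `experimental`

    # Control arm: hired via human assessment; scores are recorded, never applied,
    # so outcomes are untainted by the algorithm's own self-fulfilling effect.
    rejected_in_control = [c for c in control if algorithm_score(c) < 0.5]
    if not rejected_in_control:
        return None
    outcomes = [observe_career_outcome(c) for c in rejected_in_control]
    return sum(outcomes) / len(outcomes)

# Toy usage: score and outcome are uncorrelated, so about half of the
# algorithm's "rejects" succeed anyway, exposing the score as faulty.
rate = run_hiring_trial(list(range(1000)),
                        algorithm_score=lambda c: random.random(),
                        observe_career_outcome=lambda c: random.random() < 0.5)
print(rate)
```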

Randomized controlled trials would also have great potential in identifying biases and other unforeseen negative consequences. Algorithms are infamously opaque. It's difficult to understand how they work, and when they have only been tested in a lab, they often act in surprising ways once they get exposed to real-world data. Rigorous trials could ensure that we don't use racist or sexist algorithms. An agency similar to the Food and Drug Administration could be created to make sure algorithms have been tested enough to be used on the public.

One of the reasons randomized controlled trials are considered the gold standard in medicine (as well as economics) is that they are the best evidence we can have of causation. In turn, one of AI's most glaring shortcomings is that it can identify correlations, but it doesn't understand causation, which often leads it astray. For example, when an algorithm decides that male job candidates are likelier to be good employees than female ones, it does so because it cannot distinguish between causal features (e.g., most past successful employees have attended university because university is a good way to develop one's skills) and correlative ones (e.g., most past successful employees have been men because society suffers from sexist biases).

Randomized controlled trials have not only been the foundation of the advancement of medicine, they have also prevented countless potential disasters: the release of drugs that could have killed us. Such trials could do the same for AI. And if we were to join AI's knack for recognizing correlations with the ability of randomized controlled trials to help us infer causation, we would stand a much better chance of developing both a more powerful and a more ethical AI.


Posted in Ai

How AI Is Taking Over Our Gadgets – The Wall Street Journal

Posted: at 10:37 pm

If you think of AI as something futuristic and abstract, start thinking different.

We're now witnessing a turning point for artificial intelligence, as more of it comes down from the clouds and into our smartphones and automobiles. While it's fair to say that AI that lives on the edge, where you and I are, is still far less powerful than its datacenter-based counterpart, it's potentially far more meaningful to our everyday lives.

One key example: This fall, Apple's Siri assistant will start processing voice on iPhones. Right now, even your request to set a timer is sent as an audio recording to the cloud, where it is processed, triggering a response that's sent back to the phone. By processing voice on the phone, says Apple, Siri will respond more quickly. This will only work on the iPhone XS and newer models, which have a compatible built-for-AI processor Apple calls a neural engine. People might also feel more secure knowing that their voice recordings aren't being sent to unseen computers in faraway places.

Google actually led the way with on-phone processing: In 2019, it introduced a Pixel phone that could transcribe speech to text and perform other tasks without any connection to the cloud. One reason Google decided to build its own phones was that the company saw potential in creating custom hardware tailor-made to run AI, says Brian Rakowski, product manager of the Pixel group at Google.

These so-called edge devices can be pretty much anything with a microchip and some memory, but they tend to be the newest and most sophisticated of smartphones, automobiles, drones, home appliances, and industrial sensors and actuators. Edge AI has the potential to deliver on some of the long-delayed promises of AI, like more responsive smart assistants, better automotive safety systems, new kinds of robots, even autonomous military machines.


Posted in Ai

Analytics and AI helps Experian help its customers – CIO

Posted: at 10:37 pm

For the past several years, Experian has been transforming its business with analytics and AI. Shri Santhanam, executive vice president and general manager of global analytics and AI at the consumer credit reporting company, says Experian's data transformation has focused on three pillars: internal modernization, creating analytics products and services, and driving commercial and business impact for customers.

"Despite the impact of the pandemic, we've actually managed to make good progress in the foundations of analytics and AI," Santhanam says. "The demand for analytics and AI has dramatically increased. There's interest and engagement in how data and analytics for clients can help us help them make better decisions in how they run their business."

Ascend Intelligence Services is a prime example of Experian's efforts to create analytics products that can revolutionize its clients' businesses. As a managed analytics service, Ascend provides lenders with AI-powered modeling and strategy development, management, and deployment. Experian data scientists build a custom machine learning (ML) credit risk model, optimize a decision strategy, and deploy the model in production for clients. The services include Ascend Intelligence Services Challenger, a collaborative model development service, and Ascend Intelligence Services Pulse, a proactive model monitoring and validation service.

Midsize lender Atlas Credit recently won a CIO 100 Award in IT Excellence for its work with Experian Ascend Intelligence Services. Ascend helped the Texas-based lender double its credit approval rates while reducing credit losses by up to 20%.


Posted in Ai

The future starts with Industrial AI – MIT Technology Review

Posted: at 10:37 pm

"Domain expertise is the secret sauce that separates Industrial AI from more generic AI approaches. Industrial AI will guide innovation and efficiency improvements in capital-intensive industries for years to come," said Willie K. Chan, CTO of AspenTech. Chan was one of the original members of the MIT ASPEN research program that later became AspenTech in 1981, now celebrating 40 years of innovation.

Incorporating that domain expertise gives Industrial AI applications a built-in understanding of the context, inner workings, and interdependencies of highly complex industrial processes and assets, and takes into account the design characteristics, capacity limits, and safety and regulatory guidelines crucial for real-world industrial operations.

More generic AI approaches may come up with specious correlations between industrial processes and equipment, generating inaccurate insights. Generic AI models are trained on large volumes of plant data that usually do not cover the full range of potential operations. That's because the plant might be working within a very narrow and limited range of conditions for safety or design reasons. Consequently, these generic AI models cannot be extrapolated to respond to market changes or business opportunities. This further exacerbates the productization hurdles around AI initiatives in the industrial sector.

By contrast, Industrial AI leverages domain expertise specific to industrial processes and real-world engineering based on first principles that account for the laws of physics and chemistry (e.g., mass balance, energy balance) as guardrails for mitigating risks and complying with all the necessary safety, operational, and environmental regulations. This makes for a safe, sustainable, and holistic decision-making process, producing comprehensive results and trusted insights over the long run.
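
As a rough illustration of first principles acting as guardrails, the sketch below clips a hypothetical machine-learning flow estimate to the range a simple mass balance allows. It is a made-up example, not AspenTech's software:

```python
# Hypothetical guardrail: constrain a data-driven outflow prediction with a
# first-principles mass balance (inflow = outflow + accumulation).

def guarded_outflow(ml_predicted_outflow_kg_h, inflow_kg_h, max_accumulation_kg_h):
    """Clip the model's outflow estimate to the physically feasible range."""
    # Outflow cannot exceed inflow plus what the vessel can release from
    # holdup, nor fall below inflow minus what it can store.
    lower = inflow_kg_h - max_accumulation_kg_h
    upper = inflow_kg_h + max_accumulation_kg_h
    return min(max(ml_predicted_outflow_kg_h, lower), upper)

# A physically impossible prediction of 120 kg/h gets pulled back to 110 kg/h.
print(guarded_outflow(120.0, inflow_kg_h=100.0, max_accumulation_kg_h=10.0))
```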

Digitalization in industrial facilities is critical to achieving new levels of safety, sustainability, and profitabilityand Industrial AI is a key enabler for that transformation.

Talking about Industrial AI as a revolutionary paradigm is one thing; actually seeing what it can do in real-life industrial settings is another. Below are a few examples that demonstrate how capital-intensive industries can leverage Industrial AI to overcome digitalization barriers and drive greater productivity, efficiency, and reliability in their operations.

These use cases are by no means exhaustive, but just a few examples of how pervasive, innovative, and broadly applicable Industrial AI's capabilities can be for the industry and for laying the groundwork for the digital plant of the future.

Industrial organizations need to accelerate digital transformation to stay relevant, competitive, and capable of addressing market disruptors. The Self-Optimizing Plant represents the ultimate vision of that journey.

Industrial AI embeds domain-specific know-how, alongside the latest AI and machine-learning capabilities, into fit-for-purpose AI-enabled applications. This enables and accelerates the autonomous and semi-autonomous processes that run those operations, realizing the vision of the Self-Optimizing Plant.

A Self-Optimizing Plant is a self-adapting, self-learning and self-sustaining set of industrial software technologies that work together to anticipate future conditions and act accordingly, adjusting operations within the digital enterprise. A combination of real-time data access and embedded Industrial AI applications empowers the Self-Optimizing Plant to constantly improve on itself, drawing on domain knowledge to optimize industrial processes, make easy-to-execute recommendations, and automate mission-critical workflows.

This will have numerous positive impacts on the business.

The Self-Optimizing Plant is the ultimate end goal of not just Industrial AI, but the industrial sector's digital transformation journey. By democratizing the application of industrial intelligence, the digital plant of the future drives greater levels of safety, sustainability, and profitability and empowers the next generation of the digital workforce, future-proofing the business in volatile and complex market conditions. This is the real-world potential of Industrial AI.

To learn more about how Industrial AI is enabling the digital workforce of the future and creating the foundation for the Self-Optimizing Plant, visit http://www.aspentech.com/selfoptimizingplant, http://www.aspentech.com/accelerate, and http://www.aspentech.com/aiot.

This article was written by AspenTech. It was not produced by MIT Technology Review's editorial staff.


Posted in Ai

The first WHO report on AI in healthcare is a mixed bag of horror and delight – The Next Web

Posted: at 10:37 pm

The World Health Organization today issued its first-ever report on the use of artificial intelligence in healthcare.

The report is 165 pages cover-to-cover and it provides a summary assessment of the current state of AI in healthcare while also laying out several opportunities and challenges.

Most of what the report covers boils down to six guiding principles for [AI's] design and use.

Per a WHO blog post, these include:

- Protecting human autonomy
- Promoting human well-being and safety and the public interest
- Ensuring transparency, explainability and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting AI that is responsive and sustainable

These bullet points make up the framework for the report's exploration of the current and potential benefits and dangers of using AI in healthcare.

The report focuses a lot of attention on cutting through hype to give an analysis of the present capabilities of AI in the healthcare sector. And, according to the report, the most common use for AI in healthcare is as a diagnostic aid.

Per the report:

AI is being considered to support diagnosis in several ways, including in radiology and medical imaging. Such applications, while more widely used than other AI applications, are still relatively novel, and AI is not yet used routinely in clinical decision-making.

The WHO anticipates this will soon change.

Per the report, the WHO expects AI to improve nearly every aspect of healthcare, from diagnostic accuracy to record-keeping. And there's even hope it could lead to drastically improved outcomes for patients presenting with stroke, heart attack, or other illnesses where early diagnosis is crucial.

Furthermore, AI is a data-based technology. The WHO believes the onset of machine learning technologies in healthcare could help predict the spread of disease and possibly even prevent epidemics in the future.

It's obvious from the report that the WHO is optimistic about the future of AI in healthcare. However, the report also details numerous challenges and risks associated with the wide-scale implementation of AI technologies in the healthcare system.

The report recognizes the efforts of numerous nations to codify the use of AI in healthcare, but it also notes that current policies and regulations aren't enough to protect patients and the public at large.

Specifically, the report outlines several areas where AI could make things worse. These include modern-day concerns, such as handing care of the elderly over to inhuman automated systems. And they also include future concerns: What happens when a human doctor disagrees with a black-box AI system? If we can't explain why an AI made a decision, can we defend its diagnosis when it matters?

And the report also spends a significant portion of its pages discussing the privacy implications for the full implementation of AI into healthcare.

Per the report:

Collection of data without the informed consent of individuals for the intended uses (commercial or otherwise) undermines the agency, dignity and human rights of those individuals; however, even informed consent may be insufficient to compensate for the power dissymmetry between the collectors of data and the individuals who are the sources.

In other words: Even when everything is transparent, how can anyone be sure patients are giving informed consent when it comes to their medical information? When you consider the circumstances many patients are in when a doctor asks them to consent to a procedure, it's hard to imagine a scenario where the intricacies of how artificial intelligence operates matter more than what their doctor is recommending.

You can read the entire WHO report here.


Posted in Ai

A Company That Uses AI To Fight Malaria Just Won The IBM Watson AI XPrize Competition – Forbes

Posted: at 10:37 pm

Zzapp Malaria, a company that uses artificial intelligence (AI) to fight malaria, just won the grand prize in one of the toughest technology competitions to date. The competition is a joint venture between XPrize, the world's leader in designing and operating incentive competitions to solve humanity's grand challenges, and IBM Watson, IBM's flagship AI platform, culminating in a $3 million award for Zzapp.

Zzapp's mission is straightforward: use cutting-edge technology to eliminate malaria in an efficient and scalable manner. The technology behind the company's platform is described as "a software system that supports the planning and implementation of malaria elimination operations. Zzapp uses artificial intelligence to identify malaria hotspots and optimize interventions for maximum impact. Zzapp's map-based mobile app conveys the AI strategies to field workers as simple instructions, ensuring thorough implementation."

Specifically, the company explains: "Malaria transmission takes place where water bodies and human populations converge: water bodies are necessary for mosquito larvae to develop, and humans act as the reservoir for the Plasmodium parasites responsible for malaria, and as a source of blood for mosquitoes [...] In collaboration with Zzapp, IBM Watson's AI and Data Science Elite Team has developed a weather analysis module that predicts the abundance of water bodies based on weather data, allowing Zzapp to better time interventions, and more accurately determine the resources required to implement them."
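
To give a sense of the general shape of such a weather-to-water-bodies model, here is a deliberately simple regression sketch with invented data. It is not Zzapp's or IBM's actual module, whose internals the article does not describe:

```python
# Hypothetical sketch: regress surveyed water-body abundance on weather
# features, then predict abundance for an upcoming period. Invented data.
from sklearn.linear_model import LinearRegression

# [total rainfall (mm), mean temperature (C)] for past survey periods
weather = [[220, 26], [40, 31], [150, 27], [90, 29]]
water_bodies_per_km2 = [34, 5, 21, 12]

model = LinearRegression().fit(weather, water_bodies_per_km2)
print(model.predict([[180, 26]]))  # expected abundance for a wet, cooler period
```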

An older video on the company's YouTube channel provides more insight into the process.

[Photo caption: The world's largest mosquito net is unveiled, 18 April 2000, in Abuja, Nigeria. Malaria causes more than one million deaths around the world each year, more than 90 percent of them in Africa. Credit: AFP via Getty Images]

Innovation in this space could not come at a better time. Malaria is a devastating disease. Per the Centers for Disease Control and Prevention (CDC), symptoms of malaria are extensive, entailing "fever and flu-like illness, including shaking chills, headache, muscle aches, and tiredness. Nausea, vomiting, and diarrhea may also occur." Moreover, the CDC explains that "If [malaria is] not promptly treated, the infection can become severe and may cause kidney failure, seizures, mental confusion, coma, and death."

The World Health Organization (WHO) reports jarring statistics regarding the widespread impact of the disease: "In 2019, there were an estimated 229 million cases of malaria worldwide [...] The estimated number of malaria deaths stood at 409 000 in 2019." The WHO also states that the African Region carries a disproportionately high share of the global malaria burden, with children under 5 years of age being the group most vulnerable to the disease.

Indeed, initiatives such as Zzapp's effort to eradicate malaria at the source, before it can even spread in a community, may add incredible value to the fight against the disease. Additionally, leveraging AI systems to identify the targets to focus on is a relatively new concept, and may become a worthwhile effort if the technology proves to be viable.

Zzapp's victory in the competition is undoubtedly prestigious and deserves notable recognition. But perhaps equally if not more important, the victory signifies that the world is paying attention to a disease that is responsible for nearly half a million deaths annually. Indeed, there is still a long road ahead in the war against malaria; however, perhaps this small victory provides hope for a potentially better course ahead.

The content of this article is not implied to be and should not be relied on or substituted for professional medical advice, diagnosis, or treatment by any means, and is not written or intended as such. This content is for information and news purposes only. Consult with a trained medical professional for medical advice.


Posted in Ai

Reinforcement learning could be the link between AI and human-level intelligence – The Next Web

Posted: at 10:37 pm

Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence, such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that with well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions going from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.

In this post, I'll try to disambiguate in simple terms where the line between theory and practice stands.


In their paper, the DeepMind scientists present the following hypothesis: "Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment."

Scientific evidence supports this claim.

Humans and animals owe their intelligence to a very simple law: natural selection. I'm not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet.

In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that dont get eliminated.

According to Dawkins, "In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple: that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature."

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike the digital world, copying in organic life is not an exact thing. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the core for developing new organs (e.g., lungs, kidneys, eyes), or shedding old ones (e.g., tail, gills).

If these mutations help improve the chances of the organism's survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn't, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms that we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals.
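
That mutate-and-select loop is simple enough to sketch in code. The toy below, with an arbitrary bit-counting fitness function standing in for survival, is purely illustrative and makes no claim to biological realism:

```python
# Toy evolution: imperfect copying (mutation) plus nonrandom survival
# (selection) steadily raises fitness, with no designer in the loop.
import random

GENOME_LEN, POP_SIZE = 20, 30

def fitness(genome):
    return sum(genome)  # stand-in for "fit to survive": more 1-bits is better

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Nonrandom death: only the fitter half survives to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Reproduction with imperfect copying: each child may mutate one bit.
    children = []
    for parent in survivors:
        child = parent[:]
        if random.random() < 0.5:
            i = random.randrange(GENOME_LEN)
            child[i] = 1 - child[i]
        children.append(child)
    population = survivors + children

print(max(fitness(g) for g in population))  # fitness climbs over generations
```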

The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origins of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind's scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

In their paper, DeepMind's scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by making random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment.
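
For readers unfamiliar with the mechanics, here is a minimal tabular Q-learning sketch of that loop in a toy corridor environment invented for illustration. The agent starts with random actions and, from reward alone, learns that moving right pays off:

```python
# Minimal tabular Q-learning in a 5-state corridor; illustrative toy only.
import random

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Move along the corridor; only the rightmost state pays a reward."""
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == n_states - 1 else 0.0)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore with probability epsilon (or while values are still tied);
        # otherwise exploit the action with the highest learned value.
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            action = random.randrange(n_actions)
        else:
            action = Q[state].index(max(Q[state]))
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best next action.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print(Q)  # "move right" ends up valued higher in every state, from reward alone
```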

According to the DeepMind scientists, "A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behavior so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent's behavior."

In an online debate in December, computer scientist Richard Sutton, one of the paper's co-authors, said, "Reinforcement learning is the first computational theory of intelligence... In reinforcement learning, the goal is to maximize an arbitrary reward signal."

DeepMind has a lot of experience to prove this claim. They have already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. They have also developed reinforcement learning models to make progress in some of the most complex problems of science.

The scientists further wrote in their paper, "According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximizing a singular reward in a single, complex environment" [emphasis mine].

This is where the hypothesis separates from practice. The keyword here is "complex." The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, they still had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in the wrong local optimum.

(It is worth noting that the scientists do acknowledge in their paper that they can't offer a theoretical guarantee on the sample efficiency of reinforcement learning agents.)

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First, you would need a simulation of the world. But at what level would you simulate the world? My guess is that anything short of quantum scale would be inaccurate. And we don't have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let's say we did have the compute power to create such a simulation. We could start at around 4 billion years ago, when the first life-forms emerged. You would need an exact representation of the state of Earth at the time, the initial state of the environment, and we still don't have a definite theory on that.

An alternative would be to create a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the time of training, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you'll need to run the simulation. On the other hand, the further you move forward, the more complex your initial state will be. And evolution has created all sorts of intelligent and non-intelligent life-forms, so making sure that we could reproduce the exact steps that led to human intelligence, without any guidance and only through reward, is a hard bet.

Many will say that you don't need an exact simulation of the world, and that you only need to approximate the problem space in which your reinforcement learning agent wants to operate.

For example, in their paper, the scientists mention the example of a house-cleaning robot: "In order for a kitchen robot to maximize cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behavior that maximises cleanliness must therefore yield all these abilities in service of that singular goal."

This statement is true, but it downplays the complexities of the environment. Kitchens were created by humans. For instance, the shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that wants to work in such an environment would need to develop sensorimotor skills similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then, there would be incongruities between the robot and the humans who will be using the kitchens. Many scenarios that would be easy to handle for a human (walking over an overturned chair) would become prohibitive for the robot.

Also, other skills, such as language, would require even more similar infrastructure between the robot and the humans who would share the environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs. We fill in the gaps with our intuitive and conscious knowledge of our interlocutors' mental states. We might make wrong assumptions, but those are the exceptions, not the norm.

And finally, developing a notion of cleanliness as a reward is very complicated because it is very tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it?

A robot that has been optimized for cleanliness would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would help a lot in making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge.

In theory, reward only is enough for any kind of intelligence. But in practice, there's a trade-off between environment complexity, reward design, and agent design.

In the future, we might be able to achieve a level of compute power that will make it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works is hybrid approaches that involve learning and complex engineering of rewards and AI agent architectures.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.


Posted in Ai

New Intel XPU Innovations Target HPC and AI – Business Wire

Posted: at 10:37 pm

SANTA CLARA, Calif.--(BUSINESS WIRE)--At the 2021 International Supercomputing Conference (ISC), Intel is showcasing how the company is extending its lead in high performance computing (HPC) with a range of technology disclosures, partnerships and customer adoptions. Intel processors are the most widely deployed compute architecture in the world's supercomputers, enabling global medical discoveries and scientific breakthroughs. Intel is announcing advances in its Xeon processor for HPC and AI, as well as innovations in memory, software, exascale-class storage, and networking technologies for a range of HPC use cases.

More: Intel Data Center News | Intel's HPC GM Trish Damkroger Keynotes 2021 ISC (Keynote Replay) | "Accelerating the Possibilities with HPC" (Keynote Presentation)

"To maximize HPC performance, we must leverage all the computer resources and technology advancements available to us," said Trish Damkroger, vice president and general manager of High Performance Computing at Intel. "Intel is the driving force behind the industry's move toward exascale computing, and the advancements we're delivering with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realization."

Advancing HPC Performance Leadership

Earlier this year, Intel extended its leadership position in HPC with the launch of 3rd Gen Intel Xeon Scalable processors. The latest processor delivers up to 53% higher performance across a range of HPC workloads, including life sciences, financial services and manufacturing, as compared to the previous generation processor.

Compared to its closest x86 competitor, the 3rd Gen Intel Xeon Scalable processor delivers better performance across a range of popular HPC workloads. For example, when comparing a Xeon Scalable 8358 processor to an AMD EPYC 7543 processor, NAMD performs 62% better, LAMMPS performs 57% better, RELION performs 68% better, and Binomial Options performs 37% better. In addition, Monte Carlo simulations run more than two times faster, allowing financial firms to achieve pricing results in half the time. Xeon Scalable 8380 processors also outperform AMD EPYC 7763 processors on key AI workloads, with 50% better performance across 20 common benchmarks. HPC labs, supercomputing centers, universities and original equipment manufacturers that have adopted Intel's latest compute platform include Dell Technologies, HPE, Korea Meteorological Administration, Lenovo, Max Planck Computing and Data Facility, Oracle, Osaka University and the University of Tokyo.

Integration of High Bandwidth Memory within Next-Gen Intel Xeon Scalable Processors

Workloads such as modeling and simulation (e.g., computational fluid dynamics, climate and weather forecasting, quantum chromodynamics), artificial intelligence (e.g., deep learning training and inferencing), analytics (e.g., big data analytics), in-memory databases, storage and others power humanity's scientific breakthroughs. The next generation of Intel Xeon Scalable processors (code-named "Sapphire Rapids") will offer integrated High Bandwidth Memory (HBM), providing a dramatic boost in memory bandwidth and a significant performance improvement for HPC applications with memory bandwidth-sensitive workloads. Users can power through workloads using just High Bandwidth Memory or in combination with DDR5.

Customer momentum is strong for Sapphire Rapids processors with integrated HBM, with early leading wins such as the U.S. Department of Energy's Aurora supercomputer at Argonne National Laboratory and the Crossroads supercomputer at Los Alamos National Laboratory.

"Achieving results at exascale requires the rapid access and processing of massive amounts of data," said Rick Stevens, associate laboratory director of Computing, Environment and Life Sciences at Argonne National Laboratory. "Integrating high-bandwidth memory into Intel Xeon Scalable processors will significantly boost Aurora's memory bandwidth and enable us to leverage the power of artificial intelligence and data analytics to perform advanced simulations and 3D modeling."

Charlie Nakhleh, associate laboratory director for Weapons Physics at Los Alamos National Laboratory, said: "The Crossroads supercomputer at Los Alamos National Labs is designed to advance the study of complex physical systems for science and national security. Intel's next-generation Xeon processor Sapphire Rapids, coupled with High Bandwidth Memory, will significantly improve the performance of memory-intensive workloads in our Crossroads system. The [Sapphire Rapids with HBM] product accelerates the largest complex physics and engineering calculations, enabling us to complete major research and development responsibilities in global security, energy technologies and economic competitiveness."

The Sapphire Rapids-based platform will provide unique capabilities to accelerate HPC, including increased I/O bandwidth with PCI Express 5.0 (compared to PCI Express 4.0) and Compute Express Link (CXL) 1.1 support, enabling advanced use cases across compute, networking and storage.

In addition to memory and I/O advancements, Sapphire Rapids is optimized for HPC and artificial intelligence (AI) workloads, with a new built-in AI acceleration engine called Intel Advanced Matrix Extensions (AMX). Intel AMX is designed to deliver a significant performance increase for deep learning inference and training. Customers already working with Sapphire Rapids include CINECA, Leibniz Supercomputing Centre (LRZ) and Argonne National Lab, as well as the Crossroads system teams at Los Alamos National Lab and Sandia National Lab.

Intel Xe-HPC GPU (Ponte Vecchio) Powered On

Earlier this year, Intel powered on its Xe-HPC-based GPU (code-named "Ponte Vecchio") and is in the process of system validation. Ponte Vecchio is an Xe architecture-based GPU optimized for HPC and AI workloads. It will leverage Intel's Foveros 3D packaging technology to integrate multiple IPs in-package, including HBM memory and other intellectual property. The GPU is architected with compute, memory, and fabric to meet the evolving needs of the world's most advanced supercomputers, like Aurora. Ponte Vecchio will be available in an OCP Accelerator Module (OAM) form factor and subsystems, serving the scale-up and scale-out capabilities required for HPC applications.

Extending Intel Ethernet For HPC

At ISC 2021, Intel is also announcing its new High Performance Networking with Ethernet (HPN) solution, which extends Ethernet technology capabilities for smaller clusters in the HPC segment by using standard Intel Ethernet 800 Series Network Adapters and Controllers, switches based on Intel Tofino P4-programmable Ethernet switch ASICs and the Intel Ethernet Fabric suite software. HPN enables application performance comparable to InfiniBand at a lower cost while taking advantage of the ease of use offered by Ethernet.

Commercial Support for DAOS

Intel is introducing commercial support for DAOS (Distributed Asynchronous Object Storage), an open-source software-defined object store built to optimize data exchange across Intel HPC architectures. DAOS is at the foundation of the Intel Exascale storage stack, previously announced by Argonne National Laboratory, and is being used by Intel customers such as LRZ and JINR (Joint Institute for Nuclear Research).

DAOS support is now available to partners as an L3 support offering, which enables partners to provide a complete turnkey storage solution by combining it with their services. In addition to Intel's own data center building blocks, early partners for this new commercial support include HPE, Lenovo, Supermicro, Brightskies, Croit, Nettrix, Quanta, and RSC Group.

More information about Intel's participation at ISC 2021, including a full list of talks and demos, can be found at https://hpcevents.intel.com.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel's innovations, go to newsroom.intel.com and intel.com.

For performance claims, see [43, 47, 108] at http://www.intel.com/3gen-xeon-config. Results may vary.

© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.


Posted in Ai