Total Partners with CQC to Improve CO2 Capture – Energy Industry Review

Total is stepping up its research into Carbon Capture, Utilization and Storage (CCUS) technologies by signing a multi-year partnership with UK start-up Cambridge Quantum Computing (CQC). This partnership aims to develop new quantum algorithms to improve materials for CO2 capture. Total's ambition is to be a major player in CCUS, and the Group currently invests up to 10% of its annual research and development effort in this area.

To improve the capture of CO2, Total is working on nanoporous materials called adsorbents, considered to be among the most promising solutions. These materials could eventually be used to trap the CO2 emitted by the Group's industrial operations or those of other players (cement, steel, etc.). The CO2 recovered would then be concentrated and reused or stored permanently. These materials could also be used to capture CO2 directly from the air (Direct Air Capture, or DAC).

The quantum algorithms which will be developed in the collaboration between Total and CQC will simulate all the physical and chemical mechanisms in these adsorbents as a function of their size, shape and chemical composition, and therefore make it possible to select the most efficient materials to develop. Currently, such simulations are impossible to perform with a conventional supercomputer, which justifies the use of quantum calculations.

"Total is very pleased to be launching this new collaboration with Cambridge Quantum Computing: quantum computing opens up new possibilities for solving extremely complex problems. We are therefore among the first to use quantum computing in our research to design new materials capable of capturing CO2 more efficiently. In this way, Total intends to accelerate the development of the CCUS technologies that are essential to achieve carbon neutrality in 2050," said Marie-Noëlle Semeria, Total's CTO.

"We are very excited to be working with Total, a demonstrated thought-leader in CCUS technology. Carbon neutrality is one of the most significant topics of our time and incredibly important to the future of the planet. Total has a proven long-term commitment to CCUS solutions. We are hopeful that our work will lead to meaningful contributions and an acceleration on the path to carbon neutrality," said Ilyas Khan, CEO of CQC.

Total is deploying an ambitious R&D programme, worth nearly USD 1 billion a year. Total R&D relies on a network of more than 4,300 employees in 18 research centres around the world, as well as on numerous partnerships with universities, start-ups and industrial companies. Its investments are mainly devoted to a low-carbon energy mix (40%) as well as to digital, safety and the environment, operational efficiency and new products. It files more than 200 patents every year.

Read more from the original source:
Total Partners with CQC to Improve CO2 Capture - Energy Industry Review

Artificial Intelligence and IP – WIPO

AI and IP policy

The growth of AI across a range of technical fields raises a number of policy questions with respect to IP. The main focus of those questions is whether the existing IP system needs to be modified to provide balanced protection for machine-created works and inventions, AI itself and the data AI relies on to operate. WIPO has started an open process to lead the conversation regarding IP policy implications.

From stories, to reports, news and more, we publish content on the topics most discussed in the field of AI and IP.

In a world in which AI is playing an ever-expanding role, including in the processes of innovation and creativity, Professor Ryan Abbott considers some of the challenges that AI is posing for the IP system.

Saudi inventor Hadeel Ayoub, founder of the London-based startup BrightSign, talks about how she came to develop BrightSign, an AI-based smart glove that allows sign language users to communicate directly with others without the assistance of an interpreter.

How big data, artificial intelligence, and other technologies are changing healthcare.

British-born computer scientist Andrew Ng, a leading thinker on AI, discusses the transformative power of AI and the measures required to ensure that AI benefits everyone.

AI is set to transform our lives. But what exactly is AI, and what are the techniques and applications driving innovation in this area?

David Hanson, maker of Sophia the Robot and CEO and Founder of Hanson Robotics, shares his vision of a future built around super intelligence.

Read the original here:
Artificial Intelligence and IP - WIPO

Business Applications for Artificial Intelligence: An …

Discussion of artificial intelligence (AI) elicits a wide range of feelings. On one end of the spectrum is fear of job loss spurred by a bot revolution. On the opposite end is excitement about the overblown prospects of what people can achieve with machine augmentation.

But Dr. Mark Esposito wants to root the conversation in reality. Esposito is the co-founder of Nexus Frontier Tech and an instructor of Harvard's Artificial Intelligence in Business: Creating Value with Machine Learning, a two-day intensive program.

Rather than thinking about what could be, he says businesses looking to adopt AI should look at what already exists.

AI has become the latest tech buzzword everywhere from Silicon Valley to China. But the first piece of AI, the artificial neuron, was developed in 1943 by neurophysiologist Warren McCulloch and logician Walter Pitts. Since then, we've come a long way in our understanding and development of models capable of comprehension, prediction, and analysis.
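As a rough illustration of how simple that first artificial neuron is, here is a minimal sketch in R; the weights, threshold and AND-gate configuration are illustrative choices, not values from McCulloch and Pitts' paper.

    # Minimal sketch of a McCulloch-Pitts-style neuron: it outputs 1 when the
    # weighted sum of its binary inputs reaches a threshold, and 0 otherwise.
    # The weights and threshold below are illustrative, not historical values.
    mcp_neuron <- function(inputs, weights, threshold) {
      as.integer(sum(inputs * weights) >= threshold)
    }

    # Configured as a two-input AND gate, a classic demonstration:
    mcp_neuron(c(1, 1), weights = c(1, 1), threshold = 2)  # returns 1
    mcp_neuron(c(1, 0), weights = c(1, 1), threshold = 2)  # returns 0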

Artificial intelligence is already widely used in business applications, including automation, data analytics, and natural language processing. Across industries, these three fields of AI are streamlining operations and improving efficiencies.

Automation alleviates repetitive or even dangerous tasks. Data analytics provides businesses with insights never before possible. Natural language processing allows for intelligent search engines, helpful chatbots, and better accessibility for people who are visually impaired.

Other common uses for AI in business include:

Indeed, many experts note that the business applications of AI have advanced to such an extent that we live and work alongside it every day without even realizing it.

In 2018, Harvard Business Review predicted that AI stands to make the greatest impact in marketing services, supply chain management, and manufacturing.

Two years on, we are watching these predictions play out in real time. The rapid growth of AI-powered social media marketing, for instance, makes it easier than ever for brands to personalize the customer experience, connect with their customers, and track the success of their marketing efforts.

Supply chain management is also poised to make major AI-based advances in the next several years. Increasingly, process intelligence technologies will provide companies with accurate and comprehensive insight to monitor and improve operations in real-time.

Other areas where we can expect to see significant AI-based advancements include the healthcare industry and data transparency and security.

On the patient side of the healthcare business, we are likely to see AI help with everything from early detection to immediate diagnosis. On the physician side, AI is likely to play a larger role in streamlining scheduling processes and helping to secure patient records.

Data transparency and security is another area where AI is expected to make a significant difference in the coming years. As customers become aware of just how much data companies are collecting, the demand for greater transparency into what data is collected, how it is used, and how it is secured will only grow.

Additionally, as Esposito notes, there continues to be significant opportunity to grow the use of AI in finance and banking, two sectors with vast quantities of data and tremendous potential for AI-based modernization, but which still rely heavily on antiquated processes.

For some industries, the widespread rollout of AI hinges on ethical considerations to ensure public safety.

While cybersecurity has long been a concern in the tech world, some businesses must now also consider physical threats to the public. In transportation, this is a particularly pressing concern.

For instance, how autonomous vehicles should respond in a scenario in which an accident is imminent is a big topic of debate. Tools like MIT's Moral Machine have been designed to gauge public opinion on how self-driving cars should operate when human harm cannot be avoided.

But the ethics question goes well beyond how to mitigate damage. It leads developers to question whether it's moral to place one human's life above another's, and to ask whether factors like age, occupation, and criminal history should determine when a person is spared in an accident.

Problems like these are why Esposito is calling for a global response to ethics in AI.

"Given the need for specificity in designing decision-making algorithms, it stands to reason that an international body will be needed to set the standards according to which moral and ethical dilemmas are resolved," Esposito says in his World Economic Forum post.

It's important to stress the global aspect of these standards. Countries around the world are engaging in an AI arms race, quickly developing powerful systems. Perhaps too quickly.

If the race to develop artificial intelligence results in negligence in creating ethical algorithms, the damage could be great. International standards can give developers guidelines and parameters that ensure machine systems mitigate risk and damage at least as well as a human, if not better.

According to Esposito, there's a lot of misunderstanding in the business world about AI's current capabilities and future potential. At Nexus, he and his partners work with startups and small businesses to adopt AI solutions that can streamline operations or solve problems.

Esposito discovered early on that many business owners assume AI can do everything a person can do, and more. A better approach involves identifying specific use cases.

"The more you learn about the technology, the more you understand that AI is very powerful," Esposito says. "But it needs to be very narrowly defined. If you don't have a narrow scope, it doesn't work."

For companies looking to leverage AI, Esposito says the first step is to look at which parts of your current operations can be digitized. Rather than dreaming up a magic-bullet solution, businesses should consider existing tech that can free up resources or provide new insights.

"The low-hanging fruit is recognizing where in the value chain they can improve operations," Esposito says. "AI doesn't start with AI. It starts at the company level."

For instance, companies that have already digitized payroll will find that they're collecting a lot of data that could help forecast future costs. This allows businesses to hire and operate with more predictability, as well as streamline tasks for accounting.

One company that's successfully integrated AI tech into multiple aspects of its business is Unilever, a consumer goods corporation. In addition to streamlining hiring and onboarding, AI is helping Unilever get the most out of its vast amounts of data.

Data informs much of what Unilever does, from demand forecasts to marketing analytics. The company observed that their data sources were coming from varying interfaces and APIs, according to Diginomica. This both hindered access and made the data unreliable.

In response, Unilever developed its own platforms to store the data and make it easily accessible for its employees. Augmented with Microsoft's Power BI tool, Unilever's platform collects data from both internal and external sources. It stores the data in a universal data lake where it's preserved to be used indefinitely for anything from business logistics to product development.

Amazon is another early adopter. Even before its virtual assistant Alexa was in every other home in America, Amazon was an innovator in using machine learning to optimize inventory management and delivery.

With a fully robust, AI-empowered system in place, Amazon was able to make a successful foray into the food industry via its acquisition of Whole Foods, which now uses Amazon delivery services.

Esposito says this kind of scalability is key for companies looking to develop new AI products. They can then apply the tech to new markets or acquired businesses, which is essential for the tech to gain traction.

Both Unilever and Amazon are exemplary because they're solving current problems with technology that's already available. And they're predicting industry disruption so they can stay ahead of the pack.

Of course, these two examples are large corporations with deep pockets. But Esposito believes that most businesses thinking about AI realistically and strategically can achieve their goals.

Looking ahead from 2020, it is increasingly clear that AI will only work in conjunction with people, not instead of people.

"Every major place where we have multiple dynamics happening can really be improved by these technologies," Esposito says. "And I want to reinforce the fact that we want these technologies to improve society, not displace workers."

To ease fears over job loss, Esposito says business owners can frame the conversation around creating new, more functional jobs. As technologies improve efficiencies and create new insights, new jobs that build on those improvements are sure to arise.

"Jobs are created by understanding what we do and what we can do better," Esposito says.

Additionally, developers should focus on creating tech that is probabilistic, as opposed to deterministic. In a probabilistic scenario, AI could predict how likely a person is to pay back a loan based on their history, then give the lender a recommendation. Deterministic AI would simply make that decision, ignoring any uncertainty.
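As a rough sketch of that distinction (the data, column names and threshold below are invented for illustration and are not drawn from Esposito's work), a probabilistic loan model in R might look like this:

    # Hypothetical illustration of probabilistic vs. deterministic AI,
    # using a small, made-up repayment history.
    loans <- data.frame(
      missed_payments = c(0, 0, 1, 3, 0, 2, 5, 0, 1, 4),
      income_to_debt  = c(3.0, 2.5, 1.8, 0.9, 4.1, 1.2, 0.6, 3.5, 2.0, 0.8),
      repaid          = c(1, 1, 0, 0, 1, 1, 0, 1, 1, 0)
    )

    # Logistic regression estimates the probability that a loan is repaid.
    model <- glm(repaid ~ missed_payments + income_to_debt,
                 data = loans, family = binomial)

    applicant <- data.frame(missed_payments = 1, income_to_debt = 2.2)
    p <- predict(model, applicant, type = "response")

    # Probabilistic: surface the probability and leave the call to the lender.
    cat(sprintf("Estimated repayment probability: %.0f%%\n", 100 * p))

    # Deterministic (what the article cautions against): the system decides.
    # decision <- if (p > 0.5) "approve" else "deny"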

"There needs to be cooperation between machines and people," Esposito says. "But we will never invite machines to make a decision on behalf of people."

See more here:
Business Applications for Artificial Intelligence: An ...

MS in Artificial Intelligence | Artificial Intelligence

The Master of Science in Artificial Intelligence (M.S.A.I.) degree program is offered by the interdisciplinary Institute for Artificial Intelligence. Areas of specialization include automated reasoning, cognitive modeling, neural networks, genetic algorithms, expert databases, expert systems, knowledge representation, logic programming, and natural-language processing. Microelectronics and robotics were added in 2000.

Admission is possible in every semester, but Fall admission is preferable. Applicants seeking financial assistance should apply before February 15, but assistantships are sometimes awarded at other times. Applicants must include a completed application form, three letters of recommendation, official transcripts, Graduate Record Examinations (GRE) scores, and a sample of their scholarly writing on any subject (in English). Only the General Test of the GRE is required for the M.S.A.I. program. International students must also submit results of the TOEFL and a statement of financial support. Applications must be completed at least six weeks before the proposed registration date.

No specific undergraduate major is required for admission, but admission is competitive. We are looking for students with a strong preparation in one or more relevant background areas (psychology, philosophy, linguistics, computer science, logic, engineering, or the like), a demonstrated ability to handle all types of academic work (from humanities to mathematics), and an excellent command of written and spoken English.

For more information regarding applications, please visit the MS Program Admissions and Information for International Students pages.

Requirements for the M.S.A.I. degree include: interdisciplinary foundational courses in computer science, logic, philosophy, psychology, and linguistics; courses and seminars in artificial intelligence programming techniques, computational intelligence, logic and logic programming, natural-language processing, and knowledge-based systems; and a thesis. There is a final examination covering the program of study and a defense of the written thesis.

For further information on course and thesis requirements, please visit the Course & Thesis Requirements page.

The Artificial Intelligence Laboratories serve as focal points for the M.S.A.I. program. AI students have regular access to PCs running current Windows technology, and a wireless network is available for students with laptops and other devices. The Institute also features facilities for robotics experimentation and a microelectronics lab. The University of Georgia libraries began building strong AI and computer science collections long before the inception of these degree programs. Relevant books and journals are located in the Main and Science libraries (the Science library is conveniently located in the same building complex as the Institute for Artificial Intelligence and the Computer Science Department). The University's library holdings total more than 3 million volumes.

Graduate assistantships, which include a monthly stipend and remission of tuition, are available. Assistantships require approximately 13-15 hours of work per week and permit the holder to carry a full academic program of graduate work. In addition, graduate assistants pay a matriculation fee and all student fees per semester.

For an up-to-date description of tuition and fees for both in-state and out-of-state students, please visit the site of the Bursar's Office.

On-campus housing, including a full range of University-owned married student housing, is available to students. Student fees include use of a campus-wide bus system and some city bus routes. More information regarding housing is available here: University of Georgia Housing.

The University of Georgia has an enrollment of over 34,000, including approximately 8,000 graduate students. Students are enrolled from all 50 states and more than 100 countries. Currently, there is a very diverse group of students in the AI program. Women and international students are well represented.

Additional information about the Institute and the MSAI program, including policies for current students, can be found in the AI Student Handbook.

Excerpt from:
MS in Artificial Intelligence | Artificial Intelligence

What is Artificial Intelligence? | Azure Blog and Updates …

It has been said that Artificial Intelligence will define the next generation of software solutions. If you are even remotely involved with technology, you will almost certainly have heard the term with increasing regularity over the last few years. It is likely that you will also have heard different definitions for Artificial Intelligence offered, such as:

"The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." (Encyclopedia Britannica)

"Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans." (Wikipedia)

How useful are these definitions? What exactly are "tasks commonly associated with intelligent beings"? For many people, such definitions can seem too broad or nebulous. After all, there are many tasks that we can associate with human beings! What exactly do we mean by "intelligence" in the context of machines, and how is this different from the tasks that many traditional computer systems are able to perform, some of which may already seem to have some level of intelligence in their sophistication? What exactly makes the Artificial Intelligence systems of today different from sophisticated software systems of the past?

It could be argued that any attempt to try to define Artificial Intelligence is somewhat futile, since we would first have to properly define "intelligence", a word which conjures a wide variety of connotations. Nonetheless, this article attempts to offer a more accessible definition for what passes as Artificial Intelligence in the current vernacular, as well as some commentary on the nature of today's AI systems, and why they might be more aptly referred to as "intelligent" than previous incarnations.

Firstly, it is interesting and important to note that the technical difference between what used to be referred to as Artificial Intelligence over 20 years ago and traditional computer systems is close to zero. Prior attempts to create intelligent systems, known as expert systems at the time, involved the complex implementation of exhaustive rules that were intended to approximate intelligent behavior. For all intents and purposes, these systems did not differ from traditional computers in any drastic way other than having many thousands more lines of code. The problem with trying to replicate human intelligence in this way was that it required far too many rules and ignored something very fundamental to the way intelligent beings make decisions, which is very different from the way traditional computers process information.

Let me illustrate with a simple example. Suppose I walk into your office and I say the words "Good weekend?" Your immediate response is likely to be something like "yes" or "fine, thanks". This may seem like very trivial behavior, but in this simple action you will have immediately demonstrated a behavior that a traditional computer system is completely incapable of. In responding to my question, you have effectively dealt with ambiguity by making a prediction about the correct way to respond. It is not certain that by saying "Good weekend?" I actually intended to ask you whether you had a good weekend. Here are just a few possible intents behind that utterance:

Did you have a good weekend?
Wasn't it a good game at the weekend?
Have a good weekend! (said as a wish rather than a question)

And more.

The most likely intended meaning may seem obvious, but suppose that when you respond with "yes", I had responded with "No, I mean it was a good football game at the weekend, wasn't it?". It would have been a surprise, but without even thinking, you will absorb that information into a mental model, correlate the fact that there was an important game last weekend with the fact that I said "Good weekend?", and adjust the probability of the expected response for next time accordingly so that you can respond correctly next time you are asked the same question. Granted, those aren't the thoughts that will pass through your head! You happen to have a neural network (aka your brain) that will absorb this information automatically and learn to respond differently next time.

The key point is that even when you do respond next time, you will still be making a prediction about the correct way in which to respond. As before, you won't be certain, but if your prediction fails again, you will gather new data, which leads to my suggested definition of Artificial Intelligence, as it stands today:

Artificial Intelligence is the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future.

This is a somewhat appropriate definition of Artificial Intelligence because it is exactly what AI systems today are doing, and more importantly, it reflects an important characteristic of human beings which separates us from traditional computer systems: human beings are prediction machines. We deal with ambiguity all day long, from very trivial scenarios such as the above, to more convoluted scenarios that involve playing the odds on a larger scale. This is in one sense the essence of reasoning. We very rarely know whether the way we respond to different scenarios is absolutely correct, but we make reasonable predictions based on past experience.

Just for fun, let's illustrate the earlier example with some code in R! If you are not familiar with R but would like to follow along, see the instructions on installation. First, let's start with some data that represents information in your mind about when a particular person has said "Good weekend?" to you.
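The article's original data listing did not survive extraction into this compilation, so the snippet below is a reconstruction sketch: only the GoodWeekendResponse label is named in the text, and the two context columns (Weather, BigGame) are assumptions added for illustration.

    # Reconstruction sketch, not the article's original listing.
    # GoodWeekendResponse is the label we want to predict; the other columns
    # are assumed context about when "Good weekend?" was said to you.
    history <- data.frame(
      Weather             = c("sunny", "sunny", "rainy", "sunny", "rainy", "sunny"),
      BigGame             = c("no",    "no",    "no",    "yes",   "no",    "no"),
      GoodWeekendResponse = c("yes",   "yes",   "no",    "yes",   "yes",   "yes"),
      stringsAsFactors    = FALSE
    )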

In this example, we are saying that GoodWeekendResponse is our score label (i.e. it denotes the appropriate response that we want to predict). For modelling purposes, there have to be at least two possible values, in this case "yes" and "no". For brevity, the response in most cases is "yes".

We can fit the data to a logistic regression model:
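A sketch of the fitting step, again reconstructed rather than recovered: using nnet::multinom (multinomial logistic regression) here is an assumption, chosen because it lets the model pick up additional response values when it is refit later.

    library(nnet)  # multinom(): (multinomial) logistic regression

    fit_model <- function(d) {
      # Coerce the label to a factor so that any newly observed response
      # becomes an additional class the next time the model is refit.
      d$GoodWeekendResponse <- factor(d$GoodWeekendResponse)
      multinom(GoodWeekendResponse ~ Weather + BigGame, data = d, trace = FALSE)
    }

    model <- fit_model(history)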

Now what happens if we try to make a prediction with that model, where the expected response is different from what we have previously recorded? In this case, I am expecting the response to be "Go England!". Below is some more code to add the prediction; for illustration, we just hardcode the new input data.
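Again a reconstruction sketch rather than the article's lost listing: we make the prediction, see it fail, fold the actual response back into the data, and refit.

    # A new situation: a big game was played at the weekend.
    new_case <- data.frame(Weather = "sunny", BigGame = "yes",
                           stringsAsFactors = FALSE)

    predict(model, new_case)
    # Most likely "yes" -- the model has never seen any other response for
    # this kind of situation, and this time the prediction is wrong.

    # Record the response we actually received and refit on the updated data.
    new_case$GoodWeekendResponse <- "Go England!"
    history <- rbind(history, new_case)
    model   <- fit_model(history)

    predict(model, new_case[, c("Weather", "BigGame")], type = "probs")
    # "Go England!" now has roughly even odds against "yes" for this situation;
    # fold in the same correction once more and it becomes the preferred answer.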

The initial prediction "yes" was wrong, but note that in addition to predicting against the new data, we also incorporated the actual response back into our existing model. Also note that the new response value "Go England!" has been learnt, with a probability of roughly 50 percent based on the current data. If we run the same piece of code again, the probability that "Go England!" is the right response based on prior data increases, so this time our model chooses to respond with "Go England!", because it has finally learnt that this is most likely the correct response!

Do we have Artificial Intelligence here? Well, clearly there are different levels of intelligence, just as there are with human beings. There is, of course, a good deal of nuance that may be missing here, but nonetheless this very simple program will be able to react, with limited accuracy, to data coming in related to one very specific topic, as well as learn from its mistakes and make adjustments based on predictions, without the need to develop exhaustive rules to account for different responses that are expected for different combinations of data. This is the same principle that underpins many AI systems today, which, like human beings, are mostly sophisticated prediction machines. The more sophisticated the machine, the more it is able to make accurate predictions based on a complex array of data used to train various models, and the most sophisticated AI systems of all are able to continually learn from faulty assertions in order to improve the accuracy of their predictions, thus exhibiting something approximating human intelligence.

You may be wondering, based on this definition, what the difference is between machine learning and Artificial Intelligence. After all, isn't this exactly what machine learning algorithms do: make predictions based on data using statistical models? This very much depends on the definition of machine learning, but ultimately most machine learning algorithms are trained on static data sets to produce predictive models, so machine learning algorithms only facilitate part of the dynamic in the definition of AI offered above. Additionally, machine learning algorithms, much like the contrived example above, typically focus on specific scenarios, rather than working together to create the ability to deal with ambiguity as part of an intelligent system. In many ways, machine learning is to AI what neurons are to the brain: a building block of intelligence that can perform a discrete task, but that may need to be part of a composite system of predictive models in order to really exhibit the ability to deal with ambiguity across an array of behaviors that might approximate intelligent behavior.

There are a number of practical advantages in building AI systems, but as discussed and illustrated above, many of these advantages are pivoted around time to market. AI systems enable the embedding of complex decision making without the need to build exhaustive rules, which traditionally can be very time consuming to procure, engineer and maintain. Developing systems that can learn and build their own rules can significantly accelerate organizational growth.

Microsoft's Azure cloud platform offers an array of discrete and granular services in the AI and Machine Learning domain that allow AI developers and data engineers to avoid reinventing wheels and to consume reusable APIs. These APIs allow AI developers to build systems which display the type of intelligent behavior discussed above.

If you want to dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and the Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI, and Cognitive Toolkit, visit AI School.

See the article here:
What is Artificial Intelligence? | Azure Blog and Updates ...

What Are the Advantages of Artificial Intelligence …

The general benefit of artificial intelligence, or AI, is that it replicates decisions and actions of humans without human shortcomings, such as fatigue, emotion and limited time. Machines driven by AI technology are able to perform consistent, repetitious actions without getting tired. It is also easier for companies to get consistent performance across multiple AI machines than it is across multiple human workers.

Companies incorporate AI into production and service-based processes. In a manufacturing business, AI machines can churn out a high, consistent level of production without needing a break or taking time off like people. This efficiency improves the cost-basis and earning potential for many companies. Mobile devices use intuitive, voice-activated AI applications to offer users assistance in completing tasks. For example, users of certain mobile phones can ask for directions or information and receive a vocal response.

The premise of AI is that it models human intelligence. Though imperfections exist, there is often a benefit to AI machines making decisions that humans struggle with. AI machines are often programmed to follow statistical models in making decisions. Humans may struggle with personal implications and emotions when making similar decisions. The physicist Stephen Hawking famously used AI-assisted technology to communicate, despite suffering from a motor neuron disease.

View original post here:
What Are the Advantages of Artificial Intelligence ...

AI Tutorial | Artificial Intelligence Tutorial – Javatpoint

The Artificial Intelligence tutorial provides an introduction to AI which will help you to understand the concepts behind Artificial Intelligence. In this tutorial, we have also discussed various popular topics such as History of AI, applications of AI, deep learning, machine learning, natural language processing, Reinforcement learning, Q-learning, Intelligent agents, Various search algorithms, etc.

Our AI tutorial is prepared from an elementary level, so you can easily understand the complete tutorial from basic concepts to high-level concepts.

In today's world, technology is growing very fast, and we come into contact with new technologies day by day.

Here, one of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines. Artificial Intelligence is now all around us. It is currently at work in a variety of subfields, ranging from general to specific, such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.

AI is one of the most fascinating and universal fields of computer science, and it has great scope in the future. AI aims to make machines work like humans.

Artificial Intelligence is composed of two words, "Artificial" and "Intelligence", where "Artificial" means "man-made" and "Intelligence" means "thinking power"; hence, AI means "man-made thinking power".

So, we can define AI as:

"Artificial Intelligence exists when a machine can have human-based skills such as learning, reasoning, and solving problems."

With Artificial Intelligence, you do not need to preprogram a machine to do some work; instead, you can create a machine with programmed algorithms which can work with its own intelligence. That is the awesomeness of AI.

It is believed that AI is not a new technology; some people even say that, as per Greek myth, there were mechanical men in early days which could work and behave like humans.

Before learning about Artificial Intelligence, we should know what the importance of AI is and why we should learn it. Following are some main reasons to learn about AI:

Following are the main goals of Artificial Intelligence:

Artificial Intelligence is not just a part of computer science; it is vast and draws on many other disciplines. To create AI, we should first know how intelligence is composed: intelligence is an intangible part of our brain which is a combination of reasoning, learning, problem solving, perception, language understanding, etc.

To achieve the above factors for a machine or software, Artificial Intelligence requires the following disciplines:

Following are some main advantages of Artificial Intelligence:

Every technology has some disadvantages, and the same goes for Artificial Intelligence. However advantageous it may be, it still has some disadvantages which we need to keep in mind while creating an AI system. Following are the disadvantages of AI:

Before learning about Artificial Intelligence, you must have fundamental knowledge of the following so that you can understand the concepts easily:

Our AI tutorial is designed specifically for beginners and also includes some high-level concepts for professionals.

We assure you that you will not find any difficulty while learning our AI tutorial. But if there is any mistake, kindly post the problem in the contact form.

Read the original post:
AI Tutorial | Artificial Intelligence Tutorial - Javatpoint

It’s Called Artificial Intelligence, but What Is Intelligence? – WIRED

Elizabeth Spelke, a cognitive psychologist at Harvard, has spent her career testing the world's most sophisticated learning system: the mind of a baby.

Gurgling infants might seem like no match for artificial intelligence. They are terrible at labeling images, hopeless at mining text, and awful at videogames. Then again, babies can do things beyond the reach of any AI. By just a few months old, they've begun to grasp the foundations of language, such as grammar. They've started to understand how the physical world works, how to adapt to unfamiliar situations.

Yet even experts like Spelke don't understand precisely how babies (or adults, for that matter) learn. That gap points to a puzzle at the heart of modern artificial intelligence: We're not sure what to aim for.

Consider one of the most impressive examples of AI, AlphaZero, a program that plays board games with superhuman skill. After playing thousands of games against itself at hyperspeed, and learning from winning positions, AlphaZero independently discovered several famous chess strategies and even invented new ones. It certainly seems like a machine eclipsing human cognitive abilities. But AlphaZero needs to play millions more games than a person during practice to learn a game. Most tellingly, it cannot take what it has learned from the game and apply it to another area.

To some members of the AI priesthood, that calls for a new approach. "What makes human intelligence special is its adaptability: its power to generalize to never-seen-before situations," says François Chollet, a well-known AI engineer and the creator of Keras, a widely used framework for deep learning. In a November research paper, he argued that it's misguided to measure machine intelligence solely according to its skills at specific tasks. "Humans don't start out with skills; they start out with a broad ability to acquire new skills," he says. "What a strong human chess player is demonstrating isn't the ability to play chess per se, but the potential to acquire any task of a similar difficulty. That's a very different capability."

Chollet posed a set of problems designed to test an AI program's ability to learn in a more generalized way. Each problem requires arranging colored squares on a grid based on just a few prior examples. It's not hard for a person. But modern machine-learning programstrained on huge amounts of datacannot learn from so few examples. As of late April, more than 650 teams had signed up to tackle the challenge; the best AI systems were getting about 12 percent correct.

It isn't yet clear how humans solve these problems, but Spelke's work offers a few clues. For one thing, it suggests that humans are born with an innate ability to quickly learn certain things, like what a smile means or what happens when you drop something. It also suggests we learn a lot from each other. One recent experiment showed that 3-month-olds appear puzzled when someone grabs a ball in an inefficient way, suggesting that they already appreciate that people cause changes in their environment. Even the most sophisticated and powerful AI systems on the market can't grasp such concepts. A self-driving car, for instance, cannot intuit from common sense what will happen if a truck spills its load.

Josh Tenenbaum, a professor in MIT's Center for Brains, Minds & Machines, works closely with Spelke and uses insights from cognitive science as inspiration for his programs. He says much of modern AI misses the bigger picture, likening it to a Victorian-era satire about a two-dimensional world inhabited by simple geometrical people. "We're sort of exploring Flatland, only some dimensions of basic intelligence," he says. Tenenbaum believes that, just as evolution has given the human brain certain capabilities, AI programs will need a basic understanding of physics and psychology in order to acquire and use knowledge as efficiently as a baby. And to apply this knowledge to new situations, he says, they'll need to learn in new ways, for example by drawing causal inferences rather than simply finding patterns. "At some point, you know, if you're intelligent, you realize maybe there's something else out there," he says.


Here is the original post:
It's Called Artificial Intelligence, but What Is Intelligence? - WIRED

Powering the Artificial Intelligence Revolution – HPCwire

It has been observed by many that we are at the dawn of the next industrial revolution: The Artificial Intelligence (AI) revolution. The benefits delivered by this intelligence revolution will be many: in medicine, improved diagnostics and precision treatment, better weather forecasting, and self-driving vehicles to name a few. However, one of the costs of this revolution is going to be increased electrical consumption by the data centers that will power it. Data center power usage is projected to double over the next 10 years and is on track to consume 11% of worldwide electricity by 2030. Beyond AI adoption, other drivers of this trend are the movement to the cloud and increased power usage of CPUs, GPUs and other server components, which are becoming more powerful and smart.

AI's two basic elements, training and inference, each consume power differently. Training involves computationally intensive matrix operations over very large data sets, often measured in terabytes to petabytes. Examples of these data sets can range from online sales data to captured video feeds to ultra-high-resolution images of tumors. AI inference is computationally much lighter in nature, but can run indefinitely as a service, which draws a lot of power when hit with a large number of requests. Think of a facial recognition application for security in an office building. It runs continuously but would stress the compute and storage resources at 8:00 am and again at 5:00 pm as people come and go to work.

However, getting a good handle on power usage in AI is difficult. Energy consumption is not part of the standard metrics tracked by job schedulers, and while such tracking can be set up, it is complicated and vendor-dependent. This means that most users are flying blind when it comes to energy usage.

To map out AI energy requirements, Dr. Miro Hodak led a team of Lenovo engineers and researchers that looked at the energy cost of an often-used AI workload. The study, "Towards Power Efficiency in Deep Learning on Data Center Hardware" (registration required), was recently presented at the 2019 IEEE International Conference on Big Data and was published in the conference proceedings. This work looks at the energy cost of training the ResNet-50 neural network on the ImageNet dataset of more than 1.3 million images, using a Lenovo ThinkSystem SR670 server equipped with 4 Nvidia V100 GPUs. AC data from the server's power supply indicates that 6.3 kWh of energy, enough to power an average home for six hours, is needed to fully train this AI model. In practice, training runs like these are repeated multiple times to tune the resulting models, resulting in energy costs that are actually several times higher.

The study breaks down the total energy into its components as shown in Fig. 1. As expected, the bulk of the energy is consumed by the GPUs. However, given that the GPUs handle all of the computationally intensive parts, their 65% share of the energy is lower than expected. This shows that simplistic estimates of AI energy costs using only GPU power are inaccurate and miss significant contributions from the rest of the system. Besides the GPUs, the CPU and memory account for almost a quarter of the energy use, and 9% of the energy is spent on AC-to-DC power conversion (this is in line with the 80 PLUS Platinum certification of the SR670 PSUs).
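As a rough back-of-the-envelope check (using the approximate shares quoted above, which will not match the paper's tables exactly), the component energies work out roughly as follows:

    # Back-of-the-envelope split of the 6.3 kWh training run, using the
    # approximate shares quoted in the text (not the paper's exact figures).
    total_kwh <- 6.3
    shares    <- c(GPU = 0.65, CPU_and_memory = 0.25, AC_DC_conversion = 0.09)
    total_kwh * shares
    # roughly 4.1 kWh (GPUs), 1.6 kWh (CPU + memory), 0.6 kWh (conversion loss)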

The study also investigated ways to decrease the energy cost through system tuning, without changing the AI workload. We found that two types of system settings make the most difference: UEFI settings and OS-level GPU settings. ThinkSystem servers provide four UEFI operating modes: Favor Performance, Favor Energy, Maximum Performance and Minimum Power. As shown in Table 1, the last option is the best and provides up to 5% energy savings. On the GPU side, 16% of the energy can be saved by capping the V100 frequency at 1005 MHz, as shown in Figure 2. Taken together, our study showed that system tunings can decrease energy usage by 22% while increasing runtime by 14%. Alternatively, if this runtime cost is unacceptable, a second set of tunings, which saves 18% of the energy while increasing runtime by only 4%, was also identified. This demonstrates that there is a lot of room on the system side for improvements in energy efficiency.
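Because energy is average power multiplied by runtime, the two tuning options quoted above also imply how far average power actually drops; a quick check on those numbers (illustrative arithmetic only, not figures from the paper):

    # Energy = average power x runtime, so a 22% energy saving at 14% longer
    # runtime implies a larger drop in average power than 22% alone suggests.
    baseline_kwh <- 6.3
    opts <- data.frame(
      option         = c("max energy savings", "runtime-friendly"),
      energy_saving  = c(0.22, 0.18),
      runtime_growth = c(0.14, 0.04)
    )
    opts$energy_kwh     <- baseline_kwh * (1 - opts$energy_saving)
    opts$avg_power_drop <- 1 - (1 - opts$energy_saving) / (1 + opts$runtime_growth)
    opts
    # ~4.9 kWh with average power down ~32%, vs ~5.2 kWh with power down ~21%.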

Energy usage in HPC has been a visible challenge for over a decade, and Lenovo has long been a leader in energy-efficient computing, whether through its innovative Neptune liquid-cooled system designs or through Energy Aware Runtime (EAR) software, a technology developed in collaboration with the Barcelona Supercomputing Center (BSC). EAR analyzes user applications to find the optimum CPU frequencies at which to run them. For now, EAR is CPU-only, but investigations into extending it to GPUs are ongoing. The results of our study show that this is a very promising way to bring energy savings to both HPC and AI.

Enterprises are not used to grappling with the large power profiles that AI requires in the way HPC users have become accustomed to. Scaling out these AI solutions will only make that problem more acute. The industry is beginning to respond: MLPerf, currently the leading collaborative project for AI performance evaluation, is preparing new specifications for power efficiency. For now, that effort is limited to inference workloads and will most likely be voluntary, but it represents a step in the right direction.

So, in order to enjoy those precise weather forecasts and self-driving cars, we'll need to solve the power challenges they create. Today, as the power profile of CPUs and GPUs surges ever upward, enterprise customers face a choice among three factors: system density (the number of servers in a rack), performance and energy efficiency. Indeed, many enterprises are accustomed to filling up rack after rack with low-cost, adequately performing systems that have limited to no impact on the electric bill. Unfortunately, until the power dilemma is solved, those users must be content with choosing only two of those three factors.

Read the original:
Powering the Artificial Intelligence Revolution - HPCwire

An AI future set to take over post-Covid world – The Indian Express

Updated: May 18, 2020 10:03:39 pm

Written by Seuj Saikia

Rabindranath Tagore once said, "Faith is the bird that feels the light when the dawn is still dark." The darkness that looms over the world at this moment is the curse of the COVID-19 pandemic, while the bird of human freedom finds itself caged under lockdown, unable to fly. Enthused by the beacon of hope, human beings will soon start picking up the pieces of a shared future for humanity, but perhaps it will only be to find a new, unfamiliar world order with far-reaching consequences for us that transcend society, politics and economy.

Crucially, a technology that had till now been crawling, or at best walking slowly, will now start sprinting. In fact, a paradigm shift in the economic relationship of mankind is going to be witnessed in the form of accelerated adoption of artificial intelligence (AI) technologies in the modes of production of goods and services. A fourth Industrial Revolution, as the AI era is referred to, had already been experienced before the pandemic, with the backward linkages of cloud computing and big data. However, the imperative of continued social distancing has made an AI-driven economic world order today's reality.

Setting aside the oft-discussed prophecies of the robo-human tussle, even if we simply focus on the present pandemic context, we will see millions of students accessing their education through ed-tech apps, mothers buying groceries on apps too and making cashless payments through fintech platforms, and employees attending video conferences on relevant apps as well. None of this is a new phenomenon, but the scale at which it is happening is unparalleled in human history. The alternate universe of AI, machine learning, cloud computing, big data, 5G and automation is getting closer to us every day. And so is a clash between humans (labour) and robots (plant and machinery).

This clash might very well be fuelled by automation. Any Luddite will recall the misadventures of the 19th-century textile mills. However, the automation that we are talking about now is founded on the citadel of artificially intelligent robots. Eventually, this might merge the two factors of production into one, thereby making labour irrelevant. As factories around the world start to reboot post COVID-19, there will be hard realities to contend with: Shortage of migrant labourers in the entire gamut of the supply chain, variations of social distancing induced by the fears of a second virus wave and the overall health concerns of humans at work. All this combined could end up sparking the fire of automation, resulting in subsequent job losses and possible reallocation/reskilling of human resources.

In this context, a potential counter to such employment upheavals is the idea of cash transfers to the population in the form of Universal Basic Income (UBI). As drastic changes in the production processes lead to a more cost-effective and efficient modern industrial landscape, the surplus revenue that is subsequently earned by the state would act as a major source of funds required by the government to run UBI. Variants of basic income transfer schemes have existed for a long time and have been deployed to unprecedented levels during this pandemic. Keynesian macroeconomic measures are increasingly being seen as the antidote to the bedridden economies around the world, suffering from near-recession due to the sudden ban on economic activities. Governments would have to be innovative enough to pump liquidity into the system to boost demand without harming the fiscal discipline. But what separates UBI from all these is its universality, while others remain targeted.

This new economic world order would widen the cracks in existing geopolitical fault lines, particularly between the US and China, the two behemoths of the AI realm. Datanomics has taken such a high place in the valuation spectrum that the most valued companies of the world are the tech giants like Apple, Google, Facebook, Alibaba, Tencent, etc. Interestingly, they are also the ones at the forefront of AI innovations. Data has become the new oil. What transports data are not pipelines but fibre optic cables and associated communication technologies. The ongoing fight over the introduction of 5G technology, which is central to automation and remote command-and-control architecture, might see a new phase of hostility, especially after the controversial role played by the secretive Chinese state in the COVID-19 crisis.

The issues affecting common citizens (privacy, national security, rising inequality) will take on newer dimensions. It is pertinent to mention that AI is not all bad: as an imperative change that human civilisation is going to experience, it has its advantages. Take the COVID-19 crisis as an example. Amidst all the chaos, big data has enabled countries to do contact tracing effectively, and 3D printers produced the much-needed PPE at local levels in the absence of the usual supply chains. That is why the World Economic Forum (WEF) argues that agility, scalability and automation will be the buzzwords for this new era of business, and those who have these capabilities will be the winners.

But there are losers in this, too. In this case, the developing world would be the biggest loser. The problem of inequality, which has already reached epic proportions, could be further worsened in an AI-driven economic order. The need of the hour is to prepare ourselves and develop strategies that would mitigate such risks and avert any impending humanitarian disaster. To do so, in the words of computer scientist and entrepreneur Kai-Fu Lee, the author of AI Superpowers, we have to give centrality to our heart and focus on the care economy which is largely unaccounted for in the national narrative.

(The writer is assistant commissioner of income tax, IRS. Views are personal)


Continued here:
An AI future set to take over post-Covid world - The Indian Express